Primitive stem cells point to new bone grafts for hard-to-heal fractures

image: A close-up view of the intricate microarchitecture of the pluripotent stem-cell-derived extracellular matrix

Image: 
Texas A&M University College of Engineering

Although most broken bones can be mended with a firm cast and a generous measure of tender loving care, more complicated fractures require treatments like bone grafting. Researchers at Texas A&M University have now created superior bone grafts using primitive stem cells. They found that these cells help create very fertile scaffolds needed for bone to regenerate at the site of repair.

The researchers said these grafts could be used to promote swift and precise bone healing so that patients maximally benefit from the surgical intervention.

"There are several problems that can occur with orthopedic implants, like inflammation and pain. Also, they can loosen, requiring revision surgeries that are often more complicated than the original surgery to put in the implant," Dr. Roland Kaunas, associate professor in the Department of Biomedical Engineering and a corresponding author on the study. "So, by speeding up the bone healing process, our material can potentially reduce the number of these revision surgeries."

The researchers have published their findings in the June issue of the journal Nature Communications.

Each year, around 600,000 people in the United States experience delayed or incomplete bone healing. For some of these cases, physicians turn to surgical procedures that involve transplanting bone tissue to the repair site. These bone grafts have generally come from two sources: the patient's own bone taken from another location on the body, known as an autograft, or highly processed human cadaver bone.

However, both types of bone grafts have their share of drawbacks. For example, autografts require additional surgery for bone tissue extraction, increasing recovery time for patients and sometimes causing chronic pain. On the other hand, grafts derived from cadaver bone avoid the need for a second surgery, but these transplants tend to be devoid of many of the biomolecules that promote bone repair.

"Grafts from cadaver bone have some of the physical properties of bone, and even a little bit of the biological essence but they are very depleted in terms of their functionality," said Dr. Carl Gregory, associate professor at the Texas A&M Health Science Center, also a corresponding author on the study. "What we wanted to do was engineer a bone graft where we could experimentally crank up the gears, so to speak, and make it more biologically active."

Previous studies have shown that stem cells, particularly a type called mesenchymal stem cells, can be used to produce bone grafts that are biologically active. These cells develop into bone cells that produce the materials required to make a scaffolding, or extracellular matrix, that bones need for their growth and survival.

However, these stem cells are usually extracted from the marrow of an adult bone and are, as a result, older. Their age affects the cells' ability to divide and produce more of the precious extracellular matrix, Kaunas said.

To circumvent this problem, the researchers turned to the cellular ancestors of mesenchymal stem cells, called pluripotent stem cells. Unlike adult mesenchymal cells that have a relatively short lifetime, they noted that these primitive cells can keep proliferating, thereby creating an unlimited supply of mesenchymal stem cells needed to make the extracellular matrix for bone grafts. They added that pluripotent cells can be made by genetically reprogramming donated adult cells.

When the researchers experimentally induced the pluripotent stem cells to make brand new mesenchymal stem cells, they were able to generate an extracellular matrix that was far more biologically active compared to that generated by mesenchymal cells obtained from adult bone.

"Our materials were not just enriched in the biological molecules that are required to make the chunky part of bone tissue but also growth factors that drive blood vessel formation," said Gregory.

To test the efficacy of their scaffolding material as a bone graft, the researchers carefully extracted and purified the enriched extracellular matrix and implanted it at the site of a bone defect. When they examined the status of bone repair a few weeks later, they found that their pluripotent stem-cell-derived matrix was five- to sixfold more effective than the best FDA-approved graft stimulator.

"Bone repair assays using the gold standard of grafts, like those administered with the powerful bone growth stimulator called bone morphogenic protein-2, can take about eight weeks, but we were getting complete healing in four weeks," said Gregory. "So, under these conditions, our material surpassed the efficacy of bone morphogenic protein-2 by a longshot, indicating that it is a vast improvement of current bone repair technologies."

The researchers also said that, from a clinical standpoint, the grafts can be incorporated into numerous engineered implants, such as 3D-printed implants or metal screws, so that these parts integrate better with the surrounding bone. They noted that the bone grafts will also be easier to produce, an advantage from a manufacturing standpoint.

"Our material is very promising because the pluripotent stem cells can ideally generate many batches of the extracellular matrix from just a single donor which will greatly simplify the large-scale manufacturing of these bone grafts," said Kaunas.

Credit: 
Texas A&M University

Researchers identify potent antibody cocktail to treat COVID-19

Researchers at the University of Maryland School of Medicine (UMSOM) evaluated several human antibodies to determine the most potent combination to be mixed in a cocktail and used as a promising anti-viral therapy against the virus that causes COVID-19. Their research, conducted in collaboration with scientists at Regeneron Pharmaceuticals, was published today in the journal Science. The study demonstrates the rapid process of isolating, testing and mass-producing antibody therapies against any infectious disease by using both genetically engineered mice and plasma from recovered COVID-19 patients.

The antibody cocktail evaluated by UMSOM researchers will be used to treat COVID-19 patients in a clinical trial that was launched last week. The study was funded by Regeneron, a biotechnology company based in Tarrytown, New York.

Antibodies are proteins the immune system naturally makes in response to foreign invaders like viruses and bacteria. Antibody therapies were first tried in the late 19th century when researchers used a serum derived from the blood of infected animals to treat diphtheria.

To produce the so-called monoclonal antibodies for an antibody cocktail to fight COVID-19, the researchers first needed to identify which antibodies fight the novel coronavirus most effectively.

This involved determining which antibodies could bind most effectively to the spike protein found on the surface of SARS-CoV-2, the virus that causes COVID-19. The Regeneron team evaluated thousands of human antibodies from plasma donations from recovered COVID-19 patients. They also generated antibodies from mice genetically engineered to produce human antibodies when infected with the virus.

"The ability of the research team to rapidly derive antibodies using these two methods enabled us screen their selected antibodies against live virus to determine which had the strongest anti-viral effects," said study co-author Matthew Frieman, PhD, Associate Professor of Microbiology and Immunology at the University of Maryland School of Medicine. He has been studying coronaviruses for the past 16 years and has been carefully studying SARS-CoV-2 in his secure laboratory since February.

Dr. Frieman and his UMSOM colleagues evaluated four of the most potent antibodies to determine the potential of each one to neutralize the SARS-CoV-2 virus. They identified the two that would form the most powerful mix when used in combination.

"An important goal of this research was to evaluate the most potent antibodies that bind to different molecules in the spike protein so they could be mixed together as a treatment," said study co-author Stuart Weston, PhD, a post-doctoral research fellow in the Department of Microbiology and Immunology.

The cocktail containing the two antibodies is now being tested in a new clinical trial sponsored by Regeneron that will investigate whether the therapy can improve the outcomes of COVID-19 patients (both those who are hospitalized and those who are not). It will also be tested as a preventive therapy in those who are healthy but at high risk of getting sick because they work in a healthcare setting or have been exposed to an infected person.

"Our School of Medicine researchers continue to provide vital advances on all fronts to help fight the COVID-19 pandemic and ultimately save lives," said Dean E. Albert Reece, MD, PhD, MBA, who is also Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor, University of Maryland School of Medicine. "This particular research not only contributes to a potential new therapy against COVID-19 but could have broader implications in terms of the development of monoclonal antibody therapies for other diseases."

Credit: 
University of Maryland School of Medicine

Premature epigenomic aging acts like a 'sleeper cell' that is awakened by Western-style diet

image: Dr. Cheryl Walker, one of the corresponding authors of the work

Image: 
Baylor College of Medicine

The epigenome is sometimes referred to as the "software" or "operating system" of the genome. It comprises small chemical modifications to DNA and the proteins that make up our chromosomes, and controls the activity of all the genes within the genome.

During early life, as our organs develop, the epigenome guides development and changes along with normal developmental milestones. Exposure to endocrine-disrupting chemicals (EDCs) during this process can cause widespread reprogramming of this "software," and the reprogramming persists for the life of the individual. The window of vulnerability for this reprogramming depends on the organ and may fall anywhere from development in the womb to childhood and adolescence, according to how long the organ's normal development lasts.

"In this study, we found that even brief exposure to certain chemicals while the liver is developing, prematurely aged the liver epigenome. Exposure to these EDCs caused the young liver to acquire an adult epigenomic signature," said Dr. Cheryl Walker, professor and director of the Center for Precision Environmental Health at Baylor and lead author on the study. "However, this premature aging of the epigenome did not have an effect on health until later in life and after exposure to a high-fat diet."

In a healthy liver, the epigenome goes through a normal aging process. In this study, after exposure to an EDC, the researchers saw that this process accelerated: a 6-day-old rat had the epigenome they would normally see in an adult rat.

"The effect of this change on metabolic function wasn't immediate; instead, it was like a ticking time bomb, which was only ignited when we switched the animals to a Western-style diet, high in fat, sugar and cholesterol," said Walker, who also is the Alkek Presidential Chair in Environmental Health at Baylor.

Rats that were exposed early to an EDC and later to a Western-style diet were found to be more susceptible to metabolic dysfunction than those that had the same EDC exposure but were kept on a healthy diet. Those that remained on a healthy diet, despite the fact that their epigenome had been reprogrammed, did not show the same changes in expression of genes that control metabolism, or the accumulation of lipids in their serum, seen in rats on the diet high in fat, sugar and cholesterol.

"This study shows us how environmental exposures affect our health and disease susceptibility, both early and later in life," Walker said. "It also shows us that some people may be more adversely affected by a high-fat diet as adults than others due to environmental exposures they experienced earlier in their life."

Read all the details of this work in Nature Communications.

Featured as the June paper of the month by the National Institute of Environmental Health Sciences (NIEHS), this work was supported as part of the NIEHS multi-phased Toxicant Exposures and Responses by Genomic and Epigenomic Regulators of Transcription (TaRGET) Program. While these findings are only in lab models at this time, researchers hope this and similar studies being conducted by the TaRGET II Consortium can lead the way to identifying biomarkers to help better predict who is at risk for metabolic dysfunction such as fatty liver disease, diabetes or heart disease, and allow for more precise and early interventions.

Credit: 
Baylor College of Medicine

The nexus between economic inequality and social welfare

Equity (or its counterpart, inequity) plays a fundamental role in the evaluation of the different dimensions of social welfare. But how can we consider and compare those dimensions? Traditionally, equity is considered and compared across individuals, whether within national boundaries or across countries. A second dimension is time: the distribution of resources across generations raises questions of savings, intergenerational distribution stemming from capital dynamics, and the intertemporal use of natural resources. A third dimension ("states of the world," or possible future worlds) accounts for the uncertainty affecting the realizations of random variables.

While economics research has historically considered these fundamentally different dimensions - individuals, time, and states of the world - separately, it is now clear that the different potential dimensions of "inequity" (i.e., unequal distribution of resources in a particular dimension) are potentially closely intertwined: inequality between contemporaneous individuals might be correlated with inequity between generations, uncertainty might affect individuals differently, and so on. Focusing on one dimension of inequity in isolation therefore runs the risk of neglecting potentially important interaction effects.

A new paper just published in the Journal of Economic Surveys revisits the concept of inequity - in the sense of unequal distributions - across individuals, time, and states of the world using a unified framework that generalizes the standard approach typically used to aggregate the different dimensions of social welfare. The study, co-authored by Johannes Emmerling, senior scientist at the CMCC Foundation and head of the Integrated Assessment Modeling Unit at EIEE, proposes a general measure of welfare as "equity equivalents" and a corresponding inequity index.
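
To give a concrete sense of the formalism: a standard Atkinson-style definition of the equity equivalent, the textbook form that the paper generalizes rather than the authors' exact notation, runs as follows. For a distribution (x_1, ..., x_N) of consumption, the equity equivalent x_e is the common consumption level that yields the same welfare as the actual distribution:

x_e = \left( \frac{1}{N} \sum_{i=1}^{N} x_i^{1-\gamma} \right)^{1/(1-\gamma)}, \qquad I = 1 - \frac{x_e}{\bar{x}},

where \gamma \neq 1 is the inequality aversion parameter and \bar{x} is mean consumption. The inequity index I is zero under perfect equality and rises as the distribution becomes more unequal. Applying such aggregators successively across individuals, time, and states of the world is where the paper's finding that the order of aggregation matters comes into play.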

This generalized framework enables researchers to bring together different concepts that had been investigated separately in previous research.

"Unequal distribution of consumption or income", explains Johannes Emmerling, "comes in different 'dimensions': spatial, or across individuals within a country or in different countries; temporal between different generations, or in different 'states of the world' or uncertain worlds in which we could possibly live in the future. The aggregation of and comparison between individuals in these dimensions is crucial for studying issues with global, uncertain, and long-term consequences, such as climate change. Our study shows how inequity in these dimensions can be treated in a similar and analytical equivalent way. Moreover, we allowed for different preferences towards inequality in different dimensions, and found out that the order of aggregation across them matters for the evaluation of economic and environmental policies."

The study highlights that people tend to evaluate inequality differently in different dimensions: people tend to be very much concerned about the future (so we have a strong preference for giving something to future generations), while being less concerned about current inequality (e.g., between people living in different countries with different income levels). Moreover, people tend to have a higher degree of inequity aversion in terms of uncertainty than in terms of inequality and intertemporal distribution.

Climate change is a classic example that combines the three dimensions of individuals, time, and states together: issues that have been raised in this context include intergenerational inequity (e.g., the social discount rate), the notion of inequality and distributional justice, and the role of (deep) uncertainty together with the related idea of a precautionary principle. The common feature across these seemingly unrelated concepts is that losses and benefits of given policies need to be compared along different dimensions. "It's not obvious," adds J. Emmerling, "how to take inequality into account in climate change evaluation, but our research underlines the importance of inequality in the evaluation of long-term climate policies."

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Experts analyze options for treating multiple sclerosis-related cognitive impairment

image: Dr. DeLuca, an expert in cognitive rehabilitation research, is senior vice president for Research and Training at Kessler Foundation. He is professor of Neurology and Physical Medicine and Rehabilitation at Rutgers New Jersey Medical School, and president-elect of the National Academy of Neuropsychology.

Image: 
Kessler Foundation

East Hanover, NJ. June 16, 2020. Experts in cognitive research evaluated the status of available treatments as well as promising strategies for treating cognitive deficits in multiple sclerosis. The article, "Treatment and management of cognitive dysfunction in patients with multiple sclerosis", was published in Nature Reviews Neurology on May 5, 2020 (doi: 10.1038/s41582-020-0355-1). The authors are John DeLuca, PhD, and Nancy Chiaravalloti, PhD, of Kessler Foundation, and Brian Sandroff, PhD, of the University of Alabama at Birmingham.

Cognitive dysfunction is a common, disabling symptom of multiple sclerosis, affecting two-thirds of patients. These individuals can have difficulties managing finances, performing household tasks, and functioning in the community and the workplace. Although the impact on daily life may be profound, the diagnosis and management of cognitive dysfunction in this population remains inadequate. The authors provided detailed analyses of different approaches to treatment, including cognitive rehabilitation, exercise training, and pharmacotherapy, and the important contributions of brain neuroimaging to advances in this field.

Over the past decade, research activity in cognitive rehabilitation has increased in the population with MS. There is greater emphasis on cognitive screening and assessment, and some standardized treatment protocols are available. "Evidence suggests that cognitive rehabilitation is effective in MS-related cognitive dysfunction, and may confer long-lasting effects," said John DeLuca, PhD, senior VP of Research and Training at Kessler Foundation, and a co-author of the article. "Access to cognitive rehabilitation therapy is likely to increase as remote options for delivery become more widely accepted, such as programs for home computers and telerehabilitation services."

Exercise training is an active area of MS research that shows promise for enhancing cognitive function and effecting positive change in the everyday lives of people living with MS. With improvements in methodology, this line of research will support consideration of exercise as the standard of care for individuals with MS. "It is critical that larger scale studies include participants with MS, including progressive MS, and target select cognitive outcomes," Dr. DeLuca noted. To develop treatment protocols, the timing, dosage and duration of exercise need to be determined. "As studies continue to evolve, clinical applications of exercise recommendations are likely to be implemented within the next ten years," he predicted.

The authors found that current pharmacotherapeutic approaches were of limited benefit for the cognitive symptoms of MS. To date, none of the available medications or disease-modifying therapies for MS are indicated for the treatment of cognitive deficits. Dr. DeLuca addressed the fundamental challenge to trials of pharmaceutical agents: "To determine the efficacy of a pharmacologic intervention for cognitive dysfunction, randomized controlled trials need to include cognition among their primary outcomes."

Credit: 
Kessler Foundation

Off-the-shelf tool for making mouse models of COVID-19

image: Researchers led by Paul McCray and Stanley Perlman at the University of Iowa have created an off-the-shelf tool that allows labs to create their own COVID-19 mouse model within a matter of days.

Image: 
University of Iowa Health Care

Until there are effective treatments or vaccines, the COVID-19 pandemic will remain a significant threat to public health and economies around the world. A major hurdle to developing and testing new anti-viral therapies and vaccines for COVID-19 is the lack of good, widely available animal models of the disease.

Researchers at the University of Iowa Carver College of Medicine and Medical University, Guangzhou, in China, have developed a simple tool to overcome that bottleneck. The researchers have created a gene therapy approach that can convert any lab mouse into one that can be infected with SARS-CoV-2 and develops COVID-like lung disease. The international team, led by Paul McCray, MD, and Stanley Perlman, PhD, at the UI, and Jincun Zhao, PhD, at Medical University, Guangzhou, has made the gene therapy vector freely available to any researchers who want to use it.

"There is a pressing need to understand this disease and to develop preventions and treatments," says McCray, UI professor of pediatrics, and microbiology and immunology. "We wanted to make it as easy as possible for other researchers to have access to this technology, which allows any lab to be able to immediately start working in this area by using this trick."

The "trick" is the use of an adenovirus gene therapy vector that is inhaled by the mice to deliver the human ACE2 protein into mouse airway cells. This is the protein that SARS-CoV-2 uses to infect cells. Once the mouse airway cells express the hACE2 protein, the mice become susceptible to infection with SARS-CoV-2 and they develop COVID-19-like lung symptoms. Although the disease is not fatal in the mice, the animals do get sick, losing weight and developing lung damage. Importantly, the vector is readily adaptable to any strain of mice (and other lab animals), which means research teams can rapidly convert mice with specific genetic traits into animals that are susceptible to SARS-Cov-2, allowing them to test whether those traits influence the disease.

Reporting in Cell, the researchers showed that mice treated with this gene therapy could be used to evaluate a vaccine and several potential COVID-19 therapies, including a preventative strategy known as poly I:C, which boosts the innate immune response, convalescent plasma from recovered COVID-19 patients, and the anti-viral drug remdesivir. In each case, the therapies prevented weight loss, reduced lung disease, and increased the speed of virus clearance in the mice. The team also showed the mice are useful for studying important immune responses involved in clearing the SARS-CoV-2 virus.

Mice are the most commonly used experimental animal for studying human disease in the lab because they are accessible, inexpensive, and easy to use. They are also one of the easier animal models to use in biosafety level three environments, which are needed for work on COVID-19. However, due to differences between the human and mouse ACE2 protein, wild-type mice are not susceptible to the SARS-CoV-2 virus.

The gene therapy vector is essentially an off-the-shelf tool that allows labs to create their own COVID-19 mouse model within a few days. McCray, Zhao, and Perlman developed this approach in 2014, when Zhao was a postdoctoral researcher in Perlman's UI lab, to create mouse models of MERS.

"You can create these mice very quickly. You don't have to breed the strain, which is very time consuming and expensive," McCray explains. "We think this technology will be useful for investigating COVID-19 lung disease and rapidly testing interventions that people think are promising for treating or preventing COVID-19."

Credit: 
University of Iowa Health Care

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

1. Promoting Better Clinical Trials and Drug Information as Public Health Interventions for the COVID-19 Emergency in Italy

Authors from the Italian Medicines Agency (AIFA) suggest that misinformation spread by non-peer-reviewed articles and press releases of small clinical trials, coupled with the general amplification and uncritical reporting of "potential cures," led physicians to use many drugs off label during the early phases of the COVID-19 pandemic with high expectations of their potential benefit. The authors describe lessons learned to counteract misleading information and urge the research community to do high-quality, informative multi-group trials. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-3775.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Antonio Addis, PhD, can be reached directly at agmaddis@gmail.com.

2. COVID-19 Clinical Trials: Improving Research Infrastructure and Relevance

The COVID-19 pandemic is frequently cited as an event that will permanently change the way we do many things, such as educate, work, and provide medical care. According to authors from the Perelman School of Medicine, University of Pennsylvania, it also affords an opportunity to rethink the way we do clinical research. The authors provide recent examples of how coordinated efforts may benefit research into COVID-19. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-2959.

Media contacts: A PDF for this article is not yet available. Please click the link to read full text. The lead author, Stephen Kimmel, MD, MSCE, can be reached directly at stevek@pennmedicine.upenn.edu.

Credit: 
American College of Physicians

Thinking small: New ideas in the search for dark matter

image: This Hubble Space Telescope composite image shows a ghostly "ring" of dark matter in a galaxy cluster.

Image: 
NASA, ESA, M.J. Jee and H. Ford (Johns Hopkins University)

Since the 1980s, researchers have been running experiments in search of particles that make up dark matter, an invisible substance that permeates our galaxy and universe. Dubbed dark matter because it gives off no light, this substance, which constitutes more than 80 percent of the matter in our universe, has been shown repeatedly to influence ordinary matter through its gravity. Scientists know it is out there but do not know what it is.

So researchers at Caltech, led by Kathryn Zurek, a professor of theoretical physics, have gone back to the drawing board to think of new ideas. They have been looking into the possibility that dark matter is made up of "hidden sector" particles, which are lighter than particles proposed previously, and could, in theory, be found using small, underground table-top devices. In contrast, scientists are searching for heavier dark matter candidates called WIMPs (weakly interacting massive particles) using large-scale experiments such as XENON, which is installed underground in a 70,000-gallon tank of water in Italy.

"Dark matter is always flowing through us, even in this room" says Zurek, who first proposed hidden sector particles over a decade ago. "As we move around the center of the galaxy, this steady wind of dark matter mostly goes unnoticed. But we can still take advantage of that source of dark matter, and design new ways to look for rare interactions between the dark matter wind and the detector."

In a new paper accepted for publication in the journal Physical Review Letters, the physicists outline how the lighter-weight dark matter particles could be detected via a type of quasiparticle known as a magnon. A quasiparticle is an emergent phenomenon that occurs when a solid behaves as if it contains weakly interacting particles. Magnons are a type of quasiparticle in which electron spins--which act like little magnets--are collectively excited. In the researchers' idea for a table-top experiment, a magnetic crystalline material would be used to look for signs of excited magnons generated by dark matter.

"If the dark matter particles are lighter than the proton, it becomes very difficult to detect their signal by conventional means," says study author Zhengkang (Kevin) Zhang, a postdoctoral scholar at Caltech. "But, according to many well-motivated models, especially those involving hidden sectors, the dark matter particles can couple to the spins of the electrons, such that once they strike the material, they will induce spin excitations, or magnons. If we reduce the background noise by cooling the equipment and moving it underground, we could hope to detect magnons generated solely by dark matter and not ordinary matter."

Such an experiment is only theoretical at this point but may eventually take place using small devices housed underground, likely in a mine, where outside influences from other particles, such as those in cosmic rays, can be minimized.

One telltale sign of a dark matter detection in the table-top experiments would be changes to the signal that depend on the time of day. That is because the magnetic crystals that would be used to detect the dark matter can be anisotropic, meaning their atoms are naturally arranged in such a way that they tend to interact with the dark matter more strongly when it comes in from certain directions.

"As Earth moves through the galactic dark matter halo, it feels the dark matter wind blowing from the direction into which the planet is moving. A detector fixed at a certain location on Earth rotates with the planet, so the dark matter wind hits it from different directions at different times of the day, say, sometimes from above, sometimes from the side," says Zhang.

"During the day, for example, you may have a higher detection rate when the dark matter comes from above than from the side. If you saw that, it would be pretty spectacular and a very strong indication that you were seeing dark matter."

The researchers have other ideas about how dark matter may reveal itself, in addition to magnons. They have proposed that the lighter dark matter particles could be detected via photons, as well as with another type of quasiparticle called a phonon, which arises from vibrations in a crystal lattice. Preliminary experiments based on photons and phonons are underway at UC Berkeley, where the team was based before Zurek joined the Caltech faculty in 2019. The researchers say that using these multiple strategies to look for dark matter is crucial because they complement each other and would help confirm one another's results.

"We're looking into new ways to look for dark matter because, given how little we know about dark matter, it's worth considering all the possibilities," says Zhang.

Credit: 
California Institute of Technology

Irregular findings common in knees of young competitive alpine skiers

image: Sagittal proton density-weighted MRI scans in two 15-year-old female control participants without a distal femoral cortical irregularity demonstrate measurement of the tendon attachment position. Images show the craniocaudal position (white line) of the tendon attachment, which is defined as the distance from the level of the joint space to the most cranial attachment of tendon slips (arrowheads) for the (a) medial and (b) lateral head of the gastrocnemius muscle (*). Note that the tendon of the medial gastrocnemius head in a shows two prominent tendon slips.

Image: 
Radiological Society of North America

OAK BROOK, Ill. - Bony lesions on the lower part of the thigh bone near the knee are a common but benign finding on MRI in young alpine skiers and should not be confused with more serious conditions, according to a study from Switzerland published in the journal Radiology.

Routine knee MRI of adolescents often reveals tumor-like irregularities in the bone of the distal femur, the part of the thigh bone right above the knee. These irregularities affect the cortical bone, the dense outer surface of the bone. For that reason, they are referred to as distal femoral cortical irregularities (DFCI). DFCI often lead to diagnostic uncertainty because they can be confused with more serious conditions like cancer or infection.

With an increasing number of MRI examinations performed in young patients, DFCI detection is likely to increase, as MRI has a higher sensitivity for detecting lesions than X-rays do, according to study lead author Christoph Stern, M.D., a musculoskeletal radiologist at Balgrist University Hospital in Zürich, Switzerland.

"Our goal was to better understand the pathogenesis of DFCI and to reduce uncertainty in the diagnosis of this benign condition, which should not be mistaken for malignancy," he said.

In the first study of its kind, Dr. Stern and colleagues investigated the prevalence of DFCI in youth competitive alpine skiers compared to a control group of young adults. Competitive skiers are exposed to periodic intense physical activity and loading patterns to the knee joint that are likely to affect the bones and connective tissue of the joint.

The researchers compared the knee MRIs of 105 youth competitive alpine skiers with those of 105 controls of the same age group collected from 2014 to 2019. They looked for the presence of DFCI at two tendon-bone attachment areas: the ones between the gastrocnemius--the major muscle of the calf--and the femur, and the ones between the adductor magnus muscle of the inner thigh and the femur.

A DFCI was found in more than half of the youth competitive alpine skiers (58.1%), compared with slightly more than a quarter of the controls (26.7%).
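
To gauge how unlikely a gap of that size would be under chance, one can run a quick two-proportion test on the reported figures (an illustration on rounded counts, not a statistic from the paper): 58.1% of 105 skiers is about 61 athletes, versus 26.7% of 105 controls, about 28.

import math

skiers_dfci, skiers_n = 61, 105      # ~58.1% of the skier group
controls_dfci, controls_n = 28, 105  # ~26.7% of the control group

p1 = skiers_dfci / skiers_n
p2 = controls_dfci / controls_n
p_pooled = (skiers_dfci + controls_dfci) / (skiers_n + controls_n)

# Standard two-proportion z-test.
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / skiers_n + 1 / controls_n))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided

print(f"prevalence: skiers {p1:.1%}, controls {p2:.1%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.1e}")  # z ~ 4.6: far beyond chance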

DFCIs were observed at the attachment sites of tendons, predominantly at the inside head of the gastrocnemius muscle for both skiers and controls.

"DFCI are benign lesions, and occurrence around the knee joint is associated with repetitive mechanical stress to the attachment sites of tendons into bone," Dr. Stern said. "DFCI should not be mistaken for malignancy and are not associated with intraarticular damage."

The most common theory behind the DFCI occurrence is that of a "tug lesion" as a result of repetitive mechanical stress where the tendons attach to the bone.

"According to histologic results of DFCI, which have identified a reactive process, we assume there must be increased bone remodeling with resorption and fibrous proliferation at the site of the tendon attachment that is pronounced in young competitive athletes," Dr. Stern said.

Similar findings are likely in other groups of athletes who place repetitive strain on the gastrocnemius muscles, such as those whose sports involve jumping, Dr. Stern said.

"This would apply to basketball and volleyball players," Dr. Stern said. "It might also apply for weightlifters who are exposed to similar pretension states and eccentric loading of the gastrocnemius muscles as skiers during exercise."

The researchers recommend that invasive diagnostic procedures such as biopsies be avoided in these patients due to the benign, usually asymptomatic, and over time self-limiting character of typical DFCIs at the attachment sites of tendons. However, follow-up with MRI might be justified, they said, especially in the rare case where the DFCI causes pain and other symptoms.

Credit: 
Radiological Society of North America

Hunting in savanna-like landscapes may have poured jet fuel on brain evolution

video: Prof. Malcolm MacIver explains how Goldilocks levels of clutter helped give rise to planning circuits in the brain.

Image: 
Northwestern University

New simulations show that living among land's rich topography forced animals to plan rather than act based on habits

Planning abilities increased survival rates, so evolution might have selected for brain circuitry that allowed animals to consider different futures

'It could explain why we can go out for seafood, but seafood can't go out for us'

EVANSTON, Ill. -- Ever wonder how land animals like humans evolved to become smarter than their aquatic ancestors? You can thank the ground you walk on.

Northwestern University researchers recently discovered that complex landscapes -- dotted with trees, bushes, boulders and knolls -- might have helped land-dwelling animals evolve higher intelligence than their aquatic ancestors.

Compared to the vast emptiness of open water, land is rife with obstacles and occlusions. By providing prey with spaces to hide and predators with cover for sneak attacks, the habitats possible on land may have helped give rise to planning strategies -- rather than those based on habit -- for many of those animals.

But the researchers found that planning did not give our ancestors the upper hand in all landscapes. The researchers' simulations show there is a Goldilocks level of barriers -- not too few and not too many -- to a predator's perception, in which the advantage of planning really shines. In simple landscapes like open ground or packed landscapes like dense jungle, there was no advantage.

"All animals -- on land or in water -- had the same amount of time to evolve, so why do land animals have most of the smarts?" asked Northwestern's Malcolm MacIver, who led the study. "Our work shows that it's not just about what's in the head but also about what's in the environment."

And, no, dolphins and whales do not fall into the category of less intelligent sea creatures. Both are land mammals that recently (evolutionarily speaking) returned to water.

The paper will be published June 16 in the journal Nature Communications.

It is the latest in a series of studies conducted by MacIver that advance a theory of how land animals evolved the ability to plan. In a follow-up study now underway with Dan Dombeck, a professor of neurobiology at Northwestern, MacIver will put the predictions generated by this computational study to the test through experiments with small animals in a robotic reconfigurable environment.

MacIver is a professor of biomedical and mechanical engineering in Northwestern's McCormick School of Engineering and a professor of neurobiology in the Weinberg College of Arts and Sciences. Ugurcan Mugan, a Ph.D. candidate in MacIver's laboratory, is the paper's first author.

Simulating survival

In previous work, MacIver showed that when animals started invading land 385 million years ago, they gained the ability to see around a hundred times farther than they could in water. MacIver hypothesized that being a predator or prey able to see so much farther might require more brain power than hunting through empty, open water. However, the supercomputer simulations for the new study (the equivalent of 35 years of calculations on a single PC) revealed that although seeing farther is necessary to give planning an advantage, it is not sufficient. Instead, only a combination of long-range vision and landscapes with a mix of open areas and more densely vegetated zones resulted in a clear win for planning.

"We speculated that moving onto land poured jet fuel on the evolution of the brain as it may have advantaged the hardest cognitive operation there is: Envisioning the future," MacIver said. "It could explain why we can go out for seafood, but seafood can't go out for us."

To test this hypothesis, MacIver and his team developed computational simulations to test the survival rates of prey being actively hunted by a predator under two different decision-making strategies: Habit-based (automatic, such as entering a password that you have memorized) and plan-based (imagining several scenarios and selecting the best one). The team created a simple, open world without visual barriers to simulate an aquatic world. Then, they added objects of varying densities to simulate land.
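
The setup lends itself to a toy reimplementation, sketched below. This is a cartoon of the comparison, not the authors' supercomputer model: prey flees across a grid toward a refuge using either a fixed habit policy or a line-of-sight-aware policy (a one-step stand-in for true planning), while a predator chases whatever it can see. The grid size, speeds and scoring are all illustrative assumptions; the point is the mechanism, not the exact numbers.

import random

N = 15                                  # grid is N x N
REFUGE, START, PRED0 = (N - 1, N - 1), (0, 0), (N - 1, 0)

def make_obstacles(density):
    clear = {REFUGE, START, PRED0}
    return {(x, y) for x in range(N) for y in range(N)
            if random.random() < density and (x, y) not in clear}

def visible(a, b, obstacles):
    # Coarse line-of-sight test: sample grid cells along the segment a-b.
    steps = max(abs(a[0] - b[0]), abs(a[1] - b[1]), 1)
    return not any((round(a[0] + (b[0] - a[0]) * i / steps),
                    round(a[1] + (b[1] - a[1]) * i / steps)) in obstacles
                   for i in range(1, steps))

def moves(pos, obstacles):
    cand = [(pos[0] + dx, pos[1] + dy)
            for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0))]
    return [p for p in cand
            if 0 <= p[0] < N and 0 <= p[1] < N and p not in obstacles]

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def survives(density, planning):
    obstacles = make_obstacles(density)
    prey, pred = START, PRED0
    for _ in range(6 * N):
        seen = planning and visible(prey, pred, obstacles)
        # Habit: head straight for the refuge. Planning stand-in: when the
        # predator is visible, also weigh moves that keep it far away.
        prey = max(moves(prey, obstacles),
                   key=lambda p: -dist(p, REFUGE)
                   + (2 * dist(p, pred) if seen else 0))
        if prey == REFUGE:
            return True
        if prey == pred:
            return False
        # The predator chases the prey when it can see it, else it wanders.
        if visible(pred, prey, obstacles):
            pred = min(moves(pred, obstacles), key=lambda p: dist(p, prey))
        else:
            pred = random.choice(moves(pred, obstacles))
        if pred == prey:
            return False
    return False

for density in (0.0, 0.2, 0.4):
    for planning in (False, True):
        wins = sum(survives(density, planning) for _ in range(300))
        label = "plan " if planning else "habit"
        print(f"obstacle density {density:.1f}  {label}  survival {wins / 300:.0%}")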

Survival of the smartest

"When defining complex cognition, we made a distinction between habit-based action and planning," MacIver said. "The important thing about habit is that it is inflexible and outcome independent. That's why you keep entering your old password for a while after changing it. In planning, you have to imagine different futures and choose the best potential outcome."

In the simple aquatic and terrestrial environments examined in the study, survival rate was low both for prey that used habit-based actions and those that had the capability to plan. The same was true of highly packed environments, such as coral reefs and dense rainforests.

"In those simple open or highly packed environments, there is no benefit to planning," MacIver said. "In the open aquatic environments, you just need to run in the opposite direction and hope for the best. While in the highly packed environments, there are only a few paths to take, and you are not able to strategize because you can't see far. In these environments, we found that planning does not improve your chances of survival."

The Goldilocks landscape

When patches of vegetation and topography are interspersed with wide open areas similar to a savanna, however, simulations showed that planning results in a huge survival payoff compared to habit-based movements. Because planning increases the chance of survival, evolution would have selected for the brain circuitry that allowed animals to imagine future scenarios, evaluate them and then enact one.

"With patchy landscapes, there is an interplay of transparent and opaque regions of space and long-range vision, which means that your movement can hide or reveal your presence to an adversary," MacIver said. "Terra firma becomes a chess board. With every movement, you have a chance to unfurl a strategy.

"Interestingly," he noted, "when we split off from life in the trees with chimpanzees nearly seven million years ago and quickly quadrupled in brain size, paleoecology studies point to our having invaded patchy landscapes, similar to those our study highlights, as giving the biggest payoff for strategic thinking."

Credit: 
Northwestern University

Working in the sun -- heating of the head may markedly affect safety and performance

image: Spanish construction worker during water break

Image: 
Andreas Flouris

Approximately half of the global population lives in regions where heat stress affects the ability to live a healthy and productive life. It is well known that working in hot conditions, and the associated hyperthermia (rise in body temperature), may impair the ability to perform physically demanding manual work. However, the effects on cognitively dominated functions, and specifically the influence of sunlight exposure on human brain temperature and function, have not been documented.

The new study shows clear negative effects of prolonged exposure of the head to sunlight, implying that the true effects of heat stress may have been underestimated: previous studies were traditionally conducted in the laboratory, without accounting for the marked effect that solar radiation may have, in particular when the head is exposed for a prolonged period.

"The novelty of the study is that we provide evidence that direct exposure to sunlight - especially to the head - impairs motor and cognitive performance," says professor Lars Nybo, the project coordinator from Department of Nutrition, Exercise and Sports, UCPH. He continues, "Adding to this, the decline in motor and cognitive performance was observed at 38.5 degrees, which is a 1 degree lower body temperature than previous studies have shown, which is a substantial difference."

Direct sunlight to the head may affect productivity

Many workers in agriculture, construction and transport are at risk of being affected by exposure to strong sunlight, such as Europe experiences in the summer months. Postdoc Jacob Piil and professor Lars Nybo from UCPH headed this study in collaboration with colleagues from Thessaly University in Greece, and they are convinced that the findings have implications not only for the workers' health, but also for their work performance and safety:

"Health and performance impairments provoked by thermal stress are societal challenges intensifying with global warming and that is a prolonged problem we must try to mitigate. But we must also adapt solution to prevent the current negative effects when e.g. workers are exposed and this study emphasize that it is of great importance that people working or undertaking daily activities outside should protect their head against sunlight. The ability to maintain concentration and avoid attenuation of motor-cognitive performance is certainly of relevance for work and traffic safety as well as for minimizing the risks of making mistakes during other daily tasks," says associate professor Andreas Flouris from FAME Laboratory in Greece.

Taken together, these results suggest that science may have underestimated the true impact of heat stress, for example during a heat wave, as the effect of solar radiation has not been investigated before. Future studies should incorporate sunlight, as it seems to have a selective effect on the head and the brain.

These findings highlight the importance of including the effect of sunlight radiative heating of the head and neck in future scientific evaluations of environmental heat stress impacts, and specific protection of the head to minimize harmful effects.

Credit: 
University of Copenhagen - Faculty of Science

Tomography studies of coins shed light on the history of Volga Bulgaria

image: Photographs of the investigated coins of ancient Volga Bulgaria: the Samanid multidirham (a) and the Bulat-Timur dirham (b). For the images of each coin, the corresponding scales are presented. On the photograph of the multidirham, symbols "Ed1", "C", "Ed2", and "Ed3" refer to the parts of the coin for which the neutron-diffraction data were obtained.

Image: 
Kazan Federal University

Kazan Federal University, Joint Institute for Nuclear Research (Dubna, Russia), and Khalikov Institute of Archeology (Tatarstan Academy of Sciences, Kazan, Russia) are working together to study the physical properties of the coins found on the territory of former Volga Bulgaria.

This medieval state occupied a convenient position on the Volga River amidst continental trade routes. Volga Bulgaria had a developed economy and extensive trade relations with neighboring regions. Thus, local coinage was developed, and coins minted in other regions were in wide circulation as well. A large number of numismatic finds at the excavation sites of cities and settlements confirms this. The coins found contain valuable historical information. The chemical composition of the coins can point to the silver ore deposits from which they were made, help match coins to a specific historical period, and identify fakes. The internal structure of coins can give new information on the technical aspects of coin minting, such as metal cooling rate, minting methods, and surface silvering.

"We determine the chemical composition of the coins, as well as their internal structure: the presence of coatings, layers, and other compositional heterogeneities," explains co-author Bulat Bakirov. "To do this, we use the methods of neutron diffraction and neutron tomography. Neutron methods are completely non-destructive and have a number of unique features; therefore, they can be used to study especially rare and valuable archaeological artifacts."

Studies of silver coins by modern scientific methods are very beneficial. Firstly, this provides a better understanding of the structure and historical stages of the economic and technical development of medieval Volga Bulgaria. Secondly, there is the possibility of reconstructing medieval technologies, which is of great importance both for historical science and for educational purposes. Thirdly, such studies are useful for the development of the methodology of restoration and preservation of archaeological finds.

In this particular paper, a 10th-century Samanid multidirham and a 14th-century Bulat-Timur dirham were thoroughly investigated. It was established that both coins consist of a copper-silver alloy. However, the Samanid multidirham is characterized by a very high copper content--on average, about 50% of the entire volume of the coin material. The spatial distribution of silver and copper in this coin is uneven, which can be associated both with the peculiarities of the initial ore and with the processes that occur during coining. No outcrops of high silver concentration on the coin surface, which are characteristic of liquation processes, were revealed. In contrast to the multidirham, the Bulat-Timur dirham consists almost completely of silver. The volume content of copper in this coin is extremely low--5.2%. The data on this coin's composition are in good agreement with the results of investigations of the Golden Horde coins of that epoch.
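
Since the paper quotes copper content by volume while coin assays are more often quoted by mass, a quick conversion using standard handbook densities (illustrative arithmetic, not a calculation from the study) shows how the two measures relate for the fractions reported above:

RHO_CU, RHO_AG = 8.96, 10.49   # densities of pure copper and silver, g/cm^3

def copper_mass_fraction(v_cu):
    """Mass fraction of copper in a Cu-Ag mixture with copper volume fraction v_cu."""
    m_cu = v_cu * RHO_CU
    m_ag = (1.0 - v_cu) * RHO_AG
    return m_cu / (m_cu + m_ag)

for name, v_cu in (("Samanid multidirham", 0.50), ("Bulat-Timur dirham", 0.052)):
    print(f"{name}: {v_cu:.1%} Cu by volume -> "
          f"{copper_mass_fraction(v_cu):.1%} Cu by mass")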

In the near future, the team plans to expand the range of studied samples. Of great interest are other products from ferrous and non-ferrous metals, in particular, ingots and blacksmithing waste. Such studies will significantly increase the knowledge of metallurgy in this historical region.

Credit: 
Kazan Federal University

As many as six billion Earth-like planets in our galaxy, according to new estimates

image: Artist's conception of Kepler telescope observing planets transiting a distant star.

Image: 
NASA Ames/W. Stenzel

To be considered Earth-like, a planet must be rocky, roughly Earth-sized and orbiting Sun-like (G-type) stars. It also has to orbit in the habitable zones of its star--the range of distances from a star in which a rocky planet could host liquid water, and potentially life, on its surface.

"My calculations place an upper limit of 0.18 Earth-like planets per G-type star," says UBC researcher Michelle Kunimoto, co-author of the new study in The Astronomical Journal. "Estimating how common different kinds of planets are around different stars can provide important constraints on planet formation and evolution theories, and help optimize future missions dedicated to finding exoplanets."

According to UBC astronomer Jaymie Matthews: "Our Milky Way has as many as 400 billion stars, with seven per cent of them being G-type. That means fewer than six billion stars may have Earth-like planets in our Galaxy."
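
The headline figure follows directly from the numbers quoted above, spelled out here in a few lines of Python for concreteness:

stars_in_galaxy = 400e9      # Milky Way stars (upper-end estimate quoted)
g_type_fraction = 0.07       # fraction of stars that are Sun-like (G-type)
planets_per_g_star = 0.18    # Kunimoto's upper limit per G-type star

earth_like = stars_in_galaxy * g_type_fraction * planets_per_g_star
print(f"upper limit: ~{earth_like / 1e9:.1f} billion Earth-like planets")
# -> ~5.0 billion, consistent with "fewer than six billion"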

Previous estimates of the frequency of Earth-like planets range from roughly 0.02 potentially habitable planets per Sun-like star, to more than one per Sun-like star.

Typically, planets like Earth are more likely to be missed by a planet search than other types, as they are so small and orbit so far from their stars. That means that a planet catalogue represents only a small subset of the planets that are actually in orbit around the stars searched. Kunimoto used a technique known as 'forward modelling' to overcome these challenges.

"I started by simulating the full population of exoplanets around the stars Kepler searched," she explained. "I marked each planet as 'detected' or 'missed' depending on how likely it was my planet search algorithm would have found them. Then, I compared the detected planets to my actual catalogue of planets. If the simulation produced a close match, then the initial population was likely a good representation of the actual population of planets orbiting those stars."

Kunimoto's research also shed more light on one of the major outstanding questions in exoplanet science today: the 'radius gap' of planets. The radius gap shows that it is uncommon for planets with orbital periods of less than 100 days to have a size between 1.5 and two times that of Earth. She found that the radius gap exists over a much narrower range of orbital periods than previously thought. Her observational results can provide constraints on planet evolution models that explain the radius gap's characteristics.

Previously, Kunimoto searched archival data from 200,000 stars of NASA's Kepler mission. She discovered 17 new planets outside of the Solar System, or exoplanets, in addition to recovering thousands of already known planets.

Credit: 
University of British Columbia

Black hole's heart still beating

image: An illustration of a black hole, with the heartbeat signal observed in 2007 and 2018

Image: 
JIN Chichuan and NASA/GSFC

The first confirmed heartbeat of a supermassive black hole is still going strong more than ten years after first being observed.

X-ray satellite observations spotted the repeated beat after its signal had been blocked by our Sun for a number of years.

Astronomers say this is the longest-lived heartbeat ever seen in a black hole and tells us more about the size and structure of matter close to its event horizon - the region around a black hole from which nothing, including light, can escape.

The research, by National Astronomical Observatories of Chinese Academy of Sciences, China, and Durham University, UK, was published in the journal Monthly Notices of the Royal Astronomical Society.

The black hole's heartbeat was first detected in 2007 at the center of a galaxy called RE J1034+396, which is approximately 600 million light years from Earth.

The signal from this galactic giant repeated every hour and this behavior was seen in several snapshots taken before satellite observations were blocked by our Sun in 2011.

In 2018, the European Space Agency's XMM-Newton X-ray satellite was able to finally re-observe the black hole and to scientists' amazement the same repeated heartbeat could still be seen.

Matter falling on to a supermassive black hole as it feeds from the accretion disc of material surrounding it releases an enormous amount of power from a comparatively tiny region of space, but this is rarely seen as a specific repeatable pattern like a heartbeat.

The time between beats can tell us about the size and structure of the matter close to the black hole's event horizon.

Prof. Chris Done, in Durham University's Centre for Extragalactic Astronomy, collaborated on the findings with colleague Prof. Martin Ward, Temple Chevallier Chair of Astronomy.

"The main idea for how this heartbeat is formed is that the inner parts of the accretion disc are expanding and contracting," said Prof. Done. "The only other system we know which seems to do the same thing is a 100,000 times smaller stellar-mass black hole in our Milky Way, fed by a binary companion star, with correspondingly smaller luminosities and timescales. This shows us that simple scalings with black hole mass work even for the rarest types of behavior."

"This heartbeat is amazing! It proves that such signals arising from a supermassive black hole can be very strong and persistent. It also provides the best opportunity for scientists to further investigate the nature and origin of this heartbeat signal," said Dr. JIN Chichuan from the National Astronomical Observatories of the Chinese Academy of Sciences, lead author of the study.

The next step in the research is to perform a comprehensive analysis of this intriguing signal, and compare it with the behavior of stellar-mass black holes in our Milky Way.

Credit: 
Chinese Academy of Sciences Headquarters

NUS researchers uncover mysterious tanaids

image: These two tanaidacean species new to science -- Unispinosus eopacificus (top), named after the location where it was discovered, and Portaratrum birdi (bottom), named in honour of a leading tanaidacean taxonomist -- were collected by Mr Chim and Ms Tong from a depth between 4,041 and 4,227 metres during a deep-sea expedition to the Pacific Ocean in 2015.

Image: 
National University of Singapore

Tanaids are one of the most underappreciated animals in the world. These small crustaceans can be found in virtually all marine benthic habitats, from mangroves, rocky shores and coral reefs along the coasts to mud volcanoes, cold seeps and trenches in the deepest oceans. They even inhabit the shell surfaces of sea turtles, live inside gastropod shells like hermit crabs, and reside under the skin of deep-sea sea cucumbers.

When present, tanaids are often among the dominant animals in the community. Due to their sheer numbers, tanaids are likely to play important ecological roles, but information on their biology remains elusive. The knowledge gaps include answers to the most basic questions -- How many species are there? What are their names? Experts have estimated that there could be up to 57,000 tanaidacean species worldwide. Currently, however, fewer than 1,500 species have been described, and the majority of these are from temperate environments.

Research Associate Mr Chim Chee Kong and Research Assistant Ms Samantha Tong from the Tropical Marine Science Institute at the National University of Singapore (NUS) are on a quest to discover more of these nameless taxa, specifically in the relatively species-rich but poorly studied tropical Indo-Pacific.

Both researchers recently described two new species found in the abyssal polymetallic nodule fields of the eastern Pacific Ocean during a 2015 expedition. One of them was named Unispinosus eopacificus after the locality where it was discovered, and the other was named Portaratrum birdi in honour of a leading tanaidacean taxonomist. They also erected the genus Unispinosus in the same paper, which was published in the journal Zootaxa on 31 March 2020.

The discovery of these two new species is of significant importance for environmental management because they were found in the Clarion-Clipperton Zone (CCZ), an understudied area in the middle of the Pacific Ocean characterised by polymetallic nodule fields. These fields contain commercially valuable metals such as nickel, copper, and rare earth elements that formed over millions of years.

"Data on the biodiversity in this resource-rich region can allow the International Sea Authority to make well-informed decisions on whether to prioritise certain areas for conservation," explained Ms Tong.

Many tanaids new to science were also uncovered during another deep-sea expedition in South Java in 2018, and are in the process of being described by Mr Chim and Ms Tong.

With access to a large amount of local material, primarily collected during the Comprehensive Marine Biodiversity Survey conducted in 2013, the two NUS researchers have also been able to identify more tanaids in local waters. To date, Mr Chim has identified more than 20 species of tanaids from local waters, substantially expanding the current knowledge of Singapore's natural heritage. Prior to this study, only one tanaid species had been formally recorded from Singapore waters, based on specimens that were collected in the 1900s.

Last year, he and Ms Tong described an unusual tanaidacean species found in Singapore that lives inside dead barnacles, a novel microhabitat for this group of crustaceans. The findings on the newly named Xenosinelobus balanocolus were reported in the journal Zootaxa on 8 July 2019.

"Taxonomic studies are extremely time-consuming, especially for microscopic animals such as tanaids, but at the same time, they are very rewarding as the results provide the strong foundation for further scientific hypotheses to build upon," said Mr Chim, who is also a part-time doctoral student at the Department of Biological Sciences at the NUS Faculty of Science.

Credit: 
National University of Singapore