Association between chain restaurant advertising, obesity in adults

What The Study Did: Researchers examined whether changes in chain restaurant advertising spending were associated with weight changes among adults across 370 counties in the United States.

Authors: Sara N. Bleich, Ph.D., of the Harvard T.H. Chan School of Public Health in Boston, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2020.19519)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Exercise intensity not linked to mortality risk in older adults, finds trial

Exercise intensity appears to make no difference to risk of mortality among older adults, suggests a randomised controlled trial from Norway published by The BMJ today.

Physical activity has been highlighted as one of the most important actions people of all ages can take to improve health, and observational data show that the risk of early death is significantly lower in physically active individuals than in inactive ones.

Yet high quality clinical trial evidence on a potential direct (causal) relation between current advice on physical activity levels and longevity is lacking.

So an international research team set out to evaluate the effect of five years of supervised exercise training compared with recommendations for physical activity on mortality in older adults (70-77 years).

The trial involved 1,567 participants (790 women and 777 men) living in Trondheim, Norway, with an average age of 73 years. In total, 87.5% of participants reported overall good health and 80% reported a medium or high level of physical activity at the start of the trial.

Of these 1,567 participants, 400 were assigned to two weekly sessions of high intensity interval training (HIIT), 387 were assigned to moderate intensity continuous training (MICT), and 780 to follow the Norwegian guidelines for physical activity (control group), all for five years.

After five years, the overall mortality rate was 4.6% (72 participants).

The researchers found no difference in all cause mortality between the control group (4.7%, 37 participants) and combined HIIT and MICT group (4.5%, 35 participants).

They also found no differences in cardiovascular disease or cancer between the control group and the combined HIIT and MICT group.

For example, the total proportion of participants with cardiovascular disease after five years was 15.6%, with 16% (125 participants) in the control group, 15% (58 participants) in the MICT group, and 15.3% (61 participants) in the HIIT group.

The researchers point to some limitations. For example, highly active participants in the control group could have hampered finding differences between groups, and many participants were healthier than expected at the start of the study, which may have limited the potential to increase activity levels further.

However, strengths include the large number of older adults, and the long intervention period and monitoring throughout the study.

"This study suggests that combined MICT and HIIT has no effect on all cause mortality compared with recommended physical activity levels," write the researchers.

Credit: 
BMJ Group

Pregnancy complications linked to heightened risk of heart disease and stroke in later life

Pregnancy complications such as miscarriage, pre-eclampsia, diabetes in pregnancy (gestational diabetes) and pre-term birth are linked to a heightened risk of heart disease in later life, suggests an overarching (umbrella) analysis of data published by The BMJ today.

Several other factors related to fertility and pregnancy also seem to be associated with subsequent cardiovascular disease, say the researchers, including starting periods early, use of combined oral contraceptives, polycystic ovary syndrome, and early menopause.

However, a longer length of breastfeeding was associated with a reduced risk of cardiovascular disease.

Previous research has suggested that risk factors specific to women may be linked to cardiovascular disease and stroke, but clarity is lacking on the quality of the evidence and on how the findings can be translated into public health and clinical practice.

So a team of UK researchers searched relevant research databases for published systematic reviews and meta-analyses that investigated links between reproductive factors in women of reproductive age and their subsequent risk of cardiovascular disease.

A total of 32 reviews were included, evaluating multiple risk factors over an average follow-up period of 7-10 years.

The researchers found that several factors, including starting periods early (early menarche), use of combined oral contraceptives, polycystic ovary syndrome, miscarriage, stillbirth, pre-eclampsia, diabetes during pregnancy, pre-term birth, low birth weight, and early menopause, were associated with up to a twofold higher risk of cardiovascular outcomes.

Pre-eclampsia was associated with a fourfold higher risk of heart failure.

Possible explanations for these associations include family medical history, genetics, weight, high blood pressure and cholesterol levels, and chemical imbalances from use of hormonal contraceptives.

However, no association was found between cardiovascular disease outcomes and current use of progesterone-only contraceptives, use of non-oral hormonal contraceptive agents, or fertility treatment.

What's more, breastfeeding was associated with a lower risk of cardiovascular disease.

The researchers point to some limitations, such as missing data and the fact that reviews were largely based on observational evidence, so they cannot rule out the possibility that other unmeasured (confounding) factors may have had an effect.

Nevertheless, they say the evidence reported in this umbrella review suggests that, from menarche to menopause, the reproductive profile of women is associated with their future risk of cardiovascular disease.

It also provides clarity on the quality of the evidence, identifies gaps in evidence and practice, and offers recommendations that could be incorporated into guidelines, such as including reproductive risk factors in the risk assessment for cardiovascular disease, they conclude.

Credit: 
BMJ Group

Neanderthals already had their characteristic barrel-shaped rib cages at birth

Neanderthal babies were born with the characteristic barrel-shaped rib cage previously identified in adult specimens, according to an analysis of digitally reconstructed rib cages from four Neanderthal infants. The findings suggest that Neanderthals' rib cages were already shorter and deeper than those of modern humans at birth, rather than shifting their shape later in development.

While scientists have known that adult Neanderthals were heavier than modern humans, requiring significant differences in skeletal shape, few studies have compared the earliest postnatal developmental stages of Neanderthals and modern humans, owing to a lack of well-preserved fossil remains of Neanderthal children. To investigate whether the shape of this hominid's thorax changed between birth and adulthood, Daniel García-Martínez and colleagues scanned and virtually reconstructed rib cages from four young Neanderthals estimated to be about 1 to 2 weeks old, less than 4 months old, 1.5 years old, and 2.5 years old. The most complete specimen (the 1.5-year-old) also revealed that the species had relatively longer mid-thoracic ribs compared with its uppermost and lowermost ribs, and a spine folded inward toward the center of the body, forming a cavity on the outside of the back.

The researchers compared rib cage development in these specimens with a baseline for modern human development in the first three years of life, which they derived from a forensic assessment of remains from 29 humans. The Neanderthal specimens had consistently shorter spines and deeper rib cages, regardless of their age at death. García-Martínez et al. conclude that the bulky Neanderthal rib cage may have been genetically inherited, at least in part, from early Pleistocene ancestors.

Credit: 
American Association for the Advancement of Science (AAAS)

COVID-19 has a prolonged effect for many during pregnancy

Symptoms for pregnant women with COVID-19 can be prolonged, lasting two months or longer for a quarter of the women who participated in a national study led by UC San Francisco and UCLA.

In the largest study to date of COVID-19 among non-hospitalized pregnant women, researchers analyzed the clinical course and outcomes of 594 women who tested positive for the SARS-CoV-2 virus during pregnancy.

They found that the most common early symptoms for pregnant women were cough, sore throat, body aches, and fever. Half of the participants still had symptoms after 3 weeks and 25 percent had symptoms after 8 weeks. Findings appear Oct. 7, 2020, in Obstetrics & Gynecology.

"We found that pregnant people with COVID-19 can expect a prolonged time with symptoms," said senior author Vanessa L. Jacoby, MD, MAS, vice chair of research in the Department of Obstetrics, Gynecology, and Reproductive Sciences at UCSF, and co-principal investigator of the national pregnancy study. "COVID-19 symptoms during pregnancy can last a long time, and have a significant impact on health and wellbeing."

The PRIORITY study (Pregnancy CoRonavIrus Outcomes RegIsTrY) is an ongoing study in the United States for women who are pregnant or up to 6 weeks after pregnancy and have a confirmed or suspected case of COVID-19. It launched March 22, 2020.

While previous research on SARS-CoV-2 infection in pregnancy has primarily centered on hospitalized patients, the new analysis focused on ambulatory patients, who represent the overwhelming majority of adults with the virus.

Study participants tested positive between March 22 and July 10, and had a mean age of 31 years. Health care workers made up nearly a third of the cases, and participants were geographically diverse: 34 percent lived in the Northeast, 25 percent in the West, 21 percent in the South, and 18 percent in the Midwest.

Thirty-one percent of the participants were Latina, and 9 percent were Black. The average gestational age at the time of enrollment in the study was approximately 24 weeks.

The researchers found several common symptoms of COVID-19, but also that symptoms related to the virus were complicated by overlapping symptoms of normal pregnancy, including nausea, fatigue and congestion. Their findings included the following:

Primary first symptoms were cough (20 percent), sore throat (16 percent), body aches (12 percent), and fever (12 percent); by comparison, fever occurs in 43 percent of non-pregnant hospitalized patients;

Loss of taste or smell was the first symptom in 6 percent of pregnant women;

Other symptoms included shortness of breath, runny nose, sneezing, nausea, sore throat, vomiting, diarrhea, or dizziness;

60 percent of women had no symptoms after 4 weeks of illness, but for 25 percent, symptoms persisted, lasting 8 or more weeks;

The median time for symptoms to resolve was 37 days;

Medical conditions for some participants included hypertension, pregestational diabetes, asthma, cardiac disease, thyroid disease, anxiety and depression.

The authors said that data on the clinical evolution of the virus are critical in order to assess risk and guide treatment during pregnancy.

"The majority of participants in our study population had mild disease and were not hospitalized," said first author Yalda Afshar, MD, PhD, assistant professor in the Division of Maternal Fetal Medicine, Department of Obstetrics and Gynecology, at the David Geffen School of Medicine at UCLA. "Even so, it took a median of 37 days for symptoms to ease."

"Despite the potential risks of COVID-19 for pregnant people and their newborns, there are large gaps in our knowledge on the course of the disease and the overall prognosis," she said. "Our results can help pregnant people and their clinicians better understand what to expect with COVID-19 infection."

Credit: 
University of California - San Francisco

Teens diagnosed with depression show reduction in educational achievement

Teenagers who receive a depression diagnosis during their school career show a substantial decline in attainment in Year 11, new King's College London research has found.

The researchers suggest that targeted educational support for children struggling with depression might particularly benefit boys and those from deprived backgrounds, who were especially vulnerable subgroups in this study, although all children with depression might benefit from such support.

The researchers, funded by the NIHR, carried out a historical, longitudinal cohort study linking data from health and education records.

They made use of an innovative data resource held at South London and Maudsley NHS Foundation Trust, which links together child mental healthcare records and the Department for Education school records. From this they identified the primary and secondary educational attainment of young people who received a clinical depression diagnosis under the age of 18.

In their sample of 1,492 children and adolescents the median age at depression diagnosis was 15 years. The researchers compared attainment in this sample against a local group of pupils in Year 2, Year 6 and Year 11.

Study findings - results decline between school Years 6 and 11

Among the group who received a depression diagnosis, 83 percent met the expected attainment threshold of level 2 or above in Year 2, and 77 percent met the expected attainment threshold of level 4 or above in Year 6. This was similar to local levels.

However, only 45 per cent met the expected threshold of five A*-C GCSE or equivalent grades (including English and maths) in Year 11, much lower than the proportion meeting this threshold in the local reference population (53 per cent), and also in national estimates (53 per cent).

Mental health and educational support needed

First author Alice Wickersham, a PhD student at the NIHR Maudsley Biomedical Research Centre, Department of Psychological Medicine, Institute of Psychiatry, Psychology & Neuroscience, King's College London, said, "Previous research has found that, in general, depression in childhood is linked to lower school performance. But what we've observed is that a group of children and adolescents who developed depression at secondary school had performed quite well when they were in primary school. It is only when they sat their GCSEs that they tended to show a drop in their school performance, which also happened to be around the time that many of them were diagnosed. This pattern appears to be quite consistent across different genders, ethnicities, and socioeconomic groups."

"While it's important to emphasise that this won't be the case for all teenagers with depression , it does mean that many may find themselves at a disadvantage for this pivotal educational milestone. It highlights the need to pay close attention to teenagers who are showing early signs of depression. For example, by offering them extra educational support in the lead up to their GCSEs, and working with them to develop a plan for completing their compulsory education."

Dr Johnny Downs, Senior Clinical Lecturer (Honorary Consultant) in Child & Adolescent Psychiatry, Institute of Psychiatry, Psychology & Neuroscience, King's College London, and one of the senior authors, adds, "The majority of young people with emotional disorders, such as depression, do not receive treatment from mental health professionals, and so this study has two important policy implications: it demonstrates just how powerful depression can be in reducing young people's chances of fulfilling their potential, and provides a strong justification for how mental health and educational services need to work to detect and support young people prior to critical academic milestones."

"It also highlights the importance of secure data-sharing partnerships between health and educational organisations, without which we would not be able conduct these important studies and also conduct future work testing whether changes in health and education policies improve young people's lives."

Credit: 
NIHR Maudsley Biomedical Research Centre

Study confirms genetic link in cerebral palsy

image: Emeritus Professor Alastair MacLennan with Mathew Reinersten, from Adelaide, who is an ambassador for the group's cerebral palsy research.

Image: 
University of Adelaide

An international research team including the University of Adelaide has found further evidence that rare gene mutations can cause cerebral palsy, findings which could lead to earlier diagnosis and new treatments for this devastating movement disorder.

In the study published in the journal Nature Genetics researchers employed gene sequencing to examine the DNA of 250 cerebral palsy families, and compared this to a control group of almost 1800 unaffected families. They then demonstrated the impact rare gene mutations can have on movement control using a fruit fly model.

The findings have important clinical implications. They will provide some answers to parents, as well as guide healthcare and family planning, such as counselling on recurrence risk, which is often quoted as around 1 per cent but could be as high as 10 per cent when genetic risks are factored in.

Co-author of the research, Emeritus Professor Alastair MacLennan, AO, at the University of Adelaide, says the new study confirms the pioneering work of the Australian Collaborative Cerebral Palsy Research Group based at the Robinson Research Institute at the University of Adelaide.

"Cerebral palsy is a non-progressive developmental movement disorder impacting motor function, which affects approximately one in every 700 births in Australia and a similar number worldwide.

Symptoms range from mild to severe and can include intellectual disability," Emeritus Professor MacLennan said.

"Historically, cerebral palsy was considered largely the result of perinatal asphyxia - decreased oxygen to the baby's brain at birth, however this has only been in found in 8-10 per cent of cases.

"Eliminating other known causes, including premature birth and trauma at birth, this leaves a large number of cases - as many as 40 per cent in some studies - with an unknown origin."

Researchers at the University of Adelaide over many years have advocated that cerebral palsy is often caused by rare genetic variations (or mutations) which disrupt a child's control of movement and posture.

"Where previous studies have indicated underlying genetic causes in cerebral palsy, this study is the largest to date and includes in-depth statistical modelling and new controls to overcome limitations of earlier research," Emeritus Professor MacLennan said.

Co-author Professor Jozef Gecz, Channel 7 Children's Research Foundation Chair for the Prevention of Childhood Disability and Head of Neurogenetics at the University of Adelaide, says as a conservative estimate, 14 per cent of cerebral palsy families in the study had an excess of damaging genetic mutations and inherited recessive gene variations.

"Genes don't like to change; as soon as a gene is altered in any way you disturb its programing, and it can no longer perfectly perform what it's designed to do," said Professor Gecz, who is South Australia's Scientist of the Year 2019.

"Our US collaborators were able to disturb the same genes in fruit fly as found in humans, and in three out of four instances it severely altered the movement of either fruit fly larvae or adults or both.

"The gene mutations were mostly spontaneous new variants occurring in the sperm or an egg of the parents, who are otherwise not affected."

In some cases, identification of specific gene variations in individuals in the study led to new recommendations for patient health management, including treatments that would not have been initiated otherwise.

"As little as 30 years ago we were very limited in treatments for cerebral palsy, and the outlook for anyone diagnosed was grim," said Professor Gecz.

"As we come to recognise the role of genetics in cerebral palsy, we open the door for new treatments, earlier diagnosis and intervention, which could lead to greatly improved quality of life."

Other benefits of the study include a potential reduction in litigation, and evidence for further research to identify other damaging genetic variants in human DNA.

"The more we understand about the role of genetics in causing cerebral palsy, the closer we get to learning how to prevent it," Professor Gecz said.

Credit: 
University of Adelaide

The world's first successful identification and characterization of in vivo senescent cells

image: The research team generated a p16-CreERT2-tdTomato mouse model to uncover the in vivo dynamics and properties of p16high cells. Single-cell RNA-seq analyses of various tissues from early middle-aged p16-CreERT2-tdTomato mice reveal that p16high cells exhibit heterogeneous senescence-associated phenotypes, while elimination of p16high cells ameliorates steatosis and inflammation in a NASH model.

Image: 
©Makoto Nakanishi

Cell senescence is a state of permanent cell cycle arrest that was initially defined for cells grown in cell culture. It plays a key role in age-associated organ dysfunction and age-related diseases such as cancer, but the in vivo pathogenesis is largely unclear.

A research team led by Professor Makoto Nakanishi of the Institute of Medical Science, the University of Tokyo, generated a p16-CreERT2-tdTomato mouse model (*1) to characterize in vivo p16high cells (*2) at the single-cell level.

They found tdTomato-positive p16high cells detectable in all organs, which were enriched with age. They also found that these cells failed to proliferate and had half-lives ranging from 2.6 to 4.2 months, depending on the tissue examined.
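
Read as simple exponential turnover (an illustrative back-of-envelope reading, not a calculation from the paper), a half-life $T_{1/2}$ means the fraction of a labeled cohort of p16high cells still present after time $t$ is

$$ \frac{N(t)}{N(0)} = \left(\frac{1}{2}\right)^{t/T_{1/2}} $$

so with $T_{1/2} = 2.6$ months roughly $2^{-12/2.6} \approx 4\%$ of a cohort would remain after a year, while with $T_{1/2} = 4.2$ months about $2^{-12/4.2} \approx 14\%$ would.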

Single-cell transcriptomics in the liver and kidneys revealed that p16high cells were present in various cell types, though most dominant in the hepatic endothelium and in the renal proximal and distal tubule epithelia, and that these cells exhibited heterogeneous senescence-associated phenotypes.

Further, elimination of p16high cells ameliorated nonalcoholic steatohepatitis-related hepatic lipidosis and immune cell infiltration.

These results were published in Cell Metabolism on September 18, 2020.

There was a variety of senescent cells in the kidney, lung, liver, heart and brain

According to the research team, tamoxifen (TAM, *3) was administered to middle-aged mice to investigate the location of senescent cells. They found that they could detect these cells in all the organs they investigated, including the kidney, lung, liver, heart, and brain.

In addition, they investigated how senescent cell presence changed with age, and found that individual senescent cells did not proliferate, but the number of senescent cells in all organs increased significantly with aging.

It was also shown that non-alcoholic steatohepatitis (NASH, *4) significantly improved when senescent cells were removed from the liver and kidneys. This is an interesting result from the perspective of NASH prevention and treatment.

For details of the research, please see the paper.

Contribution to the further elucidation of the causes of human aging and the development of anti-aging therapies

These results have shown that senescent cells in vivo are diverse depending on the type of progenitor cell and the stimulus.

Their new mouse model and single-cell analysis also provide a powerful resource for discovering previously unidentified senescence functions in vivo.

Lead scientist Professor Nakanishi said, "These are the first results in the world showing the comprehensive transcriptome profiles of individual senescent cells in vivo, and we hope that they will contribute to the further elucidation of the causes of human aging and the development of anti-aging therapies."

Credit: 
The Institute of Medical Science, The University of Tokyo

Cortex-wide variation of neuronal cellular energy levels depending on the sleep-wake states

image: Intracellular ATP levels reflect the balance between energy synthesis and consuming activities in the brain. During REM sleep, brain energy synthesis activities (measured as cerebral blood flow) increased while neuronal ATP levels significantly decreased, suggesting a great increase in neuronal energy-consuming activities.

Image: 
TMIMS

It is assumed that the brain has homeostatic mechanisms to prevent the depletion of cellular energy, which is required for all cellular activities. For example, blood flow increases, and oxygen and glucose are actively delivered, in brain regions where neural firing occurs. In addition, cerebral blood flow and glucose uptake into cells fluctuate with the variations in cellular activity across the sleep-wake states of animals. Given these brain energy homeostatic mechanisms, it has been assumed that the cellular energy status of the brain is maintained constant under all physiological conditions, including across the sleep-wake states of animals. However, this has not been experimentally proven.

To investigate whether the cellular energy status in the brain of living animals is constant or varies, the researchers measured the neuronal intracellular concentration of adenosine 5'-triphosphate (ATP), the major cellular energy metabolite, using a fluorescent sensor in the brain of living mice. Using an optical fiber and wide-field microscopy, they showed a cortex-wide variation of cytosolic ATP levels in cortical neurons depending on the sleep-wake state: ATP levels were high during the waking state, decreased during non-REM sleep, and decreased profoundly during REM sleep. On the other hand, cerebral blood flow, as a metabolic parameter of energy supply, increased slightly during non-REM sleep and greatly during REM sleep, compared with the waking state. The reduction in neuronal ATP levels was also observed under general anesthesia in mice and in response to local brain electrical stimulation for neuronal activation, whereas hemodynamics was simultaneously enhanced.

Since neuronal ATP levels increase throughout the cortex in the waking state, when cellular energy demand increases, brain mechanisms for energy modulation could raise neuronal ATP levels in a cortex-wide manner in response to the sleep-to-wake transition. Meanwhile, the large reduction of neuronal ATP levels during REM sleep, despite a simultaneous increase of cerebral hemodynamics for energy supply, suggests a negative energy balance in neurons, which could be due to REM sleep-specific promotion of energy-consuming activities such as heat production. The significant reduction of ATP levels in cortical neurons during REM sleep could also serve as a novel biomarker of REM sleep. Ultimately, cerebral energy metabolism may not always meet neuronal energy demands, resulting in physiological fluctuations of intracellular ATP levels in neurons.
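
In balance-sheet terms (our shorthand, not an equation from the paper), the intracellular ATP level integrates supply against demand:

$$ \frac{d[\mathrm{ATP}]}{dt} = J_{\text{synthesis}} - J_{\text{consumption}} $$

During REM sleep the supply-side proxy (cerebral blood flow) rose while [ATP] fell, so $J_{\text{consumption}}$ must have outpaced even the increased supply - the negative energy balance referred to above.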

Credit: 
Tokyo Metropolitan Institute of Medical Science

The propagation of admixture-derived evolutionary potential

image: Examples where hybridization-derived genetic variation can and cannot spread to different areas. (a) Two regions (lakes) are connected by a single path (river). In this case, genetic variation generated through hybridization in one lake cannot spread to the other lake because natural selection by the river environment removes a large fraction of genetic variation from the population expanding along the river to the second lake. (b) Two lakes are connected by two geographically isolated rivers. While genetic variation is lost in both populations expanding along two rivers, they can form genetically distinct sub-lineages. This is because different sets of alleles can fix in the two populations if the same optimal phenotype in the river environment can be generated by multiple different combinations of genes. The secondary admixture between sub-lineages in the second lake can restore old genetic variation that had originally been generated in the other lake through the past hybridization event.

Image: 
Tohoku University

Adaptive radiation - the rapid evolution of many new species from a single ancestor - is a major focus in evolutionary biology. Adaptive radiations often show remarkable repeatability where lineages have undergone multiple episodes of adaptive radiation in distant places and at various points in time - implying their extraordinary evolutionary potential.

Now, researchers from the Swiss Federal Institute of Aquatic Science and Technology and Tohoku University have developed a novel "individual-based model" that simulates the evolution of an ecosystem of virtual organisms. This model reveals additional information about recurrent adaptive radiation and the role that hybridization plays in that process.

Hybridization - the interbreeding of different species - generates extraordinary genetic variation by mixing and recombining genetic material from different species. This enriched genetic variation can facilitate rapid adaptive radiation into various unoccupied habitats, if such habitats are available. However, hybridization generates large genetic variation only locally and for a short period, meaning the simultaneous coexistence of hybridization and unoccupied habitats is rare. Because of this, hybridization seemed unlikely to explain recurrent adaptive radiation in the same lineage.

Yet a recent genomics study on adaptive radiations of East African cichlid fish caused researchers to reevaluate what they previously thought. The study discovered that genetic variation generated through an ancient hybridization event permanently increased the evolutionary potential of the descendant lineage and facilitated recurrent adaptive radiation of the lineage in several geographically distant lakes - one of which started over 100,000 years after the hybridization event.

To address this theoretical conundrum, Kotaro Kagawa and his colleague Ole Seehausen used their individual-based model to simulate the evolutionary dynamics caused by hybridization under various geographic, ecological, and historical scenarios. Results from over 15,000 simulations provided two theoretical findings. First, simulations showed that hybridization-derived genetic variation spreads geographically and persists for long periods only if the hybrid population becomes separated into isolated sub-lineages. Subsequent secondary hybridization of the sub-lineages can potentially reestablish genetic polymorphisms from the ancestral hybridization in places far from the birthplace of the hybrid clade and long after the ancestral hybridization event. This leads to the second finding: genetic variation generated through a single hybridization event could fuel multiple independent episodes of adaptive radiation far apart in location and time, when ecological and geographic conditions promote the temporal isolation and subsequent admixture of sub-lineages.
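
The authors' model is not reproduced here, but a minimal sketch conveys the flavor of such an individual-based simulation: a hybrid swarm is founded between two species fixed for opposite alleles, and the resulting variation is eroded as the population expands through a selective "river" corridor. Everything below (the use of Python, population size, locus count, Gaussian selection on an additive trait, free recombination, parameter values) is our illustrative assumption, not the published model.

```python
# Minimal individual-based sketch (illustrative only): hybridization creates
# genetic variation; selection along a "river" corridor strips most of it away.
import numpy as np

rng = np.random.default_rng(0)
N, L = 500, 100  # individuals, biallelic loci (illustrative sizes)

def found_hybrids():
    """Hybrid swarm between two species fixed for opposite alleles."""
    species_a = rng.integers(0, 2, L)  # allele of species A at each locus
    species_b = 1 - species_a          # species B carries the alternative
    from_a = rng.random((N, L)) < 0.5  # each locus drawn from either parent species
    return np.where(from_a, species_a, species_b)

def next_generation(pop, optimum, strength=5.0):
    """Gaussian selection on an additive trait, then random mating."""
    trait = pop.mean(axis=1)                            # phenotype = mean allele value
    fitness = np.exp(-strength * (trait - optimum) ** 2)
    parents = rng.choice(len(pop), size=(N, 2), p=fitness / fitness.sum())
    from_first = rng.random((N, L)) < 0.5               # free recombination
    return np.where(from_first, pop[parents[:, 0]], pop[parents[:, 1]])

pop = found_hybrids()
print("mean per-locus variance after hybridization:", round(pop.var(axis=0).mean(), 4))

for _ in range(100):                 # expansion along the river: sustained
    pop = next_generation(pop, 0.8)  # selection toward the river optimum
print("mean per-locus variance after river passage:", round(pop.var(axis=0).mean(), 4))
```

Running two such corridors with different random seeds and then pooling the survivors mimics the secondary-admixture scenario in the figure caption: each corridor can fix a different combination of alleles achieving the same optimal phenotype, so their admixture restores part of the original hybrid variation.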

"These findings provide not only an explanation for the recurrent adaptive radiation of African cichlids but also a novel insight that exceptional genetic variation, once generated through a rare hybridization event, may significantly influence clade-wide macroevolutionary trends ranging over large spatial and temporal scales," said Dr. Kagawa.

Credit: 
Tohoku University

Molecular mechanism of cross-species transmission of primate lentiviruses

image: Cross-species transmission of primate lentiviruses leading to the emergence of HIV.

Image: 
Kei Sato

Humans are exposed continuously to the menace of viral diseases such as those caused by the Ebola virus, Zika virus and coronaviruses. Such emerging/re-emerging viral outbreaks can be triggered by cross-species viral transmission from wild animals to humans.

To achieve cross-species transmission, new hosts first have to be exposed to the virus from the old host. Next, the viruses acquire certain mutations that can be beneficial for replicating in the new hosts. Finally, through sustained transmission in the new host, the viruses adapt further, evolving into a new virus (Figure 1). At the outset of this process, however, the viruses have to overcome "species barriers" that hamper cross-species transmission. Mammals, including humans, have "intrinsic immunity" mechanisms that have diverged enough in evolution to erect species barriers to viral transmission.

HIV-1 most likely originated from related precursors found in chimpanzees and gorillas

HIV-1, the causative agent of AIDS, most likely originated from related precursors found in chimpanzees (SIVcpz) and gorillas (SIVgor), approximately 100 years ago (Figure 2).

Additionally, SIVgor most likely emerged through the cross-species jump of SIVcpz from chimpanzees to gorillas (Figure 2).

However, it remains unclear how primate lentiviruses successfully transmitted among different species. To limit cross-species lentiviral transmission, factors of cellular "intrinsic immunity", including APOBEC3 proteins, potentially inhibit lentiviral replication. In this evolutionary "arms race", primate lentiviruses have acquired their own "weapon", viral infectivity factor (Vif), to antagonize the antiviral effect of these restriction factors.

A great ape APOBEC3 protein can potentially restrict the cross-species transmission of great ape lentiviruses

A research group at The Institute of Medical Science, The University of Tokyo (IMSUT) showed that gorilla APOBEC3G potentially plays a role in inhibiting SIVcpz replication. Intriguingly, the research group demonstrated that an amino acid substitution in SIVcpz Vif, M16E, is sufficient to overcome gorilla APOBEC3G-mediated restriction.

"To our knowledge, this is the first report suggesting that a great ape APOBEC3 protein can potentially restrict the cross-species transmission of great ape lentiviruses and how lentiviruses overcame this species barrier. Moreover, this is the first investigation elucidating the molecular mechanism by which great ape lentiviruses achieve cross-species transmission", said the lead scientist, Kei Sato, Associate Professor (Principal Investigator) in the Division of Systems Virology, Department of Infectious Disease Control, IMSUT.

Credit: 
The Institute of Medical Science, The University of Tokyo

Moon's magnetic crust research sees scientists debunk long-held theory

New international research into the Moon provides scientists with insights as to how and why its crust is magnetised, essentially 'debunking' one of the previous longstanding theories.

Australian researcher and study co-author Dr Katarina Miljkovic, from the Curtin Space Science and Technology Centre, located within the School of Earth and Planetary Sciences at Curtin University, explained how the new research, published by Science Advances, expands on decades of work by other scientists.

"There are two long term hypotheses associated with why the Moon's crust might be magnetic: One is that the magnetisation is the result of an ancient dynamo in the lunar core, and the other is that it's the result of an amplification of the interplanetary magnetic field, created by meteoroid impacts," Dr Miljkovic said.

"Our research is a deep numerical study that challenges that second theory - the impact-related magnetisation - and it essentially 'debunks' it. We found that meteoroid impact plasmas interact much more weakly with the Moon compared to the magnetisation levels obtained from the lunar crust.

"This finding leads us to conclude that a core dynamo is the only plausible source of the magnetisation of the Moon's crust."

To carry out her portion of the research, Dr Miljkovic provided the team with numerical estimates of the vapour formation that occurred during large meteoroid impact bombardment on the Moon approximately 4 billion years ago.

"When we look at the Moon with the naked eye, we can see these large craters caused by ancient meteoroid impacts. They are now filled with volcanic maria, or seas, causing them to look darker on the surface," Dr Miljkovic said.

"During these impact events, the meteoroids hit the Moon at a very high speed, causing displacement, melting, and vaporisation of the lunar crust.

"My work calculated the mass and thermal energy of the vapour emitted during these impacts. That was then used as input for further calculations and investigation of the behaviour of the ambient magnetic field at the Moon, following these large impact events.

"Basically, we made a much more inclusive, high fidelity and high-resolution investigation that led to debunking of the older hypothesis."

The study's lead researcher Dr Rona Oran, a research scientist in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at the Massachusetts Institute of Technology (MIT), said the impact simulations, combined with plasma simulations, harnessed the latest developments in scientific codes and computing power, and allowed the team to perform the first simulations that could realistically capture and test this long-proposed mechanism.

Using such tools was key to allowing the team to look at many different scenarios and thereby rule out this mechanism under any feasible conditions that could have existed during the impact. This refutation could have important implications for determining what did magnetise the Moon, and even other objects in the solar system with unexplained magnetised crusts.

"In addition to the Moon, Mercury, some meteorites, and other small planetary bodies all have a magnetic crust. Perhaps other equivalent mechanical dynamo mechanisms, such as those we now believe to have been in operation on the Moon, could have been in effect on these objects as well," Dr Oran said.

Credit: 
Curtin University

Native milkweed cultivars planted by the public can support monarch butterflies and bees

image: A monarch butterfly and a bee visit a milkweed plant.

Image: 
Jim Hudgins, U.S. Fish and Wildlife Service

Millions of people plant pollinator gardens in an effort to provide monarch butterflies with food along their annual migration route from overwintering sites in the highland forests of central Mexico to summer breeding grounds in the United States and southern Canada. For the first time, entomologists studied how effective native milkweed cultivars in small gardens are at attracting and supporting monarchs - their results suggest that this can be a valuable additional food source.

Plant cultivars are natural variants of native plants that have been deliberately collected, selected, cross-bred or hybridized for desirable traits that can be maintained through propagation. Although experts generally discourage using cultivars for ecological restoration in natural habitats such as forests and wetlands, consumers find them attractive when seeking new plants that combine the attributes of natives and ornamentals. For the nursery industry, cultivars open the door to new introductions and vast market potential. Indeed, most native plants sold at garden centers are available only as cultivars, as opposed to true or "wild type" native species.

Researchers Adam Baker and Daniel Potter from the University of Kentucky College of Agriculture, Food and Environment set out to study whether or not native milkweed cultivars, planted by 'citizen ecologists', were effective in helping to support the declining monarch butterfly populations.

"Native milkweed cultivars including those selected for novel floral display, longer blooming duration, compact growth form and other consumer-attractive traits are increasingly available in wholesale nurseries and at local garden centers," said Baker. "It is important to determine whether these cultivars have the same ability as wild-type native plants to attract monarchs and bees and contribute to effective ecological gardens."

In the study, Baker, Potter and their collaborators planted six urban gardens each containing two species of native wild-type milkweeds (swamp and butterfly), along with three cultivars of each species. They monitored them for monarch and bee visitation for two summers. In both years, monarchs laid many more eggs on swamp milkweed than on butterfly milkweed, despite both species being equally suitable food for the caterpillars. Importantly, in both milkweed species, they found the native cultivars were just as attractive and suitable for monarchs as the respective wild-type counterparts from which those cultivars were derived.

"Previous research has shown that monarch butterflies tend to lay more eggs on taller milkweeds than on shorter ones," Potter said. "We think that is what has happened in our study because swamp milkweed is significantly taller than butterfly milkweed, and probably easier for the egg-laying female monarchs to find."

Milkweeds produce a lot of nectar, so besides monarchs they also attract and help to sustain native bees, honeybees, various butterflies and many other nectar-feeding insects. In the study, the scientists identified more than 2,400 bees, representing five bee families and 17 genera, visiting the milkweeds while they were in bloom. They found that swamp milkweed and its cultivars attracted proportionately more large bees, including bumble bees, carpenter bees, and honeybees, whereas butterfly milkweeds attracted proportionately more small native bees. Importantly, within each native milkweed species, the cultivars attracted similar bees as their native counterparts.

These findings suggest that the efforts of individual gardeners to plant milkweed, either wild-type native plants or native cultivars, can be helpful in supporting the declining populations of both monarch butterflies and other insects.

Credit: 
PeerJ

Study identifies brain cells most affected by epilepsy and new targets for their treatment

Epilepsy is one of the most common neurological diseases. It is caused by a malfunction in brain cells and is usually treated with medicines that control or counteract the seizures.

Scientists from the Faculty of Health and Medical Sciences, University of Copenhagen and Rigshospitalet have now identified the exact neurons that are most affected by epilepsy, some of which have never been linked to epilepsy before. The newfound neurons might contribute to epileptogenesis - the process by which a normal brain develops epilepsy - and could therefore be ideal treatment targets.

'Our findings potentially allow for the development of entirely new therapeutic approaches tailored towards the specific neurons that are malfunctioning in cases of epilepsy. This could be a breakthrough in personalized medicine-based treatment of patients suffering from epileptic seizures,' says Associate Professor Konstantin Khodosevich from the Biotech Research & Innovation Center (BRIC), Faculty of Health and Medical Sciences.

A major step towards more effective drugs

This is the first study to investigate how every single neuron in the epileptic zone of the human brain is affected by epilepsy. The researchers analyzed more than 117,000 neurons, making this the largest single-cell dataset for a brain disorder published so far.

Neurons were isolated from tissue resected from patients undergoing surgery as part of the Danish Epilepsy Surgery Programme at Rigshospitalet in Copenhagen.

'These patients continue to have seizures despite the best possible combination of anti-seizure drugs. Unfortunately, this is the case for 30-40% of epilepsy patients. Active epilepsy imposes serious physical, cognitive, psychiatric and social consequences on patients and families. A more precise understanding of the cellular mechanisms behind epilepsy could be a major step forward for developing drugs specifically directed against the epileptogenic process, compared with the current mode of action of reducing neuronal excitability in general throughout the brain,' says Associate Professor Lars Pinborg, head of the Danish Epilepsy Surgery Programme at Rigshospitalet.

From 'neuronal soup' to single cell analysis

The study from the Khodosevich Group differs from previous work by using single-cell analysis. Earlier studies of neuronal behavior in epilepsy have taken a piece of the human brain and investigated all the neurons together as a group - a 'neuronal soup'. With this approach, diseased cells and healthy cells are mixed together, which makes it impossible to identify potential treatment targets.

'By splitting the neurons into many thousands of single cells, we can analyze each of them separately. From this huge number of single cells, we can pinpoint exactly what neurons are affected by epilepsy. We can even make a scale from least to most affected, which means that we can identify the molecules with the most promising potential to be effective therapeutic targets', says Konstantin Khodosevich.
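
As a toy illustration of that idea (not the authors' pipeline; the data are mock counts, and the cell-type names and "shift score" are invented for illustration), one can score each neuron type separately for how strongly its expression moves between epileptic and control tissue, then rank the types from most to least affected:

```python
# Toy single-cell sketch: rank neuron types by expression shift in epilepsy.
# All data are simulated; the score is an illustrative mean |log2 fold-change|.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_genes = 2000, 300
expr = rng.poisson(2.0, (n_cells, n_genes)).astype(float)  # mock count matrix
cell_type = rng.choice(["PV-interneuron", "SST-interneuron",
                        "L5-pyramidal", "L2/3-pyramidal"], n_cells)
condition = rng.choice(["epilepsy", "control"], n_cells)

def shift_score(expr_subset, cond_subset):
    """Mean absolute log2 fold-change across genes, epilepsy vs. control."""
    epi = expr_subset[cond_subset == "epilepsy"].mean(axis=0) + 1e-9
    ctl = expr_subset[cond_subset == "control"].mean(axis=0) + 1e-9
    return np.abs(np.log2(epi / ctl)).mean()

scores = {t: shift_score(expr[cell_type == t], condition[cell_type == t])
          for t in np.unique(cell_type)}
for t, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{t:>16}: shift score {s:.3f}")  # a most- to least-affected scale
```

In the real study this ranking would be computed on measured transcriptomes with proper statistics; the point of the sketch is only that per-cell-type scoring is exactly what a pooled 'neuronal soup' analysis cannot provide.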

Next step is to study the identified neurons and how their functional changes contribute to epileptic seizures. The hope is to then find molecules that can restore epilepsy related neuronal function back to normal and inhibit seizure generation.

Expanding knowledge on underlying mechanisms of epilepsy

The study confirms the expression changes of key genes known from a number of previous studies, but also dramatically expands knowledge on the subject. Previously, gene expression studies had identified a couple of hundred genes that change in epilepsy.

'We show that the complexity of gene expression in epilepsy is much larger than previously known. It is not a matter of a handful or a few hundred genes changing. Our study proves that thousands of genes in different neurons change their expression in epilepsy. From these thousands of gene expression changes, we identified those that most likely contribute to epileptogenesis. Now it is time to prove it functionally,' says Konstantin Khodosevich.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

Faster COVID-19 testing with simple algebraic equations

A mathematician from Cardiff University has developed a new method for processing large volumes of COVID-19 tests which he believes could lead to significantly more tests being performed at once and results being returned much quicker.

Dr Usama Kadri, from the University's School of Mathematics, believes the new technique could allow many more patients to be tested using the same number of test tubes, with a lower possibility of false negatives occurring.

Dr Kadri's technique, which has been published in the journal Health Systems, uses simple algebraic equations to identify positive samples in tests and takes advantage of a testing technique known as 'pooling'.

Pooling involves grouping a large number of samples from different patients into one test tube and performing a single test on that tube.

If the tube is returned negative, then you know that nobody in that group has the virus.

Pooling can be applied by laboratories to test more samples in a shorter space of time, and works well when the overall infection rate in a certain population is expected to be low.

If a tube is returned positive then each person within that group needs to be tested once again, this time individually, to determine who has the virus.

In this instance, and particularly when it is known that infection rates in the population are high, the savings from the pooling technique in terms of time and cost become less significant.

However, Dr Kadri's new technique removes the need to perform a second round of tests once a batch is returned positive and can identify the individuals who have the virus using simple equations.

The technique works with a fixed number of individuals and test tubes, for example 200 individuals and 10 test tubes, and starts by taking a fixed number of samples from a single individual, for example 5, and distributing these into 5 of the 10 test tubes.

Another 5 samples are taken from the second individual and these are distributed into a different combination of 5 of the 10 tubes.

This is then repeated for each of the 200 individuals in the group so that no individual shares the same combination of tubes.
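
The assignment step is easy to make concrete (a sketch using the example numbers from the text; our illustration, not Dr Kadri's published code). There are C(10, 5) = 252 distinct ways to choose 5 tubes out of 10, so 200 individuals can each receive their own combination:

```python
# Assign each of 200 individuals a unique combination of 5 of the 10 tubes.
# C(10, 5) = 252 >= 200, so no two individuals share the same combination.
from itertools import combinations, islice

N_TUBES, TUBES_PER_PERSON, N_PEOPLE = 10, 5, 200
assignment = list(islice(combinations(range(N_TUBES), TUBES_PER_PERSON),
                         N_PEOPLE))  # assignment[i] = person i's 5 tubes

print(len(assignment), "unique combinations in use")     # -> 200
print("person 0 contributes to tubes:", assignment[0])   # -> (0, 1, 2, 3, 4)
```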

Each of the 10 test tubes is then sent for testing and any tube that returns negative indicates that all patients that have samples in that tube must be negative.

If only one individual has the virus, then the combination of tubes that return positive, which is unique to that individual, will directly identify them.

However, if the number of positive tubes is larger than the number of samples from each individual, in this example 5, then there should be at least two individuals with the virus.

The individuals that have all of their test tubes return positive are then selected.

The method assumes that each positive individual will have the same quantity of virus in each of their tubes, and that each positive individual will have a unique quantity of virus in their sample, different from the others.

From this, the method then assumes that there are exactly two individuals with the virus and, for every two suspected individuals, a computer is used to check whether any combination of virus quantities would reproduce the actual overall quantities of virus measured in the tests.

If the right combination is found then the selected two individuals have to be positive and no one else. Otherwise, the procedure is repeated but with an additional suspected individual, and so on until the right combination is found.
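
A minimal decoding sketch under those stated assumptions (each positive individual contributes the same, individually distinctive, viral quantity to each of their 5 tubes; implementation details such as the least-squares fit and tolerances are our choices, not the published method's):

```python
# Decode measured tube quantities back to infected individuals: try suspect
# sets of growing size and accept the first whose per-person loads reproduce
# every tube measurement. Suspects are people whose 5 tubes are all positive.
import numpy as np
from itertools import combinations

def decode(assignment, tube_quantity, tol=1e-6):
    positive = {t for t, q in enumerate(tube_quantity) if q > tol}
    suspects = [p for p, tubes in enumerate(assignment) if set(tubes) <= positive]
    for k in range(1, len(suspects) + 1):
        for combo in combinations(suspects, k):
            A = np.zeros((len(tube_quantity), k))  # tube-by-person incidence
            for j, person in enumerate(combo):
                A[list(assignment[person]), j] = 1.0
            loads, *_ = np.linalg.lstsq(A, tube_quantity, rcond=None)
            if np.all(loads > tol) and np.allclose(A @ loads, tube_quantity, atol=tol):
                return dict(zip(combo, loads))  # person -> inferred viral load
    return {}

# Demo: persons 3 and 17 are infected with distinct loads, everyone else clear.
assignment = list(combinations(range(10), 5))[:200]
tube_quantity = np.zeros(10)
for person, load in [(3, 1.0), (17, 2.5)]:
    tube_quantity[list(assignment[person])] += load
print(decode(assignment, tube_quantity))  # recovers persons 3 and 17 with their loads
```

In this demo seven tubes come back positive, more than the five any single individual touches, so at least two people must be infected, exactly as described above; the search then finds the unique pair of per-person loads consistent with every tube measurement.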

"Applying the proposed method allows testing many more patients using the same number of testing tubes, where all positives are identified with no false negatives, and no need for a second round of independent testing, with the effective testing time reduced drastically," Dr Kadri said.

So far, the method has been assessed using simulations of testing scenarios and Dr Kadri acknowledges that lab testing will need to be carried out to increase confidence in the proposed method.

Moreover, for clinical use, additional factors need to be considered including sample types, viral load, prevalence, and inhibitor substances.

Credit: 
Cardiff University