New mathematical model can more effectively track epidemics

As COVID-19 spreads worldwide, leaders are relying on mathematical models to make public health and economic decisions.

A new model developed by Princeton and Carnegie Mellon researchers improves tracking of epidemics by accounting for mutations in diseases. Now, the researchers are working to apply their model to allow leaders to evaluate the effects of countermeasures to epidemics before they deploy them.

"We want to be able to consider interventions like quarantines, isolating people, etc., and then see how they affect an epidemic's spread when the pathogen is mutating as it spreads," said H. Vincent Poor, one of the researchers on this study and Princeton's interim dean of engineering.

The models currently used to track epidemics use data from doctors and health workers to make predictions about a disease's progression. Poor, the Michael Henry Strater University Professor of Electrical Engineering, said the model most widely used today is not designed to account for changes in the disease being tracked. This inability to account for changes in the disease can make it more difficult for leaders to counter a disease's spread. Knowing how a mutation could affect transmission or virulence could help leaders decide when to institute isolation orders or dispatch additional resources to an area.

"In reality, these are physical things, but in this model, they are abstracted into parameters that can help us more easily understand the effects of policies and of mutations," Poor said.

If the researchers can correctly account for measures to counter the spread of disease, they could give leaders critical insights into the best steps they could take in the face of pandemics. The researchers are building on work published March 17 in the Proceedings of the National Academy of Sciences. In that article, they describe how their model is able to track changes in epidemic spread caused by mutation of a disease organism. The researchers are now working to adapt the model to account for public health measures taken to stem an epidemic as well.

The researchers' work stems from their examination of the movement of information through social networks, which has remarkable similarities to the spread of biological infections. Notably, the spread of information is affected by slight changes in the information itself. If something becomes slightly more exciting to recipients, for example, they might be more likely to pass it along or to pass it along to a wider group of people. By modeling such variations, one can see how changes in the message change its target audience.

"The spread of a rumor or of information through a network is very similar to the spread of a virus through a population," Poor said. "Different pieces of information have different transmission rates. Our model allows us to consider changes to information as it spreads through the network and how those changes affect the spread."

"Our model is agnostic with regard to the physical network of connectivity among individuals," said Poor, an expert in the field of information theory whose work has helped establish modern cellphone networks. "The information is being abstracted into graphs of connected nodes; the nodes might be information sources or they might be potential sources of infection."
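The abstraction Poor describes — infection spreading over a graph of connected nodes, with a transmissibility that can change as it passes from node to node — can be illustrated with a toy simulation. This is an illustrative sketch only, not the researchers' published model; the graph construction, parameter values, and mutation rule are all made up for demonstration.

```python
import random

# Illustrative sketch (not the researchers' model): an SIR-style process on
# a random contact graph in which each transmission event can slightly
# mutate the pathogen's transmissibility, so different lineages spread at
# different rates. All parameters below are arbitrary.

def simulate(n=500, avg_degree=8, beta0=0.05, mutation=0.01, seed=1):
    rng = random.Random(seed)
    # Build a simple Erdos-Renyi-style contact graph.
    p = avg_degree / (n - 1)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)
    # Each node is 'S', 'I', or 'R'; infected nodes carry a per-contact
    # transmission probability for the strain they harbour.
    state = ['S'] * n
    strain = [0.0] * n
    patient_zero = rng.randrange(n)
    state[patient_zero] = 'I'
    strain[patient_zero] = beta0
    total_infected = 1
    while any(s == 'I' for s in state):
        newly = []
        for i in range(n):
            if state[i] != 'I':
                continue
            for j in neighbors[i]:
                if state[j] == 'S' and rng.random() < strain[i]:
                    # Transmission: the strain mutates slightly when passed on.
                    newly.append((j, max(0.0, strain[i] + rng.gauss(0, mutation))))
            state[i] = 'R'  # recover after one time step
        for j, beta in newly:
            if state[j] == 'S':
                state[j] = 'I'
                strain[j] = beta
                total_infected += 1
    return total_infected

print(simulate())
```

Raising the `mutation` parameter widens the spread of per-lineage transmissibilities, so runs with identical starting conditions can produce very different outbreak sizes — the qualitative effect the researchers' model captures.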

Obtaining accurate information is extremely difficult during an ongoing pandemic when circumstances shift daily, as we have seen with the COVID-19 virus. "It's like a wildfire. You can't always wait until you collect data to make decisions - having a model can help fill this void," Poor said.

"Hopefully, this model could give leaders another tool to better understand the reasons why, for example, the COVID-19 virus is spreading so much more rapidly than predicted, and thereby help them deploy more effective and timely countermeasures," Poor said.

Credit: 
Princeton University, Engineering School

The YAP signal plays a crucial role in head-and-neck cancer onset

image: This research group developed mice with MOB1 deletions in their tongues through tamoxifen application. Seven days after the drug application, approximately a third of the mice had developed early onset head-and-neck cancer (intraepithelial tongue cancer), with all mice developing it by day 14. By day 28, all mice had invasive tongue cancer, making this the world's fastest model of cancer development in mice.

Image: 
Kobe University

Joint research between Kobe University and National Hospital Organization Kyushu Cancer Center has revealed that mice with mutations in the YAP signal pathway develop head-and-neck cancer over an extremely short period of time (world's fastest cancer onset mouse model), indicating that this pathway plays a crucial role in the onset of these cancers. This discovery may shed light on the development of new drugs for head-and-neck cancer.

This research resulted from a collaboration between a research group led by Professor SUZUKI Akira and Associate Professor MAEHAMA Tomohiko at Kobe University Graduate School of Medicine, and Dr. MASUDA Muneyuki's team at Kyushu Cancer Center.

These results were published in the American scientific journal 'Science Advances' on March 18.

Main Points:

>Deletion of MOB1 (*1, which represses YAP) in mouse tongues causes strong activation of YAP (*2), leading to the early onset of cancer (in about 1 week).

>In humans, the expression of YAP increases during the development of dysplasia (pre-cancerous lesions), prior to the onset of head-and-neck cancer. YAP continues to increase with the development and progression of cancer. This high YAP activation is linked to poor patient prognosis.

>The onset and progression of head-and-neck cancer in the mice in this study, and the proliferation of stem cells in this cancer in humans, are dependent on YAP.

>These results suggest that cancer develops when YAP activation exceeds a threshold. YAP may play a fundamental role in head-and-neck cancer onset and progression. These conclusions represent a paradigm shift in the understanding of these cancers.

>The mouse model developed in this study can be used in research to develop new drugs for head-and-neck cancer and, in addition, provides a beneficial resource for cancer research in general.

>By inhibiting YAP, the development and progression of head-and-neck cancer can be suppressed. Thus, the YAP pathway provides a good target for head-and-neck cancer treatments.

Research Background

Head-and-neck cancer in humans

Head-and-neck cancer is the sixth most common type of cancer in the world, affecting 600,000 people annually. In Japan there are around 22,500 new cases every year. The 'head and neck' region includes the oral cavity and areas of the throat (the pharynx and larynx). Among these, mouth cancers (especially tongue cancer) are the most prevalent.

It is understood that exposure to carcinogens, such as those found in cigarettes and alcohol, as well as mechanical irritation of the mucous membranes in the mouth, tooth decay and improperly fitted dentures, are risk factors for the development of head-and-neck cancer.

In addition, 15% of head-and-neck cancer is caused by Human Papillomavirus (HPV), which in particular causes oropharynx cancer.

The prognosis for patients who are HPV-positive is relatively good. Conversely, prognosis is poor for HPV-negative patients, and in most cases mutations are found in the tumor suppressor gene TP53 (p53). However, mutations in this gene alone are not sufficient to cause head-and-neck cancer. It has been thought that changes in other molecules are also necessary for cancer development, but these causes have remained elusive.

From comprehensive cancer genome analyses, it is known that PTEN/PI3K (46%), FAT1 (32%), and EGFR (15%) gene mutations are also found in HPV-negative head-and-neck cancer. However, how these molecules act in the genetic pathways of head-and-neck cancer development has not been sufficiently understood.

Mouse models of cancer

Up until now, research using mouse models of head-and-neck cancer has discovered that if both the p53 and Akt genes are mutated, 50% of mice will develop this type of cancer about 9 months after the mutation (the average mouse lifespan is 2 years).

The onset of cancer begins after many genetic mutations have accumulated (multistep carcinogenesis). Mice with a mutation in one important molecule usually develop cancer within 4 to 24 months (with the majority showing signs between 6 to 12 months).

The YAP pathway

The function of the transcriptional co-activator YAP is to turn 'on' the transcription of gene clusters related to cell growth. The LATS/MOB1 complex phosphorylates YAP, thereby excluding YAP from the nucleus, leading to the subsequent degradation of YAP proteins. In other words, MOB1 and LATS act as a 'brake' (tumor suppressor) to inhibit cell proliferation facilitated by YAP. It has been reported that in 8% of human head-and-neck cancer cases, the YAP gene is amplified and there is a connection between YAP activation, cancer progression and poor prognosis.

This research group produced mice with MOB1 deletion in their tongues (so that YAP would be intrinsically activated) in order to perform a detailed analysis in vivo of the role that the YAP pathway plays in head-and-neck cancer.

Research Methodology

Mice with MOB1 deletion exhibit rapid onset tongue cancer

This research group developed mice in which MOB1 could be deleted in the tongue via the tamoxifen-inducible Cre-loxP system (*4); applying the drug tamoxifen to the tongue triggered the deletion.

Three days after applying tamoxifen, the amount of MOB1 had barely decreased, however by day 7, the vast majority of these proteins had disappeared. At this point, a third of the mice demonstrated rapid onset head-and-neck cancer (intraepithelial tongue cancer), with all mice developing the disease by day 14. The cancer had progressed in all mice by day 28 (invasive tongue cancer). The team succeeded in developing the world's fastest mouse model of cancer onset (Figure 1). Both domestic and international patents for this model have been applied for.

This mouse model showed that head-and-neck cancer develops quickly (within a week) when the YAP pathway is strongly activated, suggesting that this pathway plays an extremely important role in head-and-neck cancer onset.

YAP activation and tumorigenic properties of the tongue epithelium in MOB1 deletion mice

The epithelial cells (on the surface of the tongues) of MOB1 deletion mice exhibited the following properties characteristic of tumor development: increased cell proliferation and cell saturation density, impaired cell polarity, low levels of apoptosis (cell death), increase in undifferentiated cells, and chromosomal instability (characterized by increases in aneuploid cells (*5), multipolar spindles (*6), and micronucleated cells). On a biochemical level, activation of YAP and a decrease in LATS proteins were evident as a result of MOB1 deletion.

The epithelial cells acquired the characteristics of tumor cells due to the YAP activation caused by the deletion of MOB1.

YAP activation in the stages of tongue cancer in humans

The development of human tongue cancer can be divided into the following stages: the normal stage, the dysplasia stage, the intraepithelial cancer stage (*8) and the invasive cancer stage (*9).

Looking at YAP activation across these stages, YAP is enhanced in the dysplasia stage, which precedes the onset of cancer, and continues to increase during the subsequent stages of cancer progression. In cases where YAP is highly activated, overall survival is decreased and the likelihood of cancer relapse is high.

In other words, YAP increases before the onset of cancer and continues to increase as the cancer develops and progresses. Accumulation of YAP is linked to poor patient prognosis.

Cancer formation is dependent on YAP when MOB1 is deleted

Invasive cancer occurred in MOB1 deletion mice. However, when both YAP and MOB1 are deleted from mice, cancer onset is halted at the dysplasia stage, showing that the onset of head-and-neck cancer is dependent on YAP (Figure 2).

Among current YAP pathway inhibitors, the SRC inhibitor Dasatinib (*10) was shown to be the most effective (SRC has been previously shown to activate YAP both directly and indirectly). Dasatinib was shown to prevent the onset of intraepithelial head-and-neck cancer in the MOB1 deletion mice. It also suppressed the development of invasive cancer in MOB1 deletion mice that had reached the intraepithelial head-and-neck cancer stage.

In human head-and-neck cancer stem cells, cell proliferation can be suppressed either by inhibiting YAP gene expression or by adding YAP inhibitors. The effect of cisplatin, which is commonly used to treat head-and-neck cancer, is augmented when YAP is suppressed.

In mice, head-and-neck cancer onset and progression was suppressed when YAP was inhibited. In the same way, it was shown that in human tongue cancer stem cells, cell proliferation was also suppressed when YAP was inhibited.

Known genetic mutations in human head-and-neck cancer and YAP activation

Genetic mutations in p53, PTEN/PI3K, FAT1, and EGFR have been identified in HPV-negative head-and-neck cancer.

This research group showed that EGF signal activation and mutations in p53, PTEN and FAT1 each play a role in YAP activation. Furthermore, YAP activation gradually increases as these genetic mutations accumulate.

Normally, cancer takes time to develop, as it is a multistep process. However, in this study, intraepithelial head-and-neck cancer developed rapidly from strong activation of YAP alone.

In conclusion, this study raises the possibility that the following process for head-and-neck cancer development takes place: A. Cancer develops when YAP activation exceeds a threshold due to the accumulation of genetic mutations in p53, PTEN/PI3K, FAT1 and EGFR (Figure 3). B. Subsequently, YAP continues to accumulate after cancer has developed, resulting in cancer progression.

Conclusion and Further Developments

YAP is frequently activated in cancer cells although genetic mutations in the YAP pathway are not frequently found. It is thought that this is why the importance of the YAP pathway in the onset of head-and-neck cancer was unclear until now.

1. YAP activation levels are high before the onset of head-and-neck cancer in humans.
2. YAP is further activated as the cancer progresses.
3. Mutations in p53, PTEN/PI3K, FAT1 and EGFR, which occur at high frequency, all activate YAP.
4. The accumulation of these molecular mutations gradually leads to high YAP activation:
A. The accumulation of genetic mutations in p53, PTEN/PI3K, FAT1 and EGFR causes YAP activation to exceed its threshold, culminating in the onset of cancer.
B. YAP continues to accumulate after the cancer onset, resulting in further cancer progression.

It is necessary to consider YAP as a basis for head-and-neck cancer onset and progression. This represents a paradigm shift in our understanding of these cancers.

In addition, it has also been shown that risk factors for head-and-neck cancer, such as cigarette smoking, mechanical irritation of mucous membranes and HPV infection, also play a part in YAP activation.

The mouse model in this study: 1. Is the fastest mouse model in the world for showing the natural onset of cancer. 2. Can be used to visualize cancer onset and progression. 3. Allows multiple cancers to develop naturally and simultaneously. 4. Allows cancer onset and progression to be analyzed in mice soon after birth, so drug tests can be conducted in a shorter period of time and with smaller quantities. These features suggest that this mouse model would be ideal not only for research into developing new treatments for head-and-neck cancer, but also for cancer research in general.

It is expected that the YAP pathway will provide a good target for drugs used in the treatment of head-and-neck cancer because inhibiting YAP not only suppresses the cancer onset but can also prevent its progression.

Researchers from all over the world, including this research group, are currently trying to find new drugs that target the YAP pathway. This study has already shown one such inhibitor (dasatinib) to be effective against head-and-neck cancer in mice, and the mouse model is expected to become an indispensable tool for evaluating candidate drugs and for head-and-neck cancer research.

Credit: 
Kobe University

Global surgical guidelines drive cut in post-surgery deaths -- study

International surgical guidelines launched today (24/3 US - 25/3 UK) will help to save thousands of lives in low- and middle-income countries (LMICs) by standardising and improving practice in surgery.

An international collaboration led by the University of Birmingham sets out nine essential recommendations that should be implemented as a priority across all hospitals world-wide in the fight against Surgical Site Infection (SSI).

SSI is the most common complication following abdominal surgery, affecting 9% of patients in high-income countries and 17% of patients in low- and middle-income countries (LMICs), causing patients to experience pain and delaying their return to normal activities such as work.

At least 4.2 million people worldwide die within 30 days of surgery each year, and half of these deaths occur in LMICs. This number of postoperative deaths accounts for 7.7% of all deaths globally, making it the third greatest contributor to deaths, after ischaemic heart disease and stroke.

More people die within 30 days of surgery annually than from all causes related to HIV, malaria, and tuberculosis combined (2.97 million deaths). It is estimated that failure to improve surgical care will cost the world economy $12.3 trillion in lost GDP by 2030.

Additional SSI-related health costs can cause financial hardship, particularly for the most vulnerable patients in LMICs. SSI is associated with a three-fold increase in the risk of death after surgery. Treatment of SSI is increasingly challenging due to the rise of antibiotic resistance, which occurs in up to 46% of LMIC patients. This places a strong focus on preventing SSI from occurring in the first place.

Published in the British Journal of Surgery, the new Global Surgery Guideline for the Prevention of Surgical Site Infection will support surgeons in putting into practice key interventions that are proven to reduce the SSI risk.

Expert surgeons representing 14 countries across Africa, Europe, Latin America, and South Asia identified nine evidence-based interventions which can be feasibly implemented worldwide at low cost.

Mr. Aneel Bhangu, Consultant Surgeon and Senior Lecturer at the NIHR Global Health Research Unit on Global Surgery at the University of Birmingham commented: "We've estimated that around 20 million patients develop surgical site infections worldwide each year following abdominal surgery, including 14.7 million LMIC patients.

"The Global Surgery Guideline for the Prevention of Surgical Site Infection has identified practical steps that all hospitals should urgently take to both reduce avoidable infections and the spread of antimicrobial resistance."

Dr. Adewale Adisa, Senior Lecturer in Surgery at the Obafemi Awolowo University in Ile-Ife, Nigeria and co-lead author commented: "High rates of SSI and antimicrobial resistance are a real worry for surgeons, particularly in LMICs. Although guidelines for prevention of SSI have previously been published, they were developed in high income countries with little thought for the specific needs of LMIC patients.

"Many of their recommendations were impractical for resource-limited hospitals, and few LMIC surgeons put them in to practice. This is the first guideline to have been led by LMIC surgeons and I believe our recommendations can be implemented immediately to benefit all patients across the world."

The recommendations encourage medical professionals to boost patient safety by:

Ensuring patients have had a full body wash with clean water and soap before operation.

Selecting antibiotic prophylaxis according to published antibiotic prescribing guidelines.

Administering antibiotic prophylaxis to all patients undergoing clean-contaminated, contaminated or dirty surgery.

Administering antibiotic prophylaxis intravenously within 60 minutes before skin incision.

Administering a repeat dose of antibiotic prophylaxis if the duration of operation is longer than the half-life of the antibiotic given.

Not routinely continuing prophylactic antibiotics beyond 24 hours after operation.

Ensuring scrub teams decontaminate their hands before surgery using antiseptic surgical solution.

Preparing the skin at the surgical site immediately before incision, using antiseptic preparation.

Providing supplemental oxygen during surgery under general anaesthetic.

In addition, a further three 'desirable' recommendations are made in the guideline. It is recognised that worldwide some hospitals may lack the necessary resources to immediately implement these interventions, in which case they should plan strategies to introduce these interventions in the future.

Credit: 
University of Birmingham

Infections still responsible for 1 in 5 childhood deaths in England and Wales

Infections are still responsible for one in five childhood deaths in England and Wales, with respiratory infections topping the league table of known causes, reveals an analysis of the most up to date figures, published online in Archives of Disease in Childhood.

This is despite sharp declines in overall childhood death rates over the past decade, helped in part by the introduction of new vaccination programmes, suggest the researchers.

The UK has one of the highest childhood death rates in Europe, and the researchers wanted to find out if anything had changed since they last analysed data on childhood deaths for 2003-5.

They drew on electronic death registrations for England and Wales, covering children from the ages of 28 days up to 15 years, for the period 2013 to 2015 inclusive.

In all, 5088 children died during 2013-15, equivalent to an annual rate of 17.6/100,000 children. This compares with 6897 deaths in 2003-5, equivalent to just under 24/100,000, representing a drop of 26% in 10 years.
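The reported decline can be sanity-checked directly from the published annual rates. Note the 2003-5 rate is given only as "just under 24/100,000"; the value 23.9 below is an assumed figure for illustration.

```python
# Quick check of the reported decline, using the published annual rates.
rate_2013_15 = 17.6   # deaths per 100,000 children per year, 2013-15
rate_2003_5 = 23.9    # assumed: article says only "just under 24/100,000"

drop_pct = (1 - rate_2013_15 / rate_2003_5) * 100
print(f"{drop_pct:.0f}%")  # roughly 26%, consistent with the reported figure
```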

The proportion of deaths caused by infections fell by 31%, overall, from 1368 of the total in 2003-5 to 951 in 2013-15, equivalent to a rate of 3.3/100,000 children. In over half these deaths (55%; 523/951) the children had an underlying condition.

Compared with 2003-5, infection-related deaths fell by 45% in infants, and by 50% in young children, but they increased in older children by 22%.

Where recorded, respiratory tract infection was the most commonly reported presenting problem, accounting for 374 out of 876 cases (just under 43%).

Nearly two thirds of deaths caused by infection had a bacterial cause (63%), while around a third were viral (34%), and 2.5% were fungal.

The average age of the children who died of an infection was just over 12 months: around 40% of these deaths occurred in infants; 26.5% in younger children; and a third (33%) in older children.

By way of explanation for their findings, the researchers note: "In healthy children, there were large reductions in pneumococcal and meningococcal deaths. The UK implemented a childhood pneumococcal immunisation programme in 2006, which led to large declines in childhood pneumococcal disease."

And a meningococcal B vaccination was included in the national infant immunisation programme in August 2015.

But they highlight that Group A Streptococcus (GAS) "has emerged as a major pathogen responsible for bacterial-related deaths during 2013-15, reflecting a sharp increase in disease incidence since 2014 and reaching 33.2 cases/100 000 person years by 2016, the highest rate in almost 50 years."

Some 15%-30% of cases are associated with varicella infection, they point out. Countries that have included varicella vaccination into their national childhood immunisation programmes have observed reductions of up to 70% in invasive infections.

"Varicella vaccination is currently not included in the UK national immunisation programme, but could potentially reduce the burden of chickenpox and GAS infections and deaths in children," they suggest.

Credit: 
BMJ Group

Study shows commonly used mouthwash could make saliva significantly more acidic

The first study looking at the effect of chlorhexidine mouthwash on the entire oral microbiome has found its use significantly increases the abundance of lactate-producing bacteria that lower saliva pH, and may increase the risk of tooth damage.

A team led by Dr Raul Bescos from the University of Plymouth's Faculty of Health gave a placebo mouthwash to subjects for seven days, followed by seven days of a chlorhexidine mouthwash.

At the end of each period, the researchers carried out an analysis of the abundance and diversity of the bacteria in the mouth - the oral microbiome - as well as measuring pH, saliva buffering capacity (the ability to neutralise acids in the mouth), lactate, glucose, nitrate and nitrite concentrations.

The research, published in Scientific Reports today, found using chlorhexidine mouthwash over the seven days led to a greater abundance of species within the phyla Firmicutes and Proteobacteria, and fewer Bacteroidetes, TM7 and Fusobacteria. This change was associated with an increase in acidity, seen in lower salivary pH and buffering capacity.

Overall, chlorhexidine was found to reduce microbial diversity in the mouth, although the authors cautioned more research was needed to determine if this reduction in diversity itself increased the risk of oral disease.

One of the primary roles of saliva is to maintain a neutral pH in the mouth, as acidity levels fluctuate as a result of eating and drinking. If saliva pH gets too low, damage can occur to the teeth and mucosa - tissue surrounding the teeth and on the inside of the mouth.

The research also confirmed findings from previous studies indicating that chlorhexidine disrupted the ability of oral bacteria to turn nitrate into nitrite, a key molecule for reducing blood pressure. Lower saliva and blood plasma nitrite concentrations were found after using chlorhexidine mouthwash, followed by a trend of increased systolic blood pressure. The findings supported earlier research led by the University that showed the blood pressure-lowering effect of exercise is significantly reduced when people rinse their mouths with antibacterial mouthwash rather than water.

Dr Bescos said: "There is a surprising lack of knowledge and literature behind the use of these products. Chlorhexidine mouthwash is widely used but research has been limited to its effect on a small number of bacteria linked to particular oral diseases, and most has been carried out in vitro.

"We believe this is the first study to look at the impact of 7-day use on the whole oral microbiome in human subjects."

Dr Zoe Brookes and Dr Louise Belfield, Lecturers in the Peninsula Dental School at the University of Plymouth, are co-authors of the study.

Dr Belfield said: "We have significantly underestimated the complexity of the oral microbiome and the importance of oral bacteria in the past. Traditionally the view has been that bacteria are bad and cause diseases. But we now know that the majority of bacteria - whether in the mouth or the gut - are essential for sustaining human health."

Dr Brookes added: "As dental clinicians, we need more information on how mouthwashes alter the balance of oral bacteria, so we can prescribe them correctly. This paper is an important first step in achieving this.

"In the face of the recent COVID-19 outbreak many dentists are now using chlorhexidine as a pre-rinse before doing dental procedures. We urgently need more information on how it works on viruses."

Credit: 
University of Plymouth

The Lancet Infectious Diseases: Singapore modelling study estimates impact of physical distancing on reducing spread of COVID-19

First of its kind modelling study in Singapore indicates that quarantining of people infected with the new coronavirus and their family members, school closures plus quarantine, and workplace distancing plus quarantine, in that order, are effective at reducing the number of COVID-19 cases, with a combination of all three being most effective in reducing cases

Authors investigated potential outcomes for three infection reproduction values [R0=1.5, R0=2.0, R0=2.5], based on low, moderate or likely, and high infection transmissibility, informed by Wuhan case data [1], and found that prevention and suppression become more challenging at higher R0 values

Authors note several limitations of the study, but in particular, the transmission and infectivity of the new coronavirus (SARS-CoV-2) remain uncertain, so the authors informed their model based on the virus that causes SARS

A new modelling study conducted in a simulated Singapore setting has estimated that a combined approach of physical distancing [2] interventions, comprising quarantine (for infected individuals and their families), school closure, and workplace distancing, is most effective at reducing the number of SARS-CoV-2 cases compared with other intervention scenarios included in the study.

While less effective than the combined approach, quarantine plus workplace measures presented the next best option for reducing SARS-CoV-2 cases, followed by quarantine plus school closure, and then quarantine only. All intervention scenarios were more effective at reducing cases than no intervention.

The study, published in The Lancet Infectious Diseases journal, is the first of its kind to investigate using these options for early intervention in Singapore using simulation. Despite heightened surveillance and isolation of individuals suspected to have COVID-19 and confirmed cases, the risk is ongoing, with the number of cases continuing to increase in Singapore. Schools have not been closed, and workplace distancing is recommended, but it is not national policy [correct as of 23.03.2020].

The study found that the combined approach could prevent a national outbreak at relatively low levels of infectivity (basic reproduction number (R0) = 1.5), but at higher infectivity scenarios (R0 = 2.0, considered moderate and likely, and R0 = 2.5, considered high), outbreak prevention becomes considerably more challenging: although the interventions are effective at reducing infections, transmission events still occur.

Dr Alex R Cook, National University of Singapore, said: "Should local containment measures, such as preventing disease spread through contact tracing efforts and, more recently, not permitting short-term visitors, be unsuccessful, the results of this study provide policy makers in Singapore and other countries with evidence to begin the implementation of enhanced outbreak control measures that could mitigate or reduce local transmission rates if deployed effectively and in a timely manner." [3]

To assess the potential impact of interventions on outbreak size, should local containment fail, authors developed an individual-based influenza epidemic simulation model, which accounted for demography, individual movement, and social contact rates in workplaces, schools, and homes, to estimate the likelihood of human-to-human transmission of SARS-CoV-2. Model parameters included how infectious an individual is over time, the proportion of the population assumed to be asymptomatic (7.5%), the cumulative distribution function for the mean incubation period (with the virus that causes SARS and the virus that causes COVID-19 having the same mean incubation period of 5.3 days), and the duration of hospital stay after symptom onset (3.5 days).

Using this model, authors estimated the cumulative number of SARS-CoV-2 infections at 80 days, after detection of 100 cases of community transmission. Three values for the basic reproduction number (R0) were chosen for the infectiousness parameter, including relatively low (R0=1.5), moderate and likely (R0=2.0), and high transmissibility (R0=2.5). The basic reproduction numbers were selected based on analyses of data from people with COVID-19 in Wuhan, China [1].
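The study's figures come from its individual-based simulation (cumulative infections at day 80), but the sensitivity of outbreak size to R0 can also be illustrated with the classic SIR "final size" relation, z = 1 - exp(-R0 z), which gives the eventual fraction infected in a homogeneous population with no interventions. This back-of-envelope sketch is not the study's model, only a rough intuition pump for the same R0 values:

```python
import math

# Classic SIR final-size relation: z = 1 - exp(-R0 * z), solved by
# fixed-point iteration (converges for R0 > 1). This is a textbook
# approximation, not the Lancet study's individual-based model.

def final_size(r0, iters=100):
    z = 0.5  # initial guess
    for _ in range(iters):
        z = 1 - math.exp(-r0 * z)
    return z

for r0 in (1.5, 2.0, 2.5):
    print(f"R0 = {r0}: {final_size(r0):.1%} eventually infected")
```

For R0 = 1.5, 2.0 and 2.5 this gives roughly 58%, 80% and 89% of the population eventually infected, showing how sharply the stakes rise with transmissibility even before interventions are considered.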

In addition to a baseline scenario, which included no interventions, four intervention scenarios were proposed for implementation after failure of local containment: 1) isolation of infected individuals and quarantine of their family members (quarantine); 2) quarantine plus immediate school closure for 2 weeks; 3) quarantine plus immediate workplace distancing, in which 50% of the workforce is encouraged to work from home for 2 weeks; 4) a combination of quarantine, immediate school closure, and workplace distancing. These interventions follow some policy options currently being undertaken (quarantine and some workforce distancing) by the Singaporean Ministry of Health, as standard interventions for respiratory virus control.

For the baseline scenario, when R0 was 1.5, the median cumulative number of infections at day 80 [4] was 279,000, corresponding to 7.4% of the resident population of Singapore. The median number of infections increased with higher infectivity: 727,000 cases when R0 was 2.0, corresponding to 19.3% of the Singaporean population, and 1,207,000 cases when R0 was 2.5, corresponding to 32% of the Singaporean population.
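The study's figures come from a detailed individual-based simulation; purely as an illustration of how cumulative infections at a fixed horizon scale with R0, here is a minimal deterministic SIR sketch. The population size, seed size, and infectious period below are hypothetical, and the outputs will not match the study's estimates:

```python
# Toy SIR model illustrating how cumulative infections at day 80 grow with
# the basic reproduction number R0. This is NOT the authors' individual-based
# model -- just a minimal deterministic sketch with hypothetical parameters.

def cumulative_infections(r0, days=80, population=5_700_000, seed=100,
                          infectious_period=7.0, dt=0.1):
    """Integrate a simple SIR model with Euler steps; return cumulative cases."""
    gamma = 1.0 / infectious_period        # recovery rate
    beta = r0 * gamma                      # transmission rate implied by R0
    s, i, r = population - seed, float(seed), 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return population - s                  # everyone who ever left S was infected

for r0 in (1.5, 2.0, 2.5):
    print(f"R0={r0}: {cumulative_infections(r0):,.0f} cumulative infections")
```

Even this toy model reproduces the qualitative pattern in the study's baseline scenario: modest increases in R0 translate into disproportionately larger cumulative case counts at day 80.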

Compared with the baseline scenario, the combined intervention was the most effective, reducing the estimated median number of infections by 99.3% when R0 was 1.5 (resulting in an estimated 1,800 cases). However, at higher infectivity scenarios, outbreak prevention becomes considerably more challenging. For the combined approach scenario, a median of 50,000 cases were estimated at R0 of 2.0 (a reduction of 93.0% compared to baseline) and 258,000 cases at R0 of 2.5 (a reduction of 78.2% compared to baseline).

The authors also explored the potential impact if the proportion of asymptomatic cases in the population (people who can transmit the virus despite having no or mild symptoms) was greater than 7.5%. Even at low infectivity (R0 of 1.5 or lower), a high asymptomatic proportion presents challenges. Assuming asymptomatic proportions of up to 50.0%, up to 277,000 infections were estimated to occur at day 80 with the combined intervention, compared with 1,800 when the asymptomatic proportion was 7.5% (at R0 = 1.5).

Dr Alex R Cook added: "If the preventive effect of these interventions reduces considerably due to higher asymptomatic proportions, more pressure will be placed on the quarantining and treatment of infected individuals, which could become unfeasible when the number of infected individuals exceeds the capacity of health-care facilities. At higher asymptomatic rates, public education and case management become increasingly important, with a need to develop vaccines and existing drug therapies." [3]

The authors note several limitations of their study, including dated census population data, the impact of migrant movement, the impact of seeding of imported cases (transmissions originating from outside Singapore), the dynamics of contact patterns between individuals, and other unforeseen factors. Of note, the epidemiological characteristics of COVID-19 remain uncertain in terms of the transmission and infectivity profile of the virus; therefore, estimates of the time between symptom onset and admission to hospital, how infectious an individual is over time, and the asymptomatic rate were based on SARS-CoV.

Writing in a linked Comment, Joseph A Lewnard, University of California, Berkeley, USA, and Nathan C Lo, University of California, San Francisco, USA, say: "Although the scientific basis for these interventions might be robust, ethical considerations are multifaceted. Importantly, political leaders must enact quarantine and social-distancing policies that do not bias against any population group. The legacies of social and economic injustices perpetrated in the name of public health have lasting repercussions. Interventions might pose risks of reduced income and even job loss, disproportionately affecting the most disadvantaged populations: policies to lessen such risks are urgently needed. Special attention should be given to protections for vulnerable populations, such as homeless, incarcerated, older, or disabled individuals, and undocumented migrants. Similarly, exceptions might be necessary for certain groups, including people who are reliant on ongoing medical treatment."

Credit: 
The Lancet

How personalization and machine learning can improve cancer outreach ROI

Researchers from Texas A&M, Iowa State University, Rice, and the University of Texas Southwestern Medical Center published a new paper in the Journal of Marketing that examines the efficacy of patient marketing.

The study, forthcoming in the May issue of the Journal of Marketing, is titled "Improving Cancer Outreach Effectiveness Through Targeting and Economic Assessments: Insights from a Randomized Field Experiment" and is authored by Yixing Chen, Ju-Yeon Lee, Shrihari Sridhar, Vikas Mittal, and Amit G. Singal.

In 2018, over 1.7 million new cases of cancer were diagnosed in the United States and the cost of cancer care surpassed $147 billion. Many of these cases could have been prevented through regular cancer screening tests that open the door for early detection, more cost-effective treatment options, and better recovery prognosis. For example, regular screening reduces mortality rates for lung cancer by 28%, breast cancer by 24%, and liver cancer by 37%. Moreover, cancer screening can reduce the annual treatment cost for a patient by nearly $5,000.

Healthcare institutions rely on marketing interventions--or direct-to-patient outreach--to increase screening completion among at-risk patients. As an example, Johns Hopkins Hospital's cancer center uses emails, letters, seminars, and community events to encourage screening completion among patients. Yet, according to a recent article in the LA Times, "just 4.2% of patients in the United States who are at high risk for lung cancer get screened for it -- a figure seen as alarmingly low by those who work in the area of prevention." Could it be that the 1.7 million outreach interventions launched in 2015 and the $123 million spent on prevention and education efforts went to waste?

The research team used a multi-period randomized field experiment with at-risk patients for hepatocellular carcinoma (HCC), the most common type of primary liver cancer. Patients were randomly assigned (1:1:1) to three different conditions--usual care, outreach alone, or outreach with patient navigation. Usual care is the baseline condition where physicians offer preventive care recommendations at their discretion during a patient's usual care visits. Outreach alone and outreach with patient navigation provide two different levels of direct marketing efforts based on outreach mails, outreach calls, and customized motivational education by trained patient navigators. Patients' demographics, health status, visit history, health system accessibility, neighborhood socioeconomic status, and prior screening compliance were included in the data.

By using causal forests, a state-of-the-art development in machine learning, the researchers discovered that the effectiveness of outreach programs varies widely over time and across patients. For example, outreach programs in general are more effective for patients who are female, belong to a minority group, are in better health, have a more frequent visit history, are covered by medical-assistance insurance, reside closer to clinics, and live in a more populated neighborhood. Outreach alone is more effective for patients who are younger, have shorter commutes, and reside in a neighborhood with more public insurance coverage. In contrast, outreach with patient navigation is more effective for patients who are older and reside in a higher-income neighborhood.

Using these patient-level differences in responsiveness to outreach interventions, together with a well-established cost-benefit calculation scheme, the study shows that a targeted outreach program that matches each patient to the optimal outreach type improves the return on the outreach investment by 74%-96%, resulting in a gain of $1.6 million to $2 million.
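The targeting logic described above -- assign each patient the outreach type with the highest expected net benefit -- can be sketched as follows. All numbers (per-patient screening lift, outreach costs, benefit per completed screening) are hypothetical placeholders, not the study's estimates, and random draws stand in for causal-forest predictions:

```python
# Minimal sketch of benefit-based targeting: given each patient's predicted
# screening-probability lift under each outreach type and per-type costs,
# assign the option with the highest expected net benefit.
# All figures are hypothetical illustrations, not the study's estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
benefit_per_screening = 5000.0            # e.g. avoided treatment cost ($)
costs = {"usual_care": 0.0, "outreach": 12.0, "outreach_nav": 35.0}

# Hypothetical predicted lift per patient per option (stand-in for a model)
lift = {
    "usual_care": np.zeros(n),
    "outreach": rng.uniform(0.00, 0.10, n),
    "outreach_nav": rng.uniform(0.00, 0.15, n),
}

options = list(costs)
net = np.stack([lift[o] * benefit_per_screening - costs[o] for o in options])
best = net.argmax(axis=0)                 # index of the best option per patient

targeted_value = net.max(axis=0).sum()
blanket_value = net[options.index("outreach_nav")].sum()
print(f"targeted policy value: ${targeted_value:,.0f}")
print(f"one-size-fits-all value: ${blanket_value:,.0f}")
```

Because the targeted policy takes the per-patient maximum, its total value can never fall below that of any single blanket assignment, which mirrors why matching patients to outreach types can outperform a one-size-fits-all program.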

This research helps healthcare practitioners in the following ways:

It provides an understanding of how outreach effectiveness varies over time and by patient characteristics that are theoretically relevant and easily accessible to practitioners in their patient database.

It offers a cost-benefit analysis approach for assessing individual-level return on marketing investments in the context of liver cancer screening outreach.

It offers a tool to recommend the most suitable intervention for each patient.

The research combines the well-known marketing principle that all customers are different with advanced machine learning to show that personalized cancer outreach can save both lives and money.

In mid-March, the author team presented their research in a live webinar where they discussed the methodology. The authors even extrapolated how their research may be used to promote testing for viruses like COVID-19. An on-demand recording is available for free at https://www.ama.org/jm-webinar-personalized-cancer-outreach-saves-both-lives-and-money/.

Credit: 
American Marketing Association

But you don't look sick? How broad categories like autoimmune impact patient experience

When your disease is hard to name and doesn't have visible symptoms, it can be hard for others to understand that you are sick. And, when people don't know much about your disease, it can be hard to explain it to family and friends.

This sentiment is particularly true for the some 50 million people in the United States living with autoimmune diseases, like lupus or multiple sclerosis (MS) -- where the condition is chronic but achieving a specific diagnosis may take time, the diagnosis may change, symptoms may not be overtly apparent, and, in many cases, both a cause and a "cure" are unknown.

Patients with autoimmune diseases often have an illness experience riddled with symptom ambiguities and shifting diagnoses. A new Drexel University study found that one way patients and physicians can work through the difficulty and frustration of communicating about these conditions is to use both broad diagnostic terms, like "autoimmune disease," as well as narrow ones, such as "lupus or MS."

Kelly Joyce, PhD, a professor in Drexel's College of Arts and Sciences and a member of the Center for Science, Technology & Society, studies the cultural dimensions of medicine. Her research investigates the experiences of people diagnosed with autoimmune illnesses. In analyzing how people live with autoimmune illnesses, Joyce and former Drexel graduate student Melanie Jeske found that the use of a broad category - like autoimmune - provides continuity, certainty and even community for patients who struggle to convey their often-inconsistent illness experiences with clinicians, family and friends.

Drawing on 45 in-depth interviews with people who live with autoimmune illnesses, Joyce's research showed that both broad diagnostic classifications and narrow diagnostic classifications are integral to diagnostic work and illness experiences.

Talking About Illness:

Researchers found that participants, regardless of gender, age or specific disease diagnosis, tended to use the broad category "autoimmune" in addition to a specific diagnosis, like Celiac disease or Rheumatoid arthritis, to talk about their health.

Some of the reasons they used the terminology were to describe what's happening in their bodies, and to make it easier to provide continuity, even when there was a change in their specific diagnosis.

"Although friends and families may not understand the precise mechanisms of Lupus or Rheumatoid arthritis, for example, they could understand the general autoimmune process in which the body's immune system attacks healthy tissue and cells," Joyce said.

The broad term also simplified the process of talking about the disease to friends and family, even as the specific diagnosis might change over time.

"Use of the category 'autoimmune' meant participants did not have to put their lives on hold even as aspects of their specific diagnosis changed from ulcerative colitis to Crohn's disease, from lupus to mixed connective tissue disease (MCT), from one type of MS or lupus to another type of MS or lupus, and from having MS to not having MS to having MS," said Joyce. "Autoimmune, although an umbrella or broad category, is productive for those experiencing illness, lending legitimacy to the symptoms that a person will experience."

It can also help to distinguish their affliction from others that are more stigmatized. One specific example of this was that participants who live with type 1 diabetes -- which is an autoimmune disease -- used the broad terminology to distinguish their illness from type 2 diabetes -- a chronic condition caused by the body's inability to metabolize sugar -- as a way of avoiding the stigma and blame often associated with the latter.

Finding Community:

Because people can experience the same autoimmune disease differently, participants noted that using "autoimmune" allows them to see similarities between themselves and others-- creating a sense of community and shared experience.

"Many participants in our study stressed the heterogeneity of autoimmune illnesses, often saying things like 'My MS is not like her MS,' or 'No two people are alike,'" Joyce said. "While most participants knew others who shared their specific diagnosis, it did not mean that their experience of symptoms, their triggers for symptoms, or their responses to particular treatments were similar."

Raising Awareness:

Research has shown that people who are ill can benefit from social support when their disease is widely recognized. For example, there is often an outpouring of support during the various cancer and disease awareness months and efforts, both broadly in society and at an individual level. This kind of unifying support can be harder to find for illnesses, like autoimmune diseases, that are not as well understood in society.

The researchers suggest that recognizing that autoimmune encompasses a range of diseases and disorders -- similar to the way we think about the autism spectrum -- could aid our collective understanding of these diseases and support for those who suffer from them.

Why Broad Categories are Important:

More than 80 illnesses are considered to be autoimmune or autoimmune-related. Though the illnesses under the umbrella vary widely, the common thread is an immune response that attacks healthy cells, tissue and/or organs. The study suggests that the label autoimmune provides, at minimum, some understanding and a scientific explanation as to what is happening to patients, though an exact diagnosis may be a moving target.

While this research focuses specifically on autoimmune illnesses, it does signal that broad and narrow categories may be important to medicine more generally.

"Within medicine, clinicians and researchers use the language of lumping and splitting to distinguish between two valuable diagnostic classification practices," said Joyce. "The process of lumping creates broad categories and emphasizes connections. In contrast, splitting emphasizes the differences between illnesses - creating categories that tend to be narrow and more specialized, prioritizing difference rather than similarity."

Sociologists study how clinical encounters and medical practice are social practices, that is, practices imbued with values, beliefs, and institutional and policy incentives. Yet, many sociologists who study diagnostic practices have yet to acknowledge the importance of broad categories in diagnostic work, according to Joyce.

"They focus on how clinicians and patients use narrow diagnostic labels, missing the importance of broad categories," she said. "Sociologists who study how people live with illnesses tend to focus on life after a specific diagnosis, so they have also paid little attention to the importance of broad categories in medical practice."

Now That We Know:

In light of her findings, Joyce suggests clinicians should consider presenting patients with both broad and narrow disease classifications when discussing autoimmune diagnoses initially and over time.

The use of the broad category may provide continuity and certainty in doctor-patient communications even as narrow disease diagnoses change or when symptoms do not map neatly into diagnostic tests or markers.

Some health care organizations are taking the lead and reorganizing the delivery of services in recognition of the changing diagnoses and, at times, unknowable dimensions of autoimmune illnesses. As an example of this reorganization, West Penn Hospital in Pittsburgh, Pennsylvania opened the first institute dedicated to autoimmune illnesses in February 2018.

In the excitement over precision medicine, Joyce notes this study shows the importance of maintaining the use of broad categories in the experience and treatment of illness as well as using narrow diagnostic labels.

Credit: 
Drexel University

Researchers develop language test for people with Fragile X syndrome

WHAT:

Researchers have developed a test to measure the expressive language skills of people with Fragile X syndrome, a genetic disorder that may result in intellectual disability, cognitive impairment and symptoms of autism spectrum disorder. Expressive language refers to the use of words to convey meaning to others. The work was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, part of the National Institutes of Health.

The researchers developed the test to provide an objective way to measure improvement in language, which may help increase the participation of people with Fragile X in clinical studies aimed at improving intellectual and cognitive functioning. The researchers were led by Leonard Abbeduto, Ph.D., of the University of California, Davis MIND Institute. The study appears in the Journal of Neurodevelopmental Disorders.

The test consists of two parts. Participants first engage in a conversation with the test administrator on topics that interest them, such as their favorite activities. In the second part, participants narrate the events in a wordless picture book. Their responses are audio-recorded and the transcripts are scored on aspects of expressive language such as talkativeness, how they arrange words to form sentences and the diversity of their vocabulary.
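To make the scoring concrete, here is a hypothetical sketch of two transcript-level measures of the kind described -- talkativeness as a token count and vocabulary diversity as a type-token ratio. The study's actual scoring procedure is not detailed in this release, so treat this purely as an illustration:

```python
# Hypothetical sketch of simple transcript measures like those described:
# talkativeness (total word count) and vocabulary diversity (type-token ratio).
# The study's real scoring is more sophisticated; this only illustrates the
# kind of quantities a transcript-based language measure might compute.
import re

def transcript_measures(transcript: str) -> dict:
    tokens = re.findall(r"[a-z']+", transcript.lower())  # crude word tokenizer
    types = set(tokens)                                  # distinct words
    return {
        "talkativeness": len(tokens),
        "vocab_diversity": len(types) / len(tokens) if tokens else 0.0,
    }

m = transcript_measures("the dog ran and the dog jumped over the fence")
print(m)
```

Note that a raw type-token ratio shrinks as transcripts get longer, which is one reason real instruments use more careful normalizations than this sketch.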

Researchers administered the test to 106 people ages 6 to 23 years with Fragile X syndrome and intellectual disability. Participants took the test once and then again four weeks later. Test scores were consistent across both administrations, which suggests that the test is reliable. In addition, except for participants under 12, test scores were consistent with expressive language scores on other tests the participants took, indicating the validity of the test measurements. The researchers concluded that the test measures were appropriate for people with Fragile X syndrome over age 12 and for younger individuals who were less impaired.

Credit: 
NIH/Eunice Kennedy Shriver National Institute of Child Health and Human Development

Study shows key factors for reducing brain damage from cardiac arrest

image: Probability of favorable neurological outcome by low flow duration of ECPR according to the documented cardiac rhythm during CPR

Image: 
Osaka University

Osaka, Japan - People who suffer cardiac arrest usually have a low likelihood of survival, especially if it happens out of the hospital. Those who do survive can have neurological damage due to the lack of oxygen-rich blood reaching their brain. Cardiopulmonary resuscitation (CPR) can help maintain this blood flow, but it's not always successful. Extracorporeal CPR (ECPR) may be an option, but it can be costly and it's not always clear which patients will benefit from it.

Now, Osaka University-led research may have uncovered how to use ECPR more effectively for better outcomes. The researchers reported their findings in the journal Circulation.

"Standard CPR uses chest compressions to manually stimulate blood flow to vital organs, which can help limit long-term neurological damage," explains Tasuku Matsuyama, the study's lead author. "With ECPR, blood is removed from a vein and oxygenated blood is pumped into an artery. This is a more effective way to maintain tissue function until normal heart rhythms can be restored."

Right now, however, there is little evidence-based guidance on which patients will show the most neurological benefit from ECPR.

The researchers sought this evidence through a multicenter clinical study of people who had suffered out-of-hospital cardiac arrest (OHCA). The aim of this study, called CRITICAL, was to understand the factors that predict post-ECPR outcomes.

"We aimed to see whether low-flow duration — the length of time from when a patient is given standard CPR to when they receive ECPR — impacts the neurological outcome for OHCA patients," says Taro Irisawa, who led CRITICAL. "We also wanted to understand if there were any differences in ECPR benefit for patients with certain types of heart rhythm that respond to defibrillation."

The researchers prospectively followed 256 OHCA patients at 14 hospitals in Osaka. These patients had initially been given CPR either by bystanders or EMS personnel before receiving ECPR and in-hospital treatment.

The study found that as the time to receiving ECPR decreased, the chance of maintaining brain function increased considerably. Also, for the same low-flow duration before receiving ECPR, those whose heart rhythms responded to defibrillation had much better odds of maintaining brain function than those whose rhythms did not.

"Our study strongly indicates that reducing the time to ECPR can significantly improve the likelihood of OHCA patients preserving their neurological function, especially those who respond to defibrillation," Irisawa concludes. "We expect that our findings in the CRITICAL study can inform future revisions to international CPR guidelines. This will improve outcomes for these patients."

Credit: 
Osaka University

Clinicians should consider screening for TSH-R-Abs before pregnancy in patients with history of autoimmune thyroid disorders

1. Clinicians should consider screening for TSH-R-Abs before pregnancy in patients with history of autoimmune thyroid disorders

Abstract: http://annals.org/aim/article/doi/10.7326/L19-0818
URL goes live when the embargo lifts

Clinicians should consider screening for thyroid-stimulating hormone receptor antibodies (TSH-R-Abs) before pregnancy when the patient has a history of autoimmune thyroid disease or a history of radioactive iodine treatment or thyroidectomy. This is important because Graves' disease, an autoimmune disease caused by TSH-R-Abs, or its treatment, can lead to irreversible impairment in fetal neurodevelopment. A case report is published in Annals of Internal Medicine.

The antibodies produced in case of Graves' disease can be stimulating (TSAb), neutral, or blocking (TBAb). The net antibody activity determines the clinical phenotype. Typically, TSAbs dominate, resulting in hyperthyroidism. Rarely, TBAbs prevail, resulting in hypothyroidism. While treatment of Graves' disease dominated by TBAbs with L-T4 is straightforward, pregnancy complicates its management, as fetal hypothyroidism is a risk.

Clinicians cite the case of a 28-year-old woman with hypothyroidism caused by non-classical Graves' disease who was referred to a team at the University Hospitals Leuven in Belgium. Initially, she presented with hyperthyroidism due to classical Graves' disease. After being treated for 36 months with high antithyroid drug doses along with L-T4 replacement to avoid hypothyroidism, she became overtly hypothyroid, and TBAbs were shown to be prevailing. Two years later, she became pregnant, and the TBAbs persisted during pregnancy. Fetal development was normal with no evidence of hypothyroidism. In the postpartum period, the newborn developed transient subclinical hypothyroidism that spontaneously resolved over the next 3 months along with the disappearance of maternal TSH-R-Abs. A second pregnancy followed 3 years later. That newborn developed low-normal free T4 and required treatment with L-T4 from post-natal day 18 until day 73. Both children have developed normally. The physicians caution that screening for TSH-R-Abs in such patients is important because Graves' disease with TBAbs is rare and easy to overlook. In conclusion, this case presentation emphasizes the clinical relevance and utility of measuring functional TSH-R-Abs prior to, as well as during, pregnancy in all patients with a history of autoimmune thyroid disease.

Media contacts: For an embargoed PDF please contact Lauren Evans at laevans@acponline.org. To speak with the lead author, Brigitte Decallonne, MD, PhD, please contact sarah.cordie@uzleuven.be.

2. Interleukin-1β inhibition with canakinumab reduces incident anemia and improves hemoglobin levels among patients with prevalent anemia

Abstract: http://annals.org/aim/article/doi/10.7326/M19-2945
Editorial: http://annals.org/aim/article/doi/10.7326/M20-0887
URL goes live when the embargo lifts

Compared with placebo, interleukin-1β (IL-1β) inhibition with canakinumab reduces incident anemia and improves hemoglobin levels among patients with prevalent anemia. A pronounced effect was seen in participants with the most anti-inflammatory response. A secondary analysis from a multinational, randomized, double-blind, placebo-controlled trial is published in Annals of Internal Medicine.

Inflammation contributes to the development of anemia in patients with chronic disease or infection and may play a role in anemia onset in older adults. Several factors contribute to the anemia that accompanies chronic inflammation, and inflammatory cytokines, such as IL-1β and IL-6, participate in its pathogenesis. It is not clear whether anti-inflammatory therapy targeting IL-1β can reverse these effects.

Researchers from Brigham and Women's Hospital and Harvard Medical School analyzed data from the CANTOS (Canakinumab Anti-inflammatory Thrombosis Outcomes Study) randomized controlled trial to determine whether IL-1β inhibition with canakinumab reduces incident anemia and improves hemoglobin levels among those with prevalent anemia. They found that IL-1β inhibition reduced the incidence of anemia by 16 percent, with a pronounced effect in participants with the most robust anti-inflammatory response. According to the authors, these hypothesis-generating data highlight the role of IL-1β/IL-6 pathway signaling in anemia onset in a large population with chronic inflammation and motivate the design of prospective confirmatory studies to identify populations that might benefit from anti-inflammatory therapies for anemia.

Media contacts: For an embargoed PDF please contact Lauren Evans at laevans@acponline.org. To speak with the lead author, Paul M Ridker, MD, please contact Haley Bridger at hbridger@bwh.harvard.edu.

3. Iron chelation therapy shows clinical benefit in patients with lower risk for myelodysplastic syndromes

Abstract: http://annals.org/aim/article/doi/10.7326/M19-0916
URL goes live when the embargo lifts

Iron chelation therapy with deferasirox showed clinical benefit in transfusion-dependent patients with lower-risk myelodysplastic syndromes (MDS). Findings from a randomized, placebo-controlled trial are published in Annals of Internal Medicine.

MDS comprise a group of disorders characterized by bone marrow failure, cytogenetic and molecular alterations, and the potential for progression to acute myeloid leukemia. In patients with lower-risk MDS, treatment goals primarily involve management of cytopenias, including anemia. Anemia is often treated with red blood cell transfusions, which can lead to iron overload; iron overload may in turn damage organs and promote progression to leukemia. Iron chelation therapy (ICT) for these patients has not previously been evaluated in randomized studies.

Researchers, including those from IRCCS Ospedale Policlinico San Martino, conducted a multicenter, randomized, double-blind, placebo-controlled trial (TELESTO) to evaluate event-free survival (EFS) and the safety of ICT in iron-overloaded patients with lower-risk MDS. Participants were randomly assigned to either deferasirox dispersible tablets (n=149) or matching placebo (n=76) and followed for EFS, measured as the time to a first nonfatal event or death, whichever occurred first. Median time on treatment was 1.6 years in the deferasirox group and 1.0 year in the placebo group. The researchers found that median EFS was prolonged by approximately 1 year with deferasirox versus placebo; adverse events occurred in 97.3 percent of deferasirox recipients compared with 90.8 percent of placebo recipients. Deferasirox had a clinically manageable safety profile similar to placebo, except for a non-severe, manageable increase in serum creatinine with deferasirox. However, most adverse events were likely related to the underlying disease and/or the iron-overloaded state of the patients. The researchers note that TELESTO is the first prospective, randomized, placebo-controlled study demonstrating a clinical benefit of ICT.

Media contacts: For an embargoed PDF please contact Lauren Evans at laevans@acponline.org. To speak with the lead author, Emanuele Angelucci, MD, please contact Emma Houghton at emma.houghton@mudskipper.biz.

Credit: 
American College of Physicians

Study suggests men more likely to develop type 2 diabetes if they go through puberty early

Boys who enter puberty at an early age are more likely to develop type 2 diabetes as adults than later-developing boys, irrespective of their weight in childhood, according to an observational study following more than 30,600 Swedish men born between 1945 and 1961, published in Diabetologia (the journal of the European Association for the Study of Diabetes).

Specifically, researchers found that boys who had their pubertal growth spurt at age 9.3 to 13.4 years (the youngest group) were around twice as likely to develop early type 2 diabetes (at age 57 years or younger) as those who had the growth spurt at age 14.8 to 17.9 years (the oldest group), when the data were adjusted for the children's body-mass index (BMI) (see table 3 of the full paper).

In addition to an increased risk of early type 2 diabetes, boys who went through puberty in the youngest group also had a 27% increased risk of late type 2 diabetes (after age 57 years), although this association was not as pronounced as for early type 2 diabetes. The age of 57.2 years was the cut-off point for 'early' and 'late' because it was the median age of developing diabetes in the study, with equal numbers of those who developed diabetes doing so before and after this age. The associations between early puberty and early and late type 2 diabetes were also maintained after adjustment for a range of factors including birth year, country of birth, birthweight, and education level.

An elevated BMI in adulthood is a well-known risk factor for type 2 diabetes. Previous studies have found that boys who are overweight as children, or gain excessive weight during puberty, are more likely to develop type 2 diabetes as adults. In addition, evidence suggests a link between early puberty onset in girls (start of menstruation) and higher diabetes risk, but retrospective studies in boys have been hampered by the lack of easily available markers of puberty. The purpose of this new study was to determine whether puberty timing can be linked to diabetes risk in men, even after adjusting for changes in BMI, using an objective measure of puberty timing.

In this study, Associate Professor Jenny Kindblom and Professor Claes Ohlsson of the University of Gothenburg, Sweden, and colleagues, analysed the health records of 30,697 Swedish men born between 1945 and 1961 who had BMI measured in both childhood (age 8 years) and young adulthood (age 20 years) as part of the BMI Epidemiology Study Gothenburg--a population-based study in Sweden. Timing of puberty was calculated using age at peak height velocity (PHV), the time when boys grow the fastest during their adolescent growth spurt, which occurs around 2 years after entering puberty. Average age at puberty was 14 years.

Participants' data on height and weight during childhood and adolescence were linked with national registers, and participants were followed to the end of 2016 or until they were diagnosed with type 2 diabetes, emigrated, or died. During an average 30.7 years of follow-up (from age 30), 1,851 men were diagnosed with type 2 diabetes, at a median age of 57.2 years.

For each year earlier that the pubertal growth spurt occurred, the risk of developing early diabetes increased by 28%, while the risk of late diabetes rose by 13%. These associations were similar after adjusting for BMI at 8 years of age (24% and 11%, respectively).
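To make the per-year figures concrete, the sketch below shows how such increases would compound over several years under a hypothetical multiplicative (proportional-hazards-style) assumption. The 28% and 13% per-year figures come from the study; the multi-year extrapolation and the function name are purely illustrative.

```python
def compounded_risk(per_year_increase, years_earlier):
    """Relative risk for puberty `years_earlier` years before average,
    assuming (hypothetically) a multiplicative per-year effect."""
    return (1 + per_year_increase) ** years_earlier

# Early type 2 diabetes: 28% higher risk per year earlier
print(round(compounded_risk(0.28, 2), 2))  # 2 years earlier -> 1.64
# Late type 2 diabetes: 13% higher risk per year earlier
print(round(compounded_risk(0.13, 2), 2))  # 2 years earlier -> 1.28
```

Under this assumption, a boy whose growth spurt came two years early would carry roughly a 64% higher relative risk of early diabetes, not simply 2 × 28%.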

However, when the researchers adjusted the analyses for BMI after puberty (i.e. at 20 years of age), the associations were attenuated and the significant association with late type 2 diabetes was lost. Importantly, early puberty remained significantly associated with early type 2 diabetes even after adjustment for BMI at 20 years, with a 16% increased risk of early diabetes per year earlier that puberty took place. In contrast, a pubertal growth spurt at a late age (after 15 years of age, the latest 20%) was associated with a 30% reduced risk of developing early diabetes (see table 5 of the full paper) compared with going through puberty at an average age (the 'middle' 20% of men, who went through puberty at 13.8 to 14.3 years of age).

Men who had an early pubertal growth spurt were also more likely to require insulin treatment if they went on to develop type 2 diabetes (a 25% increased risk of developing insulin-dependent type 2 diabetes for each year earlier that they went through puberty).

"Our findings suggest that early puberty could be a novel independent risk factor for type 2 diabetes in men", says Assoc Prof Kindblom. "Given the apparent higher risk among boys who start puberty before the average age of 14 years, we estimate that 15% fewer men who were diagnosed during the study would have developed type 2 diabetes had they not started puberty early."

She adds: "Although the mechanisms for the link between early puberty and higher risk of type 2 diabetes are unclear, it is possible that starting puberty at an earlier age may lead to the accumulation of excess abdominal fat, which in turn increases cardiometabolic risk factors such as high blood pressure, diabetes, and abnormal lipid levels."

She concludes: "These findings strengthen the concept that early puberty is part of an adverse trajectory during childhood and adolescence, and that a high BMI both before and after puberty contributes. A continuous monitoring of height and weight development during not only childhood but also adolescence is of importance and might help identify individuals with increased risk."

The authors acknowledge that their findings show observational associations, and do not prove that early puberty causes the increase in type 2 diabetes. They note several limitations of the study including that it did not include data on BMI in later life or control for family history of diabetes and smoking, which are known to increase the risk of diabetes. They also point out that most participants were white, which could limit the generalisability of the findings to other ethnicities with a higher prevalence of type 2 diabetes.

Credit: 
Diabetologia

A new tool for performing cancer liquid biopsies

image: NYUAD Assistant Professor of Mechanical and Biomedical Engineering Mohammad A. Qasaimeh and the study's first author, NYUAD Research Scientist of Engineering Muhammedin Deliorman

Image: 
NYU Abu Dhabi

Fast facts:

Cancer is a leading cause of death worldwide; when it is caught in its early stages, before metastasis (the growth of a secondary tumor separate from the primary site of cancer), survival rates are known to be higher

NYUAD researchers have developed a new fluid analyzing platform that allows for the isolation of circulating tumor cells (CTCs), which are formed during metastasis

The new technology, featured in the Nature journal Microsystems and Nanoengineering, enables the effective isolation of CTCs from the blood samples of prostate cancer patients, and allows for high-precision analysis of the CTCs' mechanical properties at the single-cell level

The new platform could potentially constitute an important test for monitoring and studying metastasis, and can also be used to monitor other types of the disease such as breast and lung cancers

Abu Dhabi, UAE, March 23, 2020: Cancer is one of the leading causes of death worldwide, and survival rates are significantly higher when it is diagnosed at an early stage, before metastasis. During metastasis, a secondary growth is initiated by circulating tumor cells (CTCs) that shed from the primary tumor into the bloodstream and spread the cancer. A team of engineering researchers from NYU Abu Dhabi, led by NYUAD Assistant Professor of Mechanical and Biomedical Engineering Mohammad A. Qasaimeh, has developed a microfluidic platform that is compatible with cutting-edge atomic force microscopy (AFM) procedures.

The platform is used to capture CTCs from the blood samples of prostate cancer patients, followed by AFM mechanical characterization of the CTCs at the nanoscale, in search of new metastatic mechano-biomarkers.

While the general lifecycle of CTCs is broadly understood, the lifespan and interactions of CTCs circulating in the bloodstream are still unknown. CTCs are very rare and hard to isolate against a background of billions of healthy blood cells, so their biological and mechanical phenotypes remain to be explored.

The new tool allows for the isolation and characterization of CTCs and thus holds the potential to assist in the early detection of cancer. It can also be used to more effectively track and monitor cancer progression and metastasis.

In the paper, titled "AFM-compatible Microfluidic Platform for Affinity-based Capture and Nanomechanical Characterization of Circulating Tumor Cells" and published in the journal Microsystems and Nanoengineering, the researchers present the microfluidic technology they developed to separate CTCs from blood samples for further analysis. The CTCs were isolated from other blood cells by exploiting differences in their affinity to different monoclonal antibodies. The tool is a combined microfluidic-AFM platform, in which AFM analyses can be performed to investigate the elasticity and adhesive properties of the captured CTCs.

CTCs represent a biomarker for the detection, diagnosis, and prognosis of cancer. Because CTCs spread through the bloodstream, they can be collected through a liquid biopsy procedure using their identified characteristics. Isolated CTCs can potentially be used for drug testing and molecular profiling for precision cancer therapies. Far less invasive than a traditional tissue biopsy, liquid biopsy procedures have the potential to replace tissue biopsies in the near future and to be performed in local clinics or pharmacies.

"We expect that this platform could constitute a potentially very powerful tool for cancer diagnosis and prognosis, by identifying CTCs mechanical and biological phenotypes at the single cell level," said Mohammad A. Qasaimeh.

"With slight customizations, the platform can also be adapted to other types of cancers including breast and lung," said the first author of the study and the Research Scientist of Engineering at NYUAD, Muhammedin Deliorman.

The authors hope that, using the new tool, the nanomechanical properties of captured CTCs could in the future help identify aggressive CTC phenotypes and so guide the development of more effective therapies.

Credit: 
New York University

3D genetic structure in blood cancer important beyond DNA code changes

Children with aggressive blood cancers have differences not only in the DNA code of their blood cells but also in the heavily twisted DNA-protein superstructure that controls access to genes.

Led by researchers at NYU Grossman School of Medicine, a new study showed that whether T cell acute lymphoblastic leukemia takes off or worsens depends on structural changes in the layout of the DNA-protein bundles called chromosomes. Upon receiving the right signal, this arrangement changes to expose the gene-reading machinery to only those bits of DNA code needed for the job at hand in each cell.

The new work builds on the discovery that DNA chains exist, not in vast tangles of chromosomes, but in organized "neighborhoods" called topologically associating domains, or TADs. Specifically, DNA snippets called enhancers are known to turn the action of genes up or down, but normally only those housed in their own TADs. Within TAD boundaries, DNA is free to fold back on itself in 3D loops, bringing together enhancers and other elements (e.g., promoter DNA) that must interact for a given stretch of code to be read.

The new study showed that key TAD boundaries are lost in this form of leukemia, enabling parts of DNA to interact with enhancers from the wrong neighborhoods, turning up the action of the wrong genes and encouraging cancer growth and spread. Researchers say their findings suggest that these 3D changes in chromosome structure are as important as changes in the order of molecular letters making up the DNA code itself (mutations), with both mechanisms encouraging cancer onset and progress.

"Our study is the first to show that the naturally 'looped' structure of genetic material in blood cells is changing in T cell leukemia," says study co-lead investigator Palaniraja Thandapani, PhD, a postdoctoral fellow at NYU Langone Health and its Perlmutter Cancer Center. "With this in mind, the most effective treatment for this type of leukemia may be a combination of a drug that targets the disease's cancer-causing, genetic mutations and another that counters any changes to chromosomal 3D structure."

In childhood leukemia, the most common code changes (mutations) or changes in activity occur in two genes, NOTCH1 and MYC, says study co-senior investigator Iannis Aifantis, PhD, the Hermann M. Biggs Professor and chair of the Department of Pathology at NYU Langone and Perlmutter.

Existing drug therapies designed to block NOTCH1 and MYC, he says, work well but are not foolproof. When testing them in blood cell samples from people undergoing therapy, the research team found that part of the explanation may reside in the failure of single-drug therapies to correct the epigenetic changes that come with the disease.

Experiments with one drug that successfully blocked NOTCH1 activity showed that it did not effectively block access to the exposed MYC neighborhood, which could explain, Aifantis says, why NOTCH1 inhibitors do not work for most patients.

However, a second experimental drug (targeting molecular, or epigenetic, changes in these DNA neighborhoods) effectively corrected DNA looping in the MYC neighborhood, restoring normal chromosomal structure and gene regulation, and dramatically decreasing MYC action and cancer spread.

The findings, published online March 23 in the journal Nature Genetics, were made possible by advanced genetic and imaging techniques developed in recent years. These include experimental methods such as RNA sequencing and Hi-C, which let researchers track genetic activity in cancer cells step by step and reveal the 3D architecture of chromosomes by comparing small fragments of genetic material to each other.
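As a toy illustration of the Hi-C idea (not the study's actual pipeline), those fragment comparisons can be thought of as counts in a symmetric "contact matrix": the more often two genomic fragments are found joined together, the closer they likely sit in 3D. The fragment IDs and pair counts below are invented for the example.

```python
def contact_matrix(pairs, n_fragments):
    """Build a symmetric contact-count matrix from fragment pairs."""
    m = [[0] * n_fragments for _ in range(n_fragments)]
    for i, j in pairs:
        m[i][j] += 1  # count each observed contact in both directions
        m[j][i] += 1
    return m

# Hypothetical ligation events between 4 genomic fragments:
pairs = [(0, 1), (0, 1), (2, 3), (1, 2)]
m = contact_matrix(pairs, 4)
print(m[0][1])  # fragments 0 and 1 contact most often -> 2
```

In real Hi-C analyses the matrix has many thousands of rows, and dense blocks along its diagonal are what reveal the TAD "neighborhoods" described above.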

For the new study, researchers compared the genetic material in blood samples from eight children between the ages of 1 and 16, including some with advanced-stage disease, to blood samples of healthy children.

Co-senior investigator Aristotelis Tsirigos, PhD, says the changes in DNA looping observed in these blood cells were "quite unique" to this severe form of leukemia and its related mutations. This suggests that looping alterations may be different in other cancers that are closely tied to different mutations.

Moving forward, Tsirigos says, the team has plans to describe the changes in chromosomal looping involved in other blood cancers, such as lymphoma, as well as for other subtypes of leukemia.

"Once these 3D genetic changes are fully described, we should be able to test existing and new drugs based on their ability to correct any malformations and better predict the chances for patient survival from cancer," says Tsirigos, an associate professor at NYU Langone and Perlmutter. Tsirigos also serves as director of NYU Langone's applied bioinformatics laboratories, where the computer analysis is performed

The American Cancer Society estimates that more than 1,500 Americans, mostly children, die each year from T cell acute lymphoblastic leukemia. This type of cancer accounts for roughly one-quarter of all leukemias.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Covid-19, a prominent role for UniTrento in ultrasound diagnosis

The article presents the results of the diagnostic protocol for Covid-19 that Libertario Demi developed with a dozen Italian clinical teams working in emergency settings.

«For the first time, the scientific validity of the technique we proposed is accepted. We hope our work can help tackle the pandemic», commented Paolo Giorgini, director of the Department of Information Engineering and Computer Science of the University of Trento, which hosts the laboratory.

Libertario Demi told us that he has been in contact with a colleague from the German society of ultrasound in medicine who asked permission to adopt the protocol and implement it in Germany, and explained that Policlinico Gemelli in Rome has already trained some medical staff to use these techniques: «We are available to train health care workers and to further develop algorithms that can help them manage the pandemic», affirmed Demi.

Meanwhile, a new wireless probe provided by ATL-Ecografi Wireless Milano was delivered to the University of Trento; the software needed to take a further step toward facilitating the diagnosis of Covid-19 will soon be installed and tested on it. It is a race against the clock: the effectiveness of these new instruments in containing the contagion and improving patients' outcomes also depends on how soon they are put to use in hospitals.

Ultrasound imaging (ultrasonography) examines specific patterns to diagnose patients, determine the seriousness of their condition and hence choose the most appropriate treatment. Ultrasound waves, in other words, are used to 'take a picture' of the lungs and reveal any alterations.

Credit: 
Università di Trento