Autism rate rises 43 percent in New Jersey, Rutgers study finds

Image: Walter Zahorodny, an associate professor of pediatrics at Rutgers New Jersey Medical School who directed the New Jersey portion of the study, called the results "consistent, broad and startling."

Photo: Nick Romanenko / Rutgers University

A new report by the Centers for Disease Control and Prevention, which uses research by Rutgers University, shows a significant increase in the percentage of 4-year-old children with autism spectrum disorder in New Jersey.

The study found that the state's rate increased 43 percent from 2010 to 2014.

The report, released April 11, found that about one in 59 children has autism. New Jersey's rate was the highest of the states studied: one in 35. That puts the national rate of autism at 1.7 percent of the childhood population and New Jersey's autism rate at 3 percent.
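As a quick sanity check, the "one in N" figures convert to the percentages quoted above; a minimal sketch (the helper function is purely illustrative):

```python
# Convert a "one in N" prevalence rate to a percentage of the population.

def one_in_n_to_percent(n):
    """Return a 'one in N' rate as a percentage, rounded to one decimal."""
    return round(100.0 / n, 1)

national = one_in_n_to_percent(59)    # about 1.7 percent of children
new_jersey = one_in_n_to_percent(35)  # about 2.9 percent, reported as ~3 percent

print(national, new_jersey)
```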

New Jersey is known for excellent clinical and educational services for autism spectrum disorder, so the state's higher rates are likely due to more accurate or complete reporting based on education and health care records, the researchers said. Similar studies were conducted in Arizona, Colorado, Missouri, North Carolina, Utah and Wisconsin.

Walter Zahorodny, an associate professor of pediatrics at Rutgers New Jersey Medical School who directed the New Jersey portion of the study, called the results "consistent, broad and startling." The analysis of this young group of children shows U.S. autism rates are continuing to rise without plateauing.

"It's very likely that the next time we survey autism among children, the rate will be even higher," he said.

The researchers analyzed information from the health and special education records of 129,354 children who were 4 years old between 2010 and 2014 and 128,655 children who were 8 years old in that period. They used the guidelines for autism spectrum disorder in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders-IV for their primary findings.

Across the network, the researchers found the prevalence of autism spectrum disorders ranged from a low of 8 per 1,000 children in Missouri to a high of 28 per 1,000 children in New Jersey. The average was 13 per 1,000 children. The disorder is about two times more common among boys than girls and white children are more often diagnosed than black or Hispanic children.

Although the estimates are not representative of the country as a whole, they are considered the benchmarks of autism spectrum disorder prevalence, Zahorodny said.

The age at which children received their first evaluation ranged from 28 months in North Carolina to 39 months in Wisconsin. The researchers discovered that children with an intellectual disability or other condition were more likely to be evaluated before age 4, which gives them an advantage in starting treatment earlier.

"Children who are evaluated for autism early - around their second birthday - often respond better to treatment than those who are diagnosed later," Zahorodny said. "However, it appears that only the most seriously affected children are being evaluated at the crucial time, which can delay access to treatment and special services."

The average age of diagnosis - 53 months - has not changed in 15 years.

"Despite our greater awareness, we are not effective yet in early detection," he said. "Our goal should be systematic, universal screening that pediatricians and other health providers provide at regular visits starting at 18 months to identify autism as soon as possible."

The researchers can't explain why autism rates have increased across the United States. Factors associated with a higher risk include advanced parental age (children of parents over age 30 have heightened risk), maternal illness during pregnancy, genetic mutations, birth before 37 weeks gestation and multiple births.

"These are true influences exerting an effect, but they are not enough to explain the high rate of autism prevalence," said Zahorodny. "There are still undefined environmental risks that contribute to this significant increase, factors that could affect a child in its development in utero or related to birth complications or to the newborn period. We need more research into non-genetic triggers for autism."

Credit: 
Rutgers University

More Michigan students taking, passing advanced math

Michigan high school students are going above and beyond the required math curriculum, likely an effect of the state's graduation requirements, finds new research from Michigan State University.

The Michigan Merit Curriculum, which went into effect with the class of 2011 and requires students to take four years of math, at least up to algebra 2, also seems to be influencing more students to enroll in college.

"Our research indicates that the policy is working in terms of providing more opportunities to the most disadvantaged students," said Soobin Kim, author of the study and a researcher in the MSU College of Education. "It has been successful in equalizing access to algebra 2, which is a well-established predictor for postsecondary readiness."

The researchers, including MSU faculty members Barbara Schneider and Ken Frank, found students from low-income schools were particularly affected by the curriculum, completing on average a full semester more math.

Although 28 states now have similar requirements for math course-taking in high school, the MSU researchers' work is among the first to explore whether the policies improve course-taking patterns and college enrollment outcomes.

Like other states that have passed similar policies, Michigan's set of course-taking expectations is intended to make learning opportunities more equitable and prepare young people for success in college and the workforce.

The study, published in Educational Evaluation and Policy Analysis in March, also found Michigan students are now more likely to enroll in four-year colleges.

And those effects were greatest for students who were already better prepared based on their eighth-grade test scores.

The researchers used transcripts from a representative sample of 129 Michigan high schools to analyze patterns of course-taking for more than 300,000 students over 10 years -- allowing for comparison before and after the policy change. They also matched their data to college enrollment information for each student from the National Student Clearinghouse.

Among subject areas, the research team focused on math because students tend to follow a standard sequence of courses with less variation from school to school.

Previous research found the Michigan Merit Curriculum had little to no impact on students' ACT scores or graduation rates.

Both studies were conducted by the Michigan Consortium for Education Research, a research partnership between MSU, University of Michigan and the Michigan Department of Education, which has received $6 million in grant funding from the U.S. Department of Education.

The massive dataset, created through the partnership, will continue to generate new insights.

Kim said more information is needed about how higher expectations lead to changes for students, and that requires looking in classrooms -- at factors such as academic preparation prior to high school, class size and characteristics of classmates and teachers.

"This is just the beginning," he said. "One policy will not change outcomes for all students in the same way. It takes time to explore each factor but it's our job to study them with diligence."

Credit: 
Michigan State University

Keeping the taste, reducing the salt

Image: While humans need salt, Americans consume significantly more than is necessary or even healthy, much of it in snacks like potato chips.

Photo: Public Domain (https://en.wikipedia.org/wiki/Potato_chip#/media/File:Potato-Chips.jpg)

PULLMAN, Wash. - Washington State University researchers have found a way to make food taste salty but with less of the sodium chloride tied to poor health.

"It's a stealth approach, not like buying the 'reduced salt' option, which people generally don't like," said Carolyn Ross, a Food Science professor at WSU. "If we can stair-step people down, then we increase health while still making food that people want to eat."

In a paper published in the Journal of Food Science, Ross and colleagues looked at salt blends that use less sodium chloride and include other salts like calcium chloride and potassium chloride.

Neither of those salts has adverse health effects on people, Ross said. Potassium can actually help reduce blood pressure. Unfortunately, they aren't very tasty.

"Potassium chloride, especially, tastes really bitter and people really don't like it," Ross said.

The researchers used tasting panels and WSU's electronic tongue to determine just how much of the replacement salts could be substituted for standard sodium chloride before people found the food unacceptable to eat.

Some tasting panels tested a variety of salt solutions, or salt in water, while others tested different salt combinations in tomato soup.

Using the e-tongue and panels, they found that a blend using approximately 96.4 percent sodium chloride with 1.6 percent potassium chloride and 2 percent calcium chloride was the ideal reduction.

They had a higher reduction when they added only calcium chloride, getting acceptable rates with a combination of 78 percent sodium chloride and 22 percent calcium chloride.

"This combination of the two salts did not significantly differ compared to 100 percent sodium chloride," Ross said. "But when we added potassium chloride, consumer acceptance decreased."
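The proportions of the two blends reported above can be tallied in a short sketch (the data structure is illustrative; the percentages are those reported in the study):

```python
# Salt blends from the study, as percentages of the total blend.
# The sodium chloride reduction is simply what the other salts replace.

blends = {
    "NaCl only":          {"NaCl": 100.0},
    "NaCl + KCl + CaCl2": {"NaCl": 96.4, "KCl": 1.6, "CaCl2": 2.0},
    "NaCl + CaCl2":       {"NaCl": 78.0, "CaCl2": 22.0},
}

for name, parts in blends.items():
    assert abs(sum(parts.values()) - 100.0) < 1e-9  # proportions sum to 100%
    reduction = 100.0 - parts["NaCl"]
    print(f"{name}: {reduction:.1f}% less sodium chloride")
```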

While humans need salt, Americans consume significantly more than is necessary or even healthy. According to the U.S. Office of Disease Prevention and Health Promotion, the recommended maximum amount of salt consumed per day is less than 2,300 mg. The average American adult female consumes 2,980 mg per day, while males average over 4,000 mg per day.

Recent findings have suggested that gradual reductions in salt over a period of years is the best way to reduce salt consumption. Using one of the new blends for a specified time frame could lead to greater reductions down the road.

Credit: 
Washington State University

New non-antibiotic strategy for the treatment of bacterial meningitis

Image: Seen here are the white blood cells (neutrophils) in the spinal fluid during bacterial meningitis. The nuclear material, DNA, is stained blue, and proteins in neutrophils are stained red. The large aggregates, represented by overlapping areas of blue (DNA) and red (protein), portray the extracellular DNA-protein networks called neutrophil extracellular traps (NETs). NETs are released as a defence response against invading bacteria in bacterial meningitis.

Photo: Tirthankar Mohanty

With the increasing threat of antibiotic resistance, there is a growing need for new treatment strategies against life threatening bacterial infections. Researchers at Lund University in Sweden and the University of Copenhagen may have identified such an alternative treatment for bacterial meningitis, a serious infection that can lead to sepsis. The study is published in Nature Communications.

Our immune system has several important defenders to call on when an infection affects the central nervous system. The researchers have mapped what happens when one of them, the white blood cells called neutrophils, intervene in bacterial meningitis.

If there is an infection, the neutrophils deploy to the infected area in order to capture and neutralise the bacteria. It is a tough battle and the neutrophils usually die, but if the bacteria are difficult to eliminate, the neutrophils resort to other tactics.

"It is as though in pure frustration they turn themselves inside out in a desperate attempt to capture the bacteria they have not been able to overcome. Using this approach, they capture a number of bacteria at once in net-like structures, or neutrophil extracellular traps (NETs). This works very well in many places in the body where the NETs containing the captured bacteria can be transported in the blood and then neutralised in the liver or spleen, for example. However, in the case of bacterial meningitis these NETs get caught in the cerebrospinal space, and the cleaning station there is not very effective", explains Adam Linder, associate professor at Lund University and specialist in Infectious Diseases at Skåne University Hospital.

Using advanced microscopy, the researchers observed that the cerebrospinal fluid of patients with bacterial meningitis was cloudy and full of lumps, which proved to be NETs. Among patients with viral meningitis, by contrast, the cerebrospinal fluid was free from NETs. When captured bacteria get caught in the cerebrospinal fluid, this adversely affects the immune system's work of clearing away bacteria and also impedes standard antibiotics from reaching the bacteria, says Adam Linder.

Would it be possible to cut up the nets so that the bacteria are exposed to the body's immune system, as well as to antibiotics, making it easier to combat the infection? As the NETs consist mainly of DNA, the researchers investigated what would happen if you brought in drugs used for cutting up DNA, so-called DNase.

"We gave DNase to rats infected with pneumococcus bacteria, which caused bacterial meningitis, and could show that the NETs dissolved and the bacteria disappeared. It seems that when we cut up the NETs, the bacteria are exposed to the immune system, which finds it easier to combat the bacteria single-handed. We were able to facilitate a significant reduction in the number of bacteria without antibiotic intervention," says Tirthankar Mohanty, one of the researchers behind the study.

Before antibiotics, the mortality rate for bacterial meningitis was around 80 per cent. With the advent of antibiotics, the mortality rate quickly fell to around 30 per cent.

In the 1950s, Professor Tillett at the Rockefeller University in the USA found lumps in the cerebrospinal fluid of patients with bacterial meningitis. Professor Tillett discovered that these lumps could be dissolved using DNase. This was effective in combination with antibiotics and reduced the mortality rate for meningitis from around 30 per cent to about 20 per cent. However, the treatment had drawbacks: the DNase was extracted from animals and could therefore trigger allergic reactions.

"At that time, everyone was so happy about the antibiotics, they reduced mortality for the infections and it was thought that we had won the war against bacteria. I believe we need to go back and take up a part of the research that took place around the time of the breakthrough for antibiotics. We can perhaps learn from some of the discoveries that were then flushed down the drain", says Adam Linder.

"The development of resistance in bacteria is accelerating and we need alternatives to antibiotics. The drug we use in the studies is a therapeutic biological product derived from humans and has already been approved for human use. They are not expensive and have also been tested against many different bacteria and infections. Bacterial meningitis is a major challenge in many parts of the world. In India, for example, it is a major cause of death among children, so there would be significant benefits there from using such a treatment strategy", says Tirthankar Mohanty.

The researchers want to go on to set up a major international clinical study and use DNase in the treatment of patients with bacterial meningitis.

Credit: 
Lund University

CSI meets conservation

Image: A wild tiger in India.

Photo: Prasenjeet Yadav

The key to solving a mystery is finding the right clues. Wildlife detectives aiming to protect endangered species have long been hobbled by the near impossibility of collecting DNA samples from rare and elusive animals. Now, researchers at Stanford and the National Centre for Biological Sciences at India's Tata Institute of Fundamental Research have developed a method for extracting genetic clues quickly and cheaply from degraded and left-behind materials, such as feces, skin or saliva, and from food products suspected of containing endangered animals.

Their proof of concept - outlined April 10 in Methods in Ecology and Evolution - could revolutionize conservation approaches and policies worldwide, the researchers said.

"It's CSI meets conservation biology," said co-author Dmitri Petrov, the Michelle and Kevin Douglas Professor in the School of Humanities and Sciences.

The specter of extinction hangs over more than a quarter of all animal species, according to the best estimate of the International Union for Conservation of Nature, which maintains a list of threatened and extinct species. Conservationists have documented extreme declines in animal populations in every region of Earth.

Clues from DNA

Helping species recover often depends on collecting DNA samples, which can reveal valuable information about details ranging from inbreeding and population history to natural selection and large-scale threats such as habitat destruction and illegal wildlife trade. However, current approaches tend to require relatively large amounts of DNA, or expensive and often inefficient strategies for extracting the material. Getting meaningful information rapidly from lower-concentration, often degraded and contaminated DNA samples requires expensive and specialized equipment.

A solution may lie in an ongoing collaboration between Stanford's Program for Conservation Genomics, including the labs of Petrov and co-authors Elizabeth Hadly and Stephen Palumbi, with India's National Centre for Biological Sciences, including the lab of co-author Uma Ramakrishnan, a molecular ecologist and former Fulbright faculty fellow at Stanford.

"I have been working on tiger conservation genetics for over a decade, but have been frustrated at how slow and unreliable the process of generating genetic data can be," Ramakrishnan said. "Conservation needs answers fast, and our research was not providing them fast enough."

The researchers looked at endangered wild tigers in India and overfished Caribbean queen conchs, examining tiger feces, shed hair and saliva found on killed prey, as well as fried conch fritters purchased in U.S. restaurants. All of the samples were too impure, mixed or degraded for conventional genetic analysis.

"Our goal was to find extremely different species that had strong conservation needs, and show how this approach could be used generally," said Palumbi, the Jane and Marshall Steele Jr. Professor of Marine Biology. "The King of the Forest - tigers - and Queen of the Caribbean - conch - were ideal targets."

Inexpensive and effective

Together, the team improvised a new approach, using a sequencing method that amplifies and reads small bits of DNA with unique differences in each sample. By doing this simultaneously across many stretches of DNA in the same test tubes, the researchers kept the total amount of DNA needed to a minimum. Making the procedure specific to tiger and conch DNA allowed for the use of samples contaminated with bacteria or DNA from other species.

The technology proved highly effective at identifying and comparing genetic characteristics. For example, the method worked with an amount of tiger DNA equivalent to about one-one-hundred-thousandth the amount of DNA in a typical blood sample. The method had a higher failure rate in conchs because the researchers did not have whole genomes at their disposal.

The approach's effectiveness, speed and affordability - implementation costs could be as low as $5 per sample, according to the researchers - represents a critical advance for wildlife monitoring and forensics, field-ready testing, and the use of science in policy decisions and wildlife trade.

"It is easy to implement and so can be done in labs with access to more or less basic equipment," said co-author Meghana Natesh of the National Centre for Biological Sciences and Sastra University in India. "If a standard procedure is followed, the data generated should be easy to share and compare across labs. So monitoring populations across states or even countries should be easier."

The scientists have made their methods freely available.

Credit: 
Stanford University

Autoimmune diseases of the liver may be triggered by exposure to an environmental factor

11 April 2019, Vienna, Austria: Investigators from a large population-based study conducted in northern England have suggested that exposure to a persistent, low-level environmental trigger may have played a role in the development of autoimmune diseases of the liver within that population. The study, which was discussed today at The International Liver Congress™ 2019 in Vienna, Austria, found a significant clustering of cases of primary biliary cholangitis (PBC), autoimmune hepatitis (AIH), and primary sclerosing cholangitis (PSC) in well-defined regions of north-east England and North Cumbria, suggesting an environmental agent (or agents) may have been involved.

The autoimmune liver diseases, PBC, AIH, and PSC, are relatively rare diseases that are associated with significant morbidity and mortality. These conditions affect people of all ages and are chronic, life-long conditions. The underlying cause of these autoimmune liver diseases is not fully defined, although an interaction between a genetic predisposition to autoimmunity and environmental factors has been proposed.

Disease clustering, whereby an abnormally large number of cases have been found in a well-defined geographical region, has been reported previously for PBC in two areas of Northern England and New York. However, according to the investigators in today's study, equivalent studies have not been performed in AIH or PSC.

The study reported today was conducted by a team of researchers from Newcastle in Northern England, supported by the National Institute for Health Research Newcastle Biomedical Research Centre. The team identified a large cohort of individuals from North-East England and North Cumbria who had PBC (n=2,150), AIH (n=963), or PSC (n=472). Spatial point analyses were used to investigate disease clustering using postal addresses, and, for those with a known year of diagnosis, spatio-temporal analyses were undertaken.

Areas with a higher than expected number of patients were found for each of the three conditions at a cluster distance of approximately 1-2 km, with additional clusters at approximately 10 km for AIH and PSC and at approximately 7.5 km for PBC. There was no evidence that more patients were diagnosed within any particular timeframe, which suggests that an infection is unlikely to be associated with the development of these diseases.

"This study suggests that exposure to a persistent, low-level environmental agent may have played a role in the pathogenesis of all three autoimmune liver diseases studied, not just PBC," said Dr Jessica Dyson, Associate Clinical Lecturer at Newcastle University and Consultant Hepatologist at Newcastle upon Tyne Hospitals NHS Foundation Trust in the UK.

"The varying distances of peak clustering raises the possibility that different environmental factors contribute to PBC, AIH, and PSC. In previous PBC clustering studies, water reservoirs, industrial or coal mining factors, or waste disposal site toxins have been implicated," noted Dr Dyson. "Further work is ongoing to try to identify factors that may potentially be associated with the clustering observed in our study."

"This study is very important, since autoimmune diseases of the liver are infrequent but have an increasing incidence overall," said Professor Marco Marzioni from the Università Politecnica delle Marche, Ancona, Italy, and an EASL Governing Board Member. "However, their triggers are as yet unknown. Environmental factors have been considered, but no solid data have emerged so far. The study presented today has sufficient scientific rigour to reinforce the idea that environmental exposure may play a major role in triggering autoimmune diseases of the liver."

Credit: 
Spink Health

Genome analysis shows the combined effect of many genes on cognitive traits

Individual differences in cognitive abilities in children and adolescents are partly reflected in variations in their DNA sequence, according to a study published in Molecular Psychiatry. These tiny differences in the human genome can be combined to create so-called polygenic scores: the sum of the trait-associated genetic variants an individual carries, reflecting that individual's genetic predisposition to a particular trait. The traits examined in the study included educational achievement (how well pupils do in English, maths, and science), the number of years of education completed, and IQ at age 16.
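A polygenic score of this kind is, in essence, a weighted sum over the variants a person carries. A minimal sketch, with invented variant names, effect sizes, and genotypes:

```python
# Toy polygenic score: for each variant, multiply the number of effect
# alleles the individual carries (0, 1 or 2) by that variant's estimated
# effect size, then sum. All values below are invented for illustration.

effect_sizes = {"rs_a": 0.02, "rs_b": -0.01, "rs_c": 0.05}  # hypothetical weights
genotype     = {"rs_a": 2,    "rs_b": 1,     "rs_c": 0}     # effect-allele counts

score = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)
print(round(score, 3))  # 0.03
```

In practice the weights come from large genome-wide association studies and the sum runs over thousands of variants, but the arithmetic is the same.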

Researchers at King's College London, UK analysed genetic information from 7,026 UK children at ages 12 and 16 included in the Twins Early Development Study, a longitudinal study of twins born in England and Wales between 1994 and 1996. Intelligence and educational achievement at ages 12 and 16, and their associated genetic variants, were analysed. Intelligence was assessed via verbal and non-verbal web-based IQ tests. Educational achievement was assessed by how well pupils did in English, maths, and science, which are compulsory in the UK.

The researchers showed that polygenic scores, which reflect the combined effect of multiple genetic variants, may predict up to 11% of the difference in intelligence and 16% of the difference in educational achievement between individuals.

Andrea Allegrini, the corresponding author, said: "The effects of single variants on a given trait are often extremely small, and difficult to capture accurately. However, most behavioural traits share a substantial proportion of genetic variation; that is, a proportion of genetic variants affect multiple traits at the same time. The degree to which shared genetic influences account for similarities between traits is known as genetic correlation.

"Multivariate (so-called multi-trait) genomic approaches make use of genetic correlations between traits to more accurately estimate the effect of genetic variants on a given trait. These can be used to increase the predictive power of polygenic scores. We compared several novel, state of the art, multi-trait genomic methods to maximise polygenic score prediction."

The authors found that when analysing genetic variants associated with intelligence, they were able to predict 5.3% of the difference in intelligence between individuals at age 12 and 6.7% of the difference at age 16. For educational achievement, analysing genetic variants associated with educational attainment (years of schooling), they predicted a maximum of 6.6% of the difference at age 12 and 14.8% at age 16. The authors also showed that analysing variants associated with educational attainment allowed them to predict 7.2% of the variance in intelligence at age 12 and 9.9% at age 16, because of the genetic correlation between the two traits.

When taking a multivariate/multi-trait approach, and adding three other, genetically correlated traits and their associated genes to the analysis, prediction accuracy improved to 10% of the difference in intelligence at age 16 and 15.9% of the difference in educational achievement. The authors also tested three different genomic methods to show that their predictive accuracy was similar.

Andrea Allegrini said: "Our findings indicate that there are no notable differences between the multi-trait prediction methods we tested. Even though these methods employ different mathematical models, they arrive at similar conclusions. This is extremely encouraging as it indicates that our estimates are robust, in that they are generally stable across methods tested."

He added: "However, it is also important to understand that these are average differences, which means that many people with a low genetic predisposition to educational attainment can still do very well in school, and vice versa. As such, these scores are probabilistic; they do not show that education or intelligence are determined by a person's genes."

Credit: 
Springer

Stress-related disorders linked to heightened risk of cardiovascular disease

Stress related disorders--conditions triggered by a significant life event or trauma--may be linked to a heightened risk of cardiovascular disease (CVD), finds a large Swedish study published in The BMJ today.

The risk of severe and acute CVD events, such as cardiac arrest and heart attack, was particularly high in the first six months after diagnosis of a stress related disorder, and within the first year for other types of CVD.

Most people are, at some point during their life, exposed to psychological trauma or stressful life events such as the death of a loved one, a diagnosis of a life threatening illness, natural disasters, or violence, write the authors.

And there is building evidence which suggests that severe stress reactions to significant life events or trauma are linked to the development of CVD.

But previous studies have mainly focused on male veterans or those currently active in the military with posttraumatic stress disorder (PTSD), or PTSD symptoms. And because of the smaller size of these samples, data on the effects of stress reactions on different types of CVD are limited.

So to shed some light on this, researchers used Swedish population and health registers to explore the role of clinically diagnosed PTSD, acute stress reaction, adjustment disorder, and other stress reactions in the development of CVD.

They controlled for family background, medical history, and underlying psychiatric conditions.

The researchers matched 136,637 people from an "exposed cohort" who were diagnosed with a stress related disorder between January 1987 and December 2013 with 171,314 full siblings who were free of stress related disorders and CVD.

For each exposed person, 10 people from the general population who were unaffected by stress related disorders and CVD at the date of diagnosis of the "exposed" patient were randomly selected.

Exposed and unexposed people were then individually matched by birth year and sex.
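The 1:10 matching design described above can be sketched roughly as follows; the population, its attributes, and the use of `random.sample` are illustrative assumptions, not the authors' procedure:

```python
import random

# Toy sketch of 1:10 matched-control selection: for each exposed person,
# draw 10 unexposed controls sharing that person's birth year and sex.
# All data here are invented for illustration.

population = [
    {"id": i, "birth_year": 1960 + i % 30, "sex": i % 2, "exposed": False}
    for i in range(1000)
]
exposed = [{"id": "e1", "birth_year": 1975, "sex": 1}]

random.seed(0)  # deterministic for the example
matched = {}
for person in exposed:
    pool = [p for p in population
            if not p["exposed"]
            and p["birth_year"] == person["birth_year"]
            and p["sex"] == person["sex"]]
    matched[person["id"]] = random.sample(pool, 10)

print(len(matched["e1"]))  # 10 matched controls
```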

Severe stress reactions to significant life events or trauma were linked to a heightened risk of several types of CVD, especially during the first year after diagnosis, with a 64% higher risk among people with a stress related disorder compared to their unaffected sibling.

The findings were similar for people with a stress related disorder compared to the general population.

And there was a stronger link between stress related disorders and early onset CVD - cases of disease which developed before the age of 50 - than later onset ones.

Out of all studied CVDs, the excess risk during the first year was strongest for heart failure, and for major blood clots (embolism and thrombosis) after one year.

There were similar associations across sex, calendar period, medical history, and family history of CVD. But those who were diagnosed with a stress disorder at a younger age had a heightened risk of CVD.

This is an observational study based on the Swedish population and, as such, can't establish cause. The authors point out evidence from other studies suggesting a biological link between severe stress reactions and cardiovascular disease development. And they can't rule out the role of other unmeasured behavioural factors, such as smoking and alcohol intake.

But they say that their study is the first to explore the association between a number of stress related disorders, including but not limited to PTSD, and several types of CVD using sibling-based comparisons, among both men and women.

And doctors need to be aware of the "robust" link between stress related disorders and a higher subsequent risk of cardiovascular disease, particularly during the months after diagnosis, they add.

"These findings call for enhanced clinical awareness and, if verified, monitoring or early intervention among patients with recently diagnosed stress related disorders," they conclude.

In a linked editorial, Professor Simon Bacon from Concordia University in Canada, says that the design of the study "allows us to make reasonable assumptions about the similarity of the environment, lifestyles, and health behaviours between those with a disorder and their paired siblings without one. Such assumptions allow inferences about other alternative potential pathways linking these disorders to CVD outcomes."

"In the future, well designed studies evaluating more appropriate interventions will be critical, not only to confirm the inferences of the new study but also to provide real benefits to patients," he concludes.


Credit: 
BMJ Group

Millions of children worldwide develop asthma annually due to traffic-related pollution

video: Susan C. Anenberg and Pattanun Achakulwisut discuss the findings of their paper, "Global, national, and urban burdens of paediatric asthma incidence attributable to ambient NO2 pollution: estimates from global datasets," published on April 10 in The Lancet Planetary Health.

Image: 
Matthew Golden, GW Milken Institute School of Public Health

WASHINGTON, D.C. (April 10, 2019) - About 4 million children worldwide develop asthma each year because of inhaling nitrogen dioxide air pollution, according to a study published today by researchers at the George Washington University Milken Institute School of Public Health (Milken Institute SPH). The study, based on data from 2010 to 2015, estimates that 64 percent of these new cases of asthma occur in urban areas.

The study is the first to quantify the worldwide burden of new pediatric asthma cases linked to traffic-related nitrogen dioxide by using a method that takes into account high exposures to this pollutant that occur near busy roads, said Susan C. Anenberg, PhD, the senior author of the study and an associate professor of environmental and occupational health at Milken Institute SPH.

"Our findings suggest that millions of new cases of pediatric asthma could be prevented in cities around the world by reducing air pollution," said Anenberg. "Improving access to cleaner forms of transportation, like electrified public transport and active commuting by cycling and walking, would not only bring down NO2 levels, but would also reduce asthma, enhance physical fitness, and cut greenhouse gas emissions."

The researchers linked global datasets of NO2 concentrations, pediatric population distributions, and asthma incidence rates with epidemiological evidence relating traffic-derived NO2 pollution with asthma development in kids. They were then able to estimate the number of new pediatric asthma cases attributable to NO2 pollution in 194 countries and 125 major cities worldwide.
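The attribution step described here, combining NO2 concentrations, the child population, and baseline asthma incidence with an epidemiological concentration-response function, is commonly done via a population attributable fraction. The sketch below shows the general shape of such a calculation; the relative-risk coefficient and counterfactual concentration are placeholders, not the values used in the paper.

```python
import math

def attributable_cases(no2_ppb, children, incidence_per_100k, beta, low_conc=2.0):
    """Estimate new asthma cases in one area attributable to NO2.

    beta: log relative risk per ppb of NO2 from an epidemiological
          concentration-response function (placeholder, not the study's value).
    low_conc: counterfactual "clean air" concentration in ppb (also a placeholder).
    """
    delta = max(no2_ppb - low_conc, 0.0)
    rr = math.exp(beta * delta)            # relative risk at this exposure level
    paf = (rr - 1.0) / rr                  # population attributable fraction
    baseline_cases = children * incidence_per_100k / 1e5
    return baseline_cases * paf
```

Summing this quantity over grid cells or cities yields national and urban burdens of the kind reported in the study.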

Key findings from the study published in The Lancet Planetary Health:

An estimated 4 million children developed asthma each year from 2010 to 2015 due to exposure to NO2 pollution, which comes primarily from motor vehicle exhaust.

An estimated 13 percent of annual pediatric asthma incidence worldwide was linked to NO2 pollution.

Among the 125 cities, NO2 accounted for 6 percent (Orlu, Nigeria) to 48 percent (Shanghai, China) of pediatric asthma incidence. NO2's contribution exceeded 20 percent in 92 cities located in both developed and emerging economies.

The 10 highest NO2 contributions were estimated for eight cities in China (37 to 48 percent of pediatric asthma incidence) and for Moscow, Russia, and Seoul, South Korea (40 percent).

The problem affects cities in the United States as well: Los Angeles, New York, Chicago, Las Vegas and Milwaukee were the top five cities in the U.S. with the highest percentage of pediatric asthma cases linked to polluted air.

Nationally, the largest burdens related to air pollution were found in China at 760,000 cases of asthma per year, followed by India at 350,000 and the United States at 240,000.

Asthma is a chronic disease that makes it hard to breathe and results when the lung's airways are inflamed. An estimated 235 million people worldwide currently have asthma, which can cause wheezing as well as life-threatening attacks.

The World Health Organization calls air pollution "a major environmental risk to health" and has established Air Quality Guidelines for NO2 and other air pollutants. The researchers estimate that most children lived in areas below the current WHO guideline of 21 parts per billion for annual average NO2. They also found that about 92 percent of the new pediatric asthma cases that were attributable to NO2 occurred in areas that already meet the WHO guideline.

"That finding suggests that the WHO guideline for NO2 may need to be re-evaluated to make sure it is sufficiently protective of children's health," said Pattanun Achakulwisut, PhD, lead author of the paper and a postdoctoral scientist at Milken Institute SPH.

The researchers found that in general, cities with high NO2 concentrations also had high levels of greenhouse gas emissions. Many of the solutions aimed at cleaning up the air would not only prevent new cases of asthma and other serious health problems but they would also attenuate global warming, Anenberg said.

Additional research must be done to more conclusively identify the causative agent within complex traffic emissions, said the researchers. This effort, along with more air pollution monitoring and epidemiological studies conducted in data-limited countries will help to refine the estimates of new asthma cases tied to traffic emissions, Anenberg and Achakulwisut added.

Credit: 
George Washington University

Obeticholic acid improves liver fibrosis and other histological features of NASH

11 April 2019, Vienna, Austria: A prespecified interim analysis of the ongoing Phase 3 REGENERATE study has confirmed that obeticholic acid (OCA) is effective in the treatment of nonalcoholic steatohepatitis (NASH) with liver fibrosis. The 18-month analysis, which was reported today at The International Liver Congress™ 2019 in Vienna, Austria, demonstrated that the 25 mg dose of OCA studied improved fibrosis in almost one-quarter of recipients, with significant improvements also reported in other histological markers of NASH.

Nonalcoholic steatohepatitis is a severe form of nonalcoholic fatty liver disease (NAFLD) and is characterized by the presence of steatosis, hepatocellular ballooning, and lobular inflammation. The condition is associated with rapid progression of fibrosis, which can eventually lead to the development of cirrhosis and hepatocellular carcinoma. The global prevalence of NASH has been estimated to range from 1.50% to 6.45%, with almost 60% of individuals with NAFLD who undergo biopsy found to have NASH. There are currently no medications approved in Europe or the USA specifically for the treatment of NASH.

Obeticholic acid is a potent activator of the farnesoid X nuclear receptor that was shown to improve liver histology and fibrosis in a Phase 2 clinical trial (FLINT) published in 2015. The Phase 3 trial reported today is the first study in NASH to be designed in conjunction with regulatory authorities, with the aim of achieving approval for OCA in NASH with fibrosis.

In the analysis reported today, 931 individuals with biopsy-confirmed NASH and significant or severe fibrosis (stages F2 or F3) were randomized to receive OCA 10 mg/day (n=312), OCA 25 mg/day (n=308), or placebo (n=311). The primary endpoints of the study were either fibrosis improvement (greater than or equal to 1 stage) with no worsening of NASH, or NASH resolution with no worsening of liver fibrosis on liver biopsy. The most pronounced benefits were observed in the OCA 25 mg treatment group. Once-daily OCA 25 mg met the primary endpoint of fibrosis improvement (greater than or equal to 1 stage) with no worsening of NASH in 23.1% of patients (p=0.0002 vs placebo). Although the NASH resolution primary endpoint was not met, 35.1% of patients receiving OCA 25 mg showed improvements in hepatocellular ballooning (p=0.0011 vs placebo), and 44.2% showed improvements in lobular inflammation (p=0.0322 vs placebo). Dose-dependent reductions in liver enzymes were also observed.

Pruritus, the most commonly reported adverse event (AE), affected 51% of the OCA 25 mg/day treatment group, 28% of the OCA 10 mg/day treatment group, and 19% of the placebo group. More participants withdrew from the study due to pruritus in the OCA 25 mg/day group (9%) than in the OCA 10 mg/day group.

'There is an urgent need for effective treatment regimens for NASH, a common liver disease which can lead to cirrhosis, liver failure and need for transplant,' said Dr Zobair Younossi, Professor and Chairman of the Department of Medicine at Inova Fairfax Medical Campus in Falls Church, Virginia, USA, who presented the study results. 'These first results from the REGENERATE study give us hope that a new targeted approach to NASH treatment may soon become available and potentially reverse some of the liver damage associated with this important liver disease.'

Professor Philip Newsome, Vice-Secretary of EASL, said: 'These data are very exciting as they demonstrate for the first time in a phase 3 trial that medical therapy, in this case obeticholic acid, is able to improve liver fibrosis compared to placebo - a key treatment goal in NASH.'

Credit: 
Spink Health

High rates of liver disease progression and mortality observed in patients with NAFLD/NASH

11 April 2019, Vienna, Austria: Two independent national studies have reported high rates of liver disease progression and mortality among patients with non-alcoholic fatty liver disease/non-alcoholic steatohepatitis (NAFLD/NASH). The studies reported today at The International Liver Congress™ 2019 in Vienna, Austria, found that within 10 years of diagnosis, up to 11% of patients with NAFLD/NASH had progressed to advanced liver diseases (defined as NAFLD/NASH patients with compensated cirrhosis [CC], decompensated cirrhosis [DCC], liver transplant [LT] or hepatocellular carcinoma [HCC]), and up to 27% of patients with NAFLD/NASH and CC had developed liver decompensation.

Fatty liver is a complex condition that affects up to one-quarter of adults worldwide. The condition is considered to be the liver manifestation of metabolic syndrome and encompasses a histological spectrum from the relatively benign non-alcoholic fatty liver to NASH, which typically has an aggressive course. NAFLD/NASH can lead to cirrhosis or HCC, and is set to become the predominant cause of liver disease in many parts of the world; however, their natural history remains incompletely defined.

In the first study, 215,655 NAFLD/NASH patients were identified retrospectively from a German insurance claims database (InGef; 2011-2016), with 100,644 new events of different liver severity stages identified during follow-up: 79,245 events (78.7%) of non-progressive NAFLD/NASH, 411 events (0.4%) of CC, 20,614 events (20.5%) of DCC, 11 events (0.01%) of LT, and 363 events (0.4%) of HCC. Amongst those with advanced liver diseases, the mortality rate during 1 year of follow-up increased by up to 50% (range 8.8-51.2%), compared with non-progressive NAFLD/NASH patients (1.2%).

'Perhaps most worryingly, during the 5-year study period, 11% of the NAFLD/NASH patients progressed to advanced liver diseases and 17% of CC patients progressed to DCC, after accounting for patients who died,' said Professor Ali Canbay from the University of Magdeburg Medical School in Magdeburg, Germany, who presented the study findings. 'This demonstrates very clearly the need for early detection and effective treatment to prevent progression and potentially reduce mortality.'

In the second study, French investigators identified 125,052 NAFLD/NASH patients from the French National Database on hospital care (PMSI; 2009-2015), of whom 1,491 (1.2%) were diagnosed with CC, 7,846 (6.3%) with DCC, and 1,144 (0.9%) with HCC. As was seen in Germany, a small cohort of patients progressed rapidly, with 5.6% of NAFLD/NASH patients progressing to more severe liver disease during 7 years of follow-up, and 27.5% of NAFLD/NASH patients with CC progressing to DCC. Mortality was high across all cohorts and increased with liver disease progression. After 1 year, 2.1% of NAFLD/NASH patients, 4.6% of CC patients, and 19.1% of DCC patients had died. The corresponding mortality rates after 7 years of follow-up were 7.9%, 16.3%, and 34.6% respectively.

'Before this study, we had very limited data on the disease progression and mortality of NAFLD/NASH patients in our country,' explained Professor Jerome Boursier from Angers University Hospital in Angers, France. 'We were surprised by the high overall mortality rate among these patients (7.9%) - almost twice that of the general population of a similar age - as well as the apparent rate of under-diagnosis of cirrhotic patients, the majority only being identified following a decompensation event.'

'This shows us we must direct greater effort into finding and treating NAFLD/NASH patients as early as possible, so we can stop or even reverse disease progression.'

Professor Philip Newsome (Vice-Secretary EASL) said, "These data demonstrate the significant morbidity and mortality found in patients with NAFLD and reinforces the need to identify those patients most at risk for appropriate treatment."

Credit: 
Spink Health

'Cthulhu' fossil reconstruction reveals monstrous relative of modern sea cucumbers

image: This is a life reconstruction of Sollasina cthulhu.

Image: 
Elissa Martin, Yale Peabody Museum of Natural History

An exceptionally-preserved fossil from Herefordshire in the UK has given new insights into the early evolution of sea cucumbers, the group that includes the sea pig and its relatives, according to a new article published today in the journal Proceedings of the Royal Society B.

Palaeontologists from the UK and USA created an accurate 3D computer reconstruction of the 430 million-year-old fossil which allowed them to identify it as a species new to science. They named the animal Sollasina cthulhu due to its resemblance to monsters from the fictional Cthulhu universe created by author H.P. Lovecraft.

Although the fossil is just 3 cm wide, its many long tentacles would have made it appear quite monstrous to other small sea creatures alive at the time. It is thought that these tentacles, or 'tube feet', were used to capture food and crawl over the seafloor.

Like other fossils from Herefordshire, Sollasina cthulhu was studied using a method that involved grinding it away, layer-by-layer, with a photograph taken at each stage. This produced hundreds of slice images, which were digitally reconstructed as a 'virtual fossil'.

This 3D reconstruction allowed palaeontologists to visualise an internal ring, which they interpreted as part of the water vascular system - the system of fluid-filled canals used for feeding and movement in living sea cucumbers and their relatives.

Lead author, Dr Imran Rahman, Deputy Head of Research at Oxford University Museum of Natural History said:

"Sollasina belongs to an extinct group called the ophiocistioids, and this new material provides the first information on the group's internal structures. This includes an inner ring-like form that has never been described in the group before. We interpret this as the first evidence of the soft parts of the water vascular system in ophiocistioids."

The new fossil was incorporated into a computerized analysis of the evolutionary relationships of fossil sea cucumbers and sea urchins. The results showed that Sollasina and its relatives are most closely related to sea cucumbers, rather than sea urchins, shedding new light on the evolutionary history of the group.

Co-author Dr Jeffrey Thompson, Royal Society Newton International Fellow at University College London, said:

"We carried out a number of analyses to work out whether Sollasina was more closely related to sea cucumbers or sea urchins. To our surprise, the results suggest it was an ancient sea cucumber. This helps us understand the changes that occurred during the early evolution of the group, which ultimately gave rise to the slug-like forms we see today."

The fossil was described by an international team of researchers from Oxford University Museum of Natural History, University of Southern California, Yale University, University of Leicester, and Imperial College London. It represents one of many important finds recovered from the Herefordshire fossil site in the UK, which is famous for preserving both the soft as well as the hard parts of fossils.

The fossil slices and 3D reconstruction are housed at Oxford University Museum of Natural History.

Credit: 
University of Oxford

Showy primates have smaller testicles

Well-adorned or well-endowed - but not both. Evolutionary biologists at the University of Zurich have for the first time demonstrated that male primates either have large testicles or showy ornaments. Developing both at the same time may simply take too much energy.

Male primates are highly competitive, especially about one thing: fathering offspring. To maximize their chances of passing on their genes, males of many primate species invest heavily in various sexual traits, such as a large body size, or long canines that can serve as weapons in direct contests over mates. What's more, showy sexual ornaments such as manes, beards, fleshy swellings, and colorful skin patches can help them intimidate rivals and woo females. And if males can't keep other males off their females, they will try to outcompete them at the level of sperm. By swamping the sperm of others, they can increase their chances of fertilization. But producing a lot of sperm requires large testicles.

Showy body ornamentation leads to small testes

All these male traits are energetically costly. So how do primates allocate their limited resources to the various sexual traits to maximize their reproductive success? This question is the focus of a new study by Stefan Lüpold, an evolutionary biologist at the University of Zurich (UZH), and his colleagues Leigh Simmons and Cyril Grueter from the University of Western Australia. These biologists compared the sexual traits of over 100 primate species, including humans. Individually, the expression of these traits increases with the intensity of male competition - as expected. But considering all traits jointly reveals an important trade-off: "Ornament elaboration comes at the expense of testicle size and sperm production. In a nutshell, the showiest males have the smallest testes," says Lüpold.

Limited resources determine degree of expression

The new study is the first to examine all sexual traits simultaneously. It has brought to light the subtleties of how male primates invest in maximizing their reproductive success: "Big testicles come with large weapons but less ornamentation." The researchers offer various explanations for their findings. But one of the key points may be the energy required to develop and maintain multiple sexual traits throughout a male's sexual maturity. "It's hard to have it all," says Lüpold.

Credit: 
University of Zurich

Genetic code of WWI soldier's cholera mapped

The oldest publicly-available strain of the cholera-causing bacterial species, Vibrio cholerae, has had its genetic code read for the first time by researchers at the Wellcome Sanger Institute and their collaborators. The bacterium was isolated from a British soldier during World War One (WWI) and stored for over 100 years before being revived and sequenced.

The results, published today (10 April) in Proceedings of the Royal Society B, show that this strain is a unique, non-toxigenic strain of V. cholerae that is distantly related to the strains of bacteria causing cholera pandemics today and in the past.

Cholera is a severe diarrhoeal disease caused by ingesting food or water that is contaminated with toxigenic V. cholerae. The disease can spread rapidly in epidemics and in global pandemics.

WWI coincided with an historical global cholera pandemic, known as the sixth pandemic, which was caused by 'classical' V. cholerae. Surprisingly, very few soldiers in the British Expeditionary Forces contracted cholera during the war, despite the disease being considered as a threat.

In 1916, a strain of V. cholerae was extracted from the stool of a British soldier who was convalescing in Egypt. Reports indicate that the isolate was taken from 'choleraic diarrhoea'. The bacterium was stored and subsequently deposited in the National Collection of Type Cultures (NCTC)* in 1920.

Researchers at the Sanger Institute revived the WWI soldier's bacteria - thought to be the oldest publicly-available V. cholerae sample - and sequenced its entire genome.

The team found this particular strain of V. cholerae was not the type capable of causing epidemic cholera, and was unrelated to the classical V. cholerae that caused the sixth pandemic at the time of WWI.

Professor Nick Thomson, lead author from the Wellcome Sanger Institute, said: "We have decoded the genome of what we believe to be the oldest archived 'live' sample of V. cholerae. It is a privilege to be able to look at the genome of this isolate. Studying strains from different points in time can give deep insights into the evolution of this species of bacteria and link that to historical reports of human disease. Even though this isolate did not cause an outbreak it is important to study those that do not cause disease as well as those that do. Hence this isolate represents a significant piece of the history of cholera, a disease that remains as important today as it was in past centuries."

Matthew Dorman, first author from the Wellcome Sanger Institute, said: "Reports in the literature indicated that there was something unusual about the strain of bacteria from the WWI soldier. It's promising to see that our genomic information aligns with those historical records. We also made other observations - under the microscope, the bacterium looks broken; it lacks a flagellum - a thin tail that enables bacteria to swim. We discovered a mutation in a gene that's critical for growing flagella, which may be the reason for this characteristic."

The soldier was reported to have cholera-like diarrhoea, but researchers now know he was infected with a non-toxigenic strain of V. cholerae. The team discovered genes that may have been responsible for producing a toxin that caused diarrhoea, but are unsure whether such diarrhoea would be classified as choleraic.

Researchers also found that this strain of V. cholerae possessed a gene for ampicillin resistance. This adds to increasing evidence that genes for antibiotic resistance in bacteria existed before the introduction of antibiotic treatments, possibly because the bacteria needed them to protect against naturally-occurring antibiotics.

Julie Russell, Head of Culture Collections at NCTC, said: "The National Collection of Type Cultures grows and maintains over 5,000 strains of bacteria from the last hundred years or so. Studying these bacteria offers a window into the past and helps scientists to understand how bacteria evolve over time, and the roles they played in history."

Credit: 
Wellcome Trust Sanger Institute

Cancer-killing combination therapies unveiled with new drug-screening tool

UC San Francisco scientists have designed a large-scale screen that efficiently identifies drugs that are potent cancer-killers when combined, but only weakly effective when used alone. Using this technique, the researchers eradicated a devastating blood cancer and certain solid tumor cells by jointly administering drugs that are only partially effective when used as single-agent therapies. The effort, a cross-disciplinary collaboration between UCSF researchers, is described in a study published April 9 in the journal Cell Reports.

When scientists developed the first targeted cancer therapies -- drugs that interfere with specific biological circuits that cancer depends on for growth and survival -- many thought they had finally cornered cancer. But cancer is a devastatingly clever disease that can outwit these precision medicines by "rewiring" itself to sidestep the circuits switched off by these drugs.

"Many cancers either fail to respond to a single targeted therapy or acquire resistance after initially responding. The notion that combining targeted therapies is a far more effective way to treat cancer than a single-drug approach has long existed. We wanted to perform screens with saturating coverage to understand exactly what combinations should be explored," said UCSF's Jeroen Roose, PhD, professor of anatomy and senior author of the new study.

Scientists have found that when they target two distinct circuits with two different drugs -- each of which is inadequate on its own -- the aggregate effect can be greater than the sum of its parts. However, figuring out which drugs can synergize to kill cancer remains a challenge.
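One common way to quantify "greater than the sum of its parts" is the Bliss independence model: two drugs acting independently, with fractional effects E_A and E_B, are expected to kill E_A + E_B - E_A*E_B of cells, and observed killing above that expectation counts as synergy. This is a standard illustration of drug synergy, not necessarily the scoring used in this study.

```python
def bliss_expected(e_a, e_b):
    """Expected combined fractional effect if two drugs act independently."""
    return e_a + e_b - e_a * e_b

def bliss_excess(e_a, e_b, e_observed):
    """Synergy score: positive means the combination beats independence."""
    return e_observed - bliss_expected(e_a, e_b)
```

For example, two drugs that each kill 30% of cells alone are expected to kill 51% together under independence; a combination that kills 80% therefore shows a clear Bliss excess.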

To demonstrate the power of their screening system, the scientists searched for targeted therapies that could join forces to kill an aggressive blood cancer called T cell acute lymphoblastic leukemia (T-ALL). Their hunt began with a drug that targets PI3K, an enzyme that promotes the growth of many cancers, including T-ALL. Though drugs that target PI3K already exist, the current crop of PI3K inhibitors can slow, but normally can't kill, this type of cancer.

"Nearly 65 percent of T-ALL patients have hyperactive PI3K, but most patients will likely not be cured by single-drug treatments. We wanted to find drugs that would kill T-ALL when combined with a PI3K inhibitor," said Roose, a member of the UCSF Helen Diller Family Comprehensive Cancer Center. To find those drugs, the researchers turned to RNA interference (RNAi) -- a technique that allows scientists to massively reduce the activity of specific genes. The discovery of RNAi, which occurs naturally in all animals and plants, and is now widely used in research, was a major breakthrough that was recognized with the 2006 Nobel Prize in Physiology or Medicine.

"RNAi is sort of a magic bullet for targeting specific genes," said Michael T. McManus, PhD, professor at the UCSF Diabetes Center and study co-author, who designed the screen with Roose. "Although there is a great deal of fascinating underlying biology that relates to RNAi, most scientists use it as a tool to 'turn down the volume' of a specific gene in a cell."

The gene-editing tool CRISPR has made it possible to completely remove genes. But according to McManus, while eliminating a specific gene is the gold standard -- an essential first step in determining its function in cells -- at times, reducing a gene's activity level using RNAi activity may be more desirable. This is especially true, he says, when researchers are seeking to mimic the effects of drugs, which often reduce the activity associated with a particular gene without completely eliminating it.

"When searching for cancer drugs, for example, RNAi may do a better job of approximating precision therapies, both of which only partially inhibit their biological targets," McManus said. The researchers have also started exploring CRISPRi and CRISPRa -- modified forms of CRISPR that inhibit or amplify the activity of target genes, respectively, without making cuts to the DNA -- for these reasons.

Roose and McManus aren't the first scientists to use RNAi to search for these kinds of combinatorial therapies. But earlier efforts were error-prone because those screens used RNAi libraries that were too small, Roose said. What sets the new study apart is the ultra-complex collection of short hairpin RNAs (shRNAs) that were used. These RNA fragments contain sequences that correspond to those found in messenger RNAs (mRNA) -- the molecular arbiters of gene activity in the cell. When an shRNA finds an mRNA that contains a matching sequence, the two molecules bind together to initiate a process that destroys the mRNA and inhibits the activity of that gene. In total, the researchers targeted some 1,800 cancer-associated genes with approximately 55,000 shRNAs, or about 30 shRNAs per gene, "more than enough to eliminate false positives and false negatives," Roose said.
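With roughly 30 hairpins per gene, hits in such a screen are typically called at the gene level by aggregating per-shRNA scores, so that a few off-target hairpins cannot by themselves create a false positive. The sketch below shows one simple, hedged version of that aggregation (median over hairpins); the scoring details are made up for illustration and are not the study's actual pipeline.

```python
from statistics import median
from collections import defaultdict

def gene_scores(shrna_scores):
    """Collapse per-shRNA scores to one score per gene.

    shrna_scores: list of (gene, score) pairs, where a negative score
    means the shRNA dropped out in drug-treated cells (i.e., silencing
    that gene sensitizes cells to the drug). Taking the median over
    ~30 hairpins per gene is robust to a handful of off-target outliers.
    """
    per_gene = defaultdict(list)
    for gene, score in shrna_scores:
        per_gene[gene].append(score)
    return {g: median(scores) for g, scores in per_gene.items()}
```

Genes with strongly negative aggregate scores in the drug arm but not the vehicle arm would then be nominated as combination-therapy targets, as the 10 follow-up genes were here.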

The screen itself involved growing two different human T-ALL cell lines in the presence of PI3K inhibitors and then simultaneously administering shRNAs to find out which genes, when silenced in the presence of these drugs, killed the cancer. From this comprehensive screen, the researchers then focused on 10 genes whose activity, when curbed with precision medicines, was predicted to kill T-ALL cancer cells in combination with PI3K drugs. They tested these predictions and found that nine of the combined therapies could kill T-ALL -- a feat that none of the drugs could achieve on its own. The researchers then tested the most effective of these synergistic drug combinations on mouse models of T-ALL and found that it could extend survival by 150 percent.

The screen also yielded a digital tool that Roose says will be useful for other researchers: a user-friendly, searchable database based on results from the screen. The search engine -- developed by Marsilius Mues, PhD, a former Roose lab postdoc and lead author of the new study -- produces manuscript-quality figures that help researchers identify genes that emerged from the screen as potential targets for combination therapy with PI3K inhibitors.

Recognizing that discoveries made in blood cancers don't always translate to solid tumors, the researchers also tested the predicted drug combinations on 28 solid tumor cell lines derived from human breast, colorectal, pancreatic and brain cancers. They found that even in these solid tumor cells, the combination therapies synergized to reduce the number of cancer cells by up to 20 percent over the course of the experiment.

"An important message from our work is that scientists can use leukemia cells as a platform to find drug combinations that also work in solid tumors. Our screening platform is very generalizable," Roose said.

Among the most surprising and promising of the results was that the researchers were able to find pairs of drugs that impeded cancer growth, but which had no effect on normal cells.

"Finding therapies that specifically target cancer without harming healthy tissue is the holy grail of cancer research," Roose said. "This surprising result suggests that our method may aid in the discovery of this kind of cancer-specific precision medicine."

Credit: 
University of California - San Francisco