New glaucoma treatment could ease symptoms while you sleep

image: Lead researcher Vikramaditya Yadav, a professor of chemical and biological engineering and of biomedical engineering at UBC.

Image: 
Clare Kiernan, University of British Columbia

Eye drops developed by UBC researchers could one day treat glaucoma while you sleep, helping to relieve a condition that is one of the leading causes of blindness around the world.

"Medicated eye drops are commonly used to treat glaucoma but they're often poorly absorbed. Less than five per cent of the drug stays in the eye because most of the drops just roll off the eye," said lead researcher Vikramaditya Yadav, a professor of chemical and biological engineering, and biomedical engineering at UBC.

"Even when the drug is absorbed, it may fail to reach the back of the eye, where it can start repairing damaged neurons and relieving the pressure that characterizes glaucoma."

To solve these problems, the UBC team developed a hydrogel that was then filled with thousands of nanoparticles containing cannabigerolic acid (CBGA), a cannabis compound that has shown promise in relieving glaucoma symptoms.

They applied the drops on donated pig corneas, which are similar to human corneas, and found that the drug was absorbed quickly and reached the back of the eye.

"You would apply the eye drops just before bedtime, and they would form a lens upon contact with the eye. The nanoparticles slowly dissolve during the night and penetrate the cornea. By morning, the lens will have completely dissolved," said Yadav.

Previous research shows that cannabinoids like CBGA are effective in relieving glaucoma symptoms, but no cannabis-based eye drops have so far been developed because cannabinoids don't easily dissolve in water, according to the researchers.

"By suspending CBGA in a nanoparticle-hydrogel composite, we have developed what we believe is the first cannabinoid-based eye drops that effectively penetrate through the eye to treat glaucoma. This composite could also potentially be used for other drugs designed to treat eye disorders like infections or macular degeneration," said study co-author Syed Haider Kamal, a research associate in Yadav's lab.

Credit: 
University of British Columbia

The changing chemistry of the Amazonian atmosphere

How do you measure a chemical compound that lasts for less than a second in the atmosphere?

That's the question atmospheric chemists have been trying to answer for decades. The compound: hydroxyl radicals -- also known as OH radicals. These oxidizing chemicals are vital to the atmosphere's delicate chemical balance, acting as natural air scrubbers to remove organic compounds and greenhouse gases such as formaldehyde and methane from the atmosphere. But OH radicals also initiate reactions leading to secondary pollutants that affect human health and climate, such as organic particulate matter and ozone.

Researchers have been debating whether pollutants emitted from human activities, in particular nitrogen oxides (NOx), can affect levels of OH radicals in a pristine atmosphere, but quantifying that relationship has been difficult. Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have quantified the effect of NOx pollution on OH radicals in the Amazon rainforest.

The research is published in Science Advances.

While a remote and seemingly sparsely populated rainforest might seem like a strange place to study the effects of human pollution, the Amazon is actually home to one of the fastest-growing cities in the world: Manaus, Brazil. With more than 2 million people, Manaus has seen a boom in industries ranging from petroleum refining and chemical manufacturing to mobile phone manufacturing, shipbuilding and tourism.

That growth has led to substantial amounts of pollution being released into the atmosphere. When that pollution moves downwind and meets the pristine air of the rainforest, it creates a real-world laboratory for atmospheric chemists -- the perfect spot to study the impact of pollution on OH concentrations, because it is easy to distinguish unpolluted regional air from air that has passed over Manaus.

"Because we were able to contrast unpolluted air with polluted air, this research has given us a great opportunity to understand how pollution from human activity will affect the atmospheric chemistry, especially with continued urbanization of forested areas," said Yingjun Liu, a former graduate student at SEAS and first author of the paper.

The researchers measured levels of isoprene, a chemical compound naturally emitted by trees, as well as levels of the major OH oxidation products of isoprene. As OH concentration increases, the ratio of oxidation products to isoprene in the atmosphere increases. The researchers were able to infer OH concentrations from the measured product-to-parent ratio using a detailed model that took many confounding factors into account.
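The inference step can be pictured with a small sketch. This is a hedged toy illustration, not the study's actual model: the function name, rate constant, and transit time below are invented, and the real analysis accounted for many confounding factors that this ignores.

```python
# Toy sketch: under a simplified steady-state assumption, the ratio of
# oxidation products to isoprene grows with [OH] times the air-mass
# transit time, so OH can be back-calculated from the measured ratio.
# All names and numbers below are illustrative inventions.
def infer_oh(product_conc, isoprene_conc, k_oh=1.0e-10, transit_time_s=7200.0):
    """Estimate [OH] (molecules/cm^3) from a product-to-parent ratio.

    k_oh: assumed OH + isoprene rate constant (cm^3 molecule^-1 s^-1)
    transit_time_s: assumed transport time of the air mass (s)
    """
    ratio = product_conc / isoprene_conc
    return ratio / (k_oh * transit_time_s)

# A higher product-to-parent ratio in polluted air implies higher inferred OH.
oh_clean = infer_oh(product_conc=0.10, isoprene_conc=1.0)
oh_polluted = infer_oh(product_conc=0.35, isoprene_conc=1.0)
assert oh_polluted > oh_clean  # a 3.5x ratio, echoing the reported ~250% rise
```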

The researchers collected data in the rainforest region, about 70 km downwind of Manaus, during the wet and dry seasons of 2014. The researchers found that accompanying the increase of NOx concentration from urban pollution, daytime peak OH concentrations in the rainforest skyrocketed, increasing by at least 250 percent.

"Our research shows that the oxidation capacity over tropical forests is susceptible to anthropogenic NOx emissions," said Scot Martin, the Gordon McKay Professor of Environmental Science and Engineering and Professor of Earth and Planetary Sciences at SEAS and senior author of the paper. "In the future, if trends of deforestation and urbanization continue, these increased levels of OH concentrations in the Amazon atmosphere could lead to changes in atmospheric chemistry, cloud formation, and rainfall."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Do Democrat and Republican doctors treat patients differently at the end of life?

The divide that separates conservative and liberal values may be as wide now as it has ever been in our country. This divide shows itself in areas of everyday life, and health care is no exception.

But do doctors' political beliefs influence the way they practice medicine, choose therapies and treat patients?

The question of which treatments, and how much care, to provide at the end of a patient's life has been notoriously contentious in medicine. However, a physician's party affiliation appears to have no bearing on clinical decisions in the care of terminally ill patients at the end of life, according to a new study led by researchers at Harvard Medical School.

Previous research has looked at the role of physicians' personal beliefs in hypothetical scenarios. Results of the new study, believed to be the first to look at how doctors behave in real-life clinical practice, are published April 11 in The BMJ.

For this new analysis, Anupam Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at Harvard Medical School and an internist at Massachusetts General Hospital, teamed up with colleagues from Columbia University, Cornell University, Stanford University and New York University to compare the end-of-life care and treatment patterns of doctors with different political affiliations.

Despite the dramatic rhetorical differences among members of the two parties on end-of-life care, the researchers found no evidence that a physician's political affiliation influenced decisions on how much and what kind of care to offer their terminally ill patients.

"Our findings are reassuring evidence that the political and ideological opinions of physicians don't seem to have any discernible impact on how they care for patients at the end of life," Jena said. "Physicians seem to be able to look past their politics to determine the best care for each patient's individual situation."

A 2016 study in the Proceedings of the National Academy of Sciences suggested that physicians' personal beliefs on politically controversial health care issues like abortion, gun safety and birth control were linked to differences in how they delivered care. That earlier study was based on surveys that combined questions about political opinions with questions about how a physician would react in hypothetical clinical scenarios.

In the new study, researchers compared clinical treatment data from nearly 1.5 million Medicare beneficiaries who died in the hospital or shortly after discharge against Federal Election Commission data on the political contributions of the patients' attending physicians. The investigators looked for differences in treatment patterns among doctors who donated to either party or to no party. Comparing physicians working within the same hospital to account for regional variations in end-of-life care, the researchers found no differences between the groups in end-of-life spending, in the use of intensive resuscitation or life-sustaining interventions such as intubation or feeding tubes, or in how likely doctors were to refer their patients to hospice.
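The within-hospital comparison can be sketched in miniature. The data below are invented for illustration only; the actual study used regression methods on Medicare claims, not a simple group-mean comparison.

```python
# Hedged sketch (invented data): grouping end-of-life spending by hospital and
# physician donation group, so comparisons happen within a hospital and
# regional variation in care intensity drops out.
from collections import defaultdict
from statistics import mean

records = [  # (hospital, donation_group, spending_usd)
    ("A", "D", 10000), ("A", "R", 10200), ("A", "none", 9900),
    ("B", "D", 15000), ("B", "R", 14800), ("B", "none", 15100),
]

spending = defaultdict(list)
for hospital, group, usd in records:
    spending[(hospital, group)].append(usd)

group_means = {key: mean(vals) for key, vals in spending.items()}
# Within hospital A, the three groups differ by only a few hundred dollars,
# mirroring the study's null finding; the large A-vs-B gap is regional.
```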

Jena and colleagues say that further study is needed to determine whether decisions about other politically controversial health care and medical issues might break down along party lines.

Credit: 
Harvard Medical School

Some can combat dementia by enlisting still-healthy parts of the brain

image: A person with primary progressive aphasia activates part of the right-hand side of the brain (shown in blue) to decipher a sentence, whereas a healthy person (left-hand image) does not.

Image: 
Aneta Kielar, University of Arizona

People with a rare dementia that initially attacks the language center of the brain recruit other areas of the brain to decipher sentences, according to new research led by a University of Arizona cognitive scientist.

The study is one of the first to show that people with a neurodegenerative disease can call upon intact areas of the brain for help. People who have had strokes or traumatic brain injuries sometimes use additional regions of the brain to accomplish tasks that were handled by the now-injured part of the brain.

"We were able to identify regions of the brain that allowed the patients to compensate for the dying of neurons in the brain," said first author Aneta Kielar, a UA assistant professor of speech, language and hearing sciences and of cognitive science.

The type of the dementia the researchers tested, primary progressive aphasia, or PPA, is unusual because it starts by making it hard for people to process language, rather than initially harming people's memory.

Kielar and her colleagues used magnetoencephalography, or MEG, to track how the 28 study participants' brains responded when confronted with several different language tasks. MEG revealed which part of the participant's brain responded and how fast the person's brain responded to the task.

People typically rely on the left side of the brain to comprehend language. Some of the people with PPA who were tested showed additional brain activity on the right, and those people did better on the language tests.

Senior author Jed Meltzer said, "These findings offer hope, since they demonstrate that despite the brain's degeneration during PPA, the brain naturally adapts to try to preserve function."

Meltzer, a scientist at the Rotman Research Institute at Baycrest Health Sciences in Toronto, Ontario, Canada, and Canada Research Chair in Interventional Cognitive Neuroscience, added, "This brain compensation suggests there are opportunities to intervene and offer targeted treatment to those areas."

Kielar conducted the research as part of a postdoctoral fellowship at the Rotman Research Institute.

Kielar's and Meltzer's co-authors on the paper, "Abnormal language-related oscillatory responses in primary progressive aphasia," are Regina Jokel and Tiffany Deschamps of the University of Toronto. The journal NeuroImage: Clinical published the paper online in March.

The Ontario Brain Institute, the Alzheimer's Association, the Ontario Research Coalition and the Sandra A. Rotman program in Cognitive Neuroscience funded the research.

Kielar became intrigued by PPA because its effects on language are so different from other dementias. PPA's unusual characteristics also make it hard to diagnose, she said.

At the early stages of the disease, people with PPA can drive, go to the grocery store by themselves and do other things that require working memory, but they have trouble reading, writing and speaking grammatical sentences, she said.

"PPA specifically attacks language initially," she said. "I wanted to know what is special about the language regions of the brain."

Previous research using electroencephalograms, or EEGs, of PPA patients showed that as the disease progressed, something at the neuronal level slowed down. However, EEGs do not provide information about which brain region is slowing.

Therefore, Kielar and her colleagues used MEG to take images of the brains of 28 people--13 with PPA and 15 age-matched healthy controls--as they read sentences on an LCD screen. Some of the sentences contained grammatical errors or mismatched words.

The researchers also conducted MRI scans of each participant to map each person's brain.

Working brains generate tiny changes in magnetic fields. MEG records those tiny, millisecond changes in magnetic fields that occur as the brain processes information. The MEG machinery is so sensitive that it requires a special shielded room that prevents any outside magnetic fields--such as those from electric motors, elevators, and even the Earth's magnetic field--from entering.

MEG can reveal both when the brain is working on a task and which region of the brain is doing it, Kielar said.

She and her colleagues found that the brains of people with PPA responded more slowly to the language tests, a finding not previously reported.

"You can tell that they are struggling, but we did not know that the neural processing in the brain was slowed down," she said. "It seems that this delay in processing may account for some of the deficits they have in processing language."

She and her colleagues hope knowing which parts of the brain are damaged by PPA will help develop a treatment. There is no cure for PPA, she said.

Transcranial magnetic stimulation, or TMS, a non-invasive treatment that sends a magnetic pulse to specific brain regions, has helped people who have had strokes. Kielar and her colleagues are planning to see if TMS can slow the progression of PPA.

Credit: 
University of Arizona

Melting of Arctic mountain glaciers unprecedented in the past 400 years

image: Scientists spent a month in Denali National Park in 2013 drilling ice cores from the summit plateau of Mt. Hunter. The ice cores showed the glaciers on Mt. Hunter are melting more now than at any time in the past 400 years.

Image: 
Dominic Winski.

WASHINGTON D.C. -- Glaciers in Alaska's Denali National Park are melting faster than at any time in the past four centuries because of rising summer temperatures, a new study finds.

New ice cores taken from the summit of Mt. Hunter in Denali National Park show summers there are at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) warmer than summers were during the 18th, 19th and early 20th centuries. The warming at Mt. Hunter is about double the amount of summer warming that has occurred at sea-level areas of Alaska over the same time period, according to the new research.

The warmer temperatures are melting 60 times more snow from Mt. Hunter today than melted during summers before the start of the industrial period 150 years ago, according to the study. More snow now melts on Mt. Hunter than at any time in the past 400 years, said Dominic Winski, a glaciologist at Dartmouth College in Hanover, New Hampshire, and lead author of the new study published in the Journal of Geophysical Research: Atmospheres, a journal of the American Geophysical Union.

The new study's results show the Alaska Range has been warming rapidly for at least a century. The Alaska Range is an arc of mountains in southern Alaska home to Denali, North America's highest peak.

The warming correlates with hotter temperatures in the tropical Pacific Ocean, according to the study's authors. Previous research has shown the tropical Pacific has warmed over the past century due to increased greenhouse gas emissions.

The study's authors conclude warming of the tropical Pacific Ocean has contributed to the unprecedented melting of Mt. Hunter's glaciers by altering how air moves from the tropics to the poles. They suspect melting of mountain glaciers may accelerate faster than melting of sea level glaciers as the Arctic continues to warm.

Understanding how mountain glaciers are responding to climate change is important because they provide fresh water to many heavily-populated areas of the globe and can contribute to sea level rise, Winski said.

"The natural climate system has changed since the onset of the anthropogenic era," he said. "In the North Pacific, this means temperature and precipitation patterns are different today than they were during the preindustrial period."

Assembling a long-term temperature record

Winski and 11 other researchers from Dartmouth College, the University of Maine and the University of New Hampshire drilled ice cores from Mt. Hunter in June 2013. They wanted to better understand how the climate of the Alaska Range has changed over the past several hundred years, because few weather station records of past climate in mountainous areas go back further than 1950.

The research team drilled two ice cores from a glacier on Mt. Hunter's summit plateau, 13,000 feet above sea level. The ice cores captured climate conditions on the mountain going back to the mid-17th century.

The physical properties of the ice showed the researchers what the mountain's past climate was like. Bands of darker ice with no bubbles indicated times when snow on the glacier had melted in past summers before re-freezing.

Winski and his team counted all the dark bands - the melt layers - from each ice core and used each melt layer's position in the core to determine when each melt event occurred. The more melt events they observed in a given year, the warmer the summer.

They found melt events occur 57 times more frequently today than they did 150 years ago. In fact, they counted only four years with melt events prior to 1850. They also found the total amount of annual meltwater in the cores has increased 60-fold over the past 150 years.
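The 57-fold figure is a ratio of melt-event frequencies between the modern and preindustrial stretches of the core, which can be illustrated with toy numbers. The counts below are invented to reproduce the reported ratio; they are not the paper's data.

```python
# Hedged sketch: melt-event frequency is events per year over a stretch of
# the ice-core record; the fold-increase is the ratio of two such rates.
def melt_frequency(melt_events, n_years):
    """Melt events per year over a portion of the record."""
    return melt_events / n_years

# Invented counts chosen only to reproduce the reported ~57-fold ratio:
# very few melt events over a long preindustrial span vs. frequent modern ones.
preindustrial = melt_frequency(melt_events=4, n_years=200)
modern = melt_frequency(melt_events=57, n_years=50)

fold_increase = modern / preindustrial  # roughly 57
```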

The surge in melt events corresponds to a summer temperature increase of at least 1.2-2 degrees Celsius (2.2-3.6 degrees Fahrenheit) relative to the warmest periods of the 18th and 19th centuries, with nearly all of the increase occurring in the last 100 years. Because there were so few melt events before the start of the 20th century, the temperature change over the past few centuries could be even higher, Winski said.

Connecting the Arctic to the tropics

The research team compared the temperature changes at Mt. Hunter with those from lower elevations in Alaska and in the Pacific Ocean. Glaciers on Mt. Hunter are easily influenced by temperature variations in the tropical Pacific Ocean because there are no large mountains to the south to block incoming winds from the coast, according to the researchers.

They found during years with more melt events on Mt. Hunter, tropical Pacific temperatures were higher. The researchers suspect warmer temperatures in the tropical Pacific Ocean amplify warming at high elevations in the Arctic by changing air circulation patterns. Warmer tropics lead to higher atmospheric pressures and more sunny days over the Alaska Range, which contribute to more glacial melting in the summer, Winski said.

"This adds to the growing body of research showing that changes in the tropical Pacific can manifest in changes across the globe," said Luke Trusel, a glaciologist at Rowan University in Glassboro, New Jersey who was not connected to the study. "It's adding to the growing picture that what we're seeing today is unusual."

Credit: 
American Geophysical Union

Tiny distortions in universe's oldest light reveal strands in cosmic web

image: In this illustration, the trajectory of cosmic microwave background (CMB) light is bent by structures known as filaments that are invisible to our eyes, creating an effect known as weak lensing, captured by the Planck satellite (left), a space observatory. Researchers used computers to study this weak lensing of the CMB and produce a map of filaments, which typically span hundreds of millions of light-years.

Image: 
Siyu He, Shadab Alam, Wei Chen, and Planck/ESA

Scientists have decoded faint distortions in the patterns of the universe's earliest light to map huge tubelike structures invisible to our eyes - known as filaments - that serve as superhighways for delivering matter to dense hubs such as galaxy clusters.

The international science team, which included researchers from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, analyzed data from past sky surveys using sophisticated image-recognition technology to home in on the gravity-based effects that identify the shapes of these filaments. They also used models and theories about the filaments to help guide and interpret their analysis.

Published April 9 in the journal Nature Astronomy, the detailed exploration of filaments will help researchers to better understand the formation and evolution of the cosmic web - the large-scale structure of matter in the universe - including the mysterious, unseen stuff known as dark matter that makes up about 85 percent of the total mass of the universe.

Dark matter constitutes the filaments, which researchers learned typically stretch across hundreds of millions of light-years, and this universal network of filaments feeds the so-called halos that host clusters of galaxies. More studies of these filaments could provide new insights about dark energy, another mystery of the universe, which drives its accelerating expansion.

Filament properties could also put gravity theories to the test, including Einstein's theory of general relativity, and lend important clues to help solve an apparent mismatch in the amount of visible matter predicted to exist in the universe - the "missing baryon problem."

"Usually researchers don't study these filaments directly - they look at galaxies in observations," said Shirley Ho, a senior scientist at Berkeley Lab and Cooper-Siegel associate professor of physics at Carnegie Mellon University who led the study. "We used the same methods to find the filaments that Yahoo and Google use for image recognition, like recognizing the names of street signs or finding cats in photographs."

The study used data from the Baryon Oscillation Spectroscopic Survey, or BOSS, an Earth-based sky survey that captured light from about 1.5 million galaxies to study the universe's expansion and the patterned distribution of matter in the universe set in motion by the propagation of sound waves, or "baryonic acoustic oscillations," rippling in the early universe.

The BOSS survey team, which featured Berkeley Lab scientists in key roles, produced a catalog of likely filament structures that connected clusters of matter that researchers drew from in the latest study.

Researchers also relied on precise, space-based measurements of the cosmic microwave background, or CMB, which is the nearly uniform remnant signal from the first light of the universe. While this light signature is very similar across the universe, there are regular fluctuations that have been mapped in previous surveys.

In the latest study, researchers focused on patterned fluctuations in the CMB. They used sophisticated computer algorithms to seek out the imprint of filaments from gravity-based distortions in the CMB, known as weak lensing effects, that are caused by the CMB light passing through matter.

Since galaxies live in the densest regions of the universe, the weak lensing signal from the deflection of CMB light is strongest from those parts. Dark matter resides in the halos around those galaxies, and was also known to spread from those denser areas in filaments.

"We knew that these filaments should also cause a deflection of CMB and would also produce a measurable weak gravitational lensing signal," said Siyu He, the study's lead author who is a Ph.D. researcher from Carnegie Mellon University - she is now at Berkeley Lab and is also affiliated with UC Berkeley. The research team used statistical techniques to identify and compare the "ridges," or points of higher density that theories informed them would point to the presence of filaments.

"We were not just trying to 'connect the dots' - we were trying to find these ridges in the density, the local maximum points in density," she said. They checked their findings with other filament and galaxy cluster data, and with "mocks," or simulated filaments based on observations and theories. The team used large cosmological simulations generated at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC), for example, to check for errors in their measurements.
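The ridge-finding idea (locating local maxima in the density field) can be illustrated with a deliberately simple 1-D toy. The actual analysis works in three dimensions with far more sophisticated statistics; this sketch only shows the "local maximum" notion the quote describes.

```python
# Hedged 1-D sketch: a "ridge point" here is simply an interior sample of the
# density field that is higher than both of its neighbors. Real filament
# finders operate on 3-D fields with smoothing and significance tests.
def local_maxima(density):
    """Return indices of interior points greater than both neighbors."""
    return [i for i in range(1, len(density) - 1)
            if density[i] > density[i - 1] and density[i] > density[i + 1]]

# Invented toy density field: peaks at indices 1 and 4 are candidate ridges.
field = [0.1, 0.5, 0.3, 0.2, 0.8, 0.4]
peaks = local_maxima(field)
```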

The filaments can change shape and connections over time scales of hundreds of millions of years. The competing forces of gravity's pull and the universe's expansion can shorten or lengthen them.

"Filaments are this integral part of the cosmic web, though it's unclear what is the relationship between the underlying dark matter and the filaments," and that was a primary motivation for the study, said Simone Ferraro, one of the study's authors who is a Miller postdoctoral fellow at UC Berkeley's Center for Cosmological Physics.

New data from existing experiments, and from next-generation sky surveys such as the Berkeley Lab-led Dark Energy Spectroscopic Instrument (DESI) now under construction at Kitt Peak National Observatory in Arizona, should provide even more detailed data about these filaments, he added.

Researchers noted that this important step in sleuthing the shapes and locations of filaments should also be useful for focused studies that seek to identify what types of gases inhabit the filaments, the temperatures of these gases, and the mechanisms for how particles enter and move around in the filaments. The study also allowed them to determine the length of filaments.

Siyu He said that resolving the filament structure can also provide clues to the properties and contents of the voids in space around the filaments, and "help with other theories that are modifications of general relativity," she said.

Ho added, "We can also maybe use these filaments to constrain dark energy - their length and width may tell us something about dark energy's parameters."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Study highlights the health and economic benefits of a US salt reduction strategy

New research, published in PLOS Medicine, conducted by researchers at the University of Liverpool, Imperial College London, Friedman School of Nutrition Science and Policy at Tufts and collaborators as part of the Food-PRICE project, highlights the potential health and economic impact of the United States (US) Food and Drug Administration's proposed voluntary salt policy.

Excess salt consumption is associated with a higher risk of cardiovascular disease (CVD) and gastric cancer. Globally, more than 1.5 million CVD-related deaths every year can be attributed to excess dietary salt intake.

Further salt-related deaths come from gastric cancer. Health policies worldwide, therefore, are being proposed to reduce dietary salt intake.

Health and economic impact

The US Food & Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. The researchers aimed to quantify the potential health and economic impact if this FDA policy was successfully implemented. The results will be of great interest to policy makers.

The researchers modelled and compared the potential health and economic effects of three differing levels of implementing the FDA's proposed voluntary sodium reformulation policy over a 20-year period.

They found that the optimal scenario, 100% compliance with the 10-year FDA targets, could prevent approximately 450,000 CVD cases, gain 2 million Quality Adjusted Life Years (QALYs) and produce discounted cost savings of approximately $40 billion over a 20-year period (2017-2036).

In contrast, the modest scenario, 50% compliance with the 10-year FDA targets, and the pessimistic scenario, 100% compliance with the two-year targets but no further progress, could yield health and economic gains approximately half as great and a quarter as great, respectively.

All three scenarios were likely to be cost-effective by 2021 and cost-saving by 2031.
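"Discounted" savings means future savings are valued less than present ones. A minimal sketch of that calculation, with an assumed 3% rate and invented cash flows (the model's actual discount rate and flows may differ):

```python
# Hedged sketch: present value of a stream of annual cost savings, as
# health-economic models typically report. Rate and cash flows are invented.
def discounted_total(annual_savings, rate=0.03):
    """Present value of annual savings; year 0 is undiscounted."""
    return sum(s / (1 + rate) ** t for t, s in enumerate(annual_savings))

# A flat $2.7B/year over 20 years sums to $54B nominally, but discounting
# at 3% brings the total into the neighborhood of $40 billion.
total = discounted_total([2.7e9] * 20)
assert total < 2.7e9 * 20  # discounting always reduces the nominal sum
```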

Substantial decreases

Dr Jonathan Pearson-Stuttard, University of Liverpool and Imperial College London, said: "Our study suggests that full industry compliance with the FDA voluntary sodium reformulation targets would result in very substantial decreases in CVD incidence and mortality, whilst also offering impressive cost savings to health payers and the wider economy."

Senior author Professor Martin O'Flaherty, University of Liverpool, said: "There is no doubt that these findings have important implications for the processed and commercially prepared food industry in the US."

Senior author Renata Micha, Research Associate Professor at Friedman School of Nutrition Science and Policy at Tufts University, said: "Population-wide salt reduction strategies with high industry compliance should be prioritized to save lives and reduce healthcare costs. Industry engagement is crucial in implementing dietary policy solutions to improve population health, particularly for developing and marketing healthier foods."

Other research collaborators in the project were Department of Preventive Medicine and Education, Medical University of Gdansk (Poland) and the American Heart Association (Washington).

This work was supported by awards from the US National Heart, Lung, and Blood Institute of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH. For conflict of interest disclosure, please see the article.

Credit: 
University of Liverpool

Newly discovered biomarkers could be key to predicting severity of brain tumor recurrence

Researchers have identified specific predictive biomarkers that could help assess the level of risk for recurrence in patients with malignant glioma. The study, led by Henry Ford Health System's Department of Neurosurgery and Department of Public Health Sciences, was published today in Cell Reports.

The team performed an analysis of 200 brain tumor samples from 77 patients with diffuse glioma harboring IDH mutation, the largest collection of primary and recurrent gliomas from the same patients to date. Comparing samples from the patients' initial diagnosis with those from their disease recurrence, researchers focused, in particular, on a distinct epigenetic modification occurring along the DNA segment, a process called DNA methylation.

Previously, their research showed that when there was no change in the DNA methylation, patients had a good clinical outcome. When the DNA methylation was lost, patients had a poor outcome. In this latest study, the authors were able to identify a set of epigenetic biomarkers that can predict, at a patient's initial diagnosis, which tumors are likely to recur with a more aggressive tumor type.
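A hypothetical sketch of how such a biomarker panel might be applied at diagnosis follows; the sites, beta values, and threshold below are invented for illustration and are not the study's actual panel or cutoffs.

```python
# Hedged sketch (invented panel and threshold): flag a newly diagnosed tumor
# as high risk when methylation across the biomarker sites is low, loosely
# following the observation that methylation loss predicts poor outcome.
def flag_high_risk(beta_values, threshold=0.3):
    """Return True if mean methylation (beta, scaled 0-1) across the panel
    falls below the assumed risk threshold."""
    mean_beta = sum(beta_values) / len(beta_values)
    return mean_beta < threshold

retained = [0.62, 0.71, 0.58]  # methylation largely retained -> not flagged
lost = [0.12, 0.20, 0.18]      # methylation lost -> flagged high risk
assert not flag_high_risk(retained)
assert flag_high_risk(lost)
```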

Houtan Noushmehr, Ph.D., Henry Ford Department of Neurosurgery, Hermelin Brain Tumor Center, and senior author of the study, says this discovery could make a huge difference when a patient is first diagnosed. "To date, we really don't have any predictive clinical outcomes once a patient is diagnosed with glioma. By pinpointing these molecular abnormalities, we can begin to predict how aggressive a patient's recurrence will be and that can better inform the treatment path we recommend from the very beginning."

Of the 200 tissue samples, 10% were found to have a distinct epigenetic alteration at genomic sites that are functionally active in regulating genes associated with aggressive tumors such as glioblastoma.

"This research presents a set of testable DNA-methylation biomarkers that may help clinicians predict if someone's brain tumor is heading in a more or less aggressive direction, essentially illustrating the behavior of a patient's disease," says James Snyder, D.O., study co-author and neuro-oncologist, Henry Ford Department of Neurosurgery and Hermelin Brain Tumor Center. "If we can identify which brain tumors will have a more aggressive course at the point of initial diagnosis then hopefully we can change the disease trajectory and improve care for our patients."

For example, patients predicted to have a more aggressive tumor at recurrence could be monitored more intensively after their initial treatment, or, undergo a more dynamic therapeutic regimen. Conversely, patients predicted to have a less aggressive recurrence might benefit from a reduction or delay of potentially harsh therapies such as standard chemotherapy and radiation.

"Right now, this level of molecular analysis is not routinely available in precision medicine testing and that needs to change," says Steven N. Kalkanis, M.D., Medical Director, Henry Ford Cancer Institute, and Chair, Department of Neurosurgery. "We need to be examining this level of information for every patient. The hope is that discoveries like this one will lead to clinical trials and increased access and education that make it available for every person who receives a cancer diagnosis."

Credit: 
Henry Ford Health

Two Colorado studies find resistance mechanisms in ALK+ and ROS1+ cancers

image: Robert C. Doebele, MD, PhD, and colleagues find resistance mechanisms in ALK+ and ROS1+ lung cancers, and demonstrate the use of circulating tumor DNA to search for these mechanisms in patient samples.

Image: 
University of Colorado Cancer Center

Targeted treatments have revolutionized care for lung cancer patients whose tumors harbor ALK or ROS1 alterations. Basically, cancers may use these genetic changes to drive their growth, but also become dependent on the action of these altered genes for their survival. Targeted treatments like crizotinib block the actions of ALK and ROS1, thus killing cancers that depend on them. However, when doctors target ALK or ROS1, cancers often evolve new ways to survive. After a period of success, targeted treatments against ALK+ and ROS1+ lung cancers often fail.

A University of Colorado Cancer Center study published today in the journal Clinical Cancer Research provides an in-depth look at how these ALK+ and ROS1+ cancers evolve to resist treatment. A second study demonstrates the ability to identify these changes in patient blood samples, perhaps easing the ability to monitor patients for these changes that provide early evidence that treatment is failing.

Unfortunately, the first study shows there is no single or even a dominant way that ALK+ and ROS1+ lung cancers change in response to targeted treatments.

"If there were only one change that follows these treatments, we would know that when treatment fails, we should switch to another, defined treatment," says Robert C. Doebele, MD, PhD, director of the CU Cancer Center Thoracic Oncology Research Initiative. "However, rather than providing a path of action, this study throws down a challenge: There's a lot of stuff we're not looking for or don't even know how to look for, but might be treatable if we knew how to look for it."

Doebele worked with CU postdoctoral fellow Caroline McCoach, MD (now an assistant professor of medical oncology at University of California at San Francisco), to examine tumor samples of 12 ROS1+ patients and 43 ALK+ patients that had evolved to resist targeted treatment. As expected, a fraction of these samples showed genetic changes similar to the original drivers - ALK and ROS1 are both "kinases," enzymes that can control the activity of other genes. In one of the 12 ROS1+ samples and 15 of the 43 ALK+ samples, new kinases had been altered to allow treatment resistance.

In the researchers' opinion, these are encouraging cases because, "these kinase mutations are the easiest to detect and, conceptually, the easiest to treat," Doebele says. This ease of detection and possibility to treat kinase mutations with drugs similar to those that already treat ALK+ and ROS1+ lung cancers have led researchers to focus on these changes.

"But we found a lot of stuff besides kinase mutations," he says. "What we're trying to say is that resistance happens in a lot of different ways and we need to be thinking about all the genetic and non-genetic changes that can occur."

For example, one ROS1+ cell line had no identifiable genetic changes. Genetically, the cancer should have remained sensitive to treatments targeting ROS1. But functional analysis showed that the known breast cancer driver, HER2, was creating drug resistance in this cell line.

"On one hand, the panoply of resistance mechanisms that can occur is incredibly frustrating. You're taking a small population of patients and further subdividing them into many other resistance mechanisms. How do we attack that, respond to that resistance when every patient is a little different?" Doebele says. "But on the other hand, though we are learning that resistance is really complex, the more we look and the better our tests are at capturing different types of alterations, the more we are able to target these resistance mechanisms. That's incredibly exciting."

A second paper, published as a companion to the first, shows that once resistance mechanisms are defined, doctors may be able to test lung cancer patients for these changes by sifting blood samples for DNA signatures released by cancers.

"Basically, we show that circulating tumor DNA or ctDNA can show us what's driving the cancer at any given point," Doebele says. "In theory, this strategy gives us an alternate method to spot these changes without having to do a biopsy."

In addition to being less invasive, the use of ctDNA to monitor a cancer's genetics saves time. "Due to the time it takes to schedule a biopsy and then the two weeks it takes to run a tumor test, using ctDNA instead can save patients a week or more." Knowing when a mechanism of drug resistance has evolved can ensure that patients have the opportunity to explore new treatment options as soon as possible.

Testing ctDNA in blood also allows researchers to take an overall sample of cancer genetics, rather than being limited to a snapshot of genetics from a single site of biopsy, "possibly giving us a broader picture of what's going on," Doebele says. However, this and other studies show that ctDNA has somewhat reduced sensitivity compared with biopsy and, "we may miss things," Doebele says, implying that analysis of ctDNA may be an appropriate strategy to monitor tumor evolution in addition to but not instead of biopsy.

The current paper used the ctDNA test Guardant360 to explore blood samples of 88 ALK+ lung cancer patients, showing the partner genes that "fused" with ALK to cause cancer (including EML4, STRN and others). Thirty-one of these patients were tested again at the time their cancer progressed after ALK-targeted treatment. In 16 of these blood samples, researchers found that the ctDNA test was able to identify ALK resistance mechanisms.

"There's been a huge focus on kinase mutations," Doebele says. "But not everything is driven by a simple mutation. A focus on broader testing and on new methods of broad testing will help us widen our net to catch these other changes that are driving resistance to ALK and ROS1 targeted treatments."

Credit: 
University of Colorado Anschutz Medical Campus

Large-scale study links PCOS to mental health disorders

WASHINGTON -- Women with polycystic ovary syndrome (PCOS), the most common hormone condition among young women, are prone to mental health disorders, and their children face an increased risk of developing attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD), according to a new study published in the Endocrine Society's Journal of Clinical Endocrinology & Metabolism.

PCOS affects 7 percent to 10 percent of women of childbearing age. It costs an estimated $5.46 billion annually to provide care to reproductive-aged PCOS women in the United States, according to the Society's Endocrine Facts and Figures report. PCOS is the most common cause of infertility in young women, and the elevated male hormone levels associated with the condition lead to many other emotionally distressing symptoms like irregular periods, excessive facial and body hair, weight gain and acne.

"PCOS is one of the most common conditions affecting young women today, and the effect on mental health is still under appreciated," said one of the study's authors, Aled Rees, M.B.B.Ch., Ph.D., F.R.C.P., of Cardiff University in Cardiff, United Kingdom. "This is one of the largest studies to have examined the adverse mental health and neurodevelopmental outcomes associated with PCOS, and we hope the results will lead to increased awareness, earlier detection and new treatments."

In the retrospective cohort design study, researchers from the Neuroscience and Mental Health Research Institute at Cardiff University assessed the mental health history of nearly 17,000 women diagnosed with PCOS. The study leveraged data from the Clinical Practice Research Datalink (CPRD), a database containing records for 11 million patients collected from 674 primary care practices in the United Kingdom.

When compared with unaffected women, matched for age and body mass index, the study found that PCOS patients were more likely to be diagnosed with mental health disorders, including depression, anxiety, bipolar disorder and eating disorders.

Children born to mothers with PCOS were also found to be at greater risk of developing ADHD and autism spectrum disorders. These findings suggest that women with PCOS should be screened for mental health disorders, to ensure early diagnosis and treatment, and ultimately improve their quality of life.

"Further research is needed to confirm the neurodevelopmental effects of PCOS, and to address whether all or some types of patients with PCOS are exposed to mental health risks," said Rees.

Credit: 
The Endocrine Society

New biological research framework for Alzheimer's seeks to spur discovery

image: This table shows the eight biomarker profiles (left column) and their corresponding categories (right column) outlined in the framework that could be used to group research participants. The biomarker profiles can be sorted into three broader categories: Normal Alzheimer's biomarkers, Alzheimer's continuum and non-Alzheimer's pathologic change.

Image: 
NIA-AA Research Framework

The research community now has a new framework toward developing a biologically-based definition of Alzheimer's disease. This proposed "biological construct" is based on measurable changes in the brain and is expected to facilitate better understanding of the disease process and the sequence of events that lead to cognitive impairment and dementia. With this construct, researchers can study Alzheimer's from its earliest biological underpinnings to outward signs of memory loss and other clinical symptoms, which could result in a more precise and faster approach to testing drugs and other interventions.

The National Institute on Aging (NIA), part of the National Institutes of Health, and the Alzheimer's Association (AA) convened the effort, which, as the "NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease," appears in the April 10, 2018 edition of Alzheimer's & Dementia: The Journal of the Alzheimer's Association. Drafts were presented at several scientific meetings and posted online, where the committee developing the framework gathered comments and ideas that informed the final published document. The framework will be updated in the future as it undergoes testing and as new knowledge becomes available.

The framework will apply to clinical trials and can be used for observational and natural history studies as well, its authors noted. They envision that this common language approach will unify how different stages of the disease are measured so that studies can be easily compared and presented more clearly to the medical field and public.

"In the context of continuing evolution of Alzheimer's research and technologies, the proposed research framework is a logical next step to help the scientific community advance in the fight against Alzheimer's," said NIA Director Richard J. Hodes, M.D. "The more accurately we can characterize the specific disease process pathologically defined as Alzheimer's disease, the better our chances of intervening at any point in this continuum, from preventing Alzheimer's to delaying progression,"

Evolution in thinking

This framework reflects the latest thinking in how Alzheimer's disease begins perhaps decades before outward signs of memory loss and decline may appear in an individual. In 2011, NIA-AA began to recognize this with the creation of separate sets of diagnostic guidelines that incorporated recognition of a preclinical stage of Alzheimer's and the need to develop interventions as early in the process as possible. The research framework offered today builds from the 2011 idea of three stages--pre-clinical, mild cognitive impairment and dementia--to a biomarker-based disease continuum.

The NIA-AA research framework authors, who included 20 academic, advocacy, government and industry experts, noted that the distinction between clinical symptoms and measurable changes in the brain has blurred. The new research framework focuses on biomarkers grouped into different pathologic processes of Alzheimer's which can be measured in living people with imaging technology and analysis of cerebrospinal fluid samples. It also incorporates measures of severity using biomarkers and a grading system for cognitive impairment.

"We have to focus on biological or physical targets to zero in on potential treatments for Alzheimer's," explained Eliezer Masliah, M.D., director of the Division of Neuroscience at the NIA. "By shifting the discussion to neuropathologic changes detected in biomarkers to define Alzheimer's, as we look at symptoms and the range of influences on development of Alzheimer's, I think we have a better shot at finding therapies, and sooner."

In an accompanying editorial, Masliah and NIA colleagues, including Dr. Hodes, highlighted both the promise and limitations of the biological approach. They noted that better operational definitions of Alzheimer's are needed to help better understand its natural history and heterogeneity, including prevalence of mimicking conditions. They also emphasized that the research framework needs to be extensively tested in diverse populations and with more sensitive biomarkers.

Batching and matching biomarkers

The NIA-AA research framework proposes three general groups of biomarkers--beta-amyloid, tau and neurodegeneration or neuronal injury--and leaves room for other and future biomarkers. Beta-amyloid is a naturally occurring protein that clumps to form plaques in the brain. Tau, another protein, accumulates abnormally forming neurofibrillary tangles which block communication between neurons. Neurodegeneration or neuronal injury may result from many causes, such as aging or trauma, and not necessarily Alzheimer's disease.

Researchers can use measures from a study participant and identify beta-amyloid (A), tau (T) or neurodegeneration or neuronal injury (N) to characterize that person's combination of biomarkers in one of eight profiles. For example, if a person has a positive beta-amyloid (A+) biomarker but no tau (T-), he or she would be categorized as having "Alzheimer's pathologic change." Only those with both A and T biomarkers would be considered to have Alzheimer's disease, along a continuum. The N biomarker group provides important pathologic staging information about factors often associated with Alzheimer's development or worsening of symptoms.
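As an illustration, the A/T/N profiles described above collapse into the framework's broader categories along these lines. This is a minimal sketch: the function name and boolean interface are ours, while the category labels follow the published framework.

```python
def atn_category(a_pos: bool, t_pos: bool, n_pos: bool) -> str:
    """Map an A/T/N biomarker profile to its framework category.

    a_pos: beta-amyloid biomarker positive (A+)
    t_pos: pathologic tau biomarker positive (T+)
    n_pos: neurodegeneration/neuronal-injury biomarker positive (N+)
    """
    if not a_pos:
        # Without amyloid positivity, the profile lies outside the
        # Alzheimer's continuum.
        if not t_pos and not n_pos:
            return "Normal AD biomarkers"
        return "Non-AD pathologic change"
    # All A+ profiles fall on the Alzheimer's continuum.
    if t_pos:
        # Both A and T positive: Alzheimer's disease (with or without N+).
        return "Alzheimer's disease"
    if n_pos:
        return "Alzheimer's and concomitant suspected non-AD pathologic change"
    return "Alzheimer's pathologic change"
```

Enumerating all eight combinations of the three booleans reproduces the eight profiles shown in the table above.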

Framework for certain research only

The authors emphasized that the NIA-AA research framework is neither diagnostic criteria nor a guideline for clinicians. It is intended for research purposes, requiring further testing before it could be considered for general clinical practice, they noted.

They also stressed that the biological approach to Alzheimer's is not meant to supplant other measures, such as neuropsychological tests, to study important aspects of the disease such as its cognitive outcomes. In some cases, the article pointed out, biomarkers may not be available or requiring them would be counterproductive for particular types of research.

The authors acknowledge that the research framework may seem complex, but stress that it is flexible and may be employed to answer many research questions, such as how cognitive outcomes differ among various biomarker profiles, and what the influence of age is on those relationships.

In its commentary the NIA leadership developed a table to help explain how the proposed framework might be used and where it might not apply:

The research framework is...

A testable hypothesis

An approach that facilitates standardized research reporting

A common language and a reference point for researchers for longitudinal studies and clinical trials

Welcoming of other approaches

Welcoming of other indicators of Alzheimer's and comorbidities

The research framework is NOT...

A requirement for NIH grant submission

A statement about Alzheimer's pathogenesis or etiology

An NIA policy, guideline or criterion for papers or grants

A disease definition for standard medical use

A fixed notion of Alzheimer's

Credit: 
NIH/National Institute on Aging

Alzheimer's disease redefined: New research framework defines Alzheimer's by brain changes, not symptoms

Chicago, April 10, 2018 - "NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease" was published today in the April 2018 issue of Alzheimer's & Dementia: The Journal of the Alzheimer's Association. First author Clifford R. Jack, Jr., M.D., of Mayo Clinic, Rochester, MN, and colleagues propose shifting the definition of Alzheimer's disease in living people - for use in research - from the current one, based on cognitive changes and behavioral symptoms with biomarker confirmation, to a strictly biological construct. This represents a major evolution in how we think about Alzheimer's.

Understanding and effectively treating Alzheimer's disease and other dementias may be the most difficult challenge for the medical/scientific community this century. The field has experienced monumental challenges developing new and effective drug therapies, not the least of which was the discovery that - until recently - clinical trials were conducted in which up to 30% of participants did not have the Alzheimer's disease-related brain change targeted by the experimental drug.

"With the aging of the global population, and the ever-escalating cost of care for people with dementia, new methods are desperately needed to improve the process of therapy development and increase the likelihood of success," said Maria Carrillo, Ph.D., Alzheimer's Association chief science officer and a co-author of the new article. "This new Research Framework is an enormous step in the right direction for Alzheimer's research."

According to the authors, "This evolution of the previous diagnostic criteria is in line with most chronic diseases that are defined biologically, with clinical symptoms being a ... consequence." They say, "the goal of much of medicine is to identify and treat diseases prior to overt symptoms. The [NIA-AA Research] Framework is intended to provide a path forward to ... prevention trials of Alzheimer's disease among persons who are clinically asymptomatic."

Other areas of medicine have used this approach to define disease processes using biomarkers: for example, osteoporosis, hypertension, hyperlipidemia and diabetes are defined by biomarkers such as bone mineral density, blood pressure, cholesterol and blood glucose. Therapies that address these biomarkers have been shown to reduce the likelihood of developing fractures, heart attacks and strokes.

The authors, "take the position that biomarker evidence of Alzheimer's disease indicates the presence of the disease whether or not symptoms are present, just as an abnormal HbA1C indicates the presence of diabetes whether or not symptoms are present."

In 2011, the Alzheimer's Association (AA) and the National Institute on Aging (NIA) at the U.S. National Institutes of Health convened experts to update the diagnostic guidelines for Alzheimer's disease. The landmark publications designated three stages of Alzheimer's - preclinical (before symptoms affecting memory, thinking or behavior can be detected), mild cognitive impairment and dementia. In bringing together global leaders again in 2017 to review advances in the field and update the guidelines, a profound shift in thinking occurred to define Alzheimer's disease biologically, by pathologic brain changes or their biomarkers, and treat cognitive impairment as a symptom/sign of the disease, rather than its definition.

According to Dr. Jack, once validated in diverse global populations, this new definition will create a powerful tool to speed and improve the development of disease-modifying treatments for Alzheimer's disease.

The authors envision that defining Alzheimer's disease as a biological construct will enable a more accurate understanding of the sequence of events that lead to the cognitive impairment associated with Alzheimer's disease, as well as the multiple causes of the disease. This will enable a more precise approach to therapy trials, including focusing on more specific targets and including the appropriate people.

In an accompanying editorial, Ara S. Khachaturian, Ph.D., Executive Editor, and the editorial staff of Alzheimer's & Dementia, "commend the effort within the Research Framework to create a common language that may lead to new thinking for the generation of new testable hypotheses about the conceptual basis for Alzheimer's disease. Such a language is a critical and essential element in addressing the ongoing challenge of developing more intricate and comprehensive models of Alzheimer's disease ... for the identification of new interventions and diagnostics."

In their "Editorial comment to the 'NIA-AA Research Framework: Towards a Biological Definition of Alzheimer's Disease,'" Nina Silverberg, Ph.D., Cerise Elliott, Ph.D., Laurie Ryan, Ph.D., Eliezer Masliah, M.D., and Richard Hodes, M.D., of NIA point out that the Framework - in addition to improving early detection and the development of new therapies - could potentially "allow more precise estimates of how many people are at risk [for or living with] Alzheimer's disease, how best to monitor response to therapies, and how to distinguish the effects of Alzheimer's disease from other similar pathologies."

Anticipating questions on the impact of the NIA-AA Research Framework on research funding, they add that, "The NIH will consider research applications using the Framework as well as proposals using alternative schemes when designing experimental approaches. The NIH continues to welcome applications where biomarkers may not be appropriate."

In the article, the authors say they, "appreciate the concern that this biomarker-based Research Framework has the potential to be misunderstood and misused. Therefore, we emphasize: First, it is premature and inappropriate to use this Research Framework in general medical practice. Second, this Research Framework should not be used to restrict alternative approaches to hypothesis testing that do not employ biomarkers ... biomarker-based research should not be considered a template for all research into age-related cognitive impairment and dementia."

That said, the authors believe the Framework applies to the entire Alzheimer's disease research community. In the drafting of the document, "we were careful to include ... representatives of industry and the Food and Drug Administration in addition to government and non-governmental organizations. Finally, the Framework was vetted with numerous stakeholders at several meetings as well as posted for months for public comment."

"It is called a 'Research Framework' because it needs to be thoroughly examined - and modified, if needed - before being adopted into general clinical practice," Dr. Jack said. "Importantly, this Framework should be examined in diverse populations."

The authors recognize that the current form of the NIA-AA Research Framework is designed around only the biomarker technology that is presently available. They point out that the proposed biomarker scheme (see the attached fact sheet) is expandable to incorporate new biomarkers, as they are developed and verified.

Credit: 
Alzheimer's Association

Large-scale replication study challenges key evidence for the pro-active reading brain

When listening to a speaker, we often feel that we know what the speaker will say next. How is this possible? It is assumed that our brain routinely uses clues within a sentence to estimate the probability of upcoming words. Activating information about a word before it appears helps to rapidly integrate its meaning, once it appears, with the meaning of the sentence.

"For over 10 years, language scientists and neuroscientists have been guided by a high impact study published in Nature Neuroscience showing that these predictions by the brain are very detailed and can even include the first sound of an upcoming word," explains Mante Nieuwland, cognitive neuroscientist at the Max Planck Institute for Psycholinguistics (MPI) and the University of Edinburgh. These findings had, however, not yet been explicitly replicated since 2005, when the study came out.

Today, a new paper published in eLife by a scientific team led by Nieuwland of the MPI in the Netherlands questions the replicability of those results. The study is the first large-scale, multi-laboratory replication effort for the field of cognitive neuroscience and shows that the predictive function of the human language system may operate differently than the field has come to believe.

Same question, state-of-the-art approach

"Inspired by recent demonstrations for the need for large subject-samples and more robust analyses in psychology and neuroscience research, we re-examined the research question of the original study. We did so by following the original methods and applying improved and current analysis methods," says Guillaume Rousselet from the University of Glasgow, co-author of the study. Furthermore, the researchers pre-registered their analyses, providing a time-stamped proof that their analysis was not tailored to achieve the reported results.

The team embarked on a massive brain imaging study: Across 9 UK laboratories (University of Birmingham, University of Bristol, University of Edinburgh, University of Glasgow, University of Kent, University College London, University of Oxford, University of Stirling, and University of York), 334 participants - 10 times the original sample size - read sentences that were presented one word at a time, while electrical brain activity was recorded at the scalp. Each sentence contained an expected or unexpected combination of an article and a noun (e.g., "The day was breezy so the boy went outside to fly a kite/an airplane").

Surprising nouns and articles

"We saw that unexpected nouns generated an increased brain response compared to expected nouns. Just like the original study," Nieuwland says. Nevertheless, this reaction, also called an enhanced N400 response, is not the core argument that the participants' brains actually anticipated the nouns. After all, it was generated after the nouns were read, and could mean that nouns like 'kite' are merely easier to process than nouns like 'airplane'.

The key evidence for prediction of a yet unseen noun was originally obtained on the preceding articles. In English, the correct use of the article 'a' or 'an' depends on the first sound of the next word. Even though 'a' and 'an' do not differ in their meaning, the 2005 study showed that unexpected articles also elicited an enhanced N400 response compared to expected articles. Presumably 'an' tells the readers that the next word cannot be 'kite'. This supported the claim that has stood since 2005 - that readers can make predictions as precise as the first sound of upcoming words.

"Crucially, our findings now show that there is no convincing evidence for this claim. With the original analysis, we did not replicate this pattern for the articles. With our improved analysis, we also did not find an effect that was statistically reliable, although the observed pattern did go in the expected direction," according to Nieuwland.

"Of course, it may be that people do predict the sound of upcoming words, but that they do not reliably use the articles to change their prediction. This could be because an unexpected article does not rule out that the expected noun will eventually appear ('a' can precede 'kite' if they are separated by another word, like in 'an old kite'). Also, we have to consider this study only investigates the English language. Other research has shown very different findings in languages such as Spanish, Dutch and French, for which articles correspond to nouns in grammatical gender regardless of intervening words. "

Less straightforward than assumed

The authors caution that these new findings should not be interpreted as being against prediction more generally. "There is a larger body of behavioural and neuroscience work that supports a role of prediction in language processing, for example of the meaning of an upcoming word, although many of those other results in the existing literature, especially in neuroscience, still need to be replicated." However, these new findings show that the reading brain is perhaps not as pro-active as is often assumed, by demonstrating a potential limit to the detail in which it predicts.

Credit: 
Max Planck Institute for Psycholinguistics

Cohesive neighborhoods, less spanking result in fewer child welfare visits

ANN ARBOR--The child welfare system is more likely to intervene in households in "less neighborly" neighborhoods and in which parents spank their kids, a new study shows.

Researchers at the University of Michigan and Michigan State University conducted analyses on nearly 2,300 families from 20 large U.S. cities who responded to surveys and interviews. Participating families had a child who was born between 1998-2000.

They found that living in neighborhoods with strong social cohesion and trust--where neighbors are willing to help each other and generally get along--protects families against getting involved in the child welfare system.

In addition, Child Protective Services is less likely to intervene in households where kids are rarely spanked.

Other factors, such as poverty and mothers feeling depressed, also increase the odds of CPS involvement after controlling for neighborhood risk and spanking.

In the study, mothers reported the neighborhood conditions in which they lived, such as supportive relationships between neighbors, and whether they had spanked their 3-year-old child within the past month. The mothers also reported contact with CPS when their child was between 3 and 5 years old.

"Our findings suggest that promoting caring, neighborly relationships among residents that support the needs and challenges of families with young children can help ensure children's safety," said study co-author Andrew Grogan-Kaylor, U-M associate professor of social work.

About 57 percent of the 3-year-olds in the sample had been spanked by a parent or parental figure in the past month. CPS investigated 7.4 million children for suspected maltreatment during 2016, according to the U.S. Department of Health and Human Services.

Unlike previous research, which considered spanking and neighborhood conditions only separately as precursors of child maltreatment, the current study examined these factors simultaneously, said study lead author Julie Ma, assistant professor of social work at UM-Flint.

"Both the types of neighborhoods in which parents choose, or are forced, to raise their children and parents' decisions about whether they spank their children influence the chances of CPS involvement," she said. "Programs and policies should address strategies for building supportive resident interactions in the neighborhoods, as well as nonphysical child discipline to help reduce maltreatment."

Credit: 
University of Michigan

Later school start times really do improve sleep time

A new study in SLEEP, published by Oxford University Press, indicates that delaying school start times results in students getting more sleep and feeling better, even within societies where trading sleep for academic success is common.

The study aimed to investigate the short- and longer-term impact of a 45-minute delay in school start time on the sleep and well-being of adolescents.

Singapore leads the world in the Programme for International Student Assessment rankings, which measures international scholastic performance in 15-year-olds. East Asian students live in a culture where the importance of academic success is deeply ingrained. This drive for academic achievement leads to high attainment in international academic assessments but has contributed to the curtailment of nocturnal sleep on school nights to well below the recommended eight to ten hours of sleep, putting students at risk of cognitive and psychological problems.

In Singapore, school typically starts around 7:30 AM, which is one hour earlier than the 8:30 AM or later start time recommended by the American Academy of Pediatrics, the American Medical Association, and the American Academy of Sleep Medicine. Sleep deprivation among Singaporean adolescents is rampant, and the average time in bed on school nights is six and a half hours.

In July 2016, an all-girls' secondary school in Singapore delayed its start time from 7:30 to 8:15 in the morning by restructuring its schedule in a way that did not delay school end time. Researchers investigated the impact of starting school later on students' sleep and well-being one month and nine months after the institution of the start time delay.

The sample consisted of 375 students in grades 7-10 at the school. Researchers assessed self-reports of sleep timing, sleepiness, and well-being (depressive symptoms and mood) before the school made the schedule change, and evaluated the same measures again approximately one and nine months after the delay. Total sleep time was also measured.

Later school start times have been shown to benefit sleep and well-being in Western cultures, but their usefulness in East Asian countries, where students are driven to trade sleep for academic success, is less clear. Most studies on later school start times have been conducted in Western countries. These studies have consistently found increased sleep duration on school nights with later start times. However, the sustainability of sleep habit improvement is not as well characterized.

Researchers wondered if students would continue to get more sleep if schools delayed their start times; the gains may not be sustained if students gradually delay their bedtime. For example, one study found that the sleep gained two months after a 45-minute delay in start time was no longer observed after another seven months, due to a delay in the sleep period. Delaying bedtimes, partly as a result of mounting academic workload, is a pressing reality in most East Asian households. Compounding this erosion of sleep time in East Asian societies is the resistance to changing the already packed school schedules. For example, recently, a secondary school in Hong Kong agreed to delay its start time, but only by 15 minutes. Nevertheless, a four-minute increase in time-in-bed on weekdays was found, together with gains in mental health, prosocial behavior and better attentiveness in class and peer relationships.

The results of this new study indicate that after one month, bedtimes on school nights were delayed by nine minutes while wake times were delayed by about 32 minutes, resulting in a net increase in time in bed of 23 minutes.

Participants also reported lower levels of subjective sleepiness and improvements in well-being at both follow-ups. Notably, a greater increase in sleep duration on school nights was associated with greater improvement in alertness and well-being.

Critically, with a later school start time, the percentage of participants whose self-reported sleep time on weekdays was at least eight hours--the amount generally considered appropriate for adolescents--increased from 6.9% to 16%. Total sleep time had increased by about 10 minutes at the nine-month follow-up.

"Starting school later in East Asia is feasible and can have sustained benefits," said the paper's lead researcher, Michael Chee. "Our work extends the empirical evidence collected by colleagues in the West and argues strongly for disruption in practice and attitudes surrounding sleep and wellbeing in societies where these are believed to hinder rather than enhance societal advancement."

Credit: 
Oxford University Press USA