Earth

Ancient DNA from Sardinia reveals 6,000 years of genetic history

image: The s'Orcu 'e Tueri nuraghi, one of many distinctive Sardinian Bronze Age stone towers dating to the mid- to late 2nd millennium BC, at a site included in the study.

Image: 
Gruppo Grotte Ogliastra

A new study of the genetic history of Sardinia, a Mediterranean island off the western coast of Italy, tells how genetic ancestry on the island was relatively stable through the end of the Bronze Age, even as mainland Europe saw new ancestries arrive. The study further details how the island's genetic ancestry became more diverse and interconnected with the Mediterranean starting in the Iron Age, as Phoenician, Punic, and eventually Roman peoples began arriving on the island.

The research, published in Nature Communications, analyzed genome-wide DNA data for 70 individuals from more than 20 Sardinian archaeological sites spanning roughly 6,000 years from the Middle Neolithic through the Medieval period. No previous study has used genome-wide DNA extracted from ancient remains to look at the population history of Sardinia.

"Geneticists have been studying the people of Sardinia for a long time, but we haven't known much about their past," said the senior author John Novembre, PhD, a leading computational biologist at the University of Chicago who studies genetic diversity in natural populations. "There have been clues that Sardinia has a particularly interesting genetic history, and understanding this history could also have relevance to larger questions about the peopling of the Mediterranean."

An interdisciplinary team

The people of Sardinia have long been studied by geneticists to understand human health. The island has one of the highest rates of people who live to 100 years or more, and its people have higher than average rates of autoimmune diseases and disorders such as beta-thalassemia and G6PD deficiency. Many villages in Sardinia also have high levels of relatedness, which makes uncovering the genetics of traits simpler. Across the island, the frequencies of genetic variants often differ from mainland Europe. These factors have made Sardinia a useful place for geneticists like senior author Francesco Cucca from the Università di Sassari in Italy to uncover genetic variants that may be linked to disease and aging.

"Contemporary Sardinians represent a reservoir for some variants that are currently very rare in continental Europe," Cucca said. "These genetic variants are tools we can use to dissect the function of genes and the mechanisms that are at the basis of genetic diseases."

Sardinia also has a unique archaeological, linguistic, and cultural heritage, and has been part of Mediterranean trade networks since the Neolithic age. How much the population's genetic ancestry has changed through these times, however, has been unknown.

To generate a new perspective on the genetic history of Sardinia, long-term collaborators Cucca and Novembre brought together an interdisciplinary group with geneticists, archaeologists, and ancient DNA experts. A team led by Johannes Krause at the Max Planck Institute for the Science of Human History and the University of Tübingen in Germany helped coordinate the sampling and carried out DNA sequencing and authentication. Teams led by Novembre and Cucca then analyzed the data and shared the results with the whole group for an interdisciplinary interpretation.

"We were thrilled to be able to generate such a dataset spanning six thousand years because the retrieval of ancient DNA from skeletal remains from Sardinia is very challenging," said Cosimo Posth, an archaeogeneticist at the Max Planck Institute and co-first author of the study.

Periods of stability and change

Sampling DNA from ancient remains allows scientists to get a snapshot of people living at a specific time and place, instead of using modern DNA and inferring the past based on assumptions and mathematical models. When the team compared the DNA of 70 ancient individuals collected from Sardinia to the DNA of other ancient and modern individuals, they uncovered two major patterns.

First, they saw that Sardinian individuals in the Middle Neolithic period (4100-3500 BCE) were closely related to people from mainland Europe of the time. Genetic ancestry then remained relatively stable on the island through at least the end of the "Nuragic" period (~900 BCE). This pattern differs from other regions of mainland Europe, which saw new ancestries enter as people moved across the continent in the Bronze Age.

The results also show the development of Sardinia's distinctive nuraghe stone towers and culture (after which the Nuragic period is named) did not coincide with any detectable new genetic ancestry arriving on the island.

"We found striking stability in ancestry from the Middle Neolithic through the end of the Nuragic period in Sardinia," said Joe Marcus, a PhD student in the Department of Human Genetics at UChicago and a co-first author on the paper.

Second, the team found evidence of the arrival of different populations from across the Mediterranean, first with Phoenicians originating from the Levant (modern-day Lebanon) and Punics, whose culture centered in Carthage (modern-day Tunisia). Then, new ancestry continued to appear during the Roman period and further into the Medieval period, as Sardinia came under the influence of migration from modern-day Italy and Spain.

"We observed clear signals of dynamic periods of contact linking the island to the rest of the Mediterranean, appearing first in individuals from two Phoenician and Punic sites as early as 500 BCE, and then in individuals from the Roman and Medieval periods," said Harald Ringbauer, PhD, a postdoctoral researcher involved in the computational data analysis at UChicago and a co-first author on the paper.

The group's results help explain similarities with DNA from mainland European individuals of the Neolithic and Copper Age, such as "Ötzi the Iceman," an almost perfectly preserved, 5,300-year-old human discovered in alpine ice in northern Italy in 1991. Specifically, among modern Europeans, Ötzi's DNA is most similar to modern-day Sardinians. The new study supports the theory that this similarity remains because Sardinia had less turnover of genetic ancestry over time than mainland Europe, which experienced large-scale migrations in the Bronze Age.

Insights from the past, implications for the present

Besides providing new insight into mysteries of the past, studying ancient DNA also has implications for the well-being of present-day humans. This model of Sardinia's population history--establishment followed by relative isolation and then the arrival of new sources of diversity--provides a new framework for understanding how genetic variants with health implications became more frequent on the island.

"For future studies, we want to look more precisely at mutations that we think are involved in disease to see in which period they changed in frequency and how quickly they changed," Novembre said. "That will help us understand the processes acting on these diseases, and in turn gain a richer view that may yield insights for human health."

Credit: 
University of Chicago

Solar storms may leave gray whales blind and stranded

A new study reported in the journal Current Biology on February 24 offers some of the first evidence that gray whales might depend on a magnetic sense to find their way through the ocean. This evidence comes from the discovery that whales are more likely to strand on days when there are more sunspots.

Sunspots are of interest because they are also linked to solar storms--sudden releases of high-energy particles from the sun that have the potential to disrupt magnetic orientation behavior when they interact with Earth's magnetosphere. But what's especially novel about the new study, according to the researchers, is that they were able to explore how a solar storm might cause whales to strand themselves.

"Is it that the solar storms are pushing the magnetic field around and giving the whales incorrect information--for example, the whale thinks it is on 4th Street, but it is actually on 8th?" asks Jesse Granger of Duke University. "Or is it that the solar storms are messing up the receptor itself--the whale thinks it is on 4th Street, but has just gone blind?

"We show that the mechanism behind the relationship between solar storms and gray whales, if it is an effect on a magnetic sensor, is likely caused by disruption to the sense itself, not inaccurate information. So, to put this back into the earlier metaphor, the big secondary finding of this paper is that it is possible that the reason the whales are stranding so much more often when there are solar storms is because they have gone blind, rather than that their internal GPS is giving them false information."

Granger says her interest in long-distance migrations stems in part from her own personal tendency to get lost, even on her way to the grocery store. She wanted to explore how some animals use magnetoreception to navigate by looking at incidents when navigation went terribly wrong.

"I hypothesized that by looking at patterns in the spacing and timing of incidents where an animal was unable to navigate properly, we could better understand the sense as a whole," Granger says.

She and her colleagues studied 186 live strandings of the gray whale (Eschrichtius robustus). The data showed those strandings occurred significantly more often on days with high sunspot counts than on randomly chosen days. On days with a high sunspot count, the chance of a stranding more than doubled.

Further study showed that strandings happened more often on days with a high solar radio flux index, as measured from Earth, than on randomly chosen days. On days with high radio-frequency (RF) noise, the likelihood of strandings was more than four times greater than on randomly selected days.
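The comparison described above--how often strandings fall on high-activity days versus randomly chosen days--can be sketched as a simple resampling test. This is a toy illustration, not the study's actual analysis; all names and data below are invented.

```python
import random

def stranding_rate_ratio(stranding_days, all_days, high_activity,
                         n_draws=10_000, seed=0):
    """Ratio of the observed fraction of strandings on high-activity days
    to the fraction expected if stranding dates were drawn at random.

    stranding_days: dates on which strandings occurred
    all_days: every date in the study window
    high_activity: set of dates with high sunspot counts (or high radio flux)
    A ratio well above 1 indicates strandings cluster on high-activity days.
    """
    rng = random.Random(seed)
    observed = sum(d in high_activity for d in stranding_days) / len(stranding_days)
    draws = rng.choices(all_days, k=n_draws)  # random comparison days
    expected = sum(d in high_activity for d in draws) / n_draws
    return observed / expected
```

With synthetic data in which every stranding lands on a high-activity day covering half the calendar, the ratio comes out near 2, mirroring the "more than doubled" finding in form only.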

Much to Granger's surprise, they found no significant increase in strandings on days with large deviations in the magnetic field. Altogether, the findings suggest that the increased incidence of strandings on days with more sunspots is explained by a disruption of whales' magnetoreceptive sensor, rather than distortion of the geomagnetic field itself.

"I really thought that the cause of the strandings was going to be inaccurate information," Granger said. "When those results came up negative, I was flummoxed. It wasn't until one of my co-authors mentioned that solar storms also produce high amounts of radio-frequency noise, and I remembered that radio-frequency noise can disrupt magnetic orientation, that things finally started to click together."

Granger says it's important to keep in mind that this isn't the only cause of strandings. There are still many other things that could cause a whale to strand, such as mid-frequency naval sonar.

Granger now plans to conduct a similar analysis for several other species of whales on several other continents to see if this pattern exists on a more global scale. She also hopes to see what sort of information this broader picture of strandings can offer for our understanding of whales' magnetic sense.

Credit: 
Cell Press

1 billion-year-old green seaweed fossils identified, relative of modern land plants

image: In the background of this digital recreation, ancient microscopic green seaweed is seen living in the ocean 1 billion years ago. In the foreground is the same seaweed in the process of being fossilized far later. Image by Dinghua Yang.

Image: 
Dinghua Yang

Virginia Tech paleontologists have made a remarkable discovery in China: 1 billion-year-old micro-fossils of green seaweeds that could be related to the ancestor of the earliest land plants and trees that first developed 450 million years ago.

The micro-fossil seaweeds -- a form of algae known as Proterocladus antiquus -- are barely visible to the naked eye at 2 millimeters in length, or roughly the size of a typical flea. Professor Shuhai Xiao said the fossils are the oldest green seaweeds ever found. They were imprinted in rock taken from an area of dry land -- formerly ocean -- near the city of Dalian in Liaoning Province in northern China. Previously, the earliest convincing fossil record of green seaweeds was found in rock dated at roughly 800 million years old.

The findings -- led by Xiao and Qing Tang, a post-doctoral researcher, both in the Department of Geosciences, part of the Virginia Tech College of Science -- are featured in the latest issue of Nature Ecology & Evolution. "These new fossils suggest that green seaweeds were important players in the ocean long before their land-plant descendants moved and took control of dry land," Xiao said.

"The entire biosphere is largely dependent on plants and algae for food and oxygen, yet land plants did not evolve until about 450 million years ago," Xiao said. "Our study shows that green seaweeds evolved no later than 1 billion years ago, pushing back the record of green seaweeds by about 200 million years. What kind of seaweeds supplied food to the marine ecosystem?"

Xiao said the current hypothesis is that land plants -- the trees, grasses, food crops, bushes, even kudzu -- evolved from green seaweeds, which were aquatic plants. Through geological time -- millions upon millions of years -- they moved out of the water, adapted to dry land, and prospered in their new natural environment. "These fossils are related to the ancestors of all the modern land plants we see today."

However, Xiao added the caveat that not all geobiologists are on the same page: the origin of green plants remains debated. "Not everyone agrees with us; some scientists think that green plants started in rivers and lakes, and then conquered the ocean and land later," added Xiao, a member of the Virginia Tech Global Change Center.

There are three main types of seaweed: brown (Phaeophyceae), green (Chlorophyta), and red (Rhodophyta), and thousands of species of each kind. Fossils of red seaweed, which are now common on ocean floors, have been dated as far back as 1.047 billion years old.

"There are some modern green seaweeds that look very similar to the fossils that we found," Xiao said. "A group of modern green seaweeds, known as siphonocladaleans, are particularly similar in shape and size to the fossils we found."

Photosynthetic plants are, of course, vital to the ecological balance of the planet because they produce organic carbon and oxygen through photosynthesis, and they provide food and the basis of shelter for untold numbers of mammals, fish, and more. Yet, going back 2 billion years, Earth had no green plants at all in oceans, Xiao said.

It was Tang who discovered the micro-fossils of the seaweeds using an electron microscope at Virginia Tech's campus and brought them to Xiao's attention. To make the fossils easier to see, mineral oil was dripped onto them to create a strong contrast.

"These seaweeds display multiple branches, upright growths, and specialized cells known as akinetes that are very common in this type of fossil," he said. "Taken together, these features strongly suggest that the fossil is a green seaweed with complex multicellularity that is circa 1 billion years old. These likely represent the earliest fossils of green seaweeds. In short, our study tells us that the ubiquitous green plants we see today can be traced back to at least 1 billion years."

According to Xiao and Tang, the tiny seaweeds once lived in a shallow ocean, died, and then became "cooked" beneath a thick pile of sediment, preserving the organic shapes of the seaweeds as fossils. Many millions of years later, the sediment was then lifted up out of the ocean and became the dry land where the fossils were retrieved by Xiao and his team, which included scientists from Nanjing Institute of Geology and Paleontology in China.

Credit: 
Virginia Tech

Columbia team discovers new way to control the phase of light using 2D materials

image: Illustration of an integrated micro-ring resonator based low loss optical cavity with semiconductor 2D material on top of the waveguide.

Image: 
Ipshita Datta and Aseema Mohanty, Lipson Nanophotonics Group/Columbia Engineering

New York, NY--February 24, 2020--Optical manipulation on the nano-scale, or nanophotonics, has become a critical research area, as researchers seek ways to meet the ever-increasing demand for information processing and communications. The ability to control and manipulate light on the nanometer scale will lead to numerous applications including data communication, imaging, ranging, sensing, spectroscopy, and quantum and neural circuits (think LIDAR--light detection and ranging--for self-driving cars and faster video-on-demand, for example).

Today, silicon has become the preferred integrated photonics platform due to its transparency at telecommunication wavelengths, ability for electro-optic and thermo-optic modulation, and its compatibility with existing semiconductor fabrication techniques. But, while silicon nanophotonics has made great strides in the fields of optical data communications, phased arrays, LIDAR, and quantum and neural circuits, there are two major concerns for large-scale integration of photonics into these systems: their ever-expanding need for scaling optical bandwidth and their high electrical power consumption.

Existing bulk silicon phase modulators can change the phase of an optical signal, but this process comes at the expense of either high optical loss (electro-optic modulation) or high electrical power consumption (thermo-optic modulation). A Columbia University team, led by Michal Lipson, Eugene Higgins Professor of Electrical Engineering and professor of applied physics at Columbia Engineering, announced that they have discovered a new way to control the phase of light using 2D materials--atomically thin materials, about 0.8 nanometers, or 1/100,000 the width of a human hair--without changing its amplitude, at extremely low electrical power dissipation.

In this new study, published today by Nature Photonics, the researchers demonstrated that by simply placing the thin material on top of passive silicon waveguides, they could change the phase of light as strongly as existing silicon phase modulators, but with much lower optical loss and power consumption.

"Phase modulation in optical coherent communication has remained a challenge to scale, due to the high optical loss that was associated with phase change," says Lipson. "Now we've found a material that can change the phase only, providing us another avenue to expand the bandwidth of optical technologies."

The optical properties of semiconductor 2D materials such as transition metal dichalcogenides (TMDs) are known to change dramatically with free-carrier injection (doping) near their excitonic resonances (absorption peaks). However, very little is known about the effect of doping on the optical properties of TMDs at telecom wavelengths, far away from these excitonic resonances, where the material is transparent and therefore can be leveraged in photonic circuits.

The Columbia team, which included James Hone, Wang Fong-Jen Professor of Mechanical Engineering at Columbia Engineering, and Dimitri Basov, professor of physics at the University, probed the electro-optic response of the TMD by integrating the semiconductor monolayer on top of a low-loss silicon nitride optical cavity and doping the monolayer using an ionic liquid. They observed a large phase change with doping, while the optical loss changed minimally in the transmission response of the ring cavity. They showed that the doping-induced phase change relative to change in absorption for monolayer TMDs is approximately 125, which is significantly higher than that observed in materials commonly employed for silicon photonic modulators including Si and III-V on Si, while being simultaneously accompanied by negligible insertion loss.
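The ratio of about 125 reported above acts as a figure of merit: how much the real (phase-shifting) part of the refractive index changes per unit change in the imaginary (absorptive) part. A minimal sketch follows; the individual Δn and Δk values are hypothetical, chosen only so their ratio matches the reported ~125.

```python
def phase_loss_fom(delta_n, delta_k):
    """Electro-refractive figure of merit: change in the real part of the
    refractive index per unit change in the imaginary (absorptive) part.
    Larger values mean more phase shift for less added optical loss."""
    if delta_k == 0:
        raise ValueError("delta_k must be nonzero")
    return delta_n / delta_k

# Hypothetical doping-induced index changes for a monolayer TMD at telecom
# wavelengths; only the ratio (~125) comes from the study.
fom = phase_loss_fom(delta_n=2.5e-3, delta_k=2.0e-5)
```

By this measure, a material with the same phase response but ten times the absorption change would score ten times lower, which is why the comparison to Si and III-V-on-Si modulators favors the TMD monolayer.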

"We are the first to observe strong electro-refractive change in these thin monolayers," says the paper's lead author Ipshita Datta, a PhD student with Lipson. "We showed pure optical phase modulation by utilizing a low loss silicon nitride (SiN)-TMD composite waveguide platform in which the optical mode of the waveguide interacts with the monolayer. So now, by simply placing these monolayers on silicon waveguides, we can change the phase by the same order of magnitude, but at 10,000 times lower electrical power dissipation. This is extremely encouraging for the scaling of photonic circuits and for low-power LIDAR."

The researchers are continuing to probe and better understand the underlying physical mechanism for the strong electrorefractive effect. They are currently leveraging their low-loss and low-power phase modulators to replace traditional phase shifters, and therefore reduce the electrical power consumption in large-scale applications such as optical phased arrays, and neural and quantum circuits.

Credit: 
Columbia University School of Engineering and Applied Science

Climate change will cause a loss of olive production in Andalusia

image: Rafael Villar and Salvador Arenas Castro, investigators.

Image: 
Universidad de Córdoba

We do not have to look as far away as the glaciers in Norway, the fires in Australia or the floods in Brazil to see the effects of climate change. In Spain, changes are also starting to show, and they will multiply in the coming years. Not only will the climate be affected, but social and economic aspects as well.

A study by the University of Cordoba and the Centre for Research in Geospace Science (abbreviated to CICGE, at Porto University, Portugal) investigated how climate change will affect one of the main economic pillars of Andalusia: the olive sector. The researchers used a tool known as a species distribution model, which predicts suitable areas for the presence of a species based on environmental features. First, they studied the changes that will occur in the Andalusian climate and how those changes will influence the distribution of the main olive varieties grown in Andalusia. Next, they estimated, province by province, what olive production will look like in 20, 50 and 80 years, based on the change in suitable areas.
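A species distribution model relates where a species occurs to environmental variables, then projects suitability under future climates. The idea can be sketched in miniature: score each grid cell's climate against a variety's preferred conditions, and count cells that clear a presence threshold now versus under a warmer, drier scenario. Every optimum, tolerance, and grid value below is invented for illustration; the study fit its models to real occurrence and climate data.

```python
import math

def suitability(temp_c, rain_mm, opt_temp=18.0, opt_rain=550.0,
                temp_tol=4.0, rain_tol=150.0):
    """Toy climatic suitability score in [0, 1] for a hypothetical olive
    variety: a Gaussian response around invented optima. Real SDMs fit
    such response curves (or machine-learned equivalents) to data."""
    dt = (temp_c - opt_temp) / temp_tol
    dr = (rain_mm - opt_rain) / rain_tol
    return math.exp(-0.5 * (dt * dt + dr * dr))

def suitable_area(cells, threshold=0.5):
    """Count grid cells whose suitability clears the presence threshold."""
    return sum(suitability(t, r) >= threshold for t, r in cells)

# Four (temperature °C, rainfall mm) grid cells, and the same cells under
# a warmer, drier scenario (+2.5 °C, -120 mm): the suitable area shrinks.
current = [(17.5, 560), (18.2, 540), (19.0, 500), (21.0, 420)]
future = [(t + 2.5, r - 120) for t, r in current]
```

Running the counts on these toy grids shows the suitable area dropping from 4 cells to 2, the qualitative pattern the study reports for most varieties.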

"The study shows that there will be a reduction in the amount of area available for growing most of the olive varieties studied. This will be mainly due to less rainfall and loss of soil humidity", says Salvador Arenas Castro, CICGE researcher, who collaborated with the University of Cordoba and is the lead author of the study.

In the case of the Nevadillo olive variety, grown in the Cordoba province part of the Sierra Morena, it is estimated that by 2100, there will no longer be any area fit for farming. Climate change will also significantly affect Manzanilla, Lechín and Picudo varieties. "If these predictive models foretell major losses in areas suitable for the most common olive varieties, local varieties will run a high risk of disappearing as they are grown in much smaller areas with more specific climate conditions and therefore, are much more vulnerable to climate change", he warns.

In contrast, the suitable area for growing the Picual variety, the most widespread in Andalusia due to its ability to adapt to different environmental conditions, will potentially increase by 25%. This is mainly because cooler areas in the provinces of Granada and Almeria, particularly in the Alpujarra region, will become suitable for farming Picual olives once temperatures increase.

Regarding production, the provinces most affected by climate change will be Seville and Cadiz, with estimated losses of 29% for the former and 24% for the latter by 2100. For the provinces of Malaga, Cordoba and Huelva, production will decrease by 18%, 9% and 7% respectively. In the provinces of Almeria and Granada, potential olive production will increase by 13% and 6%. "This increase will occur thanks to the potential expansion of the Picual and Verdial varieties in higher areas, such as the Alpujarra [mountains]", explains Arenas Castro. In the province of Jaen, the main olive producer, losses will not be as drastic, specifically because of the fact that the Picual variety, one of the most resistant varieties, is the most commonly grown there.

According to the researcher, climate change has been shown to be a major factor in how plant and animal species will be distributed from now on. Many studies predict that species will move northward and to higher areas, and this research shows that olive farming is no exception. "The trouble will come when, in order to maintain the same levels of production, olive farming has to be moved to more northern areas or areas at higher altitudes, disturbing not only other crops but also protected areas", he warns.

Rafael Villar, Ecology Professor at the University of Cordoba and part of this research team, highlights the need for public authorities to consider these predictions and make long-term plans in order to prevent climate change from affecting the local economy as far as possible. "It is also necessary to raise awareness that climate change is not a myth. It will impact our standards of living and our local economy and we must do as much as we can to prevent it", he concludes.

Credit: 
University of Córdoba

Why Edgar Allan Poe probably did not kill himself

image: The writer died in 1849 after spending several days in hospital while in a state of delirium.

Image: 
Lancaster University

A computational analysis of language used by the writer Edgar Allan Poe has revealed that his mysterious death was unlikely to have been suicide.

The author, poet, editor, and literary critic died in 1849 after spending several days in hospital while in a state of delirium. To date, Poe's death remains an unsolved enigma, with his contemporary, poet Charles Baudelaire even speculating that the incident was "almost a suicide, a suicide prepared for a long time".

But psychologist Dr Ryan Boyd from Lancaster University and his colleague -- Hannah Dean from the University of Texas at Austin -- have found that Poe's psychological markers of depression are not consistent with suicide.

This research has now been published in the Journal of Affective Disorders.

Dr Boyd said: "My hunch is that he was indeed spiralling into a depression toward the end of his life, but that he didn't kill himself."

Using computerized language analysis, they analysed 309 of Poe's personal letters, 49 poems, and 63 short stories and investigated whether a pattern of linguistic cues consistent with depression and suicidal cognition were discernible throughout the writer's life, particularly in his final years.

They focused on five measures which have been established as diagnostic of depression and/or suicidality:

Increased use of first-person singular pronouns (e.g., words like I, me, and my)

Increased use of negative emotion words (bad, sad, angry)

More cognitive processing words (think, understand, know)

Fewer positive emotion words (happy, good, terrific)

Fewer first-person plural pronouns (we, us, our).
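Measures like the five above are typically computed as per-word rates of dictionary categories, in the spirit of LIWC-style analysis. Here is a minimal sketch with invented mini-lexicons; the dictionaries used in such studies are far larger, and this is not the authors' code.

```python
import re

# Hypothetical mini-lexicons standing in for full word-category dictionaries.
CATEGORIES = {
    "first_singular": {"i", "me", "my", "mine", "myself"},
    "first_plural": {"we", "us", "our", "ours", "ourselves"},
    "negative_emotion": {"bad", "sad", "angry", "miserable", "gloom"},
    "positive_emotion": {"happy", "good", "terrific", "joy", "delight"},
    "cognitive": {"think", "understand", "know", "because", "reason"},
}

def category_rates(text):
    """Rate per 1,000 words of each category in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    rates = {}
    for name, lexicon in CATEGORIES.items():
        hits = sum(w in lexicon for w in words)
        rates[name] = 1000 * hits / max(len(words), 1)
    return rates
```

Tracking such rates letter by letter and year by year is what lets an analysis like this one look for spikes around life events, or a sustained rise toward the end of a life.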

These linguistic markers of depression spiked during negative events in Poe's life, such as the death of his wife. Past research has shown that depressive language patterns tend to rise dramatically leading up to a death by suicide; however, this pattern did not consistently emerge in the last year of Poe's life.

Poe was known to have suffered from regular bouts of severe depression and also had drug and alcohol problems. He lost his parents as a two-year-old and was devastated first by the death of his foster mother and then by that of his wife, Virginia Clemm Poe, in 1847.

The researchers concluded: "Significant, consistent patterns of depression were not found and do not support suicide as a cause of death. However, linguistic evidence was found suggesting the presence of several potential depressive episodes over the course of Poe's life - these episodes were most pronounced during years of Poe's greatest success, as well as those following the death of his wife."

"Our analyses suggest that he struggled deeply with success, with linguistic markers of depression peaking during the times of his greatest fame and popularity in 1843, 1845 and 1849."

Credit: 
Lancaster University

Releasing brakes: Potential new methods for Duchenne muscular dystrophy therapies

image: A stained cell modeling Duchenne muscular dystrophy used in the study.

Image: 
Courtesy of the University of Pennsylvania

Researchers identified a group of small molecules that may open the door to developing new therapies for Duchenne muscular dystrophy (DMD), an as-yet-uncured disease that results in devastating muscle weakening and loss. The molecules tested by the team from the Perelman School of Medicine at the University of Pennsylvania eased repression of a specific gene, utrophin, in mouse muscle cells, allowing the body to produce more utrophin protein, which can be subbed in for dystrophin, a protein whose absence causes DMD. These findings were published this month in Scientific Reports.

"We're trying to find therapies that will restore a patient's muscle function without resorting to gene therapy," said the study's senior author Tejvir S. Khurana, MD, PhD, a professor of Physiology and member of the Pennsylvania Muscle Institute. "Increasing utrophin is a major focus of muscular dystrophy research. While, ideally, we would replace the missing dystrophin in patients, there are a number of technical and immunological problems associated with this approach."

Introducing dystrophin through gene therapy is challenging for two main reasons: First, the dystrophin gene is extremely large. It requires extensive down-sizing and conversion into a micro-dystrophin to fit the Adeno-associated viral vectors being used clinically for gene therapy. The second challenge is the immune system. Since the patient's body never produced dystrophin, it interprets the new micro-dystrophin protein as a foreign, hostile invader and attacks, which may lead to adverse events and nullify any benefits.

"We're using an approach that attempts to increase utrophin levels in the body because it has functional characteristics and a genetic structure similar to dystrophin. Since the body already produces it, the immune system recognizes the protein as the body's own and does not attack it or the cells producing it, even when over-expressed," Khurana said.

There have been other attempts to use utrophin as a substitute for dystrophin using drugs, but those methods have focused on boosting utrophin through activating the "promoter," the part of a gene that kick-starts the process of its expression in a person. Using the metaphor of trying to move a car, Khurana said that this approach is like pressing the gas pedal.

However, the body also has mechanisms that limit the expression of proteins. These make simply stimulating more utrophin production similar to pressing a vehicle's gas pedal while the brake is on: there may be some movement, but not a lot.

Khurana and his team, including first author Emanuele Loro, PhD, a Physiology research associate at Penn Medicine, decided to try an approach that would be similar to releasing the parking brake. They believe that by overpowering the repression with drugs, the body would naturally produce more of the utrophin it was already making. The process is referred to as "upregulation," and they hoped it would cover for the missing dystrophin.

The researchers tested a collection -- called a "library" -- of different small molecules in a utrophin cellular assay they developed, which yielded 27 promising "hits." After ranking the hits' effectiveness with an algorithm they developed, the Hit to Lead Prioritization Score (H2LPS), the team extensively tested the top 10 molecules in muscle cell lines. The top-scoring molecule, trichostatin A (TSA), was then tested in a mouse model of muscular dystrophy, where it led to significant improvements in muscle structure and function.

With the molecules they identified, Khurana and his team believe they've found potential ways of developing therapies to treat DMD patients. Testing is still in early stages, but Khurana is very excited about the doors this discovery will open.

"Our next steps here will be to do more screenings to identify new hits using chemically diverse libraries," Khurana said. "This is a completely new approach to increase utrophin for this condition, and we're very keen to test it further and eventually bring it to clinical trials."

Credit: 
University of Pennsylvania School of Medicine

Solar storms could scramble whales' navigational sense

image: California gray whales like these mothers and calves are 4.3 times more likely to strand themselves during a burst of cosmic radio static from a solar flare, further evidence that they navigate by Earth's magnetic field.

Image: 
Nicholas Metheny NOAA

DURHAM, N.C. -- When our sun belches out a hot stream of charged particles in Earth's general direction, it doesn't just mess up communications satellites. It might also be scrambling the navigational sense of California gray whales (Eschrichtius robustus), causing them to strand on land, according to a Duke University graduate student.

Many animals can sense the Earth's magnetic field and use it like a GPS to navigate during their long migrations. However, solar storms could be disrupting that signal, said Duke graduate student Jesse Granger, who studies biophysics in the lab of biology professor Sönke Johnsen.

Earlier research has found a correlation between solar activity like sunspots and flares and stranded sperm whales, but Granger's analysis tried to get to the bottom of what the relationship might be.

Gray whales were an ideal species to test this idea because they migrate 10,000 miles a year from Baja California to Alaska and back, and they stay relatively close to the shore, where small navigational errors could lead to disaster, Granger said.

She compiled 31 years of gray whale stranding records from a NOAA database and sifted out all the cases in which the whales were obviously sick, malnourished, injured or entangled, leaving 186 strandings of otherwise healthy animals.

Comparing the healthy strandings data to a record of solar activity and statistically sifting out several other possible factors like seasons, weather, ocean temperatures and food abundance, Granger concluded that gray whales were 4.3 times more likely to strand when a lot of radio frequency noise from a solar outburst was hitting the Earth.
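The 4.3 figure is a relative likelihood (a risk ratio). As a minimal sketch, with stranding counts invented for illustration (the real analysis also statistically adjusted for season, weather, ocean temperature and food abundance), the ratio comes from comparing stranding rates on high- versus low-noise days:

```python
# Invented counts for illustration; Granger's analysis used 186 healthy
# strandings from 31 years of NOAA records and adjusted for confounders.
strandings_high_noise = 43   # healthy strandings on high radio-noise days
days_high_noise = 1000
strandings_low_noise = 10    # healthy strandings on low radio-noise days
days_low_noise = 1000

rate_high = strandings_high_noise / days_high_noise
rate_low = strandings_low_noise / days_low_noise
risk_ratio = rate_high / rate_low
print(f"whales were {risk_ratio:.1f}x more likely to strand")
```

With these invented counts the ratio works out to 4.3; the study's actual estimate comes from a regression controlling for the factors listed above.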

She suspects the issue isn't that a solar storm warps the Earth's magnetic field, though it can. It's that the radio frequency noise created by the solar outburst does something to overwhelm the whales' senses, preventing them from navigating altogether -- as if turning their GPS off in the middle of the trip.

The case that whales somehow tap into the planet's geomagnetic field is strong, because landmarks are few in the open ocean, but unfortunately researchers don't yet know precisely how they navigate, said Granger, whose work appears Feb. 24 in Current Biology.

While her study provides more evidence for a magnetic sense, Granger said the whales may still be using other cues to make their migration. "A correlation with solar radio noise is really interesting, because we know that radio noise can disrupt an animal's ability to use magnetic information," she said.

"We're not trying to say this is the only cause of strandings," Granger said. "It's just one possible cause."

Credit: 
Duke University

Obesity embargo alert for March 2020

All print, broadcast and online journalists who receive the Obesity embargo alert agree to abide by the embargo and may not publish, post, broadcast or distribute embargoed news releases or details of the embargoed studies before the embargo date and time.

When writing about these studies, journalists are asked to attribute the source as the journal Obesity and to include the online link to the Obesity articles as provided below. Links become active when articles post at 3:00 a.m. on Feb. 24, 2020, unless indicated differently below.

About the journal - Obesity is the peer-reviewed, scientific journal of The Obesity Society.

Editors' Choice 1 - Proposed Coding System Addresses Pathophysiology, Therapeutic Goals, W. Timothy Garvey, garveyt@uab.edu, and Jeffrey I. Mechanick
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22727)

Also see accompanying commentary by Johannes Hebebrand (http://onlinelibrary.wiley.com/doi/10.1002/oby.22740), posting online on Feb. 24, 2020

Editors' Choice 2 - Liraglutide Enhances Weight Loss from IBT in a Primary Care Setting, Thomas A. Wadden, wadden@pennmedicine.upenn.edu, Jena Shaw Tronieri, Danny Sugimoto, Michael Taulo Lund, Pernille Auerbach, Camilla Jensen, and Domenica Rubino
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22726)

Editors' Choice 3 - miRNA Profiling in Omentum Adipose Before and After Gastric Bypass, Donia Macartney-Coxson, donia.macartney-coxson@esr.cri.nz, Kirsty Danielson, Jane Clapham, Miles C. Benton, Alice Johnston, Angela Jones, Odette Shaw, Ronald D. Hagan, Eric P. Hoffman, Mark Hayes, Jacquie Harper, Michael A. Langston, and Richard S. Stubbs
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22722)

Editors' Choice 4 - Measuring Physical Activity and Accumulation of Fat in Infants, Sara E. Benjamin-Neelon, sara.neelon@jhu.edu, Jiawei Bai, Truls Østbye, Brian Neelon, Russell R. Pate, and Ciprian Crainiceanu
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22738) - already online

ADDITIONAL EMBARGOED RESEARCH

Impact of Exposure to Antibiotics During Pregnancy and Infancy on Childhood Obesity: A Systematic Review and Meta-Analysis, Shengrong Wan, Man Guo, Ting Zhang, Qing Chen, Maoyan Wu, Fangyuan Teng, Yang Long, Zongzhe Jiang, Jiangzongzhe555@126.com, and Yong Xu, xywyll@aliyun.com (http://onlinelibrary.wiley.com/doi/10.1002/oby.22747) - embargo lifts March 4, 2020, at 3:00 a.m. (EST).

Scroll down to find abstracts for each of the above papers. To request the full text of any of these studies and agree to the embargo policy, or to arrange an interview with a study's author or an obesity expert, please contact communications@obesity.org.

Editors' Choice Abstracts

Editors' Choice 1 - Proposal for a Scientifically Correct and Medically Actionable Disease Classification System (ICD) for Obesity

Objective: Obesity is responsible for a huge burden of suffering and social costs, and yet many patients lack access to evidence-based therapies. The diagnostic term "obesity" and inadequate International Classification of Diseases (ICD) codes contribute to suboptimal efforts to prevent and treat obesity as a chronic disease. The goal of this review is to develop a medically actionable classification system based on the diagnostic term "adiposity-based chronic disease" (ABCD) that reflects disease pathophysiology and specific complications causing morbidity and mortality.

Methods: A coding system based on the diagnosis of ABCD with four domains is proposed: A codes reflect pathophysiology, B codes indicate BMI classification, C codes specify specific biomechanical and cardiovascular complications remediable by weight loss, and D codes indicate the degree of the severity of complications. Supplemental codes identify aggravating factors that complicate care and that are relevant to a personalized therapeutic plan.

Results: The coding system addresses pathophysiology and therapeutic goals and differential risk, presence, and severity of specific complications that are integral to ABCD as a chronic disease.

Conclusions: The scientifically correct and medically actionable approach to diagnosis and disease coding will lead to greater acknowledgement of ABCD as a disease and accessibility to evidence-based therapies on behalf of patients across the life cycle.

Editors' Choice 2 - Liraglutide 3.0 mg and Intensive Behavioral Therapy (IBT) for Obesity in Primary Care: The SCALE IBT Randomized Controlled Trial

Objective: Previous studies have shown additive weight loss when intensive behavioral therapy (IBT) was combined with weight-loss medication. The present multisite study provides the first evaluation, in primary care, of the effect of the Centers for Medicare and Medicaid Services-based IBT benefit, delivered alone (with placebo) or in combination with liraglutide 3.0 mg.

Methods: The Satiety and Clinical Adiposity--Liraglutide Evidence in individuals with and without diabetes (SCALE) IBT was a 56-week, randomized, double-blind, placebo-controlled, multicenter trial in individuals with obesity who received liraglutide 3.0 mg (n = 142) or placebo (n = 140) as an adjunct to IBT.

Results: At week 56, mean weight loss with liraglutide 3.0 mg plus IBT was 7.5% and 4.0% with placebo combined with IBT (estimated treatment difference [95% CI] −3.4% [−5.3% to −1.6%], P = 0.0003). Significantly more individuals on liraglutide 3.0 mg than placebo achieved ≥ 5% weight loss (61.5% vs. 38.8%; odds ratio [OR] 2.5 [1.5 to 4.1], P = 0.0003), > 10% weight loss (30.5% vs. 19.8%; OR 1.8 [1.0 to 3.1], P = 0.0469), and > 15% weight loss (18.1% vs. 8.9%; OR 2.3 [1.1 to 4.7], P = 0.0311). Liraglutide 3.0 mg in combination with IBT was well tolerated, with no new safety signals identified.

Conclusions: In a primary care setting, Centers for Medicare and Medicaid Services-based IBT produced clinically meaningful weight loss at 56 weeks, enhanced by the addition of liraglutide 3.0 mg.

Editors' Choice 3 - MicroRNA Profiling in Adipose Before and After Weight Loss Highlights the Role of miR-223-3p and the NLRP3 Inflammasome

Objective: Adipose tissue plays a key role in obesity-related metabolic dysfunction. MicroRNA (miRNA) are gene regulatory molecules involved in intercellular and inter-organ communication. It was hypothesized that miRNA levels in adipose tissue would change after gastric bypass surgery and that this would provide insights into their role in obesity-induced metabolic dysregulation.

Methods: miRNA profiling (Affymetrix GeneChip miRNA 2.0 Array) of omental and subcutaneous adipose (n = 15 females) before and after gastric bypass surgery was performed.

Results: One omental and thirteen subcutaneous adipose miRNAs were significantly differentially expressed after gastric bypass, including downregulation of miR-223-3p and its antisense relative miR-223-5p in both adipose tissues. mRNA levels of miR-223-3p targets NLRP3 and GLUT4 were decreased and increased, respectively, following gastric bypass in both adipose tissues. Significantly more NLRP3 protein was observed in omental adipose after gastric bypass (P = 0.02). Significant hypomethylation of NLRP3 and hypermethylation of miR-223 were observed in both adipose tissues after gastric bypass. In subcutaneous adipose, significant correlations were observed between both miR-223-3p and miR-223-5p and glucose and between NLRP3 mRNA and protein levels and blood lipids.

Conclusions: This is the first report detailing genome-wide miRNA profiling of omental adipose before and after gastric bypass, and it further highlights the association of miR-223-3p and the NLRP3 inflammasome with obesity.

Editors' Choice 4 - Physical Activity and Adiposity in a Racially Diverse Cohort of US Infants

Objective: Early life physical activity may help prevent obesity, but objective quantification in infants is challenging.

Methods: A total of 506 infants were examined from 2013 to 2016. Infants wore accelerometers for 4 days at ages 3, 6, 9, and 12 months. Daily log-transformed physical activity counts were computed, averaged, and standardized across assessments. A linear mixed model was used to examine trends in standardized physical activity counts as well as associations between physical activity and BMI z score, sum of subscapular and triceps skinfold thickness for overall adiposity (SS+TR), and their ratio for central adiposity (SS:TR).

Results: Among infants, 66% were black and 50% were female. For each additional visit, standardized physical activity counts increased by 0.23 (CI: 0.18 to 0.27; P

Conclusions: Physical activity increased over infancy and was associated with central adiposity. Despite limitations, researchers should consider objective measurement in infants.

ADDITIONAL EMBARGOED RESEARCH

Impact of Exposure to Antibiotics During Pregnancy and Infancy on Childhood Obesity: A Systematic Review and Meta-Analysis

Objective: This study aimed to investigate whether antibiotic exposure during pregnancy and infancy was associated with childhood overweight or obesity.

Methods: PubMed, Embase, and Cochrane Library databases were searched from the inception date to April 18, 2019, to identify observational studies that investigated the association between antibiotic exposure during pregnancy and infancy and childhood overweight or obesity. After study selection and data extraction, the meta-analysis was conducted using Stata software version 12.0 (StataCorp, LLC, College Station, Texas). The evaluation of the methodological quality was carried out by AMSTAR 2 (Bruyère Research Institute, Ottawa, Ontario, Canada).

Results: A total of 23 observational studies involving 1,253,035 participants were included. The meta-analysis showed that prenatal exposure to antibiotics was not significantly associated with childhood overweight or obesity, whereas an increased risk of overweight or obesity was seen in subgroup analysis of the second trimester (risk ratio = 1.13; 95% CI: 1.06-1.22; P = 0.001). In contrast, antibiotic exposure during infancy could increase the risk of childhood overweight or obesity (risk ratio = 1.14; 95% CI: 1.06-1.23; P = 0.001).

Conclusions: This meta-analysis found that antibiotic exposure during the second trimester and infancy could increase the risk of childhood overweight or obesity.
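Pooled risk ratios like those reported above are conventionally obtained by inverse-variance weighting on the log scale. A minimal sketch of that calculation, with per-study values invented for illustration (not the paper's 23 studies):

```python
import math

# Invented (risk ratio, 95% CI lower, 95% CI upper) triples -- illustrative
# only, not the data from this meta-analysis.
studies = [(1.10, 1.00, 1.21), (1.20, 1.05, 1.37), (1.12, 0.98, 1.28)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1.0 / se ** 2                                # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * math.log(rr))

pooled_rr = math.exp(sum(weighted_logs) / sum(weights))
print(f"pooled RR = {pooled_rr:.2f}")
```

This is the fixed-effect version; the paper's Stata analysis may have used a random-effects model, which adds a between-study variance term to each weight but follows the same log-scale pooling idea.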


Credit: 
The Obesity Society

Validating Toolbox to evaluate cognitive processing in people with intellectual disability

image: Dr. David Hessl, senior author and professor at UC Davis Department of Psychiatry and Behavioral Science

Image: 
UC Davis Health

Researchers at the UC Davis MIND Institute have updated and validated a series of tests delivered on an iPad to accurately assess cognitive processing in people with intellectual disability. The validation opens new opportunities for more rigorous and sensitive studies in this population, historically difficult to evaluate.

The widely used NIH Toolbox was designed for use in the general population and had not been systematically applied to people with intellectual disability. Intellectual disability is characterized by significant limitations both in cognitive functioning and in adaptive behavior such as everyday social and practical skills. The most common genetic causes of intellectual disability are Down syndrome and fragile X syndrome.

The article "Validation of the NIH Toolbox Cognitive Battery in Intellectual Disability," published February 24 in Neurology®, the medical journal of the American Academy of Neurology, determined that the tests accurately measure cognitive skills in individuals with a mental age of 5 or above. Additional modifications to the test are needed before it can be shown to be equally good at measuring skills in people with lower functioning.

"Our study assessed how the battery is performing in people with intellectual disability. We made some adaptations to the assessment so that it works well in this population," said Rebecca Shields, the first author on the study and a UC Davis graduate student in human development working in the laboratory of David Hessl. "This is a big first step showing how it works in these individuals. Applying it consistently across this unique population means other researchers and clinicians can use it too."

Manual developed to aid clinicians in using the test

To guide clinicians and researchers in using the Toolbox with this population, the group also developed and published a manual as a supplement to the NIH Toolbox Administrator's Manual. The manual documents the researchers' guidelines specific to assessing individuals with intellectual disabilities, allowing other researchers to administer the test in a standardized way. This project was led by Forrest McKenzie, a member of the Hessl laboratory, and is available in the online article as well as on the NIH Toolbox website.

"People with intellectual disabilities can be very difficult to assess. Many of the existing measures we use to evaluate them have a lot of limitations," said Hessl, senior author on the study and a professor in the UC Davis Department of Psychiatry and Behavioral Sciences. "Also, different investigators choose a wide variety of different tests for research, making it very hard to compare results in the field. We really hope that the NIH Toolbox cognitive tests can be used more uniformly, as a common metric."

The lack of standardized tests also has had an impact on clinical trials of potential new treatments, he said.

"When we are trying to determine if people with disabilities are really improving, if their cognitive processing is getting faster or if they are responding to treatment, we face challenges because of measurement limitations," Hessl said. "This Toolbox really tackles a lot of these limitations. It is well standardized, and objective. And the test is given on an iPad, so the way each person responds to the question should be more consistent and reliable."

Test measures cognitive skills and executive function in just 30 minutes

The test, which typically takes about 30 minutes, measures a variety of skills, including memory, vocabulary, single-word reading and processing speed. It also measures executive function, such as the ability to shift from one thought to another or to pay attention and inhibit impulses. In the cognitive flexibility test, the individual is asked to match items by shape. But the rules of the game then switch, and they are asked to match the items by color.

The test also measures receptive vocabulary, or how words are understood. For example, the test taker will hear a word and see four pictures then select the picture that matches the word. It also measures memory by presenting a picture story in a sequence then asking the test taker to put the story back together in the same sequence.

A list-sorting task on the test requires the individual to remember the group of items they had seen on the screen and repeat them back in a certain order. A processing speed task evaluates how well the individual can compare different patterns that appear on the screen.

Researchers found that the battery of tests was feasible for a very high percentage of individuals with a mental age of five or higher; individuals in the study did not refuse to participate, were able to respond to the tests as designed and understood what the tests required. The battery also proved to be reliable; the scores were consistent for individuals after re-testing. Hessl said these test properties are especially important in determining the value and utility of the battery, such as determining how useful it may be in detecting changes related to treatment.

Shields said that the team is now learning about how well the test battery picks up cognitive changes over development. They are bringing back the same participants in the study two years later.

Credit: 
University of California - Davis Health

New tool for an old disease: Use of PET and CT scans may help develop shorter TB treatment

image: Johns Hopkins Medicine researchers have shown that PET and CT imaging can be used to track, over time, if an anti-tuberculosis drug can reach TB bacteria nested inside cavities in the lungs. The CT scan at the top reveals the location of two lung cavities, while the bottom PET/CT image shows the failure of a radioactively tagged anti-TB drug (shown as colors) to get inside those cavities and attack the microbes they encase.

Image: 
Johns Hopkins Medicine

Experts believe that tuberculosis, or TB, has been a scourge for humans for some 15,000 years, with the first medical documentation of the disease coming out of India around 1000 B.C.E. Today, the World Health Organization reports that TB is still the leading cause of death worldwide from a single infectious agent, responsible for some 1.5 million fatalities annually. Primary treatment for TB for the past 50 years has remained unchanged and still requires patients to take multiple drugs daily for at least six months. Successful treatment with these anti-TB drugs -- taken orally or injected into the bloodstream -- depends on the medications "finding their way" into pockets of TB bacteria buried deep within the lungs.

Now, researchers at Johns Hopkins Medicine and four collaborating medical institutions have developed what they say is a novel means of improving how TB can be treated. Their system adapts two widely used imaging technologies to more precisely track, over time, if an anti-TB drug actually reaches the areas where the bacteria are nested.

The new imaging tool incorporates positron emission tomography and computed tomography -- commonly known as PET and CT scans -- to noninvasively measure the effectiveness of rifampin, a key anti-TB medicine. The researchers describe a trial using the tool in TB patients in a paper published Feb. 17, 2020, in the journal Nature Medicine.

"While most TB patients are successfully treated with drug regimens which include rifampin, it still takes at least a six-month course to cure the disease," says Sanjay Jain, M.D., senior author of the paper; professor of pediatrics, and radiology and radiological science at the Johns Hopkins University School of Medicine; and professor of international health at the Johns Hopkins Bloomberg School of Public Health. "We now have evidence that imaging the lungs with PET and CT scans may help researchers and physicians better determine how much rifampin is reaching the bacteria over time, and then use the data to steer decisions for speedier and more effective TB-fighting measures such as higher doses of the drug."

A serious treatment issue for patients is that the TB infectious agents, called Mycobacterium tuberculosis, protect themselves by acting like a microbial mole, burrowing safe-haven cavities in the lungs. The cavities are carved by the same cell-killing activity that the TB bacteria use to produce pneumonia and its characteristic pulmonary lesions (commonly referred to as "spots on the lungs" when seen on CT scans). Because the process also destroys blood vessels and builds up scar tissue in the area surrounding a cavity, it can be difficult for anti-TB drugs travelling through the bloodstream to reach the microbes nested inside.

"Up until now, the only way we've known that rifampin sometimes does not reach the bacteria inside cavities has been by examining portions of lungs surgically resected from patients for whom standard anti-TB therapy failed," says Alvaro Ordonez, M.D., a research associate in pediatrics at Johns Hopkins Medicine and lead author on the Nature Medicine paper. Besides being invasive and difficult for the patient, such evaluations have two major shortcomings.

"Depending on which pulmonary lesions or cavities are resected, one may see rifampin levels adequate enough to kill the TB bugs," he explains. "But resect a different area of the lung where the drug wasn't able to reach lesions and cavities and you'll get a very different result. More importantly, the overall effectiveness of the treatment course cannot be properly measured since the resections are taken at single points in time and aren't from every location where there could be an infection."

Working with animals over the past decade, Jain and his colleagues developed a noninvasive imaging technique called dynamic 11C-rifampin PET/CT to open a clearer window on the previously hidden battle taking place between microbe and medicine in the lungs. The isotope-tagged version of rifampin, 11C-rifampin, emits a charged particle -- called a positron -- that enables the drug to be detected and tracked by a PET scan.

In studies published in 2015 and 2018, Jain and others demonstrated first in mice with pulmonary TB and then in rabbits with TB meningitis that dynamic 11C-rifampin PET/CT could successfully follow the movement of the tagged drug into lesions and cavities, both in the lungs and the brain. In both cases, the data revealed that the penetration of 11C-rifampin into the TB pockets was consistently low and could change over a period of a few weeks.

For the most recent trial, the researchers looked for the first time at how well the dynamic 11C-rifampin PET/CT tool monitored the levels of rifampin given to 12 human patients with TB in the lungs. The participants were first given an injected microdose of 11C-rifampin that was tracked by PET to determine the drug's concentration over time in TB-infected lesions in the lungs and other areas throughout the body (uninfected sections of the lungs, brain, liver and blood plasma).

Following this imaging, the patients were given untagged rifampin intravenously at the recommended treatment dosage level. Blood was drawn from the patients at various times and the levels of rifampin were measured by mass spectrometry. This showed that the microdose amount of 11C-rifampin could accurately represent the behavior of the traditional clinical dosage.

The PET scan data revealed that the amount of 11C-rifampin uptake was lowest in the walls of the TB-caused lung lesions and cavities, less than half what was seen in uninfected lung tissues.

"This is eye-opening since the lesions and cavities are the sites known to have the largest populations of bacteria in TB patients," Ordonez says. "Therefore, rifampin is not getting where we need it most."

The researchers used the findings on drug concentrations at the infection sites to predict how increasing the rifampin dose might shorten the treatment time for TB patients. This work -- done in collaboration with teams at the University of Maryland School of Pharmacy, led by Vijay Ivaturi, Ph.D., and the Texas Tech University Health Sciences Center, led by Tawanda Gumbo, M.D. -- suggests that increasing the dose of rifampin to higher, yet safely tolerated levels could reduce the treatment course in most TB patients from six months to four months.

"This would have a dramatic impact on the worldwide fight against TB," Jain says.

The researchers say that further human trials are needed to validate the promising results of this study, and perhaps, broaden the use of the PET-CT technique beyond anti-TB drugs. For example, similar studies are being conducted with patients who have infections due to methicillin-resistant Staphylococcus aureus, or MRSA, which often is treated with a long-term course of rifampin.

"We hope that the tool will one day enable clinicians to determine the most effective doses of specific drugs in specific patients, so as to further optimize the treatment of infectious diseases," Jain says.

Credit: 
Johns Hopkins Medicine

Study of 418,000 Europeans finds different foods linked to different types of stroke

image: Figure showing which foods are associated with low or high risk of ischaemic or haemorrhagic stroke

Image: 
European Heart Journal

Different types of food are linked to risks of different types of stroke, according to the largest study to investigate this, published in the European Heart Journal [1] today (Monday).

Until now, most studies have looked at the association between food and total stroke (all types of stroke combined), or focused on ischaemic stroke only. However, the current study of more than 418,000 people in nine European countries investigated ischaemic stroke and haemorrhagic stroke separately.

The study found that while higher intakes of fruit, vegetables, fibre, milk, cheese or yoghurt were each linked to a lower risk of ischaemic stroke, there was no significant association with a lower risk of haemorrhagic stroke. However, greater consumption of eggs was associated with a higher risk of haemorrhagic stroke, but not with ischaemic stroke.

Ischaemic stroke occurs when a blood clot blocks an artery supplying blood to the brain or forms somewhere else in the body and travels to the brain where it blocks blood flow. Haemorrhagic stroke occurs when there is bleeding in the brain that damages nearby cells. About 85% of strokes are ischaemic and 15% are haemorrhagic. Stroke is the second leading cause of death worldwide.

Dr Tammy Tong, the first author of the paper and a nutritional epidemiologist at the Nuffield Department of Population Health, University of Oxford (UK), said: "The most important finding is that higher consumption of both dietary fibre and fruit and vegetables was strongly associated with lower risks of ischaemic stroke, which supports current European guidelines. The general public should be recommended to increase their fibre and fruit and vegetable consumption, if they are not already meeting these guidelines.

"Our study also highlights the importance of examining stroke subtypes separately, as the dietary associations differ for ischaemic and haemorrhagic stroke, and is consistent with other evidence, which shows that other risk factors, such as cholesterol levels or obesity, also influence the two stroke subtypes differently."

The total amount of fibre (including fibre from fruit, vegetables, cereal, legumes, nuts and seeds) that people ate was associated with the greatest potential reduction in the risk of ischaemic stroke. Every 10g more intake of fibre a day was associated with a 23% lower risk, which is equivalent to around two fewer cases per 1000 of the population over ten years.

Fruit and vegetables alone were associated with a 13% lower risk for every 200g eaten a day, which is equivalent to one less case per 1000 of the population over ten years. No foods were linked to a statistically significant higher risk of ischaemic stroke.

Based on UK estimates, two thick slices of wholemeal toast provide 6.6g of fibre, a portion of broccoli (around eight florets) provides about 3g, and a medium raw, unpeeled apple provides about 1.2g of fibre. The European Society of Cardiology (ESC) and the World Health Organization Regional Office for Europe recommend consuming at least 400g of fruit and vegetables a day; the ESC also suggests people should consume 30-45g of fibre a day.

The researchers found that for every extra 20g of eggs consumed a day there was a 25% higher risk of haemorrhagic stroke, equivalent to 0.66 extra cases per 1000 (or around two cases per 3000) of the population over ten years. An average large-sized egg weighs approximately 60g. Egg consumption in the EPIC study was low overall, with an average of less than 20g eaten a day.
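Converting a relative risk change into "cases per 1000" requires a baseline incidence, which the paper takes from EPIC cohort rates. With an assumed baseline (the figure below is back-calculated purely for illustration), the fibre result works out like this:

```python
# Assumed 10-year baseline incidence of ischaemic stroke per 1000 people,
# chosen for illustration; the paper derives absolute figures from EPIC rates.
baseline_per_1000 = 8.7
relative_reduction = 0.23   # 23% lower risk per extra 10 g fibre per day

fewer_cases = baseline_per_1000 * relative_reduction
print(f"~{fewer_cases:.0f} fewer cases per 1000 over ten years")
```

The same arithmetic, run with the haemorrhagic-stroke baseline and the 25% higher risk per extra 20 g of eggs, yields the 0.66 extra cases per 1000 quoted above.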

The researchers say the associations they found between different foods and ischaemic and haemorrhagic stroke might be explained partly by the effects on blood pressure and cholesterol.

Dr Tong and her colleagues analysed data from 418,329 men and women in nine countries (Denmark, Germany, Greece, Italy, The Netherlands, Norway, Spain, Sweden and the United Kingdom) who were recruited to the European Prospective Investigation into Cancer and Nutrition (EPIC) study between 1992 and 2000. The participants completed questionnaires asking about diet, lifestyle, medical history and socio-demographic factors, and were followed up for an average of 12.7 years. During this time, there were 4281 cases of ischaemic stroke and 1430 cases of haemorrhagic stroke.

Food groups studied included meat and meat products (red meat, processed meat and poultry), fish and fish products (white fish and fatty fish), dairy products (including milk, yogurt, cheese), eggs, cereals and cereal products, fruit and vegetables (combined and separately), legumes, nuts and seeds, and dietary fibre (total fibre and cereal, fruit and vegetable fibre).

Major strengths of the study include the large number of people studied across several countries and the long follow-up period. Most types of food were included in the study, although information on diet was collected at only one point in time, when the participants joined the study. As the study is observational, it cannot show that the foods studied cause an increase or decrease in the risk of ischaemic or haemorrhagic stroke, only that they are associated with different risks. Information on medication use (including statins) was not available.

Credit: 
European Society of Cardiology

Opportunity blows for offshore wind in China

image: This map shows electricity demand by province in gigawatts.

Image: 
Harvard SEAS

Under the Paris Climate Agreement, China committed to rely on renewable resources for 20 percent of its energy needs by 2030. Currently, the country is on track to double that commitment, aiming to hit 40 percent by the next decade. Wind power is critical to achieving that goal. Over the past 20 years, China's wind power capacity has exploded from 0.3 gigawatts to 161 gigawatts.

But, in recent years, that growth has slowed and the hopes for China's wind-powered future have dampened.

Why? Location, location, location.

Populous coastal provinces, including Guangdong and Jiangsu, consume about 80 percent of the nation's total electricity, but the vast majority of China's wind capacity comes from land-based wind farms in places like Inner Mongolia, more than a thousand miles away from most major cities.

To make matters worse, recent climate studies have suggested that the weakening land-sea temperature gradient due to global climate change is making historically windy regions, like Inner Mongolia, less windy.

In addition, much of the wind power from those regions isn't being used because of when it's produced. Research has suggested that some 16 percent of total potential wind generation was wasted between 2010 and 2016, costing more than $1.2 billion.

If China is to meet and exceed its Paris goal by 2030, it's going to need to find a way to increase its wind capacity.

In a recent study, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Huazhong University of Science and Technology in China, found that offshore wind could be a big part of the solution.

The research is published in Science Advances.

"This is an important new contribution, recognition that China has abundant offshore wind potential that can be developed and brought on shore to the power-hungry coastal provinces at costs competitive with existing coal-fired polluting power plants," said Michael McElroy, the Gilbert Butler Professor of Environmental Studies at SEAS and senior author of the paper.

To calculate the capacity and cost of offshore wind in China, the researchers first identified the regions where offshore wind farms could be built, excluding shipping zones, environmentally protected areas and water depths greater than 60 meters. They calculated the wind speeds in those areas and estimated the hourly capacity for each of the turbines.
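The screening-and-estimation steps described above can be sketched as follows. The site records, power-curve parameters, and wind speeds here are illustrative assumptions of ours, not the study's actual data or methods:

```python
# Hypothetical sketch of the site-screening plus hourly-capacity pipeline
# described above. All numbers below are made-up examples.

def usable(site):
    """Keep only sites outside shipping lanes and protected areas,
    in water no deeper than 60 m."""
    return (not site["shipping_zone"]
            and not site["protected"]
            and site["depth_m"] <= 60)

def turbine_power_mw(wind_speed, rated_mw=5.0,
                     cut_in=3.0, rated_speed=12.0, cut_out=25.0):
    """Simplified turbine power curve: zero below cut-in and above cut-out,
    a cubic ramp between cut-in and rated speed, flat at rated power above."""
    if wind_speed < cut_in or wind_speed > cut_out:
        return 0.0
    if wind_speed >= rated_speed:
        return rated_mw
    return rated_mw * ((wind_speed - cut_in) / (rated_speed - cut_in)) ** 3

sites = [
    {"shipping_zone": False, "protected": False, "depth_m": 40,
     "hourly_wind_ms": [8.0, 11.0, 14.0]},
    {"shipping_zone": True, "protected": False, "depth_m": 30,
     "hourly_wind_ms": [9.0, 9.0, 9.0]},  # excluded: shipping lane
]

# Summing hourly power (MW) over 1-hour steps gives energy in MWh.
total_mwh = sum(turbine_power_mw(v)
                for s in sites if usable(s)
                for v in s["hourly_wind_ms"])
print(round(total_mwh, 2))
```

The real analysis would use gridded wind reanalysis data and manufacturer power curves, but the structure (filter sites, then integrate a power curve over hourly winds) is the same.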

They found that the total potential wind power from wind farms built along the Chinese coast is 5.4 times larger than the current coastal demand for power.

The researchers also found that this power would be cost-efficient.

"We estimate offshore wind costs according to a range of values derived from recent offshore wind farm developments," said Peter Sherman, a graduate student in the Department of Earth and Planetary Sciences and first author of the paper. "Offshore wind turbines have historically been prohibitively expensive, but it is clear now that, because of significant technological advances, the economics have changed such that offshore wind could be cost-competitive now with coal and nuclear power in China."

The researchers estimated that if electricity prices are high, offshore wind could provide more than 1,000 terawatt-hours, or about 36 percent of all coastal energy demand. If electricity prices are low, it could provide more than 6,000 terawatt-hours, or 200 percent of total energy demand.
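As a rough consistency check, the two scenarios above imply a similar underlying coastal electricity demand, assuming (our assumption, not the paper's) that both percentages are measured against the same demand figure:

```python
# Back out the coastal demand implied by each scenario quoted above.
high_price_twh, high_price_share = 1000, 0.36  # >1,000 TWh ≈ 36% of demand
low_price_twh, low_price_share = 6000, 2.00    # >6,000 TWh ≈ 200% of demand

implied_demand_high = high_price_twh / high_price_share  # ~2,778 TWh
implied_demand_low = low_price_twh / low_price_share     # 3,000 TWh
print(round(implied_demand_high), round(implied_demand_low))
```

Both work out to roughly 3,000 TWh of coastal demand, so the two scenarios are internally consistent.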

"Our research demonstrates the potential for cost-effective, offshore wind to power coastal regions, reduce greenhouse gas emissions and improve air quality in China," said McElroy.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Brain cells protect muscles from wasting away

image: The head of a roundworm, C. elegans. The glia that regulate the stress response in the worm's peripheral cells are highlighted. A mere four of these cells, known as CEPsh glial cells, protect the organism from age-related decline.

Image: 
Ashley Frakes, UC Berkeley

While many of us worry about proteins aggregating in our brains as we age and potentially causing Alzheimer's disease or other types of neurodegeneration, we may not realize that some of the same proteins are aggregating in our muscles, setting us up for muscle atrophy in old age.

University of California, Berkeley, scientists have now found brain cells that help clean up these tangles and prolong life -- at least in worms (Caenorhabditis elegans) and possibly mice. This could lead to drugs that improve muscle health or extend a healthy human lifespan.

The research team's most recent discovery, published Jan. 24 in the journal Science, is that a mere four glial cells in the worm's brain control the stress response in cells throughout its body and increase the worm's lifespan by 75%. That was a surprise, since glial cells are often dismissed as mere support cells for the neurons that do the brain's real work, like learning and memory.

This finding follows a 2013 study in which the UC Berkeley group reported that neurons help regulate the stress response in peripheral cells, though in a different way than glial cells, and lengthen a worm's life by about 25%. In mice, boosting neuronal regulation increases lifespan by about 10%.

Together, these results paint a picture of the brain's two-pronged approach to keeping the body's cells healthy. When the brain senses a stressful environment -- invading bacteria or viruses, for example -- a subset of neurons sends electrical signals to peripheral cells to get them mobilized to respond to the stress, such as through breaking up tangles, boosting protein production and mobilizing stored fat. But because electrical signals produce only a short-lived response, the glial cells kick in to send out a long-lasting hormone, so far unidentified, that maintains a long-term, anti-stress response.

"We have been discovering that if we turn on these responses in the brain, they communicate to the periphery to protect the whole organism from the age-onset decline that naturally happens. It rewires their metabolism, it also protects against protein aggregation," said Andrew Dillin, UC Berkeley professor of molecular and cell biology and Howard Hughes Medical Institute (HHMI) investigator. As a result of the new study, "We think that glia are going to be more important than neurons."

While the roundworm C. elegans is a long way evolutionarily from humans, the fact that glial cells seem to have a similar effect in mice suggests that the same may be true of humans. If so, it may lead to drugs that combat muscle wasting and obesity and perhaps increase a healthy lifespan.

"If you look at humans with sarcopenia or at older mice and humans, they have protein aggregates in their muscle," Dillin said. "If we can find this hormone, perhaps it can keep muscle mass higher in older people. There is a huge opportunity here."

In a commentary in the same Jan. 24 issue of Science, two Stanford University scientists, Jason Wayne Miklas and Anne Brunet, echoed that potential. "Understanding how glial cells respond to stress and what neuropeptides they secrete may help identify specific therapeutic interventions to maintain or rebalance these pathways during aging and age-related diseases," they wrote.

How to extend lifespan

Dillin studies how cells throughout the body deteriorate, seemingly in concert, as an organism ages and dies. He has shown in worms and mice that hormones and neurotransmitters released by the brain keep this breakdown in check by activating a stress response in the body's cells and tuning up their metabolism. The response likely originated to fight infection, with the side effect of keeping tissues healthy and extending lifespan. Why our cells stop responding to these signals as we age is the big question.

Over the past decade, he and his colleagues have identified three techniques used by worms to keep their cells healthy and, consequently, longer-lived. Activating the body's heat shock response, for example, protects the cytoplasm of the cell. Stimulating the unfolded protein response protects the cells' energy-producing structures, the mitochondria. The unfolded protein response is the cell's way of making sure proteins assume their proper 3D structure, which is crucial for proper functioning inside the cell.

His latest discovery is that glia, as well as neurons, stimulate the unfolded protein response in the endoplasmic reticulum (ER). The ER is the cellular structure that hosts the ribosomes that make proteins -- the ER is estimated to be responsible for the folding and maturation of as many as 13 million proteins per minute.

"A lot of the work we have done has uncovered that certain parts of the brain control the aging of the rest of the animal, in organisms from worms to mice and probably humans," Dillin said.

Two other interventions also increase lifespan in worms: diet restriction, which may call into play other anti-aging mechanisms, and reducing the production of a hormone called insulin-like growth factor (IGF-1).

Dillin's discoveries have already led to new treatments for diseases. He cofounded a company, Mitobridge Inc. (recently acquired by Astellas Pharma Inc.), based on the finding that certain proteins help tune up mitochondria. A drug the company developed is now in phase II clinical trials for treating the damage that occurs when kidneys restart after sudden failure, such as during an operation.

He cofounded another company, Proteostasis Therapeutics, to develop a treatment for cystic fibrosis that is based on activating the unfolded protein response to repair ion channels in people with the disease.

The new discovery about how neurotransmitters and hormones affect the ER could have implications for diseases that involve muscle wasting, such as Huntington's disease and forms of myositis.

Glial cells

In 2013, Dillin and his colleagues discovered that boosting expression of a protein called xbp-1s in sensory nerve cells in the worm brain boosts the unfolded protein response throughout the worm's body. Shortly afterward, postdoctoral fellow Ashley Frakes decided to see if the glial cells enshrouding these neurons were also involved. When she overexpressed the same protein, xbp-1s, in a subset of these glia (cephalic astrocyte-like sheath glia, or CEPsh), she discovered an even larger effect on peripheral cells, as measured by how they deal with a high-fat diet.

Frakes was able to pinpoint the four CEPsh glia responsible for triggering the ER response because the C. elegans body is so well studied. There are only 959 cells in the entire worm, of which 302 are nerve cells and 56 are glial cells.

The CEP neurons and CEPsh glia work differently, but additively, to improve metabolism and clean up protein aggregates as the worms slim down and live twice as long as worms without this protection from a high-fat diet.

"The fact that just a few cells control the entire organism's future is mind-boggling," Dillin said. "Glia work 10 times better than neurons in promoting this response and about twice as good in extending lifespan."

Frakes is currently trying to identify the signaling hormone produced by these glial cells, a first step toward finding a way to activate the response in cells that are declining in function and perhaps to create a drug to tune up human cells and stave off the effects of aging, obesity or other types of stress.

Frakes also found that the worms slimmed down because their fat stores, in the form of lipid droplets, were turned into ER. Another research group in Texas has shown that activating xbp-1s in the neurons of mice also has the effect of reducing fat stores and slimming the mice, protecting them from the effects of a high-fat diet and extending their lifespan.

"When they activate it in the neurons, they see the liver getting rid of fat, redistributing metabolic demands," Dillin said. "I think we would see the same thing in humans, as well."

Credit: 
University of California - Berkeley

Why do whales migrate? They return to the tropics to shed their skin, scientists say

image: Killer whales in Antarctica, as shown here, often display a yellow coloration due to diatom accumulation on their skin, evidence that they are not sloughing skin in frigid waters. Image collected by John Durban (NOAA Fisheries) and Holly Fearnbach (SR3) using a remotely controlled hexacopter drone at 100-foot altitude, authorized by NMFS research permit 19091 and Antarctic Conservation Act Permit 2017-029.

Image: 
John Durban/NOAA Fisheries and Holly Fearnbach/SR3

Whales undertake some of the longest migrations on earth, often swimming many thousands of miles, over many months, to breed in the tropics. The question is why--is it to find food, or to give birth?

In a research paper in Marine Mammal Science, scientists propose that whales that forage in polar waters migrate to low latitudes to maintain healthy skin.

"I think people have not given skin molt due consideration when it comes to whales, but it is an important physiological need that could be met by migrating to warmer waters," said Robert Pitman, lead author of the new paper and marine ecologist with Oregon State University's Marine Mammal Institute. He was formerly with NOAA Fisheries' Southwest Fisheries Science Center in La Jolla, California.

More than a century ago, whalers recognized that most whales that forage in high latitudes migrate to the tropics for calving. Scientists have never agreed on why. Because of their size, large whales should be able to successfully give birth in frigid polar waters. Due to reduced feeding opportunities in the tropics, most whales fast during their months-long migrations.

So why go to the trouble?

Warm Water Speeds Molting

All birds and mammals regularly shed their skin, fur, or feathers in a process known as molting. Pitman and his coauthors propose that whales foraging in the freezing waters of Antarctica conserve body heat by diverting blood flow away from their skin. That would reduce regeneration of skin cells and halt the normal sloughing of skin.

Migrating to warmer water would allow whales to revive their skin metabolism and molt in an environment that does not sap their body heat. The authors suggest that this drives their migrations.

The two lead authors on the study first proposed in 2011 that skin molt could drive the migration for certain Antarctic killer whales. With new data, they now propose the same for all Antarctic killer whales and possibly all whales that migrate to the tropics.

Coauthors on the paper include scientists from NOAA Fisheries; SeaLife Response, Rehabilitation, and Research; and the Italian National Institute for Environmental Protection and Research.

Over eight years, scientists deployed 62 satellite tags on killer whales. They found that all four types that feed in frigid Antarctic waters migrated as far as 11,000 kilometers (almost 7,000 miles) round trip. Most migrations were fast, non-stop, and largely straight north and back. One whale completed two such migrations in 5.5 months. Researchers also photographed newborn killer whale calves in Antarctica, indicating the whales don't need to migrate to warmer waters to give birth.

They suggest that larger whales that migrate to the tropics to molt may have begun giving birth in those same warmer waters. "Instead of whales migrating to the tropics or subtropics for calving, whales could be traveling to warm waters for skin maintenance and perhaps find it adaptive to bear their calves while they are there," the scientists wrote. The warm water could speed the growth of calves in an environment with far fewer killer whales, their main predator.

Much like humans, whales and dolphins normally shed outer skin cells continuously. Scientists observed that whales in frigid Antarctic waters are often discolored by a thick yellow film of microscopic diatoms. This indicated that they were not experiencing their normal, "self-cleaning" skin molt.

Early whalers referred to blue whales with a heavy coating of diatoms on their white bellies as "sulfur-bottoms." They also assumed that whales without a diatom coating were likely recent arrivals from the tropics. When whales shed their skin, they also shed the diatoms.

Molting Jettisons Harmful Bacteria

Recent studies have found that high concentrations of diatoms on the skin of Antarctic killer whales may also accumulate potentially harmful bacteria.

"Basically, the feeding is so good in productive Antarctic waters that the relatively small, warm-blooded killer whale has evolved a remarkable migration behavior. This enables it to exploit these resources and still maintain healthy skin function," said John Durban, coauthor of the research, formerly with the science center and now a senior scientist at SEA Inc.

In another example, beluga whales in the Arctic are known for gathering in summer in river estuaries. The water there is warmer, fresher, and shallower than their typical habitat. At first, scientists assumed that they gathered there to give birth and that the warmer temperatures boosted calf survival.

It turned out that belugas do not calve or feed in the estuaries but go there to molt. In an earlier study, an Inuit hunter pointed out that "Belugas go to the rivers for warmth. And like seals they moult their skins. They moult in the warm water."

The annual (versus continuous) molt cycle of the beluga was long thought to be unique among cetaceans. But, if whales are migrating to the tropics to molt, annual molt "may prove to be the rule among all high-latitude cetaceans," the authors wrote.

In terms of biomass, whales complete the largest annual migrations on earth. They transport millions of tons of animals thousands of miles, with significant impact on local ecosystems, the scientists say. They also call for further testing of their hypothesis by assessing skin growth of migratory and non-migratory whales, at high and low latitudes, throughout the year.

Credit: 
NOAA Fisheries West Coast Region