Earth

Can a high-tech sniffer help keep us safe?

image: NIST chemist Megan Harries tests whether a portable, high-tech sniffing device called a PLOT-cryo system can be used to screen shipping containers for dangerous airborne chemicals at ports of entry. For this test, which was performed at the NIST campus in Boulder, Colo., Harries used an old US Army communications bunker as a stand-in for a shipping container.

Image: 
Image courtesy of Megan Harries

Science stinks.

So thought Megan Harries as she measured drops of putrescine and cadaverine -- the chemicals that give decomposing corpses their distinctive, terrible odor -- into glass vials. She then placed the vials on the floor of a shipping container, walked outside, and closed the door behind her.

Harries, a postdoctoral fellow and chemist at the National Institute of Standards and Technology (NIST), returned a day later, after the vapors diffused. Outside the shipping container, she popped open an aluminum briefcase, unfurled a flexible tube with a metal tip at the end, and inserted that into a small hole drilled into the side of the container.

Harries was conducting the first field test of a high-tech sniffing device called a PLOT-cryo -- short for "porous layer open tubular cryogenic adsorption." This NIST-invented device can be used to detect very low concentrations of chemicals in the air. The results of the test were recently published in Forensic Chemistry.

Harries wanted to know if PLOT-cryo could be used at ports of entry to quickly and safely screen shipping containers for dangerous or illegal cargo. So, did the instrument detect the stench of death?

"Sure did," she said. "It stank."

Especially on the first few days. Harries returned to the shipping container every day for a few weeks as the chemicals evaporated away. Her goal was to determine how temperature, humidity and other factors affected the PLOT-cryo's ability to detect chemicals in air.

The simulated shipping container -- it was actually an old U.S. Army communications bunker -- sat in a parking lot at the NIST campus in Boulder, Colorado, not far from where employees parked their cars. "People would ask me, 'What is in there?'" she said.

Harries explained that the putrescine and cadaverine -- the "decomposition suite of compounds," she called them -- had been purchased from a chemical supplier. She also tested gasoline, chemicals present in explosives, and other compounds. "We chose those chemical mixtures as surrogates for things that law enforcement might care about," she said.

The PLOT-cryo itself is not a chemical sensor. Instead, it's a sophisticated air sampler. It works by sucking in air and forcing it through super-thin tubes, or capillaries, that are coated with a material that traps airborne chemicals. The capillaries are also chilled (that's where the "cryo" comes in), which makes for a better trap. After sampling, the user removes the trap and brings it to the lab to analyze what they've caught.

That analysis can be done quickly on standard lab equipment because the PLOT-cryo has concentrated the airborne chemicals, making them easier to detect and identify. Without that step, the analysis would be more costly and complicated.
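The payoff of preconcentration can be sketched with a toy calculation (the numbers below are purely illustrative, not the instrument's actual specifications): an analyte collected from many liters of air but released into a few milliliters of carrier reaches the detector thousands of times more concentrated.

```python
def enrichment_factor(sampled_air_liters: float, desorption_volume_ml: float) -> float:
    """Idealized preconcentration factor: the analyte trapped from a large
    sampled air volume is released into a much smaller desorption volume.
    Assumes complete trapping and recovery, which real traps only approximate."""
    return (sampled_air_liters * 1000.0) / desorption_volume_ml

# Hypothetical example: trapping from 10 L of air and desorbing into 1 mL
# of carrier concentrates the analyte 10,000-fold.
factor = enrichment_factor(10.0, 1.0)
```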

NIST researchers developed PLOT-cryo in 2009 and have shown that it can be used to detect clandestine graves, food spoilage, and chemicals in fire debris that might show evidence of arson.

During this first field test of a portable version of a PLOT-cryo system, Harries found that the instrument usually required less than a minute of sniffing time to pick up a scent -- an important consideration for inspectors at busy ports of entry -- though in cold weather it sometimes needed a bit more time. In addition, when the air was very humid, the cooling system caused ice buildup that obstructed air flow. Harries' colleagues are currently working on a different type of cooling system that they hope will solve that problem.

If it does, Harries said, the PLOT-cryo can give a boost to public health and safety. "It was good at detecting some very hard-to-detect stuff," Harries said. "We're close to solving an important problem."

Credit: 
National Institute of Standards and Technology (NIST)

New biomarker for dementia improves risk prediction

Identifying individuals who are at risk for developing dementia, including Alzheimer's disease, is critical for the development of new therapies and interventions to slow or reverse cognitive symptoms. But current strategies are limited, both in terms of accuracy and the ability to incorporate them into routine practice. Unlike cerebrospinal fluid biomarkers that require a spinal tap, plasma biomarkers can be extracted from the blood, making their collection much less invasive and much more appealing.

In a new study led by investigators from Brigham and Women's Hospital, researchers have measured circulating levels of insulin-like growth factor binding protein 2 (IGFBP-2), a potential biomarker for dementia. In a paper published in Annals of Clinical and Translational Neurology, the team reports that IGFBP-2 levels were associated with an increased risk of both all-cause dementia and Alzheimer's disease dementia. When added to a model of traditional risk factors for dementia, IGFBP-2 significantly improved dementia risk classification, suggesting that it may be a useful biomarker for predicting dementia risk.

"Identifying biomarkers for dementia could improve our ability to predict a person's risk of dementia and his or her future outcomes," said corresponding author Emer McGrath, MD, PhD, an associate neurologist in the Brigham's Neurology Department and an investigator with the Framingham Heart Study. "Novel biomarkers could also inform our understanding of complex biological pathways underlying the development of dementia, help to more accurately define disease subgroups and inform future clinical trials."

Recently, researchers have begun to focus on the role of metabolic dysfunction and insulin resistance in the brain in the development of dementia. The insulin-like growth factor (IGF) signaling system is known to play a role in neuroregeneration, neuronal survival and proliferation. IGFBP-2 is thought to impair IGF signaling, thereby inhibiting this neuroprotection and proliferation.

In the current study, investigators measured levels of IGFBP-2 in plasma samples from almost 1,600 participants from the Framingham Offspring cohort. The team analyzed risk of dementia, cognitive performance and structural MRI brain measures predictive of dementia.

They found that elevated circulating IGFBP-2 levels were associated with an increased risk of both all-cause dementia and Alzheimer's disease dementia, as well as poorer performance on tests of abstract reasoning. Addition of IGFBP-2 plasma levels to a model of traditional risk factors significantly improved dementia risk classification: based on the net reclassification improvement (NRI) index, 32 percent of individuals with dementia were correctly assigned a higher predicted risk, while 8 percent of individuals without dementia were correctly assigned a lower predicted risk.
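The two-category NRI reported here can be illustrated with a short sketch (the counts below are hypothetical, since the study reports only the resulting percentages): the index rewards people with dementia who move up in predicted risk and people without dementia who move down, and penalizes movement in the wrong direction.

```python
def net_reclassification_improvement(events_up, events_down, n_events,
                                     nonevents_up, nonevents_down, n_nonevents):
    """Two-category NRI: net fraction of events (cases) reclassified upward
    plus net fraction of non-events (controls) reclassified downward."""
    nri_events = (events_up - events_down) / n_events
    nri_nonevents = (nonevents_down - nonevents_up) / n_nonevents
    return nri_events + nri_nonevents

# Hypothetical counts: of 100 people with dementia, 32 moved to a higher
# predicted risk and none moved lower; of 200 without dementia, 16 moved
# lower and none moved higher -- mirroring the 32% and 8% in the text.
nri = net_reclassification_improvement(32, 0, 100, 0, 16, 200)  # 0.32 + 0.08 = 0.40
```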

The authors note that the Framingham Offspring cohort is predominantly Caucasian, potentially limiting the generalizability of the findings to more diverse populations. They were also unable to explore how cerebrospinal fluid levels of IGFBP-2 or tau relate to plasma IGFBP-2 levels and cognitive outcomes.

"There is increasing interest in manipulating insulin sensitivity and IGF signaling in the brain to help target cognitive decline and dementia," said McGrath. "Our work suggests that manipulating IGF-signaling pathways via IGFBP-2 may be a promising therapeutic target for dementia prevention."

Credit: 
Brigham and Women's Hospital

An overactive cerebellum causes issues across the brain

video: From supplementary video 1. Rats with acute cerebellar inflammation (middle) show reduced exploratory behavior compared to controls (left). When anti-inflammatory cytokines or neuro-immunity suppressants are administered, their behavior is rescued (right).

Image: 
Kyoto University/Gen Ohtsuki

Japan -- Consider the cerebellum, the structure tucked into the lower back of your skull. Also known as the 'little brain,' it plays a key role in regulating voluntary movement, including balance, motor learning, and speech.

Recent evidence even shows that the cerebellum is involved in higher-order brain functions including visual response, emotion, and motor planning. And now, a team from Kyoto University has found another link: depressive behavior.

Writing in Cell Reports, the research team found -- through a series of experiments with rats -- that acute cerebellar inflammation puts the structure in an 'overexcited' state, resulting in the animal developing a temporary decrease in motivation and sociability.

Team leader Gen Ohtsuki of Kyoto University's Hakubi Center for Advanced Research explains that the investigation began in an effort to understand how the brain's immune system can change its activity. In fact, the literature has shown correlations between cerebellar dysfunction and certain pervasive developmental disorders, such as autism, as well as depression.

"Even though we now know more about the cerebellum's role in higher-order brain functions, the detailed signal transduction machinery remains a mystery. We know even less about what happens in the brain during excessive immune activity," explains Ohtsuki. "So, we conducted a series of experiments where we activated the immune cells in the cerebellum and observed the results."

The brain's immune cells are known as microglia, and they respond to bacteria and viruses to mitigate damage. That response results in inflammation. Using electrophysiological techniques, the team found that microglia caused neurons to fire at an increased rate, a phenomenon known as 'intrinsic plasticity'. This in turn caused the cerebellum to go into a 'hyperexcited' state.

This immune-triggered response was even shown to change behavior. When acute cerebellar inflammation was induced in rats, their sociability, free searching, and motivation dramatically decreased.

"These behavioral modulations are signs of 'depression-like' behavior. Once the inflammation subsided, the rats were back to normal," Ohtsuki continues. "Moreover, the phenotype can be rescued if the rats are treated with neuro-immunity suppressants or anti-inflammatory cytokines. We also investigated whether higher-order brain regions were affected. fMRI studies on the rats show a clear increase in activity in the prefrontal cortex, highlighting the interconnectedness of the cerebellum with higher-order brain regions."

The team is encouraged by their results but notes that further investigation is needed.

"Excessive immune activity in the brain can induce behavioral pathology, and we expect it to be involved in other mental and cognitive disorders such as dementia. But to understand anything about the pathological mechanisms we need to combine this with additional data such as genetic risk factors," concludes Ohtsuki. "In this study, we focused on inflammation. In the future, we will begin firmly clarifying the physiological, molecular, and genetic aspects of these behavioral changes."

Credit: 
Kyoto University

HKU archaeological team excavates at one of the major fortress-settlements in the Armenian Highlands

image: This drone photograph faces northwest over the Vedi Fortress site. Cliffs surround and protect much of the site, with two lines of fortress walls protecting the western approach to the citadel.

Image: 
The University of Hong Kong

A team of researchers and students from HKU unearthed huge storage jars, animal bones and fortress walls from 3,000 years ago in Armenia as they initiated the Ararat Plain Southeast Archaeological Project (APSAP) during the summer of 2019.

APSAP is a collaborative research project between HKU and the Institute of Archaeology and Ethnography of the Republic of Armenia's National Academy of Sciences. Dr. Peter J. Cobb, assistant professor in the Faculties of Education and Arts, directs the project in collaboration with Artur Petrosyan and Boris Gasparyan of the Armenian Institute. The Institute's Hayk Azizbekyan helped coordinate all aspects of the project.

The project, expected to last for at least five years, aims at understanding human life and mobility in the ancient landscapes of the Near East. It investigates the area around Vedi, Armenia, at the southeast edge of the wide and fertile Ararat Plain.

This area has been a contact point between Turkey, Iran (Persia) and Russia over the past few centuries. It has always been an important transportation node, including on the famous Silk Road. Today, Armenia is one of the countries along the Belt and Road Initiative.

"The Vedi river valley has formed an important transportation corridor throughout history and we want to understand how people lived in and moved through this landscape in the past," said Dr Cobb.

HKU is one of the first universities from East Asia to help lead a major archaeological excavation in the Near East, a region traditionally receiving foreign research attention from only European and North American institutions. The international team this summer consisted of 15 researchers and students from Armenia, mainland China, Hong Kong, Turkey, and the United States.

The main focus was a major excavation at a site in the middle of the valley called the Vedi Fortress. The site preserves huge ruined fortification walls up to four meters high, with a central rectangular defensive tower. Two long series of fortification walls protected an inner "keep" of a citadel. The walls date to the Late Bronze and Iron Ages of 1500-500 BC. The site has been reused multiple times, including during the Medieval period of 800 years ago.

The research team dug three trenches on the site, making exciting finds of huge storage jars, walls of buildings, and a variety of fascinating artifacts including animal bones discarded from meals.

Undergraduate History major Ivi Fung said: "When I was identifying a pottery fragment in the sieve, I imagined what Bronze Age people put into the potteries; when I was surprised by a large skeleton of an animal head, I imagined how they got their food; when I brushed the stone wall, I imagined whom they were defending against."

Her professor, Dr Cobb, added: "Archaeology allows us to learn about the daily life of humans in this region as we study everyday items like the bowls and cups used during meals. The trip also provided chances for HKU students to have new experiences and adventures. As one example, some HKU students had never climbed a tree before, but they had an opportunity in this rural part of the world."

Students from HKU and other universities visited the site from late May to late July and worked together with Armenian archaeologists. They hiked to discover new sites, excavated some of them, and studied the ancient pottery and other objects found at the sites.

Credit: 
The University of Hong Kong

Bones of Roman Britons provide new clues to dietary deprivation

image: A soldier's tombstone from Roman-era London

Image: 
Museum of London

Researchers at the University of Bradford have shown a link between the diet of Roman Britons and their mortality rates for the first time, overturning a previously held belief about the quality of the Roman diet.

Using a new method of analysis, the researchers examined stable isotope data (the ratios of particular chemicals in human tissue) from the bone collagen of hundreds of Roman Britons, together with the individuals' age-of-death estimates and an established mortality model.
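The "stable isotope data" here are conventionally expressed in delta notation: the per-mil deviation of a sample's heavy-to-light isotope ratio from a reference standard. A minimal sketch (the AIR standard ratio below is the accepted value for nitrogen; the sample ratio is hypothetical):

```python
def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Delta notation: per-mil deviation of the sample's heavy/light
    isotope ratio from the reference standard's ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# delta-15N relative to atmospheric N2 (AIR standard, 15N/14N = 0.0036765),
# with a hypothetical bone-collagen sample ratio; gives roughly +6.4 per mil.
d15n = delta_per_mil(0.0037000, 0.0036765)
```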

The data sample included over 650 individuals from various published archaeological sites throughout England.

The researchers - from institutions including the Museum of London, Durham University and the University of South Carolina - found that higher nitrogen isotope ratios in the bones were associated with a higher risk of mortality, while higher carbon isotope ratios were associated with a lower risk of mortality.

Romano-British urban archaeological populations are characterised by higher nitrogen isotope ratios, which have been thought previously to indicate a better, or high-status, diet. But taking carbon isotope ratios, as well as death rates, into account showed that the nitrogen could also be recording long-term nutritional stress, such as deprivation or starvation.

Differences in sex were also identified by the researchers, with the data showing that men typically had higher ratios of both isotopes, indicating a generally higher status diet compared to women.

Dr Julia Beaumont of the University of Bradford said: "Normally nitrogen and carbon stable isotopes change in the same direction, with higher ratios of both indicating a better diet such as the consumption of more meat or marine foods. But if the isotope ratios go in opposite directions it can indicate that the individual was under long-term nutritional stress. This was corroborated in our study by the carbon isotope ratios which went down, rather than up, where higher mortality was seen."

During nutritional stress, if there is insufficient intake of protein and calories, nitrogen within the body is recycled to make new proteins, with a resulting rise in the ratio of nitrogen isotopes in the body's tissues.

Dr Beaumont added: "Not all people in Roman Britain were high-status; there was considerable enslavement too and we know slaves were fed a restricted diet. Our research shows that combining the carbon and nitrogen isotope data with other information such as mortality risk is crucial to an accurate understanding of archaeological dietary studies, and it may be useful to look at existing research with fresh eyes."

The paper, "A new method for investigating the relationship between diet and mortality: hazard analysis using dietary isotopes," is published in Annals of Human Biology.

Credit: 
University of Bradford

Microorganisms reduce methane release from the ocean

image: Professor Bo Thamdrup, University of Southern Denmark, is an expert in marine microorganisms.

Image: 
SDU

Next to CO2, methane is the greenhouse gas that contributes most to the man-made greenhouse effect. Of the methane sources caused by human activity, rice fields and cattle are among the most important. Furthermore, methane is released from swamp areas on land, melting permafrost as in the Arctic tundra, and from areas with oxygen depletion in the oceans.

We have a good understanding of the processes leading to the increasing CO2 content in the atmosphere, but this understanding is far less clear when it comes to methane.

The majority of the atmosphere's methane is created by microorganisms living under oxygen-free conditions. It has so far been assumed that it is mainly the activity of these organisms that governs the release of methane from e.g. swamp areas on land and ocean areas with oxygen depletion.

1 million square kilometers in the Pacific Ocean

However, new research from the Department of Biology at University of Southern Denmark (SDU) shows that most of the methane created in the ocean's oxygen-depleted areas is removed by methane-eating microorganisms before it is released to the atmosphere.

The discovery, made by SDU researchers in collaboration with colleagues from the Georgia Institute of Technology, USA, is the result of studies in the Pacific off the coast of Mexico.

Here we find the largest oxygen-free area in the oceans - an area of more than 1 million square kilometers, where part of the water column is completely oxygen-free. This oxygen-free water contains methane.

Microorganisms remove 80 percent of the methane produced

Through water sample experiments carried out aboard the research vessel R/V Oceanus, the researchers were able to show that methane is actively consumed, and that the consumption is carried out by microorganisms living in the water. The microorganisms remove the methane to use it as an energy source.

The researchers estimate that approximately 80 percent of the methane produced in the oxygen-free area is consumed by these microorganisms and thus removed.

The research was published online June 10, 2019, in the journal Limnology and Oceanography and is scheduled to appear in print later this year.

Release could be five times larger

"If the microorganisms did not eat the methane, the release from the oxygen-free area would be approximately five times larger, and a large part of that could end up in the atmosphere," explains SDU Professor Bo Thamdrup, an expert in how marine microorganisms influence the environment.

"The methane pool of the oxygen-free zone is thus far more dynamic than previously thought, and it therefore becomes important to understand which microorganisms eat methane and how their activity is affected by environmental conditions."

The big question now is which microorganisms are at play, and how? The researchers have hints that highly specialized bacteria and so-called archaea (bacteria-like organisms) are involved.

What can we learn from the bacteria?

"Although there is a lot of energy in methane, methane as a molecule is difficult to activate and break apart," says Professor Thamdrup.

"Finding out how microorganisms do the job is not only important for understanding the process. In the long term, it may also potentially be of biotechnological value. Maybe it can help us convert methane into other useful products."

Credit: 
University of Southern Denmark

Study shows how salamanders harness limb regeneration to buffer selves from climate change

image: Clemson biological sciences associate professor Mike Sears (left) and former graduate student Eric Riddell published their latest results indicating that salamanders harness their unique ability to regenerate limbs to rapidly minimize the impact of hot temperatures.

Image: 
Clemson College of Science

CLEMSON, South Carolina -- Looking like a cross between a frog and a lizard, the gray-cheeked salamander has thin, smooth skin and no lungs. The amphibian breathes through its skin, and to survive it must keep its skin moist. As environmental conditions grow hotter or drier, scientists want to know whether and how these animals can acclimate.

Researchers from Clemson University's College of Science have shown for the first time that these salamanders inhabiting the southern Appalachian Mountains use temperature rather than humidity as the best cue to anticipate changes in their environment. Significantly, the researchers observed that salamanders actually harness their unique ability to regenerate limbs to rapidly minimize the impact of hot temperatures.

The findings, which are described in the paper, "Thermal cues drive plasticity in desiccation resistance in montane salamanders with implications for climate change," may have implications for other animals and even plants. The paper was published in Nature Communications on Sept. 9.

A major issue for these salamanders each day is the potentially fatal risk of drying out. Biological sciences associate professor Mike Sears and his research group have shown over the years that these animals tolerate dehydration by regulating their water loss through physiological changes. But the researchers didn't fully understand how they did that until now.

"We're the first to look on the molecular level at salamander physiology with respect to the environment," said Sears, whose team conducted acclimation experiments and gene expression analysis. "We figured out from the genetic perspective how they do this."

Lead author Eric Riddell, who earned his doctorate at Clemson in 2018 and is now a postdoctoral scholar at the Museum of Vertebrate Zoology at the University of California, Berkeley, collected about 150 salamanders from the mountains near Highlands, North Carolina, and brought them back to Sears' Clemson lab, where he gave them a month to get used to their new environment.

He then divided the animals into four groups that would be exposed to different climate conditions they might experience currently or in the future. Because the animals are nocturnal, he and his undergraduate assistants moved the salamanders from a moist rehydration chamber each night into an activity chamber, where they walked for several hours in soil in the open air as they were exposed to different levels of temperature and humidity.

The researchers repeated this routine over several weeks, measuring how much oxygen the salamanders consumed and how quickly they dried out, which they quantified using the vapor pressure deficit (VPD).
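Vapor pressure deficit is the gap between how much moisture the air could hold at saturation and how much it actually holds, and it is what drives evaporative water loss. A common way to compute it is sketched below, using the Tetens approximation for saturation vapor pressure (an assumption for illustration; the study's exact formulation may differ):

```python
import math

def saturation_vapor_pressure_kpa(temp_c: float) -> float:
    """Tetens approximation for saturation vapor pressure over water (kPa)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vapor_pressure_deficit_kpa(temp_c: float, relative_humidity: float) -> float:
    """VPD (kPa): saturation vapor pressure minus actual vapor pressure.
    relative_humidity is a fraction between 0 and 1."""
    return saturation_vapor_pressure_kpa(temp_c) * (1.0 - relative_humidity)

# At the same relative humidity, warmer air produces a larger deficit --
# one reason temperature is such a strong drying cue.
vpd_cool = vapor_pressure_deficit_kpa(15.0, 0.8)
vpd_warm = vapor_pressure_deficit_kpa(25.0, 0.8)
```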

"We found that salamanders anticipate the risk of drying out by using temperature and not humidity," said Riddell, noting that while humidity does play a role in the rate of dehydration, it's not as reliable a cue for the animals.

Riddell also conducted gene analyses of tissue samples from the salamanders' skin to understand what physiological changes were occurring at the cellular level that enabled the animals to hold water in their bodies rather than have it evaporate through their skin.

According to Riddell, as temperatures increased, the salamanders were able to break down and subsequently rebuild blood vessel networks in their skin. "This temperature-sensitive blood vessel regeneration suggests that salamanders regulate water loss through regression and regeneration of capillary beds in the skin," Riddell said.

In the long term, Riddell said, this blood vessel development might help scientists understand a salamander's unique ability to regenerate or regrow limbs, a model system for understanding regenerative medicine in humans.

"By just focusing on how they regrow this one single type of tissue, these blood vessels, researchers might be able to understand the process of regeneration better," Riddell said.

This fall, Sears plans to explore what happens as salamanders become more tolerant of warmer temperatures. He and his students will conduct experiments at various elevations to determine the maximum temperature the animal will tolerate voluntarily. Since temperature changes with elevation, the amphibians will select an elevation with an acceptable temperature range.

"Ultimately we want to know how genetically adaptable animals are to changes in the future climate," Sears explained. "One of the big questions in our field is whether animals can keep up with the rate of climate change through evolution. By leveraging these genomic tools as we did in this study, we can begin to answer such ecological questions."

In addition to Riddell, other members of Sears' team contributing to this study included Christina Wells, Clemson associate professor of biological sciences; Kelly Zamudio, Cornell University ecology and evolutionary biology professor; and Emma Roback, a Grinnell College undergraduate summer research intern.

This current study builds on Sears' groundbreaking research, published in July 2018, which demonstrated the adaptability of seven species of mountain salamanders in adjusting to their changing environment.

Credit: 
Clemson University

Tides don't always flush water out to sea, study shows

image: Dawn in Willapa Bay in 2015, showing oysters on a tidal flat.

Image: 
Jennifer Ruesink

By area, tidal flats make up more than 50 percent of Willapa Bay in southwest Washington state, making this estuary of more than 142 square miles an ideal location for oyster farming. On some parts of these flats, oysters grow well, filling their shells with delicacies for discerning diners. But according to experienced oyster farmers, oysters raised in other parts of Willapa Bay don't yield as much meat.

Now, scientists may have an explanation for this variability. In a paper published online July 26 in the journal Estuarine, Coastal and Shelf Science, researchers at the University of Washington and the University of Strathclyde report that the water washing over the Willapa Bay tidal flats during high tides is largely the same water that washed over the flats during the previous high tide. This "old" water has not been mixed in with "new" water from deeper parts of the bay or the open Pacific Ocean, and has different chemical and biological properties, such as lower levels of food for creatures within the tide flats.

The team, led by Jennifer Ruesink, a UW professor of biology, employed oceanographic modeling and water quality readings to show that high-tide water flowing over the Willapa Bay flats can take as many as four tidal cycles -- or about two days -- before it is fully replaced by "new" water. Through field experiments measuring oyster growth, they found that this slow turnover has consequences for the creatures that call Willapa Bay home.

Their findings overturn a prior assumption about tides.

"Previously, there had been this belief that when water drains off of tide flats or out of a bay, currents and wind mix that water up," said lead author Eli Wheat, a UW instructor in the College of the Environment who conducted this study as a doctoral student in the UW Department of Biology. "It turns out that this is not necessarily true. It takes multiple tidal cycles for this mixing to occur."

To determine water turnover rates in Willapa Bay, Ruesink and Wheat partnered with Neil Banas, an oceanographer at the University of Strathclyde in Glasgow, who modeled water "residence times" and circulation in Willapa Bay using data on the bay's depth profile, the rivers that feed into it and its outlet to the Pacific Ocean. The model predicted that high-tide water over the flats has residence times ranging from zero to four tidal cycles -- depending on location in the bay -- before it is fully replaced by "new" water from deeper channels. On stretches of tidal flats more than one kilometer long, water over near-shore flats generally had longer residence times than water over flats farther from shore.
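The idea of residence time measured in tidal cycles can be illustrated with a toy box model (a sketch only, not the circulation model used in the study): if each tide replaces a fixed, well-mixed fraction of the water over a flat, the "old" fraction decays geometrically.

```python
import math

def old_water_fraction(exchange_per_cycle: float, n_cycles: int) -> float:
    """Fraction of the original water still present after n tidal cycles,
    assuming a fixed fraction is exchanged and fully mixed each cycle."""
    return (1.0 - exchange_per_cycle) ** n_cycles

def cycles_to_flush(exchange_per_cycle: float, threshold: float = 0.05) -> int:
    """Tidal cycles until less than `threshold` of the old water remains."""
    return math.ceil(math.log(threshold) / math.log(1.0 - exchange_per_cycle))

# With half the water exchanged each tide, 1/16 of the old water remains
# after four cycles -- roughly two days of semidiurnal tides.
remaining = old_water_fraction(0.5, 4)  # 0.0625
```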

"It's a bit of a paradox: We can walk across those flats at low water, so how can water stay there for more than a couple of hours between successive low tides?" said Ruesink. "Now we've discovered a new explanation for the quality of oyster beds, which doesn't depend on how much time they spend under water, but rather on the history of the water that reaches them."

The team collected data directly from the bay. They used a network of sensors -- some free-floating, others at fixed positions -- to collect information such as water depth, temperature, salinity and the amount of chlorophyll present. All of those water properties varied throughout the bay. Temperature varied primarily according to the tidal cycle, while variations in salinity and chlorophyll throughout Willapa Bay were more consistent with their model of water residence times. One of the key differences between "old" and "new" water is that "old" water contains less chlorophyll and is usually lower in salinity.

The team also measured oyster growth on flats in sections of the bay with "old" and "new" water. In all parts of Willapa Bay, oysters grew to approximately the same shell size. But oysters grown farther from the main channel of the bay -- regions with higher levels of "old" water at high tide -- had trouble filling those shells with the meaty morsel that people eat. Oysters grown on flats just half a kilometer from the main channel showed a 25 percent drop in dry tissue weight per shell height compared to oysters grown closer to the channel, where "new" water arrives faster.

"Scientists have known for a long time that water residence times increase as you go deeper into bays," said Ruesink. "But this is the first time that both a model and field data show 'old' water close to shore across tidal flats."

These findings may explain why some parts of Willapa Bay -- known as "fattening grounds" by oyster farmers -- are better than others for generating large-biomass oysters, according to Wheat. The study also has far-reaching implications for how scientists understand the health and well-being of all creatures in tidal ecosystems like Willapa Bay. The lower levels of chlorophyll in "old" water, for example, indicate that this water contains fewer particles for filter-feeding creatures along the flats, likely because food was already taken out of the water column during previous passes over the flats. Creatures in these parts of Willapa Bay must wait longer for the bounty brought by "new" water.

Future studies would have to look at additional consequences of these longer water turnover rates, such as how pollutants are diluted and cleared from the water column, said Wheat. The team's findings add a layer of complexity to tidal environments and show definitively what experienced oyster farmers in Willapa Bay already knew: Not all tidal flats are created equal.

Credit: 
University of Washington

Model of health

image: Hannah Dailey is an assistant professor of mechanical engineering and mechanics at Lehigh University's P.C. Rossin School of Engineering and Applied Science.

Image: 
Christa Neu/Lehigh University

Until now, there’s never been a tool that could determine how long it will take a patient to heal from a tibial fracture.

“What was exciting about our project was that all the mechanical analysis was done blinded to the clinical treatment of the patients, and the surgeon never saw any of our data,” says Hannah Dailey, an assistant professor of mechanical engineering and mechanics at Lehigh University's P.C. Rossin School of Engineering and Applied Science. “When we put it all together, we were able to answer the question, ‘Can the virtual mechanical test predict how long it will take the patient to heal?’ We found that it could.”

Dailey, who is also affiliated with Lehigh's Institute for Functional Materials and Devices (I-FMD), is the lead author of “Virtual Mechanical Testing Based On Low-Dose Computed Tomography Scans for Tibial Fracture.” The paper appeared in the July 3 issue of the Journal of Bone and Joint Surgery.

Most people who break their tibia, or shin bone, proceed along a normal healing timeline. As the weeks go by, more and more new bone called callus forms along the fracture line. Callus starts out as a spongy material that over time hardens into bone that is just as strong as—or stronger than—it was before the break. Patients typically come in for X-rays at regular intervals, and as long as the images reveal there’s increasingly more callus in the region, all is well.

But some people don’t heal normally. This failure to heal is called a nonunion, and it can be utterly debilitating. 

“Musculoskeletal injuries are very, very painful,” says Dailey. “And when a bone isn’t healing properly, patients can be in pain for weeks or months.”

Ideally, she says, surgeons would re-operate early on a patient with a nonunion. But differentiating between a true nonunion—where no new bone is forming at all—and a bone that is healing—just very slowly—is difficult. And that difference is critical. If it’s the former, a second surgery is imperative. If it’s the latter, it may be better for the patient to wait and avoid the risk and expense of another operation. 

Pinpointing that crucial difference between who needs additional surgery and who does not is difficult because surgeons typically rely on X-rays to determine the extent of bone healing. X-rays, however, are two-dimensional, often fuzzy, and can reveal an incomplete picture. 

“Our approach was, ‘Can we measure healing in a structural way, and put a number on how healed a bone is?’” says Dailey. “Instead of using X-rays to determine, ‘Yes, healed,’ or ‘No, not healed,’ can we be more accurate? By using engineering tools, the answer was, yes. We could.”

In this study, adults with tibial shaft fractures were prospectively recruited for observation following standard reamed intramedullary nailing, a procedure in which a titanium rod is inserted in the hollow space of the tibia and secured at the top and bottom with screws. The screws allow the patient to bear weight soon after surgery by keeping the upper bone fragment from collapsing onto the lower bone fragment.

Patient follow-up included radiographs and completion of patient-reported outcome measures, all performed at 6, 12, 18, and 24 weeks post-surgery. Low-dose computed tomography (CT) scans were done at 12 weeks. These scans provided a detailed, three-dimensional picture of what was going on inside each patient. 

Using specialized, commercially available software on the scans, Dailey’s PhD student and study co-author, Peter Schwarzenberg, built 3-D mechanical structural models that identified the regions of bone and new bone, or callus. Schwarzenberg then ran the models through finite element analysis software—the same program used by civil engineers to simulate how much deformation happens to a bridge when a load (like cars or pedestrians) is applied to it. Schwarzenberg and Dailey wanted to do the same thing for bones—apply a force and see how much the bone flexed. The less it flexed, the more healed it was.  

Schwarzenberg used the finite element software to divide each bone model into tiny zones called tetrahedra that all have a mathematical relationship to each other. He and Dailey then simulated fixing the bottom of the bone so it couldn’t move and putting a load on the top of the bone in the form of a one-degree rotational twist. The technique is called a virtual torsion test.

“So we know what’s happening to the tetrahedra at the edges of the bone,” says Schwarzenberg. “But finite element analysis allows you to calculate what is happening at the neighbors of those tetrahedra and then the neighbors of those tetrahedra, and it calculates all the way through until you’ve evaluated every piece inside the bone.”

Those calculations revealed how much the bone flexed when it was twisted.

“You want to do one test you can apply to everybody, and a twist is a standard one,” says Dailey. “It comes from the history of animal experimentation. We had a pretty good idea about what happens in the bones of animals at 12 weeks, but before we did this, nobody knew how much structural healing has taken place in humans at 12 weeks.” 

The pair then used the CT scans to digitally re-create a healthy version of each person’s leg. Schwarzenberg performed the same virtual torsion test on that healthy model then measured the flex of the unbroken leg against the fractured leg. The resulting percentage helped them determine how stiff the broken bone was compared with the healthy bone. The stiffer a bone was early in the healing process, the quicker the patient could bear weight. 
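The comparison described above boils down to a simple ratio. As a rough illustration only (this is not the authors' finite element pipeline, and the torque values below are invented), torsional stiffness can be taken as applied torque per degree of twist, and structural healing expressed as the fractured model's stiffness as a percentage of the intact model's:

```python
# Illustrative sketch of the stiffness comparison; the numbers are invented.

def torsional_stiffness(torque_nm: float, twist_deg: float) -> float:
    """Stiffness in N*m per degree of twist."""
    return torque_nm / twist_deg

# Simulated virtual torsion test: a one-degree twist applied to each model.
intact = torsional_stiffness(torque_nm=12.0, twist_deg=1.0)    # healthy-leg model
fractured = torsional_stiffness(torque_nm=4.8, twist_deg=1.0)  # 12-week callus model

percent_healed = 100.0 * fractured / intact
print(f"Structural healing: {percent_healed:.0f}% of intact stiffness")
```

The less the fractured model flexes for the same twist, the higher this percentage and the further along the healing.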

Schwarzenberg and Dailey found that their results from the virtual mechanical test significantly correlated with how long it took each patient to heal. It also clearly identified the single instance of a nonunion.

Dailey says the goal is to produce a diagnostic test that can help surgeons determine if an additional operation is necessary. It could also potentially help doctors determine when it’s safe for patients to bear weight, and it could help measure the effectiveness of devices like bone growth stimulators that might be alternatives to surgery in some nonunion cases.

Dailey and her team acknowledge one flaw in their experimental design: how they’re currently characterizing the callus. 

“There’s a lot of data for the mechanical properties of bone,” says Schwarzenberg. “It’s impossible to get cadaver bones with callus because callus disappears when a broken bone is fully healed. Bone is this organized, hard structure, and callus is almost like cartilage. It remodels into bone, but at the time points we’re looking at, we don’t expect the callus to have the same underlying structure as bone. We think we’re making it too strong because we’re using a model that was developed from bone.”

Schwarzenberg is currently trying to fill that knowledge gap at the Musculoskeletal Research Unit at the University of Zurich, as part of the Institute of International Education's Graduate International Research Experiences program (IIE-GIRE). During his six-month fellowship, he is combining the virtual technique with an optimization algorithm to measure the mechanical properties of callus.

To be able to answer the question of whether a bone is healing and when it may be capable of bearing weight is revolutionary, says Dailey.

“These advanced modeling and simulation techniques are providing the opportunity to answer fundamental questions like, ‘What are the mechanical properties of newly formed bone?’ Questions that, believe it or not, haven’t been addressed before. Because it’s not like you can take a person, cut out a uniform piece of material, then put it in a machine and test it,” she says. “That’s impossible. But now we can do that in a virtual way.”

Credit: 
Lehigh University

KEYNOTE-024 three-year survival update

Barcelona-- First-line pembrolizumab monotherapy provides a durable long-term overall survival benefit compared with chemotherapy, according to data presented today by Dr. M. Reck, Lung Clinic Grosshansdorf, Airway Research Center North (ARCN), Member of the German Center for Lung Research Grosshansdorf/Germany. The presentation was made at the IASLC 2019 World Conference on Lung Cancer, hosted by the International Association for the Study of Lung Cancer.

Dr. Reck and his research colleagues had previously presented preliminary data on KEYNOTE-024 and he is now sharing three years' worth of data on these patients.

In the phase III KEYNOTE-024 trial, first-line pembrolizumab significantly improved progression free survival and overall survival compared with platinum-based chemotherapy in patients with advanced non-small cell lung cancer with a PD-L1 tumor proportion score of equal to or greater than 50 percent, and no targetable EGFR/ALK alterations.

Patients were randomized to pembrolizumab 200 mg for two years or platinum doublet chemotherapy for four to six cycles plus optional maintenance (nonsquamous), with stratification by Eastern Cooperative Oncology Group performance status (0 or 1), tumor histology (squamous/nonsquamous), and region (East Asia/non-East Asia). Patients in the chemotherapy arm could cross over to pembrolizumab upon disease progression if they met eligibility criteria. The primary endpoint was progression-free survival; overall survival (OS) was a key secondary endpoint.

The median overall survival length among patients in the pembrolizumab arm was 26.3 months compared to 14.2 months in the chemotherapy arm. The 36-month overall survival rate was 43.7 percent in the pembrolizumab arm vs 24.9 percent in the chemotherapy arm. Despite longer mean treatment duration in the pembrolizumab arm (11.1 vs 4.4 months), grade 3-5 treatment-related adverse events were less frequent with pembrolizumab vs chemotherapy.

Initial results additionally showed activity upon re-exposure to pembrolizumab in patients who had progressed after completing two years of treatment, with a response in 7 of 10 patients (70 percent).

"With more than three years' follow-up, first-line pembrolizumab monotherapy continued to provide durable long-term OS benefit vs chemotherapy despite a majority of patients assigned to chemotherapy crossing over to pembrolizumab," said Dr. Reck. "And pembrolizumab was associated with less toxicity than chemotherapy. Patients who completed 35 cycles of pembrolizumab had durable clinical benefit and most were alive at data cutoff."

Credit: 
International Association for the Study of Lung Cancer

Scientists identify rare evolutionary intermediates to understand the origin of eukaryotes

image: The new study illustrates the remarkable similarity between how evolution happens in the macroscopic world and how evolution happens in the world that Darwin never saw -- the microscopic world of invisible molecules that inhabit living cells.

Image: 
Sergey Melnikov

A new study by Yale scientists provides a key insight into a milestone event in the early evolution of life on Earth - the origin of the cell nucleus and complex cells called eukaryotes.

While simple prokaryotic cells such as bacteria formed within Earth's first billion years, the origin of eukaryotes, the first cells with nuclei, took much longer. Between 1.7 and 2.7 billion years ago, an ancient prokaryote first acquired a compartment, the nucleus, that kept its DNA better protected from the environment (for example, from harmful UV damage). From this ancient event, relatively simple organisms such as bacteria were transformed into more sophisticated ones that ultimately gave rise to all modern animals, plants and fungi.

The details of this key event have remained elusive for many years because not a single transitional fossil has been found to date.

Now, a study led by Dr. Sergey Melnikov, from the Dieter Söll Laboratory in the Department of Molecular Biophysics and Biochemistry at Yale University, has finally found these missing fossils. To do so, the team relied not on unearthing clay or rocks but on peering deep inside current living cells known as Archaea - the organisms believed to most closely resemble the ancient intermediates between bacteria and the more complex cells that we now know as eukaryotic cells.

These transitional forms are nothing like the traditional fossils we think of, such as dinosaur bones deposited in the ground or insects trapped in amber. Known as ribosomal proteins, these particular transitional forms are about 100-million times smaller than our bodies. Melnikov and his colleagues discovered that ribosomal proteins can be used as living "molecular fossils", whose ancient origin and structure may hold the key to understanding the origin of the cell nucleus.

"Simple lifeforms, such as bacteria, are analogous to a studio apartment: they have a single interior space which is not subdivided into separate rooms or compartments. By contrast, more complex organisms, such as fungi, animals, and plants, are made up of cells that are separated into multiple compartments," explained Melnikov. "These microscopic compartments are connected to one another via 'doors' and 'gates.' To pass through these doors and gates, the molecules that inhabit living cells must carry special ID badges, some of which are called nuclear localization signals, or NLSs."

Seeking to better understand when NLS-motifs might have emerged in ribosomal proteins, the Yale team assessed their conservation among ribosomal proteins from the three domains of life.

To date, NLS-motifs have been characterized in ten ribosomal proteins from several eukaryotic species. They compared all of the NLS-motifs found in eukaryotic ribosomal proteins (from 482 species) and tried to find a match in bacteria (2,951 species) and Archaea (402 species).
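The cross-domain comparison amounts to scanning protein sequences for a shared motif. As a purely hypothetical sketch (the study used curated NLS-motif alignments, not this crude regex, and the sequences below are invented), classical NLSs are lysine/arginine-rich stretches, which can be hunted for like this:

```python
import re

# Hypothetical illustration of motif scanning: a crude pattern for a basic,
# NLS-like stretch (four or more consecutive lysines/arginines).
NLS_LIKE = re.compile(r"[KR]{4,}")

# Invented toy sequences, one per domain of life.
sequences = {
    "eukaryote_uL3": "MSHRKFSAPKKRKRGSLGFLPAK",
    "archaeon_uL3":  "MGHKKRRAQESGLLPARALNKE",
    "bacterium_uL3": "MTKGILGQKVGMTQVFAEDGA",
}

hits = {name: bool(NLS_LIKE.search(seq)) for name, seq in sequences.items()}
for name, found in hits.items():
    print(name, "NLS-like motif" if found else "no motif")
```

In this toy setup the motif turns up in the eukaryotic and archaeal sequences but not the bacterial one, mirroring the pattern of conservation the study reports.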

Surprisingly, they found four proteins - uL3, uL15, uL18, and uS12 - to have NLS-type motifs not only in the Eukarya but also in the Archaea. "Contrary to our expectations, we found that NLS-type motifs are conserved across all the archaeal branches, including the most ancient superphylum, called DPANN," said Melnikov.

But since Archaea don't have nuclei, the logical question which then arose was: why do they have these IDs? And what was the original biological function of these IDs in non-compartmentalized cells?

"If you think about an equivalent to our discovery in the macroscopic world, it is similar to discoveries made during the last century of bird-like dinosaurs such as Caudipteryx zoui," said Melnikov. "These ancient flightless birds have illustrated that it took multiple millions of years for dinosaurs to develop wings. Yet, strikingly, for the first few million years their wings were not good enough to support flight."

Similarly, the study by Melnikov and colleagues suggests that, even though NLSs may not initially have emerged to allow cellular molecules to pass through microscopic doors and gates between cellular compartments, they could have emerged to fulfill a similar biological function - to help molecules get to their proper biological partners.

As Melnikov explains: "Our analysis shows that in complex cells the very same IDs that allow proteins to pass through the microscopic gates are also used to recognize biological partners of these proteins. In other words, in complex cells, the IDs fulfill two conceptually similar biological functions. In the Archaea, however, these IDs play just one of these functions - these IDs, or NLSs, help proteins to recognize their biological partners and distinguish them from the thousands of other molecules that float in a cell."

But what led to the evolution of these IDs among cellular proteins in the first place?

As Melnikov explains, "When life first emerged on the face of our planet, the earliest life forms were likely made of a very limited number of molecules. Therefore, it was relatively easy for these molecules to find one specific partner among all the other molecules in a living cell. However, as cells grew in size and complexity, it is possible, even probable, that the old rules of specific interactions between cellular molecules had to be redefined, and this is how the IDs were introduced into the structure of cellular proteins - to help these proteins identify their molecular partners more easily in the complex environment of a complex cell. Coming back to the analogy with bird-like dinosaurs, our study illustrates the remarkable similarity between how evolution happens in the macroscopic world and how evolution happens in the world that Darwin never saw - the microscopic world of invisible molecules that inhabit living cells."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Tweets indicate nicotine dependence, withdrawal symptoms of JUUL users

image: This infographic depicts how JUUL users tweet about symptoms of nicotine dependence.

Image: 
Mattie Winowitch/UPMC

PITTSBURGH, Sept. 9, 2019 - As e-cigarette brand JUUL continues to climb in popularity among users of all ages, University of Pittsburgh School of Medicine researchers took a unique approach to analyzing its impact by using Twitter to investigate any mention of nicotine effects, symptoms of dependence and withdrawal with regard to JUUL use.

The study revealed that 1 out of every 5 tweets mentioning JUUL identified for the analysis also referenced addiction-related themes. The full results are published in the journal Drug and Alcohol Dependence.

"Many news stories have reported that people are using JUUL and experiencing what sound like acute effects of nicotine exposure and symptoms of dependence," said lead author Jaime Sidani, Ph.D., assistant director of Pitt's Center for Research on Media, Technology, and Health. "We turned to Twitter to gather real-time data on what people are sharing about their JUUL use."

To complete the study, Sidani and her team of researchers created search filters within Twitter's Filtered Streams interface to collect data on all available tweets matching the terms "juul," "juuls" and "juuling," as well as their hashtag equivalents between April 11, 2018, and June 16, 2018.

After additional narrowing of search results by implementing specific keywords, excluding commercial content and ensuring the tweets were in first-person context, a final data set of 1,986 tweets remained for final analysis by two independent coders.
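The first-pass narrowing described above can be sketched in a few lines. The search terms come from the article; the sample tweets, the commercial-content markers, and the first-person heuristic are invented for illustration and are not the study's actual filters:

```python
# Sketch of a keyword-based tweet filter. TERMS are from the study;
# everything else here is a hypothetical stand-in for the real criteria.
TERMS = {"juul", "juuls", "juuling", "#juul", "#juuls", "#juuling"}
COMMERCIAL_MARKERS = {"discount", "promo", "shop"}  # invented exclusions

def keep(tweet: str) -> bool:
    words = tweet.lower().split()
    mentions_juul = any(w.strip(".,!?") in TERMS for w in words)
    is_commercial = any(m in tweet.lower() for m in COMMERCIAL_MARKERS)
    first_person = any(p in words for p in ("i", "my", "me", "i'm"))
    return mentions_juul and first_person and not is_commercial

tweets = [
    "I think my juul is making me dizzy",
    "Get 20% discount on JUUL pods at our shop!",
    "Juuling is everywhere these days",
]
kept = [t for t in tweets if keep(t)]
print(kept)
```

Only the first tweet survives: it mentions JUUL, is in first person, and carries no commercial markers. The real data set was then hand-coded by two independent coders rather than classified automatically.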

Of these tweets, 21.1% were coded as being related to dependence (335 tweets), nicotine effects (189 tweets), or quitting JUUL, withdrawal, or both (42 tweets). Sidani said these findings aren't surprising when considering the powerful dose of nicotine that JUUL provides. In addition, JUUL uses a nicotine salt formula, which is designed to increase the rate of absorption and create a more palatable vapor, making JUUL a more appealing option compared to other modes of nicotine delivery.

"We found many self-reported symptoms of nicotine dependence," said co-author A. Everette James, J.D., director of the Pitt Health Policy Institute and interim dean of Pitt's Graduate School of Public Health. "Because of the lack of public knowledge about the dependence risks, it makes sense that many people seemed surprised about experiencing symptoms of withdrawal when they could not use their device."

Sidani and her team hope to continue studying the social conversation surrounding JUUL and its addictive properties, as well as promote the use of Twitter and other social media platforms as analysis tools for related research topics.

"By leveraging real-time data from the Twitter platform, we can research timely health trends on an unprecedented scale," said co-author Jason Colditz, M.Ed., program coordinator at Pitt's Center for Research on Media, Technology, and Health. "In this study, we detected candid narratives related to JUUL dependence, a relatively recent public health trend that deserves further investigation."

Credit: 
University of Pittsburgh

And then there was light: looking for the first stars in the Universe

video: In this simulation of the Epoch of Reionisation, neutral hydrogen, in red, is gradually ionised by the first stars, shown in white. The video was made by the University of Melbourne's Dark-ages Reionisation And Galaxy Observables from Numerical Simulations (DRAGONS) programme.

Image: 
Paul Geil and Simon Mutch

Astronomers are closing in on a signal that has been travelling across the Universe for 12 billion years, bringing them nearer to understanding the life and death of the very earliest stars.

In a paper on the preprint site arXiv and soon to be published in the Astrophysical Journal, a team led by Dr Nichole Barry from Australia's University of Melbourne and the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) reports a 10-fold improvement on data gathered by the Murchison Widefield Array (MWA) - a collection of 4096 dipole antennas set in the remote hinterland of Western Australia.

The MWA, which started operating in 2013, was built specifically to detect electromagnetic radiation emitted by neutral hydrogen - a gas that comprised most of the infant Universe in the period when the soup of disconnected protons and neutrons spawned by the Big Bang started to cool down.

Eventually these hydrogen atoms began to clump together to form stars - the very first ones to exist - initiating a major phase in the evolution of the Universe, known as the Epoch of Reionisation, or EoR.

"Defining the evolution of the EoR is extremely important for our understanding of astrophysics and cosmology," explains Dr Barry.

"So far, though, no one has been able to observe it. These results take us a lot closer to that goal."

The neutral hydrogen that dominated space and time before and in the early period of the EoR radiated at a wavelength of approximately 21 centimetres. Stretched now to somewhere above two metres because of the expansion of the Universe, the signal persists - and detecting it remains the theoretical best way to probe conditions in the early days of the Cosmos.
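The stretch from 21 centimetres to just over two metres pins down when the light was emitted, since an observed wavelength relates to the emitted one by lambda_obs = lambda_rest × (1 + z), where z is the redshift. A quick check of the figures quoted above:

```python
# Redshift implied by the quoted wavelengths: lambda_obs = lambda_rest * (1 + z)
lambda_rest = 0.21  # metres: the neutral-hydrogen 21-cm line (approximate)
lambda_obs = 2.0    # metres: "somewhere above two metres" per the text

z = lambda_obs / lambda_rest - 1
print(f"redshift z = {z:.1f}")
```

A redshift around 8.5 corresponds to light emitted when the Universe was only a few hundred million years old, squarely in the era when the first stars were expected to switch on.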

However, doing so is fiendishly difficult.

"The signal that we're looking for is more than 12 billion years old," explains ASTRO 3D member and co-author Associate Professor Cathryn Trott, from the International Centre for Radio Astronomy Research at Curtin University in Western Australia.

"It is exceptionally weak and there are a lot of other galaxies in between it and us. They get in the way and make it very difficult to extract the information we're after."

In other words, the signals recorded by the MWA - and other EoR-hunting devices such as the Hydrogen Epoch of Reionisation Array in South Africa and the Low Frequency Array in The Netherlands - are extremely messy.

Using 21 hours of raw data, Dr Barry, co-lead author Mike Wilensky, from the University of Washington in the US, and colleagues explored new techniques to refine analysis and exclude consistent sources of signal contamination, including ultra-faint interference generated by radio broadcasts on Earth.

The result was a level of precision that significantly reduced the range in which the EoR may have begun, pulling in constraints by almost an order of magnitude.

"We can't really say that this paper gets us closer to precisely dating the start or finish of the EoR, but it does rule out some of the more extreme models," says Professor Trott.

"That it happened very rapidly is now ruled out. That the conditions were very cold is now also ruled out."

Dr Barry said the results represented not only a step forward in the global quest to explore the infant Universe, but also established a framework for further research.

"We have about 3000 hours of data from MWA," she explains, "and for our purposes some of it is more useful than others. This approach will let us identify which bits are most promising, and analyse it better than we ever could before."

Credit: 
ARC Centre of Excellence for All Sky Astrophysics in 3D (ASTRO 3D)

Experimental 'blood test' accurately screens for PTSD

An artificial intelligence tool - which analyzed 28 physical and molecular measures, all but one from blood samples - confirmed with 77 percent accuracy a diagnosis of posttraumatic stress disorder (PTSD) in male combat veterans, according to a new study.

Led by NYU School of Medicine, Harvard John A. Paulson School of Engineering and Applied Sciences, and the U.S. Army Medical Research and Development Command, the study describes for the first time a blood-based biomarker panel for diagnosis of warzone-related PTSD. Published online September 9 in the journal Molecular Psychiatry, the measures included genomic, metabolic, and protein biomarkers.

"While work remains to further validate our panel, it holds tremendous promise as the first blood test that can screen for PTSD with a level of accuracy useful in the clinical setting," says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine. "If we are successful, this test would be one of the first of its kind - an objective blood test for a major psychiatric disorder."

There are currently no FDA-approved blood tests, for instance, for depression or bipolar disorder, says Marmar. The new study embodies a longstanding goal in the field of psychiatry: to shift mental health toward standards like those used in cardiology or cancer, for instance, in which lab tests enable accurate diagnoses based on physical measures (biomarkers) instead of on self-reporting or interviews with inherent biases.

Those with PTSD experience strong, persistent distress when reminded of a triggering, traumatic event. According to a World Health Organization survey, more than 70 percent of adults worldwide have experienced a traumatic event at some point in their lives, although not all develop the condition.

Twenty-Eight Out of a Million

For the current study, 83 male, warzone-exposed veterans of the Iraq and Afghanistan conflicts with confirmed PTSD, and another 82 warzone-deployed veterans serving as healthy controls, were recruited from the Manhattan, Bronx and Brooklyn Veterans Affairs (VA) Medical Centers, as well as from other regional VA medical centers, veterans' service organizations, and the community.

The researchers tested nearly one million features with current genomic and other molecular tests and narrowed them to 28 markers. By measuring a large number of unbiased quantities, the team sought to determine which of them were associated with an accurate PTSD symptom diagnosis.

Using a combination of statistical techniques, the study authors narrowed the best measures from a million to 343 to 77, and then finally to 28, with the final group outperforming the larger groups in prediction accuracy. Some of this winnowing was accomplished using machine learning, mathematical models trained with data to find patterns.
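The winnowing idea can be illustrated with a toy version of one pass: score each feature by how well it separates cases from controls, then keep the top 28. This is a deliberately simplified sketch with invented random data, not the study's actual statistical or machine-learning pipeline:

```python
import random

random.seed(0)

# Toy data: 165 "subjects" x 1000 "features" (the study started near one million).
# Cases get a small shift on 28 hidden informative features; all values are invented.
n_cases, n_controls, n_features, k = 83, 82, 1000, 28
informative = set(range(28))

def subject(is_case: bool):
    return [random.gauss(1.0 if (is_case and f in informative) else 0.0, 1.0)
            for f in range(n_features)]

cases = [subject(True) for _ in range(n_cases)]
controls = [subject(False) for _ in range(n_controls)]

def score(f: int) -> float:
    """Absolute difference of group means: a crude univariate separation score."""
    mean = lambda grp: sum(s[f] for s in grp) / len(grp)
    return abs(mean(cases) - mean(controls))

# Keep the k best-separating features -- one winnowing pass of many.
selected = sorted(range(n_features), key=score, reverse=True)[:k]
recovered = len(set(selected) & informative)
print(f"kept {k} features; {recovered} of the 28 informative ones recovered")
```

Even this crude univariate pass recovers most of the planted signal; the study's actual winnowing (one million to 343 to 77 to 28) layered more sophisticated statistics and trained models on top of this basic idea.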

The team then applied their "PTSD blood test" to an independent group of veterans to see how well their new tool matched the diagnoses made previously using standard clinical questionnaires like the Clinician-Administered PTSD Scale (CAPS). This comparison yielded the 77 percent accuracy figure.

"These molecular signatures will continue to be refined and adapted for commercialization," says co-senior study author Marti Jett, PhD, chief scientist in Systems Biology for the US Army Medical Research & Development Command (USAMRDC), within the US Army Center for Environmental Health Research (USACEHR). "The Department of Health Affairs within the Department of Defense is considering this approach as a potential screening tool that could identify service members, before and after deployment, with features of unresolved post-traumatic stress."

Those identified would be referred for their specific issues (sleep disruption, anger management, etc.), which is available at most military bases, adds Jett.

The current study did not seek to explain the disease mechanisms related to the final markers, but rather to blindly pick those that did the best job of diagnosing PTSD. That said, the group of best-performing markers included the activity levels of certain genes, amounts of key proteins in the blood, levels of metabolites involved in energy processing, as well as levels of circulating microRNAs (miRNAs), snippets of genetic material known to alter gene activity and tied to heart diseases and features of PTSD. The one indicator not measured by blood test was heart rate variability.

"These results point toward many biochemical pathways that may guide the future design of new drugs, and support the theory that PTSD is a systemic disease that causes genetic and cellular changes well beyond the brain," says corresponding author Frank Doyle, PhD, dean of Harvard John A. Paulson School of Engineering and Applied Sciences, one of the research study's sites.

Previous studies of genetic predictors of PTSD risk have shown strong performance in younger, active duty populations, says author Kelsey Dean, PhD, a member of Doyle's group at Harvard. This suggests that such biomarkers may be able to signal for PTSD at its earliest stages, and so be useful in prevention. For future research, studies of populations beyond male veterans will be needed to better understand the clinical utility of the proposed biomarker panel.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

NASA finds Gabrielle's strength on its northern side

image: On Sept. 8, 2019 at 2:35 p.m. EDT (1635 UTC) the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Gabrielle moving through the Central Atlantic Ocean.

Image: 
NASA/NRL

NASA's Aqua satellite passed over the Central Atlantic Ocean and provided a visible view of Tropical Storm Gabrielle that helped pinpoint its strongest side.

On Sept. 8, 2019 at 2:35 p.m. EDT (1635 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite provided a visible image of Gabrielle that showed strongest thunderstorms northeast of the center.

The MODIS image also showed that there were also fragmented bands of strong thunderstorms south and southwest of the center.

On Sept. 9, NOAA's National Hurricane Center or NHC said, "Deep convection associated with Gabrielle has become somewhat less organized overnight with the center located near the northeastern portion of the coldest cloud tops."

NASA satellites provide research data on structure, rainfall, winds and temperature of tropical cyclones. Those data are shared with forecasters at NHC to incorporate in their forecasts.

NHC noted at 5 a.m. EDT (0900 UTC) on Sept. 9 that the center of Tropical Storm Gabrielle was located near latitude 37.7 degrees north and longitude 48.5 degrees west. Gabrielle's center is about 1,170 miles (1,885 km) west of the Azores Islands.

Gabrielle is moving toward the north-northeast near 16 mph (26 kph).  A turn toward the northeast with an increase in forward speed is expected today, Sept. 9 and a northeastward motion at an even faster forward speed is expected on Tuesday and Wednesday. Maximum sustained winds are near 60 mph (95 kph) with higher gusts. The estimated minimum central pressure is 997 millibars.

NHC said, "Little change in strength is expected today, but a weakening trend is likely to begin tonight." Weakening is expected because Gabrielle will be moving into an area of outside winds (vertical wind shear) and cooler sea surface temperatures.

Gabrielle is expected to become an extratropical low pressure area by Tuesday night, Sept. 10, and the extratropical low is predicted to slowly weaken and be absorbed by a larger low pressure system over the northeastern Atlantic in a little more than 3 days.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center