
Study reveals profound patterns in globally important algae

image: Tiny algae called coccolithophores, such as this Discosphaera tubifera, play a major role in the global carbon cycle. New research shows that coccolithophores are mysteriously scarce in one of the most fertile and productive regions of the Atlantic Ocean.

Image: 
William Balch and Colin Fisher/Bigelow Laboratory for Ocean Sciences

A globally important group of ocean algae is mysteriously scarce in one of the most productive regions of the Atlantic Ocean, according to a new paper in Deep Sea Research I. A massive dataset has revealed patterns in where Atlantic coccolithophores live, illuminating the inner workings of the ocean carbon cycle and raising new questions.

"Understanding these large-scale patterns helps us understand ocean productivity in the entire Atlantic basin," said William Balch, a senior research scientist at Bigelow Laboratory for Ocean Sciences and lead author of the paper. "Collecting this dataset has been a superhuman effort for my team that has taken hundreds of days at sea and years of analysis."

The researchers found that coccolithophores both struggle and thrive in unexpected places throughout the Atlantic Ocean. They are most abundant in subpolar and temperate waters and surprisingly scarce around the equator, where an abundance of nutrients and sunlight creates one of the most biologically productive regions of the global ocean.

The team also discovered that some coccolithophore species thrive deep below the surface near the farthest reaches of sunlight - within or just above an important water layer called "Sub-Antarctic mode water." This distinct feature flows north from the Southern Ocean and provides nutrients to much of the global ocean, including the northern hemisphere. Balch suspects that booming coccolithophore populations in the Southern Ocean are depleting the water layer's nutrient supply and altering its chemistry - potentially making it inhospitable for coccolithophores by the time it reaches the equator.

"Sub-Antarctic mode water exerts a staggering level of control on much of the global ocean," Balch said. "If coccolithophores are changing its essential properties, then they could be influencing which species grow in food webs as far away as the equator or even in the northern hemisphere."

Balch and his team built this vast dataset from measurements collected during ten 45-day research cruises through the Atlantic Meridional Transect program, which crosses the Atlantic Ocean between the United Kingdom and the tip of South America. Their findings also have important applications for observations that rely on NASA ocean color satellites. These powerful oceanographic tools allow scientists to detect coccolithophore populations by measuring the light they reflect back into space, but they require on-the-water measurements to ground-truth the satellite data. NASA was the primary funder of this work.

Coccolithophores build protective crystalline plates from chalk minerals by extracting dissolved inorganic carbon from seawater. The way a species' plates are shaped impacts how those plates scatter light in the surface ocean, especially after they become detached and begin to sink towards the seafloor. The researchers discovered that not all coccolithophores drop their plates, and that the plates found throughout the water column come from just a few species.

This finding vastly simplifies the calculations needed to measure the carbon that coccolithophores contain from satellite reflectance data. Coccolithophores play a major role in the global carbon cycle, and understanding where they live and how they scatter light is essential to quantifying how this important element moves between the surface ocean and seafloor. Ultimately, that carbon is either broken down by deep-sea bacteria or buried in sediment, effectively sequestering it from the atmosphere for thousands of years.

Balch's team, along with an international team of investigators, will continue this research in January, when they embark on a National Science Foundation-funded cruise to answer one of the most important questions raised by this study - how coccolithophores in the Southern Ocean alter Sub-Antarctic mode water before it flows north. Their research will elucidate how these changes may affect productivity further north, and why coccolithophores are so scarce at the equator.

"The grand question remains - what is missing from this equatorial water that makes it not conducive to coccolithophore growth in such a fertile region of the world ocean?" Balch said. "The difference in the amount of coccolithophores at temperate latitudes and the equator is profound, and it has enormous ramifications for the ocean's food webs and the productivity of the entire planet."

Credit: 
Bigelow Laboratory for Ocean Sciences

Biomolecular analyses of Roopkund skeletons show Mediterranean migrants in Indian Himalaya

image: The lake was thought to be the site of an ancient catastrophic event that left several hundred people dead, but the first ancient whole genome data from India shows that diverse groups of people died at the lake in multiple events approximately 1000 years apart.

Image: 
Atish Waghwase

A large-scale study conducted by an international team of scientists has revealed that the mysterious skeletons of Roopkund Lake - once thought to have died during a single catastrophic event - belong to genetically highly distinct groups that died in at least two episodes separated by roughly one thousand years. The study, published this week in Nature Communications, involved 28 researchers from institutions in India, the United States and Europe.

Situated at over 5000 meters above sea level in the Himalayan Mountains of India, Roopkund Lake has long puzzled researchers due to the presence of skeletal remains from several hundred ancient humans, scattered in and around the lake's shores, earning it the nickname Skeleton Lake or Mystery Lake. "Roopkund Lake has long been subject to speculation about who these individuals were, what brought them to Roopkund Lake, and how they died," says senior author Niraj Rai, of the Birbal Sahni Institute of Palaeosciences in Lucknow, India, who began working on the Roopkund skeletons when he was a post-doctoral scientist at the CSIR Centre for Cellular and Molecular Biology (CCMB) in Hyderabad, India.

The current publication, the final product of a more than decade-long study that presents the first whole genome ancient DNA data from India, reveals that the site has an even more complex history than imagined.

First whole genome ancient DNA data from India shows diverse groups at Roopkund Lake

Ancient DNA obtained from the skeletons of Roopkund Lake - representing the first whole genome ancient DNA reported from India - reveals that they derive from at least three distinct genetic groups. "We first became aware of the presence of multiple distinct groups at Roopkund after sequencing the mitochondrial DNA of 72 skeletons. While many of the individuals possessed mitochondrial haplogroups typical of present-day Indian populations, we also identified a large number of individuals with haplogroups that would be more typical of populations from West Eurasia," says co-senior author Kumarasamy Thangaraj of CCMB, who started the project more than a decade ago in an ancient DNA clean lab that he and then-director of CCMB Lalji Singh (deceased) built to study Roopkund.

Whole genome sequencing of 38 individuals revealed that there were at least three distinct groups among the Roopkund skeletons. The first group is composed of 23 individuals with ancestries related to those of people from present-day India; they do not appear to belong to a single population, but instead derive from many different groups. Surprisingly, the second-largest group is made up of 14 individuals with ancestry most closely related to people who live in the eastern Mediterranean, especially present-day Crete and Greece. The third group consists of a single individual with ancestry more typical of Southeast Asia. "We were extremely surprised by the genetics of the Roopkund skeletons. The presence of individuals with ancestries typically associated with the eastern Mediterranean suggests that Roopkund Lake was not just a site of local interest, but instead drew visitors from across the globe," says first author Éadaoin Harney of Harvard University.

Dietary analysis of the Roopkund individuals confirms diverse origins

Stable isotope dietary reconstruction of the skeletons also supports the presence of multiple distinct groups. "Individuals belonging to the Indian-related group had highly variable diets, showing reliance on C3- and C4-derived food sources. These findings are consistent with the genetic evidence that they belonged to a variety of socioeconomic groups in South Asia," says co-senior author Ayushi Nayak of the Max Planck Institute for the Science of Human History. "In contrast, the individuals with eastern Mediterranean-related ancestry appear to have consumed a diet with very little millet."

Two major groups at Roopkund Lake were deposited about 1000 years apart, the more recent around 1800 CE

The findings also revealed a second surprise about the skeletons of Roopkund Lake. Radiocarbon dating indicates that the skeletons were not deposited at the same time, as previously assumed. Instead, the study finds that the two major genetic groups were deposited approximately 1000 years apart. First, during the 7th-10th centuries CE, individuals with Indian-related ancestry died at Roopkund, possibly during several distinct events. It was not until sometime during the 17th-20th centuries that the other two groups, likely composed of travelers from the eastern Mediterranean and Southeast Asia, arrived at Roopkund Lake. "This finding shows the power of radiocarbon dating, as it had previously been assumed that the skeletons of Roopkund Lake were the result of a single catastrophic event," says co-senior author Douglas J. Kennett of the University of California, Santa Barbara.

"It is still not clear what brought these individuals to Roopkund Lake or how they died," says Rai. "We hope that this study represents the first of many analyses of this mysterious site."

"Through the use of biomolecular analyses, such as ancient DNA, stable isotope dietary reconstruction, and radiocarbon dating, we discovered that the history of Roopkund Lake is more complex than we ever anticipated, and raises the striking question of how migrants from the eastern Mediterranean, who have an ancestry profile that is extremely atypical of the region today, died in this place only a few hundred years ago," concludes co-senior author David Reich of Harvard Medical School. "This study highlights the power of biomolecular tools to provide unexpected insights into our past."

Credit: 
Max Planck Institute of Geoanthropology

Mini kidneys grown from stem cells give new insights into kidney disease and therapies

image: These kidney organoids measure 1 to 2 mm in diameter.

Image: 
NTU Singapore

An international team of researchers led by Nanyang Technological University, Singapore (NTU Singapore) has grown 'miniature kidneys' in the laboratory that could be used to better understand how kidney diseases develop in individual patients.

The mini kidneys, known as kidney organoids, were grown outside the body from skin cells derived from a single patient who has polycystic kidney disease, one of the most common inherited causes of kidney failure in adults.

The researchers reprogrammed these cells to obtain patient-specific pluripotent stem cells, which, under the right conditions, can develop into kidney organoids similar to human foetal kidneys in the first three to six months of development.

The kidney organoids were then used to validate the therapeutic effects of two drug molecules with potential for treating genetic polycystic kidney disease, demonstrating that the research could be of significant value in developing personalised treatments for people with this disease.

Existing approaches to testing potential treatments through such 'drug screening' do not take account of the fact that the genetic errors that cause kidney diseases vary from patient to patient.

By generating induced pluripotent stem cells from an adult patient with a genetic kidney disease, and then growing kidney organoids from them, the research team has paved the way for tailoring treatment plans specific to each patient, which could be extended to a range of kidney diseases.

The research, led by NTU Singapore Assistant Professor Xia Yun and her team, which includes NTU Assistant Professor Foo Jia Nee and Professor Juan Carlos Izpisua Belmonte from the Salk Institute for Biological Studies, in San Diego, California, was published in Cell Stem Cell in July 2019.

Assistant Professor Xia from the NTU Lee Kong Chian School of Medicine (LKCMedicine), said, "A patient's genetic makeup is closely intertwined with how their kidney disease will develop, as the type of mutation within the disease-causing gene can differ from patient to patient.

"Our kidney organoids, grown from the cells of a patient with inherited polycystic kidney disease, have allowed us to find out which drugs will be most effective for this specific patient. We believe that this approach can be extended to study many other types of kidney disease, such as diabetic nephropathy."

Professor Juan Carlos Izpisua Belmonte, a world-renowned stem cell scientist and an international collaborator of this study, said, "Although we are still quite far away from using these kidney organoids for replacement therapy, this study has made a small step closer to this ultimate goal."

New insights into human kidney development

The kidney organoids developed by Asst Prof Xia and her team may also offer new insights into human kidney development, which currently cannot be studied in depth due to concerns surrounding human stem cell research.

While the origin of kidney blood vessel networks is not fully known, it is widely accepted that a type of stem cell known as 'vascular progenitors' is involved in their formation by developing into blood vessel cells.

By examining the genetic information within single cells of the organoid, the NTU-led team also discovered a new source of stem cells that contribute to making these blood vessel networks: nephron progenitor cells. Prior to this discovery, these cells were known only as precursors to nephrons, the kidney's filtering units.

NTU LKCMedicine Assistant Professor Foo Jia Nee, said, "We observed very robust and consistent development of blood vessel networks within our kidney organoids, which opens new doors to investigate the developmental origin of renal blood vessel networks, which is still not fully understood. Using this novel organoid platform, we unexpectedly discovered a new source of renal blood vessels that may improve our understanding of kidney development."

The mini kidneys may also be used to better understand the development of nephrons in the kidney. The number of nephrons at birth is inversely correlated with incidence of hypertension and kidney failure later in life. Being born with a high nephron number appears to provide some degree of protection against these conditions.

Asst Prof Xia said, "A thorough understanding of human embryonic kidney development, especially how environmental factors influence the process, may help us develop ways to promote a high birth nephron number for foetuses as they develop during pregnancy."

Stem cell scientist Dr Jonathan Loh Yuin-Han, senior principal investigator at the Institute of Molecular and Cell Biology at the Agency for Science, Technology and Research, who was not involved in the study, said, "The new vascularised kidney organoids created by Xia Yun and her team represent a transforming advance in the field. The organoids model anatomical and functional hallmarks of the real organ, so they provide deep insights into the kidney developmental processes. This could inspire future works on individualised bioengineered mini organs for application in personalised medicine and treatment of complex diseases."

Understanding the inner workings of a diseased kidney

To study the effects of genetic polycystic kidney disease, Asst Prof Xia and her team first took adult skin cells from a patient with the disease and genetically reprogrammed them into stem cells.

The creation of these induced pluripotent stem cells is necessary because the adult human body does not have any kidney stem cells. Two essential chemicals are then added to direct these induced pluripotent stem cells to grow into kidney organoids.

Four to five weeks later, these organoids developed the fluid-filled cysts that are characteristic of the disease, signaling that they were ready to be used to test the efficacy of potential drug candidates.

The same approach can be employed to generate kidney organoids from stem cells derived from healthy individuals. When these kidney organoids were implanted into mice, the blood vessel networks of the mini kidneys successfully connected with the host mice's circulatory systems and developed a more mature architecture capable of preliminary filtration and reabsorption.

Credit: 
Nanyang Technological University

Climate is changing faster than animal adaptation

image: Song Sparrow is one of the species considered in an international review of climate change studies.

Image: 
Jennifer Taggart, courtesy Cornell Lab of Ornithology

Berlin, Germany & Ithaca, N.Y.--An international team of scientists reviewed more than 10,000 published climate change studies and has reached a sobering conclusion: birds and other animals cannot adapt fast enough to keep pace with climate change, throwing species' survival into doubt. These results were recently published in the scientific journal Nature Communications.

"Our research focused on birds because complete data on other groups were scarce," says lead author Viktoriia Radchuk at the Leibniz Institute for Zoo and Wildlife Research in Berlin. She adds: "We demonstrate that in temperate regions, the rising temperatures are associated with a shift in the timing of biological events to earlier dates."

These biological events include hibernation, reproduction, and migration. Changes in body size, body mass, or other physical traits have also been associated with climate change but--as confirmed by this study--show no systematic pattern.

"Birds can respond to changing climate by adjusting the timing of egg-laying. They lay earlier in warmer springs and later in colder springs. This is adaptive because in doing that they have young in the nest when food is most abundant," explains co-author André Dhondt at the Cornell Lab of Ornithology. "The results of this analysis are especially worrisome given its scope. The rate of climate change has increased so much over the last 20 years that in general birds and other animals cannot respond fast enough, leading to a mismatch between the timing of nesting and when food needed to feed the babies is peaking."

The researchers extracted relevant information from the scientific literature to relate changes in climate over the years to possible changes in timing and physical traits. Next, they evaluated whether observed trait changes were associated with higher survival or an increased number of offspring.

Co-author Thomas Reed, senior lecturer at University College Cork, Ireland, explains, "Our results were obtained by comparing the observed response to climate change with the response expected if a population were able to adjust its traits to track the climate change perfectly."

The analysis included data on common and abundant species, which are known to cope with climate change relatively well. Even they cannot keep pace.

"This work underscores the importance of careful, long-term studies of bird populations with individually marked birds," says Dhondt. "My work on Great and Blue Tits in Belgium, together with multiple other studies, not only documents how birds adapt in response to climate change, but also shows how separate populations of the same bird species can adapt differently."

The scientists hope that their analysis and the assembled datasets will stimulate research on the resilience of bird and other animal populations in the face of global change and contribute to a better predictive framework to assist future conservation management actions.

Credit: 
Cornell University

Low levels of vitamin D in elementary school could spell trouble in adolescence

ANN ARBOR -- Vitamin D deficiency in middle childhood could result in aggressive behavior as well as anxious and depressive moods during adolescence, according to a new University of Michigan study of school children in Bogotá, Colombia.

Children with blood vitamin D levels suggestive of deficiency were almost twice as likely to develop externalizing behavior problems -- aggressive and rule-breaking behaviors -- as reported by their parents, compared with children who had higher levels of the vitamin.

Also, low levels of the protein that transports vitamin D in blood were related to more self-reported aggressive behavior and anxious/depressed symptoms. The associations were independent of child, parental and household characteristics.

"Children who have vitamin D deficiency during their elementary school years appear to have higher scores on tests that measure behavior problems when they reach adolescence," said Eduardo Villamor, professor of epidemiology at the U-M School of Public Health and senior author of the study appearing in the Journal of Nutrition.

Villamor said vitamin D deficiency has been associated with other mental health problems in adulthood, including depression and schizophrenia, and some studies have focused on the effect of vitamin D status during pregnancy and childhood. However, few studies have extended into adolescence, the stage when behavior problems may first appear and become serious conditions.

In 2006, Villamor's team recruited 3,202 children aged 5-12 years into a cohort study in Bogotá, Colombia, through a random selection from primary public schools. The investigators obtained information on the children's daily habits, maternal education level, weight and height, as well as the household's food insecurity and socioeconomic status. Researchers also took blood samples.

After about six years, when the children were 11-18 years old, the investigators conducted in-person follow-up interviews with a random group of one-third of the participants, assessing the children's behavior through questionnaires administered to the children themselves and their parents. The vitamin D analyses included 273 of those participants.

While the authors acknowledge the study's limitations, including a lack of baseline behavior measures, their results indicate the need for additional studies involving neurobehavioral outcomes in other populations where vitamin D deficiency may be a public health problem.

Credit: 
University of Michigan

The meat allergy: Researcher IDs biological changes triggered by tick bites

image: The new discovery by UVA's Loren Erickson, PhD, is an important step toward understanding the strange meat allergy spread by ticks -- and developing a treatment for it.

Image: 
UVA Health

A University of Virginia School of Medicine scientist has identified key immunological changes in people who abruptly develop an allergic reaction to mammalian meat, such as beef. His work also provides an important framework for other scientists to probe this strange, recently discovered allergy caused by tick bites.

The findings by UVA's Loren Erickson, PhD, and his team offer important insights into why otherwise healthy people can enjoy meat all their lives until a hot slab of ground beef or a festive Fourth of July hot dog suddenly becomes potentially life-threatening. Symptoms of the meat allergy can range from mild hives to nausea and vomiting to severe anaphylaxis, which can result in death.

"We don't know what it is about the tick bite that causes the meat allergy. And, in particular, we haven't really understood the source of immune cells that produce the antibodies that cause the allergic reactions," Erickson explained. "There's no way to prevent or cure this food allergy, so we need to first understand the underlying mechanism that triggers the allergy so we can devise a new therapy."

Understanding the Meat Allergy

People who develop the allergy in response to the bite of the Lone Star tick often have to give up eating mammalian meat, including beef and pork, entirely. Even food that does not appear to contain meat can contain meat-based ingredients that trigger the allergy. That means people living with the meat allergy must be hyper-vigilant. (For one person's experience with the meat allergy, visit UVA's Making of Medicine blog.)

The allergy was first discovered by UVA's Thomas Platts-Mills, MD, a renowned allergist who determined that people were suffering reactions to a sugar called alpha-gal found in mammalian meat. Exactly what is happening inside the body, though, has remained poorly understood. Erickson's work, along with that of others at UVA, is changing that.

Erickson's team in UVA's Department of Microbiology, Immunology and Cancer Biology has found that people with the meat allergy have a distinctive form of immune cells known as B cells, and they have them in great numbers. These white blood cells produce the antibodies that trigger the release of the chemicals that cause the allergic reaction to meat.

In addition, Erickson, a member of UVA's Carter Immunology Center, has developed a mouse model of the meat allergy so that scientists can study the mysterious allergy more effectively.

"This is the first clinically relevant model that I know of, so now we can go and ask a lot of these important questions," he said. "We can actually use this model to identify underlying causes of the meat allergy that may inform human studies. So it's sort of a back-and-forth of experiments that you can do in animal models that you can't do in humans. But you can identify potential mechanisms that could lead to new therapeutic strategies so that we can go back to human subjects and test some of those hypotheses."

Credit: 
University of Virginia Health System

A Stone Age boat building site has been discovered underwater

image: This is historian Dan Snow inspecting the site.

Image: 
Maritime Archaeological Trust

The Maritime Archaeological Trust has discovered a new 8,000-year-old structure next to what is believed to be the oldest boat building site in the world on the Isle of Wight.

Director of the Maritime Archaeological Trust, Garry Momber, said, "This new discovery is particularly important as the wooden platform is part of a site that doubles the amount of worked wood found in the UK from a period that lasted 5,500 years."

The site lies east of Yarmouth, and the new platform is the most intact wooden Middle Stone Age structure ever found in the UK. The site is now 11 meters below sea level; during the period of human activity there, it was dry land with lush vegetation. Importantly, this was a time before the North Sea was fully formed, when the Isle of Wight was still connected to mainland Europe.

The site was first discovered in 2005 and contains an arrangement of trimmed timbers that could be platforms, walkways or collapsed structures. However, these were difficult to interpret until the Maritime Archaeological Trust used state-of-the-art photogrammetry techniques to record the remains. During the late spring, the new structure was spotted eroding from within the drowned forest. The first task was to create a 3D digital model of the landscape so it could be experienced by non-divers. The structure was then excavated by the Maritime Archaeological Trust during the summer, revealing a cohesive platform consisting of split timbers, several layers thick, resting on horizontally laid round-wood foundations.

Garry continued, "The site contains a wealth of evidence for technological skills that were not thought to have been developed for a further couple of thousand years, such as advanced woodworking. This site shows the value of marine archaeology for understanding the development of civilisation.

Yet, being underwater, there are no regulations that can protect it. Therefore, it is down to our charity, with the help of our donors, to save it before it is lost forever."

The Maritime Archaeological Trust is working with the National Oceanography Centre (NOC) to record, study, reconstruct and display the collection of timbers. Many of the wooden artefacts are being stored in the British Ocean Sediment Core Research Facility (BOSCORF), operated by the National Oceanography Centre.

As with sediment cores, ancient wood degrades more quickly if it is not kept in a dark, wet and cold setting. While the timbers are kept cold, dark and wet, the aim is to remove salt from within the wood cells, allowing the timber to be analysed and recorded. This is important because archaeological information, such as cut marks or engravings, is most often found on the surface of the wood and is lost quickly when timber degrades. Once the timbers have been recorded and desalinated, the wood can be conserved for display.

Dr Suzanne Maclachlan, the curator at BOSCORF, said "It has been really exciting for us to assist the Trust's work with such unique and historically important artefacts. This is a great example of how the BOSCORF repository is able to support the delivery of a wide range of marine science."

When diving on the submerged landscape, Dan Snow, the history broadcaster and host of History Hit, one of the world's biggest history podcasts, commented that he was both awestruck by the incredible remains and shocked by the rate of erosion.

This material, coupled with evidence of advanced woodworking skills and finely crafted tools, suggests a European Neolithic (New Stone Age) influence. The problem is that it is all being lost. As the Solent evolves, sections of the ancient land surface are being eroded by up to half a metre per year, and the archaeological evidence is disappearing.

Research in 2019 was funded by the Scorpion Trust, the Butley Research Group, the Edward Fort Foundation and the Maritime Archaeological Trust. Work was conducted with the help of volunteers and many individuals who gave their time, and often money, to ensure the material was recovered successfully.

Credit: 
National Oceanography Centre, UK

Ammonia for fuel cells

image: Traditional fuel cell research involves hydrogen fuel cells, but UD researchers are engineering fuel cells that utilize ammonia, the molecule pictured above, instead.

Image: 
Photo illustration by Joy Smoker

Fuel cells are pollution-free power sources that convert chemical energy to electricity with high efficiency and zero emissions. Fuel cell cars, trucks, and buses would allow people to travel long distances with convenient refueling and less of a carbon footprint.

Researchers at the University of Delaware are working on technology to make fuel cells cheaper and more powerful so that fuel cell vehicles can be a viable option for all someday. Traditional fuel cell research involves hydrogen fuel cells, but the UD researchers are engineering fuel cells that utilize ammonia instead.

In a new analysis published in the journal Joule, a team of engineers at the Center for Catalytic Science and Technology found that among fuels produced from renewable energy, ammonia has the lowest cost per equivalent gallon of gasoline.

"As a nitrogen-based liquid fuel, ammonia is cheaper to store and distribute than hydrogen and avoids the carbon dioxide emissions of other liquid fuels, which are expensive to capture," said Brian Setzler, one of the lead authors and a postdoctoral associate at UD.

The challenges, however, are that ammonia does not work in a proton exchange membrane fuel cell, and that it is more difficult to oxidize than hydrogen, which causes ammonia fuel cells to produce less power than hydrogen fuel cells. The team solved the first problem by using hydroxide exchange membrane fuel cells, which have been studied for over a decade in the lab of Yushan Yan, a Distinguished Engineering Professor at UD. Assisted by a $2.5 million grant from the REFUEL program of the Advanced Research Projects Agency-Energy (ARPA-E) in the U.S. Department of Energy, the UD team engineered a fuel cell membrane that can operate at higher temperatures to speed up ammonia oxidation. They also identified catalysts that were not poisoned by ammonia.
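For readers who want the underlying electrochemistry, the half-reactions below are the textbook equations for a direct ammonia fuel cell operating across a hydroxide exchange membrane; they are standard alkaline-electrolyte chemistry, not equations reproduced from the Joule paper.

```latex
% Textbook half-reactions for a direct ammonia fuel cell in alkaline
% (hydroxide exchange membrane) conditions.
\begin{align*}
\text{Anode:}   \quad & 2\,\mathrm{NH_3} + 6\,\mathrm{OH^-} \;\rightarrow\; \mathrm{N_2} + 6\,\mathrm{H_2O} + 6\,e^- \\
\text{Cathode:} \quad & \tfrac{3}{2}\,\mathrm{O_2} + 3\,\mathrm{H_2O} + 6\,e^- \;\rightarrow\; 6\,\mathrm{OH^-} \\
\text{Overall:} \quad & 4\,\mathrm{NH_3} + 3\,\mathrm{O_2} \;\rightarrow\; 2\,\mathrm{N_2} + 6\,\mathrm{H_2O}
\end{align*}
```

The only products are nitrogen and water, which is why ammonia sidesteps the carbon dioxide emissions of carbon-based liquid fuels.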

"With these improvements, we have demonstrated a new direct ammonia fuel cell prototype with a peak power density of 135 milliwatts per square centimeter, which closes much of the performance gap compared to hydrogen," said research associate Yun Zhao, the lead author of the paper who has been working on direct ammonia fuel cells since 2016.

Credit: 
University of Delaware

Nicotine-free e-cigarettes can damage blood vessels

PHILADELPHIA - Smoking e-cigarettes, also called vaping, has been marketed as a safe alternative to tobacco cigarettes and is rising in popularity among non-smoking adolescents. However, a single e-cigarette can be harmful to the body's blood vessels -- even when the vapor is entirely nicotine-free -- according to a new study by researchers in the Perelman School of Medicine at the University of Pennsylvania. The results were published today in Radiology.

To study the short-term impacts of vaping, the researchers performed MRI exams on 31 healthy, non-smoking adults before and after they vaped a nicotine-free e-cigarette. A comparison of the pre- and post-vaping MRI data showed that a single episode of vaping resulted in reduced blood flow and impaired endothelial function in the large (femoral) artery that supplies blood to the thigh and leg. The endothelium, which lines the inside surface of blood vessels, is essential to proper blood circulation. Once the endothelium is damaged, arteries thicken, and blood flow to the heart and the brain can be cut off, resulting in heart attack or stroke.

"While e-cigarette liquid may be relatively harmless, the vaporization process can transform the molecules -- primarily propylene glycol and glycerol -- into toxic substances," said the study's principal investigator Felix W. Wehrli, PhD, a professor of Radiologic Science and Biophysics. "Beyond the harmful effects of nicotine, we've shown that vaping has a sudden, immediate effect on the body's vascular function, and could potentially lead to long-term harmful consequences."

E-cigarettes are battery-operated devices that convert liquid into aerosol, which is inhaled into the user's lungs. Typically, the liquid contains addictive nicotine, as well as flavors. More than 10 million adults in the United States use e-cigarettes, and vaping has become especially popular among teenagers. While there appears to be some consensus that vaping may be less harmful to health than tobacco cigarette smoking, the dangers of e-cigarettes remain unclear.

In this study, the researchers examined the impact of an e-cigarette that contained propylene glycol and glycerol with tobacco flavoring, but no nicotine, from which study participants took 16 three-second puffs. To evaluate vascular reactivity, the group constricted the vessels of the thigh with a cuff and then measured how quickly the blood flowed after its release. Using a multi-parametric MRI procedure, researchers scanned the femoral artery and vein in the leg before and after each vaping episode to see how vascular function changed.

The researchers then performed a statistical analysis to determine group differences in vascular function before and after vaping. They observed, on average, a 34 percent reduction in the femoral artery's dilation. E-cigarette exposure also led to a 17.5 percent reduction in peak blood flow, a 20 percent reduction in venous oxygen, and a 25.8 percent reduction in blood acceleration after the cuff release -- the speed at which the blood returned to normal flow after being constricted. These findings suggest that vaping can cause significant changes to the inner lining of blood vessels, said study lead author Alessandra Caporale, PhD, a post-doctoral researcher in the Laboratory for Structural, Physiologic, and Functional Imaging at Penn.

"E-cigarettes are advertised as not harmful, and many e-cigarette users are convinced that they are just inhaling water vapor," Caporale said. "But the solvents, flavorings and additives in the liquid base, after vaporization, expose users to multiple insults to the respiratory tract and blood vessels."

Wehrli noted that they observed these striking changes after the participants (none of whom had ever smoked) used an e-cigarette a single time. More research is needed to address the potential long-term adverse effects of vaping on vascular health, but he predicts that e-cigarettes are potentially much more hazardous than previously assumed. Earlier this year, for instance, his research group found that acute exposure to e-cigarettes causes vascular inflammation.

"I would warn young people to not even get started using e-cigarettes. The common belief is that the nicotine is what is toxic, but we have found that dangers exist, independent of nicotine," Wehrli said. "Clearly if there is an effect after a single use of an e-cigarette, then you can imagine what kind of permanent damage could be caused after vaping regularly over years."

Credit: 
University of Pennsylvania School of Medicine

Longline fishing hampering shark migration

image: Dr Bonnie Holmes tagging and monitoring sharks in the open ocean.

Image: 
The University of Queensland

Longline fisheries around the world are significantly affecting migrating shark populations, according to an international study featuring a University of Queensland researcher.

The study found that approximately a quarter of the studied sharks' migratory paths fell under the footprint of longline fisheries, directly killing sharks and affecting their food supply.

Dr Bonnie Holmes, from UQ's School of Biological Sciences, wanted to find out why shark numbers have been declining significantly over the past 20 years.

"We're losing these incredible creatures, and we know so little about shark movements and what drives them," she said.

"I joined an international research effort, using new technologies - like satellite tracking and big data analysis - to help answer some critical questions."

The team, comprising over 150 scientists from 25 countries, collated the migratory tracks of more than 1600 sharks from a range of species, examining shark movements at a global scale.

"This allowed us, for the first time, to see how different species overlap in habitats - both in time and space - and understand how these species are interacting with global fishing fleets," Dr Holmes said.

"The results indicate that sharks are exposed to fishing pressure from longline fisheries around 25% of the time.

"This doesn't even include localised pressures from near-shore operations, like game fishing, shark control or subsistence fishing.

"These sharks are amazing animals that can travel vast distances, but it's clear that they have limited refuge from both high-seas and coastal fishing operations.

"Of particular concern was the fact that tiger sharks in the Oceania region are at a moderate-to-high risk from longline fleets for at least six months of the year."

Despite the harrowing data, Dr Holmes is hopeful that the international community will be able to protect and foster shark species.

"This research project demonstrates the power of collaboration at an international scale," she said.

"Our international team now has more projects in mind for our collective data, which will hopefully result in some significant action, helping prioritise management improvements across jurisdictions.

"This could help protect many of these migratory species, particularly those that are currently threatened.

"It's important that, as scientists, we continue to work outside of our own 'silos', and understand that collaboration is key to achieving greater success.

"This is how we'll create real action on issues like sustainable fishing, and further understand how contemporary, complex issues like climate change are impacting the wildlife around us.

"It will take time, but governments around the world need to work together to save our most vulnerable shark species."

Credit: 
University of Queensland

Facial recognition technique could improve hail forecasts

image: The shape of a severe storm, such as this one, is an important factor in whether the storm produces hail and how large the hailstones are, but current hail-prediction techniques are typically not able to take the storm's entire structure into account. NCAR scientists are experimenting with a new machine-learning technique that can process images to weigh the impact of storm shape and potentially improve hail forecasts.

Image: 
©UCAR. Image: Carlye Calvin

The same artificial intelligence technique typically used in facial recognition systems could help improve prediction of hailstorms and their severity, according to a new study from the National Center for Atmospheric Research (NCAR).

Instead of zeroing in on the features of an individual face, scientists trained a deep learning model called a convolutional neural network to recognize features of individual storms that affect the formation of hail and how large the hailstones will be, both of which are notoriously difficult to predict.

The promising results, published in the American Meteorological Society's Monthly Weather Review, highlight the importance of taking into account a storm's entire structure, something that's been challenging to do with existing hail-forecasting techniques.

"We know that the structure of a storm affects whether the storm can produce hail," said NCAR scientist David John Gagne, who led the research team. "A supercell is more likely to produce hail than a squall line, for example. But most hail forecasting methods just look at a small slice of the storm and can't distinguish the broader form and structure."

The research was supported by the National Science Foundation, which is NCAR's sponsor.

"Hail - particularly large hail - can have significant economic impacts on agriculture and property," said Nick Anderson, an NSF program officer. "Using these deep learning tools in unique ways will provide additional insight into the conditions that favor large hail, improving model predictions. This is a creative, and very useful, merger of scientific disciplines."

The shape of storms

Whether or not a storm produces hail hinges on myriad meteorological factors. The air needs to be humid close to the land surface, but dry higher up. The freezing level within the cloud needs to be relatively low to the ground. Strong updrafts that keep the hail aloft long enough to grow larger are essential. Changes in wind direction and speed at different heights within the storm also seem to play a role.

But even when all these criteria are met, the size of the hailstones produced can vary remarkably, depending on the path the hailstones travel through the storm and the conditions along that path. That's where storm structure comes into play.

"The shape of the storm is really important," Gagne said. "In the past we have tended to focus on single points in a storm or vertical profiles, but the horizontal structure is also really important."

Current computer models are limited in what they can look at because of the mathematical complexity it takes to represent the physical properties of an entire storm. Machine learning offers a possible solution because it bypasses the need for a model that actually solves all the complicated storm physics. Instead, the machine learning neural network is able to ingest large amounts of data, search for patterns, and teach itself which storm features are crucial for accurately predicting hail.

For the new study, Gagne turned to a type of machine learning model designed to analyze visual images. He trained the model using images of simulated storms, along with information about temperature, pressure, wind speed, and direction as inputs and simulations of hail resulting from those conditions as outputs. The weather simulations were created using the NCAR-based Weather Research and Forecasting model (WRF).
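As a rough illustration of this kind of setup -- a minimal sketch with invented inputs (the grid size, channel choices, and variable names are ours), not the study's actual code -- a convolutional network that maps gridded storm fields to a hail probability can be written in a few lines of Keras:

```python
# Minimal sketch: a CNN that ingests simulated storm fields stacked as
# image channels and predicts whether the storm produces severe hail.
# All data here are random placeholders standing in for WRF output.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical training set: 1000 storms on a 32x32 grid with 4 channels
# (e.g. temperature, pressure, u-wind, v-wind).
storm_fields = np.random.rand(1000, 32, 32, 4).astype("float32")
hail_labels = np.random.randint(0, 2, size=(1000, 1))  # 1 = severe hail

model = keras.Sequential([
    layers.Input(shape=(32, 32, 4)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),   # coarsen while keeping spatial structure
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of severe hail
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(storm_fields, hail_labels, epochs=5, batch_size=32)
```

Because the convolutional layers see the whole two-dimensional field at once, the network can key off a storm's shape rather than a single point or vertical profile.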

The machine learning model then figured out which features of the storm are correlated with whether or not it hails and how big the hailstones are. After the model was trained and then demonstrated that it could make successful predictions, Gagne took a look to see which aspects of the storm the model's neural network thought were the most important. He used a technique that essentially ran the model backwards to pinpoint the combination of storm characteristics that would need to come together to give the highest probability of severe hail.

In general, the model confirmed those storm features that have previously been linked to hail, Gagne said. For example, storms that have lower-than-average pressure near the surface and higher-than-average pressure near the storm top (a combination that creates strong updrafts) are more likely to produce severe hail. So too are storms with winds blowing from the southeast near the surface and from the west at the top. Storms with a more circular shape are also more likely to produce hail.

Building on random forests, testing with actual storms

This research builds on Gagne's previous work using a different kind of machine learning model - known as a random forest - to improve hail prediction. Instead of analyzing images, random forest models ask a series of questions, much like a flowchart, which are designed to determine the probability of hail. These questions might include whether the dew point, temperatures, or winds are above or below a certain threshold. Each "tree" in the model asks slight variants on the questions to come to an independent answer. Those answers are then averaged over the entire "forest," giving a prediction that's more reliable than any individual tree.
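A toy version of that idea -- a sketch with made-up data, not the 2017 model itself -- looks like this in scikit-learn, where each tree learns its own threshold questions and the forest averages their votes:

```python
# Minimal sketch of a random forest for hail prediction. The feature
# columns and the rule generating the labels are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Columns might stand for dew point, temperature, and wind speed.
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)  # toy hail/no-hail label

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)
# Averaging over all 100 trees yields a probability, not a single vote.
print(forest.predict_proba(X[:3]))
```

The key difference from the convolutional approach is that each tree sees only tabular storm variables, not the storm's two-dimensional structure.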

For that research, published in 2017, Gagne used actual storm observations for the inputs and radar-estimated hail sizes for the outputs to train the model. He found that the model could improve hail prediction by as much as 10%. The machine learning model has now been run operationally during the last several springs to give on-the-ground forecasters access to more information when making hail predictions. Gagne is in the process of verifying how the model did over those few seasons.

The next step for the newer machine learning model is to also begin testing it using storm observations and radar-estimated hail, with the goal of transitioning this model into operational use as well. Gagne is collaborating with researchers at the University of Oklahoma on this project.

"I think this new method has a lot of promise to help forecasters better predict a weather phenomenon capable of causing severe damage," Gagne said. "We are excited to continue testing and refining the model with observations of real storms."

Credit: 
National Center for Atmospheric Research/University Corporation for Atmospheric Research

Police less proactive after negative public scrutiny, study says

AUSTIN, Texas -- Public safety officers know that their profession could draw them into the line of fire at any moment, as it did recently for six officers wounded in a shooting standoff in Philadelphia.

Yet, in an age when cellphone videos of police misconduct can go viral, the new social phenomenon of "cop shaming" is causing performance problems in police departments nationwide.

According to new research from the McCombs School of Business at The University of Texas at Austin, published in Organizational Behavior and Human Decision Processes, public safety officers' proactivity declines when they perceive negative public scrutiny, even if they are deeply motivated to help people.

The study, "'I Want to Serve but the Public Does Not Understand': Prosocial Motivation, Image Discrepancies, and Proactivity in Public Safety," by McCombs Assistant Professor of Management Shefali V. Patil and R. David Lebel from the University of Pittsburgh, found that officers are less likely to proactively build relationships with community members and help solve their problems if they feel that the public does not understand the difficulties of their jobs.

"In the vast majority of jobs, it is really difficult for other people outside to understand your job, but people don't realize how much this misunderstanding can actually influence the behavior of police officers," Patil said.

The researchers asked 183 police officers across six agencies and 238 firefighters across eight stations in the southern United States about whether they believed the public understood the difficulties of their jobs. The researchers also surveyed the officers' supervisors about their proactivity.

The police officers and firefighters who said the public did not understand their jobs were significantly less likely to be rated as proactive by their supervisors, even when their reason for doing their jobs was to help others.

"When proactive officers see something that's happening in a local neighborhood, they get out of the patrol car and go to help somebody even though they don't need to and nobody's actually watching them," Patil said. "But being less proactive would mean taking a less active role while on a shift and basically only doing what your boss tells you."

Patil said that figuring out how to improve public perception is key to healing this rift. Officers who feel that the public respects and appreciates the difficulties and dangers of their profession are much more motivated to interact in positive ways with the people they serve.

"Our research is trying to show how important it is for us to take the next step to try to figure out how we can actually change the public image of law enforcement officers," she said. "It's also helping police officers believe that the public truly cares, and it's just not lip service."

Credit: 
University of Texas at Austin

Towards an 'orrery' for quantum gauge theory

video: The atom shown in blue picks up a phase (arrow) only if a second, red particle is present.

Image: 
Mika Blackmore-Esslinger

The interaction between fields and matter is a recurring theme throughout physics. Classical cases such as the trajectory of one celestial body moving in the gravitational field of others or the motion of an electron in a magnetic field are extremely well understood, and predictions can be made with astonishing accuracy. However, when the quantum character of the particles and fields involved has to be factored in explicitly, the situation quickly becomes rather complex. And if, in addition, the field depends on the state of the particles evolving in it, then calculations can shift out of reach even for today's most powerful computers.

The limitations in exploring regimes of dynamical interaction between fields and matter hinder progress in areas ranging from condensed-matter to high-energy physics. But there is an alternative approach: instead of calculating the dynamics, simulate it. Famously, mechanical models of planetary systems known as orreries were built long before digital computers were developed. In the quantum realm, so-called 'quantum simulators' have been developed in recent years, in which the unknown dynamics of one quantum system is emulated using another, more controllable one. As they report today in the journal Nature Physics, Frederik Görg and colleagues in the group of Tilman Esslinger in the Department of Physics of ETH Zurich have now made substantial progress towards quantum simulators that might be employed to tackle general classes of problems where the dynamics of matter and fields are coupled.

Hard-to-gauge outcomes

Görg et al. looked not directly at gravitational or electromagnetic fields, but at so-called gauge fields. These are auxiliary fields that are typically not directly observable in experiments, but all the more powerful as a consistent framework for the mathematical treatment of the interactions between particles and fields. As a central concept in physics, gauge fields offer a unique route to understanding forces -- the electromagnetic force as well as those holding together subatomic particles. Consequently, there is substantial interest in quantum simulations of gauge fields, in the hope that they provide fresh insight into situations that currently cannot be explored in calculations or computer simulations.

One of the currently leading platforms for simulating complex quantum systems is based on atoms that are cooled to temperatures close to absolute zero and trapped in lattice structures created by laser light. A major advance in recent years has been the realization that the atoms can be used to mimic the behaviour of electrons in a magnetic field, even though the atoms have no electric charge. The key to achieving this is the use of external control parameters to steer the quantum-tunnelling process by which the atoms move between adjacent sites of the optical lattice. By suitably tailoring the complex phase that the quantum particles pick up in a tunnelling event -- known as the 'Peierls phase' -- the neutral atoms can be made to behave precisely like charged particles moving in a magnetic field. The engineered dynamics in these synthetic gauge fields can be compared to that of classical orreries, where the model planets move as if they were subjected to a substantial gravitational pull from a central body, emulating the behaviour of real planets.
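For the mathematically inclined, the trick can be summarized by the standard Peierls substitution -- a textbook result, not notation taken from this paper. The hopping amplitude between two lattice sites acquires a complex phase set by the line integral of the vector potential along the bond, so a neutral atom endowed with an engineered phase moves exactly as a particle of charge q would in the corresponding field:

```latex
% Peierls substitution (textbook form): hopping from site j to site i
% picks up a phase given by the vector potential A along the bond.
t_{ij} \;\longrightarrow\; t_{ij}\, e^{i\phi_{ij}},
\qquad
\phi_{ij} = \frac{q}{\hbar} \int_{\mathbf{r}_j}^{\mathbf{r}_i} \mathbf{A} \cdot \mathrm{d}\mathbf{l}
```

Engineering the phases directly, rather than applying a real field, is what allows neutral atoms to emulate charged particles.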

Shaking up the field

The Esslinger group, and others, have used the ultracold-atom platform before to create artificial gauge fields resulting from complex tunnelling phases. But so far, these engineered fields were intrinsically classical and did not include backaction from the atoms to the gauge field. Hence the excitement as Görg and his co-workers now present a flexible way to achieve coupling between atoms and gauge fields. They proposed -- and implemented -- a procedure to render the Peierls phase dependent on how the atoms are distributed in the lattice. When the distribution changes as a consequence of the interaction with the gauge field, the gauge field itself is altered. This is as if the orrery would speed up or slow down depending on the planetary constellation (which is not needed to model simple celestial mechanics, as the interaction between planets is neglected). In the case of a quantum simulator for quantum gauge fields, however, the interaction between the particles is an essential ingredient.

In the experiments now reported, the ETH physicists created an optical lattice that consists of 'dimers', each made from two neighbouring sites in which fermionic atoms can reside either individually or in pairs (see the figure). The tunnelling between the sites of the dimer is controlled by shaking the lattice at two different frequencies with a piezoelectric actuator. The frequencies and phases of the modulation are chosen such that the Peierls phase between sites depends on whether an atom shares its dimer site with another atom of the opposite spin or not (see the animation).
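Schematically -- and this is our hedged reading of the scheme, not the paper's own notation -- the engineered hopping within a dimer then takes a density-dependent form, in which the Peierls phase is switched on only when an atom of the opposite spin occupies the dimer:

```latex
% Schematic density-dependent hopping within one dimer: an atom of spin
% sigma hops between the left (L) and right (R) sites, and the Peierls
% phase phi acts only when the opposite-spin occupation number n is 1.
\hat{H}_{\mathrm{hop}}
  = -t \sum_{\sigma}
    \left( e^{\, i\varphi\, \hat{n}_{\bar{\sigma}}}\,
           \hat{c}^{\dagger}_{L\sigma} \hat{c}_{R\sigma}
           + \mathrm{h.c.} \right)
```

Because the phase now depends on the atomic distribution, any rearrangement of the atoms feeds back into the gauge field, which is precisely the coupling described above.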

Generality matters

The step to engineering gauge fields that are coupled to ultracold matter is an important one. Ultracold atoms in optical lattices are already established as a versatile platform for quantum simulations, including emulating complex electronic phenomena occurring in solid-state materials. The current work of Görg et al., together with related recent advances by other groups, promises that in the not-too-distant future also more complex quantum gauge fields can be tackled, in particular those that appear in high-energy physics and challenge current classical simulation approaches.

A distinct strength of the approach by Görg et al. is that it can be used for engineering a variety of different quantized gauge fields, beyond the specific scenario they explored experimentally in the just-published paper, as they show based on theoretical considerations. And as the work also demonstrates exquisite experimental control over a highly tunable atomic many-body system, there is now the clear and intriguing prospect of a modern-day orrery that provides insight not into movements on the sky, but deep into the quantum world.

Credit: 
ETH Zurich Department of Physics

Stanford researchers enhance neuron recovery in rats after blood flow stalls

Researchers at the Stanford University School of Medicine report in a new study that they found a way to help rats recover neurons in the brain's center of learning and memory. They accomplished the feat by blocking a molecule that controls how efficiently genetic instructions are used to build proteins.

If the approach described in the study can be applied to humans, it may one day help patients who've suffered a stroke, cardiac arrest or major blood loss and are thus at higher risk of memory loss.

In the study, to be published online Aug. 19 in eNeuro, researchers induced extremely low blood pressure -- as would happen when the heart stops beating -- in rats. These rats lost neurons in a specific region of the hippocampus critical to learning and memory, but the researchers improved the animals' recovery of the cells by injecting a molecule that blocks a microRNA: a short molecule that tunes gene expression by preventing the conversion of genetic blueprints into proteins. Interestingly, the scientists found that the microRNA blockade may cause astrocytes -- cells that support neurons and make up roughly half of the cells in the brain -- to turn into neurons.

The findings demonstrate that neurons, with some assistance from their astrocyte neighbors, recover in a region of the hippocampus not known to have a local stem cell population that can replenish lost neurons. Enhancing this recovery in humans could help those who've suffered a temporary loss of blood flow to the brain.

"There's currently no treatment to improve brain function in patients with heavy blood loss, cardiac arrest or stroke," said Creed Stary, MD, PhD, assistant professor of anesthesiology, perioperative and pain medicine. "This is the first study to show that the natural process of post-injury hippocampal recovery can be substantially improved with a pharmaceutical microRNA-based therapy."

Stary is the study's senior author. Lead authorship is shared by postdoctoral scholar Brian Griffiths, PhD, and senior research scientist Yi-Bing Ouyang, PhD.

Under (low blood) pressure

When fresh blood stops flowing through the brain, cellular waste piles up, and neurons starved of oxygen and glucose eventually die. This can occur when a person has a stroke, loses a significant amount of blood or suffers a cardiac arrest.

Amid the damage, levels of a microRNA known as miR-181a soar. In an earlier study, the researchers blocked miR-181a with a molecule designed to stick to and inactivate the microRNA. They found that blocking miR-181a before reducing the flow of blood to the brains of rats stopped neurons from dying.

"If you want to find a therapy for an injury, one approach is to look for disruptions that occur in cells and try to reverse them. The first step was asking, 'Is reversing the increase in this specific microRNA protective?'" Stary said.

But while the prior findings were encouraging, they didn't reflect how such an intervention would probably be used in a clinical setting; it's more likely that a patient would receive a microRNA blockade after an injury.

To test whether miR-181a blockade helped rats recover hippocampal neurons, the researchers decreased the rats' blood pressure dramatically by siphoning off much of their blood and reinfusing it 10 minutes later. Similar drops in blood pressure can occur in people during cardiac arrest, after a major loss of blood or during certain surgeries.

The blood pressure drop caused nearly 95% of neurons in a region of the hippocampus known as CA1 to die off. By around two months after the procedure, the neuron population had bounced back to nearly 50% of normal levels.

The researchers then tested the effects of a microRNA blockade by injecting the blocking molecule directly into the hippocampus of rats either two hours or seven days after the animals experienced a drop in blood pressure. These rats had significantly higher neuronal recovery than those injected with a control molecule that didn't target any known microRNAs. In earlier studies, the researchers showed they could deliver the blockade intravenously, making it well-suited for clinical use.

Solving a puzzle

But the fact that there was any recovery was puzzling. The hippocampus is one of the few brain regions that harbor neural stem cells, which can form new neurons in adults; the CA1 region the researchers were studying, however, has no such population.

"If you don't have new neural stem cells and you don't have any evidence of cell division, then how are CA1 neurons being repopulated?" Stary said.

The researchers had one important clue: When CA1 neurons were at their nadir, specialized neuronal support cells known as astrocytes moved into the damaged region. Typically, astrocytes sit above and below the neuron-containing layer of the CA1 and support the metabolism and connectivity of their neuronal neighbors.

To figure out what the astrocytes were up to, the scientists tracked them with fluorescent molecules that labeled astrocytes green and neurons red. Under the microscope, they found cells that glowed yellow, meaning the cells expressed both the green and red markers. These yellow cells were present at higher levels in rats in which miR-181a had been blocked.

The observation strongly implied that some of the astrocytes were beginning to turn into neurons. While the researchers are planning further experiments to confirm the finding, astrocytes have been shown to turn into neurons in other animal models of brain injury. Whether this phenomenon occurs in humans after loss of blood flow to the brain has not yet been fully established, but if verified, it could open a new realm of astrocyte-based gene therapies for survivors of cardiac arrest and stroke.

The researchers next plan to verify whether blocking miR-181a helps the rats recover memory, learning and other cognitive abilities linked to the hippocampus. If so, the approach is one step closer to aiding recovery from brain injuries in which blood flow gets cut off.

"This paper shows that you can effectively augment the normal recovery the brain tries to do on its own by blocking this specific microRNA across injury models and across species, something of a holy grail for a gene therapy," Stary said. "It points toward blocking the microRNA being a protective agent itself, but also provides insight to identify new therapeutic gene targets, opening the possibility for combinatorial or adjuvant pharmaceutical therapies."

Credit: 
Stanford Medicine

New study offers roadmap for detecting changes in the ocean due to climate change

image: Researchers led by Princeton University examined a range of possible climate-related impacts on the ocean to predict when these impacts are likely to occur. Some impacts - such as sea temperature rise and acidification - have already begun, while others, like changes to microbial productivity, which serves as the basis of the marine food web, will happen over the next century. Images from NASA EarthData show ocean color, an indicator of microbial productivity.

Image: 
NASA

Sea surface temperature and ocean acidity have climbed during the last three decades to levels beyond what natural variation alone can explain, a new study led by Princeton researchers finds. Meanwhile, other impacts of climate change, such as changes in the activity of ocean microbes that regulate the Earth's carbon and oxygen cycles, will take several more decades to a century to appear. The report was published online Aug. 19 in the journal Nature Climate Change.

The study looked at physical and chemical changes to the ocean that are associated with rising atmospheric carbon dioxide due to human activities. "We sought to address a key scientific question: When, why and how will important changes become detectable above the normal variations that we expect to see in the global ocean?" said Sarah Schlunegger, a postdoctoral research associate at Princeton University's Program in Atmospheric and Oceanic Sciences (AOS).

The study confirms that outcomes tied directly to the escalation of atmospheric carbon dioxide have already emerged in the existing 30-year observational record. These include sea surface warming, acidification and increases in the rate at which the ocean removes carbon dioxide from the atmosphere.

In contrast, processes tied indirectly to the ramp-up of atmospheric carbon dioxide through the gradual modification of climate and ocean circulation will take longer to emerge, from three decades to more than a century. These include changes in upper-ocean mixing, nutrient supply, and the cycling of carbon through marine plants and animals.

"What is new about this study is that it gives a specific timeframe for when ocean changes will occur," said Jorge Sarmiento, the George J. Magee Professor of Geoscience and Geological Engineering, Emeritus. "Some changes will take a long time while others are already detectable."

The ocean provides a climate service to the planet by absorbing excess heat and carbon from the atmosphere, thereby slowing the pace of rising global temperatures, Schlunegger said. This service, however, comes with a penalty -- namely ocean acidification and ocean warming, which alter how carbon cycles through the ocean and affect marine ecosystems.

Acidification and ocean warming can harm the microbial marine organisms that form the base of the food web sustaining fisheries and coral reefs; these same microbes produce oxygen and contribute to the drawdown of atmospheric carbon dioxide.

The study aimed to separate ocean changes linked to human-made climate change from those due to natural variability. Natural fluctuations in the climate can disguise changes in the ocean, so the researchers looked at when the changes would become large enough to stand out clearly above that background variability.

Climate research is often divided into two categories: observations, in which scientists analyze measurements of the real Earth, and modeling, in which they use simulations to predict what changes are to come. This study leverages the predictions made by climate models to inform observational efforts of what changes are likely, and where and when to look for them, Schlunegger said.

The researchers conducted modeling that simulates potential future climate states resulting from a combination of human-made climate change and random chance. These experiments were performed with the Earth System Model, a climate model that includes an interactive carbon cycle, so that changes in the climate and the carbon cycle can be considered in tandem.
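To make the detection logic concrete, here is a toy sketch in Python (with assumed variable names and an assumed two-sigma threshold; it is not the study's code or model output) of how one can ask when a forced signal first emerges from the spread generated by internal variability across an ensemble of simulations:

    import numpy as np

    def time_of_emergence(ensemble, years, n_sigma=2.0):
        # ensemble: array of shape (n_members, n_years), e.g. simulated
        # sea surface temperature from model runs that differ only in
        # their random internal variability.
        # Returns the first year in which the ensemble-mean change
        # exceeds n_sigma times the member-to-member spread, or None.
        forced = ensemble.mean(axis=0)
        forced = forced - forced[0]              # change relative to start
        noise = ensemble.std(axis=0)             # internal-variability spread
        emerged = np.abs(forced) > n_sigma * noise
        return years[np.argmax(emerged)] if emerged.any() else None

    # Toy usage: a 0.02-degree-per-year warming trend buried in noise.
    rng = np.random.default_rng(0)
    years = np.arange(1990, 2101)
    trend = 0.02 * (years - years[0])
    ensemble = trend + rng.normal(0.0, 0.3, size=(30, years.size))
    print(time_of_emergence(ensemble, years))    # emerges around the 2020s

A stronger forced trend or weaker natural variability pulls the emergence year earlier, which is the intuition behind why warming and acidification are already detectable while subtler biological changes are not.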

Use of the Earth System Model was facilitated by John Dunne, who leads ocean carbon modeling activities at the National Oceanic and Atmospheric Administration (NOAA)'s Geophysical Fluid Dynamics Laboratory in Princeton. The Princeton team included Richard Slater, senior earth system modeler in AOS; Keith Rodgers, an AOS research oceanographer now at Pusan National University in South Korea; and Sarmiento. The team also included Thomas Frölicher, a professor at the University of Bern and a former postdoctoral fellow at Princeton, and Masao Ishii of the Japan Meteorological Agency.

The finding of a 30- to 100-year delay in the emergence of effects suggests that ocean observation programs should be maintained for many decades into the future to effectively monitor the changes occurring in the ocean. The study also indicates that the detectability of some changes in the ocean would benefit from improvements to the current observational sampling strategy. These include looking deeper into the ocean for changes in phytoplankton, and capturing changes in both summer and winter, rather than just the annual mean, for the ocean-atmosphere exchange of carbon dioxide.

"Our results indicate that many types of observational efforts are critical for our understanding of our changing planet and our ability to detect change," Schlunegger said. These include time-series or permanent locations of continuous measurement, as well as regional sampling programs and global remote sensing platforms.

Credit: 
Princeton University