Tech

Oil platforms' fishy future

Biologists and fishermen alike know that offshore oil platforms function as de facto habitats for fish. The structures climb hundreds of feet into the water column, creating a prefab reef out in open water. But many of these platforms will soon be decommissioned, and government agencies are considering the potential ecological effects in deciding how this will be done.

UC Santa Barbara postdoctoral scholar Erin Meyer-Gutbrod and her colleagues have focused their research on predicting how different decommissioning scenarios will affect the productivity of the surrounding waters. They found that completely removing a platform could reduce fish biomass at the sites by 95% on average. Meanwhile, removing just the top of the rig could keep losses to around 10%. Their forecast appears in the journal Ecological Applications.

"The key result of the paper is that the biomass and production on the platforms are much higher than they would be if the structure were removed and the area reverted to soft bottom," Meyer-Gutbrod said.

The state of California is currently weighing several possibilities for decommissioning 27 oil platforms off the coast. The three main options at each site: leave the platform in place, remove all of it, or remove the top part of it. Each possibility has its own economic and ecological effects.

The research team set out to study the size and composition of fish communities at 24 platforms and predict how they might change under the three decommissioning scenarios. They used visual survey and bottom-trawl data on the biomass and composition of fishes living within the platforms' underwater structure, or jacket, and the nearby soft bottom. They divided each platform vertically by habitat type, starting with the mound of shells that accumulates on the seafloor below and rising up the jacket based on the position of all the major horizontal beams.

With data in hand, Meyer-Gutbrod used mathematical models to predict how each of the decommissioning scenarios would affect biological productivity. For partial removal, she assumed that all structures within 26 meters of the surface would be stripped, as this would eliminate the need for a lighted buoy to mark the location as per U.S. Coast Guard guidelines.

The researchers found that completely removing the platform would result in an average loss of 96% of the fish biomass across all 24 sites surveyed. Meanwhile, removing just the top 26 meters would result in a loss of only 10% of the fish biomass. Meyer-Gutbrod pointed out that this varied considerably among locations, since the jackets sit at different depths and host different fish communities. The model forecast no loss from partial removal at five of the platforms, and a loss of up to 44% at one, Platform Gina, which sits in only 29 meters of water.

According to the researchers' models, leaving the underwater structure of all 24 sites in place would support slightly more than 29,000 kg of fish biomass. With the top 26 meters removed, these sites could support just shy of 28,000 kg. And if all 24 platforms were completely removed, the reestablished soft bottom habitats would support about 500 kg of fish biomass.
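
As a quick arithmetic check of those scenario comparisons, the sketch below (Python, using the rounded aggregate figures quoted above) computes the percent of total fish biomass lost under each decommissioning option. Note that these aggregate percentages differ from the per-site averages reported earlier (96% and 10%), most likely because the per-site averages weight each platform equally regardless of how much biomass it supports.

```python
# Percent loss of aggregate fish biomass under each decommissioning scenario,
# using the rounded totals quoted in the article (all 24 platforms combined).
intact_kg = 29_000    # all underwater structure left in place
partial_kg = 28_000   # top 26 meters removed
removed_kg = 500      # complete removal; soft-bottom community only

def percent_loss(baseline_kg: float, scenario_kg: float) -> float:
    """Percent of baseline fish biomass lost under a given scenario."""
    return 100.0 * (baseline_kg - scenario_kg) / baseline_kg

print(f"Complete removal: {percent_loss(intact_kg, removed_kg):.1f}% of aggregate biomass lost")
print(f"Partial removal:  {percent_loss(intact_kg, partial_kg):.1f}% of aggregate biomass lost")
```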

"This result was not particularly surprising," Meyer-Gutbrod said, "however, it was very important to demonstrate it in a rigorous way."

The analysis revealed that fish densities were highest near the base of the jacket. The team also found that the community of fish species observed on soft bottom habitats was very different from the species congregating around platforms. This was among the first studies to specifically consider the new community that would reestablish on the soft seafloor if all of a platform's structure were to be removed.

"In short, partial removal does not result in much loss of fish biomass or production since most of the structure sits below 26 meters of water, and fish densities tend to be higher at the platform base and shell mound than in the midwater," Meyer-Gutbrod said.

The results are a conservative estimate of the impact the different scenarios could have on the wildlife in the area, according to Meyer-Gutbrod. "Our models only account for fish found under the platform or 'inside of the structure,'" she said. However, the halo of marine life extends beyond the confines of a platform's submerged jacket, well into the surrounding waters. This implies that the removal of more structure would have a larger overall effect than reported in the study.

As California weighs how to decommission the oil platforms that sit off its coast, studies like this will be critical to informing those decisions. "The people living near the Santa Barbara Channel are highly invested in the marine ecosystems here, and there is a wide range of perspectives and interest in these habitats," Meyer-Gutbrod said. "Our goal was to provide some predictions of what these sites might look like to help guide these impending decisions."

Research Biologist Bob Miller, who also was involved in the study, said, "Ultimately the decision on what to do with decommissioned platforms will be a value judgement. Our study gives some objective information that will hopefully help the stakeholders come to the best decision for people and the marine environment."

Credit: 
University of California - Santa Barbara

Climate change could dramatically reduce future US snowstorms

image: A new study led by Northern Illinois University scientists suggests American winters late this century could experience significant decreases in the frequency, intensity and size of snowstorms. Under an unabated greenhouse gas emissions scenario, the study projects 28% fewer snowstorms on average per year over central and eastern portions of North America by the century's last decade.

Image: 
Northern Illinois University

DeKalb, Ill. -- A new study led by Northern Illinois University scientists suggests American winters late this century could experience significant decreases in the frequency, intensity and size of snowstorms.

Under an unabated greenhouse gas emissions scenario, the study projects 28% fewer snowstorms on average per year over central and eastern portions of North America by the century's last decade, with a one-third reduction in the amount of snow or frozen precipitation and a 38% loss in average snowstorm size.

"If we do little to mitigate climate change, the winter season will lose much of its punch in the future," said Walker Ashley, an NIU professor of meteorology and lead author of the study, published May 25 in Nature Climate Change.

"The snow season will start later and end earlier," Ashley said. "Generally, what we consider an abnormally mild winter now, in terms of the number and intensity of snowstorms, will be the harshest of winters late this century. There will be fewer snowstorms, less overall precipitation that falls as snow and almost a complete removal of snow events in the southern tier of the United States."

Ashley and NIU Meteorology Professor Victor Gensini, along with alumnus Alex Haberlie of Louisiana State University, used a supercomputing data set created by researchers at the National Center for Atmospheric Research to study how climate change will impact future wintry weather.

Using the NCAR data, the researchers tracked snowstorms for 12 seasons in the early part of this century, establishing a control sample that was found to be representative of actual observations. They then tracked snowstorms to see how those winter events would change in a climate that was warmer by about 5 degrees Celsius (9 degrees Fahrenheit). That temperature increase is predicted for the late 21st century by averaging 19 leading climate models in an upper-limit greenhouse gas emissions scenario, according to NCAR.

The study is believed to be the first to objectively identify and track individual snowstorm projections of the distant future--from minor snow accumulations, to average winter storms, to crippling blizzards.

In total, the researchers identified and tracked nearly 2,200 snowstorms across central and eastern North America over 24 years (past and future) at a grid spacing of about four kilometers (2.5 miles). These "high resolution" simulations allowed the researchers to examine the snowstorms in much greater detail than had been previously done.
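
For a sense of what object-based identification on such a grid can look like in practice, here is a minimal sketch. It is not the authors' algorithm: it only labels contiguous regions of snowfall above an arbitrary threshold on a single synthetic field and reports their areas at the 4 km grid spacing; the study's actual method also tracks those objects through time.

```python
# Illustrative object identification on one gridded snowfall field (synthetic data).
# The study's actual method is more involved and also tracks objects across time steps.
import numpy as np
from scipy import ndimage

def identify_snow_objects(snowfall_mm, threshold_mm=2.5, grid_km=4.0):
    """Label contiguous grid cells with snowfall >= threshold_mm; return labels and areas in km^2."""
    mask = snowfall_mm >= threshold_mm
    labels, n_objects = ndimage.label(mask)          # connected-component labeling
    cell_area_km2 = grid_km ** 2
    areas = ndimage.sum(mask, labels, index=range(1, n_objects + 1)) * cell_area_km2
    return labels, areas

# Toy 100 x 100 field of liquid-equivalent snowfall in millimeters (purely synthetic).
rng = np.random.default_rng(0)
field = rng.gamma(shape=0.5, scale=3.0, size=(100, 100))
_, areas = identify_snow_objects(field)
print(f"{len(areas)} snow objects; largest covers {areas.max():.0f} km^2")
```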

Significant decreases were found in the frequency and size of snowstorms in the global-warming simulation, including those events that produce the most extreme snowfall accumulations.

"A milder winter not only reduces the number of snowstorms per season, but it also reduces the size of the snowstorms when they do happen," Haberlie said. The size of the most extreme snowstorms, such as those that produce blizzards, are projected to decrease by 32%.

The most notable snowfall changes would occur during the "shoulder-seasons," which bookend the core winter months. Snowstorm counts for October, November and April were projected to decrease by 83.5%, 48.4% and 60.5%, respectively.

The scientists also found that much of the northern tier of the United States--in regions that historically have experienced frequent heavy winter snows--would see overall snowstorm reductions of 30% to 50%. Notably, the high emissions scenario used for this work suggests that snowstorms could become a thing of the past in the American South.

"Major cities such as Chicago, New York and Boston would continue to see snowstorms, but when looking over long climatological time periods, the total number of snowstorms is projected to decrease, especially in portions of the early and late winter," Gensini said. "Significant winters of the past like the ones we saw in the late 1970s would still be possible as we go forward in a future climate, but their likelihood would be reduced."

Predicting the impact of climate change on future snowstorms is key for many aspects of life and the economy. Substantial reductions in annual snowfall could have far-reaching implications on everything from snow-removal and energy budgets to water resources and plant and animal life.

"There could be benefits in some areas, such as for air and road transportation systems," Ashley said. "But there also could be serious negative consequences, especially for freshwater resource-dependent industries such as agriculture, recreation, refining, manufacturing, power generation and river and lake transport.

"While this study doesn't dive into the topic, there are also climate feedbacks to consider," he added. "Snow cover reflects solar radiation and helps cool the environment. So annual reductions in snowfall and snow cover could amplify potential warming."

Ashley notes that the study does have limitations. Given data restraints, the researchers only looked at a scenario of unabated greenhouse gas emissions, assessed relatively short periods of time and examined only end-of-the-century projected changes.

Credit: 
Northern Illinois University

Dead Sea Scrolls 'puzzle' pieced together with DNA

An interdisciplinary team from Tel Aviv University, led by Prof. Oded Rechavi of TAU's George S. Wise Faculty of Life Sciences and Prof. Noam Mizrahi of TAU's Department of Biblical Studies, in collaboration with Prof. Mattias Jakobsson of Uppsala University in Sweden, the Israel Antiquities Authority and Prof. Christopher E. Mason of Weill Cornell Medicine, has successfully decoded ancient DNA extracted from the animal skins on which the Dead Sea Scrolls were written. By characterizing the genetic relationships between different scroll fragments, the researchers were able to discern important historical connections.

The research, conducted over seven years, was published as the cover story in the prestigious journal Cell on June 2 and sheds new light on the Dead Sea Scrolls.

"There are many scroll fragments that we don't know how to connect, and if we connect wrong pieces together it can change dramatically the interpretation of any scroll. Assuming that fragments that are made from the same sheep belong to the same scroll," explains Prof. Rechavi, "it is like piecing together parts of a puzzle."

The Dead Sea Scrolls comprise some 25,000 fragments of leather and papyrus discovered beginning in 1947, mostly in the Qumran caves but also at other sites in the Judean Desert.

Among other things, the scrolls contain the oldest copies of biblical texts. Since their discovery, scholars have faced the breathtaking challenge of classifying the fragments and piecing them together into the remains of some 1,000 manuscripts, which were hidden in the caves before the destruction of the Second Temple in 70 CE.

Researchers have long been puzzled as to what degree this collection of manuscripts, a veritable library from the Qumran caves, reflects the broad cultural milieu of Second Temple Judaism, and whether it should instead be regarded as the work of a radical sect (identified by most as the Essenes) discovered by chance.

"Imagine that Israel is destroyed to the ground, and only one library survives -- the library of an isolated, 'extremist' sect: What could we deduce, if anything, from this library about greater Israel?" Prof. Rechavi says. "To distinguish between scrolls particular to this sect and other scrolls reflecting a more widespread distribution, we sequenced ancient DNA extracted from the animal skins on which some of the manuscripts were inscribed. But sequencing, decoding and comparing 2,000-year old genomes is very challenging, especially since the manuscripts are extremely fragmented and only minimal samples could be obtained."

Pnina Shor, founder of the Dead Sea Scrolls Unit at the Israel Antiquities Authority, adds, "The Israel Antiquities Authority is in charge of both preserving the scrolls for posterity and making them accessible to the public and to scholars. Recent scientific and technological advances enable us to minimize physical intervention on the scrolls, thus facilitating multidisciplinary collaborations."

Innovative methods to solve historical mysteries

To tackle their daunting task, the researchers developed sophisticated methods to deduce information from tiny amounts of ancient DNA, carefully filtering out potential contamination and statistically validating the findings. These methods had to overcome the challenge that genomes of individual animals of the same species (for instance, two sheep of the same herd) are almost identical to one another, and that even genomes of different species (such as sheep and goats) are very similar.

For the purpose of the research, the Dead Sea Scrolls Unit of the Israel Antiquities Authority supplied samples -- sometimes only scroll "dust" carefully removed from the uninscribed back of the fragments -- and sent them for analysis by Prof. Rechavi's team: Dr. Sarit Anava, Moran Neuhof, Dr. Hila Gingold and Or Sagi. To prevent DNA contamination, Dr. Anava traveled to Sweden to extract the DNA under the meticulous conditions required for ancient DNA analysis (e.g. wearing special full-body suits) in Prof. Jakobsson's paleogenomics lab in Uppsala, which is equipped with cutting-edge equipment. In parallel to the teams that were studying the animals' ancient DNA, Prof. Mason's metagenomics lab in New York studied the scrolls' microbial contaminants. Prof. Jakobsson says, "It is remarkable that we were able to retrieve enough authentic ancient DNA from some of these 2,000 year old fragments considering the tough history of the animal hides. They were processed into parchment, used in a rough environment, left for two millennia, and then finally handled by humans again when they were rediscovered."

Textual pluralism opens window into culture of Second Temple Jewish society

According to Prof. Rechavi, one of the most significant findings was the identification of two very distinct Jeremiah fragments.

"Almost all the scrolls we sampled were found to be made of sheepskin, so most of the effort was invested in the very challenging task of trying to piece together fragments made from the skin of particular sheep, and to separate these from fragments written on skins of different sheep that also share an almost identical genome," says Prof. Rechavi. "However, two samples were discovered to be made of cowhide, and these happen to belong to two different fragments taken from the Book of Jeremiah. In the past, one of the cow skin-made fragments was thought to belong to the same scroll as another fragment that we found to be made of sheepskin. The mismatch now officially disproves this theory.

"What's more, cow husbandry requires grass and water, so it is very likely that cow hide was not processed in the desert but was brought to the Qumran caves from another place. This finding bears crucial significance, because the cowhide fragments came from two different copies of the Book of Jeremiah, reflecting different versions of the book, which stray from the biblical text as we know it today."

Prof. Mizrahi further explains, "Since late antiquity, there has been almost complete uniformity of the biblical text. A Torah scroll in a synagogue in Kiev would be virtually identical to one in Sydney, down to the letter. By contrast, in Qumran we find in the very same cave different versions of the same book. But, in each case, one must ask: Is the textual 'pluriformity,' as we call it, yet another peculiar characteristic of the sectarian group whose writings were found in the Qumran caves? Or does it reflect a broader feature, shared by the rest of Jewish society of the period? The ancient DNA proves that two copies of Jeremiah, textually different from each other, were brought from outside the Judean Desert. This fact suggests that the concept of scriptural authority -- emanating from the perception of biblical texts as a record of the Divine Word -- was different in this period from that which dominated after the destruction of the Second Temple. In the formative age of classical Judaism and nascent Christianity, the polemic between Jewish sects and movements was focused on the 'correct' interpretation of the text, not its wording or exact linguistic form."

Identification of genetically distinct groups of sheep suggests prominence of ancient Jewish mysticism

Another surprising finding relates to a non-biblical text, unknown to the world before the discovery of the Dead Sea Scrolls, a liturgical composition known as the Songs of the Sabbath Sacrifice, found in multiple copies in the Qumran caves and in Masada. The work bears a striking similarity to the literature of ancient Jewish mystics of Late Antiquity and the Middle Ages. Both Songs and the mystical literature greatly expand on the visionary experience of the divine chariot-throne, developing the vision of the biblical prophet Ezekiel. But the Songs predates the later Jewish mystical literature by several centuries, and scholars have long debated whether the authors of the mystical literature were familiar with Songs.

"The Songs of the Sabbath Sacrifice were probably a 'best-seller' in terms of the ancient world," Prof. Mizrahi says. "The Dead Sea Scrolls contain 10 copies, which is more than the number of copies of some of the biblical books that were discovered. But again, one has to ask: Was the composition known only to the sectarian group whose writings were found in the Qumran caves, or was it well known outside those caves? Even after the Masada fragment was discovered, some scholars argued that it originated with refugees who fled to Masada from Qumran, carrying with them one of their scrolls. But the genetic analysis proves that the Masada fragment was written on the skin of different sheep 'haplogroup' than those used for scroll-making in Qumran. The most reasonable interpretation of this fact is that the Masada Scroll did not originate in the Qumran caves but was rather brought from another place. As such, it corroborates the possibility that the mystical tradition underlying the Songs continued to be transmitted in hidden channels even after the destruction of the Second Temple and through the Middle Ages."

From solved riddles to new mysteries: Yet undiscovered caves?

Since most of the scrolls were found to be written on sheepskin, the team had to find a way to distinguish "in higher resolution" between the very similar genomes of different sheep.

"Mitochondrial DNA can tell us whether it is a sheep or a cow, but it can't distinguish between individual sheep," Prof. Rechavi adds. "We developed new experimental and informatic methods to examine the bits of preserved nuclear DNA, which disintegrated over two millennia in arid caves, and were contaminated in the course of 2,000 years, including recently by the people who handled the scrolls, often without even the use of gloves."

Using these methods, the researchers discovered that all the sampled scroll fragments written in a particular scribal system characteristic of the sectarian writings found in the Qumran caves (the "Qumran scribal practice") are genetically linked, and that they differ collectively from other scroll fragments written in different ways and discovered in the very same caves. This finding affords a new and powerful tool for distinguishing between scrolls peculiar to the sect and scrolls that were brought from elsewhere and potentially reflect the broader Jewish society of the period.
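
As a simplified illustration of that kind of grouping, and not the study's actual pipeline, the sketch below clusters a few hypothetical fragments by pairwise genetic distance computed over made-up SNP calls; real ancient-DNA work additionally requires damage-aware genotyping and contamination filtering.

```python
# Toy example: group scroll fragments by genetic similarity of the parchment they are
# written on. The SNP calls below are entirely fabricated for illustration.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

fragments = ["frag_A", "frag_B", "frag_C", "frag_D"]
snp_calls = np.array([          # rows = fragments, columns = biallelic SNP sites (0/1)
    [0, 1, 1, 0, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 0, 0],   # nearly identical to frag_A -> likely same animal or herd
    [1, 0, 0, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1, 0, 0, 0],
])

# Hamming distance = fraction of SNP sites at which two fragments disagree.
distances = pdist(snp_calls, metric="hamming")
groups = fcluster(linkage(distances, method="average"), t=0.3, criterion="distance")

for name, group in zip(fragments, groups):
    print(f"{name}: group {group}")
print(squareform(distances).round(2))   # full pairwise distance matrix
```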

Shor says, "Such an interdisciplinary project is very important indeed. It advances Dead Sea Scrolls research into the 21st century, and may answer questions that scholars have been debating with for decades. We consider the present project, which integrates both extraction of genetic information from the scrolls using novel methods together with classical philological analysis, a very significant contribution to the study of the scrolls."

The project examines not only scroll fragments but also other leather artifacts discovered at various sites throughout the Judean Desert. The genetic differences between them have allowed researchers to discern between different groups of findings.

According to Prof. Mizrahi, many scroll fragments were not found by archaeologists but by shepherds, passed on to antiquities dealers, and only subsequently handed over to scholars.

"We don't always know precisely where each fragment was discovered, and sometimes false information was given about this matter," says Prof. Mizrahi. "Identifying the place of discovery is important, because it affects our understanding of the historical context of the findings. For this reason, we were excited to learn that one fragment, that was suspected to originate not from Qumran but rather from another site, indeed had a 'genetic signature' that was different from all the other scrolls found in the Qumran caves sampled for this research."

But this finding led to yet another enigmatic discovery pertaining to a fragment containing a text from the Book of Isaiah. This fragment was published as a Qumran scroll, but its genetic signature also turned out to be different from other scrolls in Qumran.

Prof. Mizrahi concludes, "This raises a new curious question: Was this fragment really found in the Qumran caves? Or was it originally found in yet another, unidentified location? This is the nature of scientific research: We solve old puzzles, but then discover new mysteries."

Credit: 
American Friends of Tel Aviv University

NASA analyzes Gulf of Mexico's reborn tropical depression soaking potential

image: On June 2 at 3:35 a.m. EDT (0735 UTC) the MODIS instrument that flies aboard NASA's Aqua satellite found the coldest cloud top temperatures (yellow) in several areas around Tropical Depression 03L's center of circulation. They were as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 Celsius). One area of strong storms was off the coast and over the Bay of Campeche, Gulf of Mexico. Several other areas were over Mexico's Yucatan Peninsula.

Image: 
NASA/NRL

Infrared imagery from NASA's Aqua satellite showed that strong storms from a redeveloped tropical cyclone were soaking parts of Mexico's Yucatan Peninsula. Tropical Depression 03L is expected to generate heavy rainfall in the region.
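
The "strong storm" criterion described in the image caption is essentially a temperature threshold applied to infrared imagery; the hedged sketch below, with a synthetic array standing in for real MODIS data, shows the idea.

```python
# Flag pixels whose infrared cloud-top brightness temperature is at or below
# -80 F, i.e. (-80 - 32) * 5/9 = -62.2 C, a common proxy for very strong storms.
# The temperature field here is synthetic, not real MODIS data.
import numpy as np

def cold_cloud_top_mask(brightness_temp_c: np.ndarray, threshold_c: float = -62.2) -> np.ndarray:
    """Boolean mask of pixels with cloud tops at or colder than threshold_c (Celsius)."""
    return brightness_temp_c <= threshold_c

temps_c = np.random.default_rng(1).uniform(low=-90.0, high=20.0, size=(6, 6))
mask = cold_cloud_top_mask(temps_c)
print(f"{mask.sum()} of {mask.size} pixels flagged as strong-storm candidates")
```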


Credit: 
NASA/Goddard Space Flight Center

Tulane scientists find a switch to flip and turn off breast cancer growth and metastasis

Researchers at Tulane University School of Medicine identified a gene that causes an aggressive form of breast cancer to rapidly grow. More importantly, they have also discovered a way to "turn it off" and inhibit cancer from occurring. The animal study results have been so compelling that the team is now working on FDA approval to begin clinical trials and has published details in the journal Scientific Reports.

The team led by Dr. Reza Izadpanah examined the role two genes, including one whose involvement in cancer was discovered by Tulane researchers, play in causing triple negative breast cancer (TNBC). TNBC is considered to be the most aggressive of breast cancers, with a much poorer prognosis for treatment and survival. Izadpanah's team specifically identified an inhibitor of the TRAF3IP2 gene, which was proven to suppress the growth and spread (metastasis) of TNBC in mouse models that closely resemble humans.

In parallel studies looking at a duo of genes - TRAF3IP2 and Rab27a, which play roles in the secretion of substances that can cause tumor formation - the research team studied what happens when each was stopped from functioning. Suppressing the expression of either gene led to a decline in both tumor growth and the spread of cancer to other organs. Izadpanah says that when Rab27a was silenced, the tumor did not grow but was still spreading a small number of cancer cells to other parts of the body. However, when the TRAF3IP2 gene was turned off, they found no spread (known as "metastasis" or "micrometastasis") of the original tumor cells for a full year following the treatment. Even more beneficial, inhibiting the TRAF3IP2 gene not only stopped future tumor growth but caused existing tumors to shrink to undetectable levels.

"Our findings show that both genes play a role in breast cancer growth and metastasis," says Izadpanah. "While targeting Rab27a delays progression of tumor growth, it fails to affect the spread of tiny amounts of cancer cells, or micrometastasis. On the contrary, targeting TRAF3IP2 suppresses tumor growth and spread, and interfering with it both shrinks pre-formed tumors and prevents additional spread. This exciting discovery has revealed that TRAF3IP2 can play a role as a novel therapeutic target in breast cancer treatment."

"It is important to note that this discovery is the result of a truly collaborative effort between basic science researchers and clinicians." Izadpanah continued. Members of the team included Eckhard Alt, David Jansen, Abigail Chaffin, Stephen Braun, Aaron Dumont, Ricardo Mostany and Matthew Burow of Tulane University. Dr. Bysani Chandrasekar of the University of Missouri has joined in the Tulane research efforts and found that targeting TRAF3IP2 can stop the spread of glioblastoma, a deadly brain cancer with limited treatment options. The team is now working on getting FDA approval and hopes to begin clinical trials soon.

Credit: 
Tulane University

Human waste could help combat global food insecurity

image: Leilah Krounbi, a former Cornell PhD student, used the Canadian Light Source synchrotron at the University of Saskatchewan to test the feasibility of a fertilizer made from human waste.

Image: 
Leilah Krounbi

SASKATOON - Researchers from Cornell University's College of Agriculture and Life Sciences and the Canadian Light Source (CLS) at the University of Saskatchewan have proven it is possible to create nitrogen-rich fertilizer by combining the solid and liquid components of human waste.

The discovery, published recently in the journal Sustainable Chemistry and Engineering, has the potential to increase agriculture yields in developing countries and reduce contamination of groundwater caused by nitrogen runoff.

Special separating toilets that were developed through the Reinvent the Toilet Challenge have helped solve long-standing sanitation problems in the slums of Nairobi, Kenya. However, the methods used to dispose of the two outputs failed to capture a key nutrient that local fields were starving for: nitrogen.

Cornell researchers Leilah Krounbi, a former PhD student, now at the Weizmann Institute in Israel, and Johannes Lehmann, senior author and professor of soil and crop sciences, wondered whether it might be possible to close the waste stream loop by recycling nitrogen from the urine, which was otherwise being lost to runoff. While other researchers have engineered adsorbers using high-tech ingredients such as carbon nanotubes or activated carbons, Lehmann and his team wanted to know if they could do so with decidedly low-tech materials like human feces. Adsorbers are materials whose surfaces can capture and hold gas or liquids.

"We were interested in figuring out how to bring nitrogen out of the liquid waste streams, bring it onto a solid material so it has a fertilizer quality and can be used in this idea of a circular economy," said Lehmann.

The researchers began by heating the solid component of human waste to 500 degrees Celsius in the absence of oxygen to produce a pathogen-free charcoal called biochar. Next, they manipulated the surface of the biochar by priming it with CO2, which enabled it to soak up ammonia, the nitrogen-rich gas given off by urine. The chemical process caused the ammonia to bond to the biochar. By repeating the process, they could load up the biochar with extra layers of nitrogen. The result is a solid material rich in nitrogen.

Using the SGM beamline at the CLS enabled Lehmann and his team to see how the chemistry in the nitrogen changed as it adsorbed ammonia. The beamline also provided an indication of just how available the nitrogen would be to plants if the resulting material was used as fertilizer.

"In order to understand what the interactions are between nitrogen, the ammonia gas and the carbon, there really is no other good way than using the NEXAFS (near-edge, X-ray absorption fine structure) spectroscopy that the CLS beamline offers," said Lehmann. "It was really our workhorse to understand what kind of chemical bonds are appearing between the nitrogen gas and our adsorber."

The research team has demonstrated that it is indeed possible to make a fertilizer using the most basic of ingredients, human waste. However, they still have a number of questions to answer: Can you optimize the process to maximize the amount of ammonia soaked up by biochar? How will this "recycled" fertilizer compare to existing commercial nitrogen fertilizers for different crops and soils? Can you build a cost-effective machine that performs this process automatically in a real-world setting?

What started as the search for a solution to a highly localized problem has widespread applicability, said Lehmann. "I do think it is as important for a Saskatchewan wastewater treatment plant, or a dairy farm in upstate New York, as it is for a resident in Nairobi. It's a basic principle that has utility anywhere."

Credit: 
University of Saskatchewan

Antibiotic-destroying genes widespread in bacteria in soil and on people

video: This video shows two different 3D views of TetX7 (green), a tetracycline-destroying enzyme that causes resistance to all tetracycline antibiotics (the small multicolored molecule in the center). Researchers at Washington University in St. Louis and the National Institutes of Health (NIH) have found that genes that confer the power to destroy tetracyclines are widespread in bacteria that live in the soil and on people.

Image: 
Timothy Wencewicz

The latest generation of tetracyclines - a class of powerful, first-line antibiotics - was designed to thwart the two most common ways bacteria resist such drugs. But a new study from researchers at Washington University in St. Louis and the National Institutes of Health (NIH) has found that genes representing yet another method of resistance are widespread in bacteria that live in the soil and on people. Some of these genes confer the power to destroy all tetracyclines, including the latest generation of these antibiotics.

However, the researchers have created a chemical compound that shields tetracyclines from destruction. When the chemical compound was given in combination with tetracyclines as part of the new study, the antibiotics' lethal effects were restored.

The findings, available online in Communications Biology, indicate an emerging threat to one of the most widely used classes of antibiotics -- but also a promising way to protect against that threat.

"We first found tetracycline-destroying genes five years ago in harmless environmental bacteria, and we said at the time that there was a risk the genes could get into bacteria that cause disease, leading to infections that would be very difficult to treat," said co-senior author Gautam Dantas, PhD, a professor of pathology and immunology and of molecular microbiology at Washington University School of Medicine in St. Louis. "Once we started looking for these genes in clinical samples, we found them immediately. The fact that we were able to find them so rapidly tells me that these genes are more widespread than we thought. It's no longer a theoretical risk that this will be a problem in the clinic. It's already a problem."

In 2015, Dantas, also a professor of biomedical engineering, and Timothy Wencewicz, PhD, an associate professor of chemistry in Arts & Sciences at Washington University, discovered 10 different genes that each gave bacteria the ability to dice up the toxic part of the tetracycline molecule, thereby inactivating the drug. These genes code for proteins the researchers dubbed tetracycline destructases.

But they didn't know how widespread such genes were. To find out, Dantas and first author Andrew Gasparrini, PhD - then a graduate student in Dantas' lab - screened 53 soil, 176 human stool, two animal feces, and 13 latrine samples for genes similar to the 10 they'd already found. The survey yielded 69 additional possible tetracycline-destructase genes.

Then they cloned some of the genes into E. coli bacteria that had no resistance to tetracyclines and tested whether the genetically modified bacteria survived exposure to the drugs. E. coli that had received putative destructase genes from soil bacteria inactivated some of the tetracyclines. E. coli that had received genes from bacteria associated with people destroyed all 11 tetracyclines.

"The scary thing is that one of the tetracycline destructases we found in human-associated bacteria - Tet(X7) - may have evolved from an ancestral destructase in soil bacteria, but it has a broader range and enhanced efficiency," said Wencewicz, who is a co-senior author on the new study. "Usually there's a trade-off between how broad an enzyme is and how efficient it is. But Tet(X7) manages to be broad and efficient, and that's a potentially deadly combination."

In the first screen, the researchers had found tetracycline-destructase genes only in bacteria not known to cause disease in people. To find out whether disease-causing species also carried such genes, the scientists scanned the genetic sequences of clinical samples Dantas had collected over the years. They found Tet(X7) in a bacterium that had caused a lung infection and sent a man to intensive care in Pakistan in 2016.

Tetracyclines have been around since the 1940s. They are one of the most widely used classes of antibiotics, used for diseases ranging from pneumonia, to skin or urinary tract infections, to stomach ulcers, as well as in agriculture and aquaculture. In recent decades, mounting antibiotic resistance has driven pharmaceutical companies to spend hundreds of millions of dollars developing a new generation of tetracyclines that is impervious to the two most common resistance strategies: expelling drugs from the bacterial cell before they can do harm, and fortifying vulnerable parts of the bacterial cell.

The emergence of a third method of antibiotic resistance in disease-causing bacteria could be disastrous for public health. To better understand how Tet(X7) works, co-senior author Niraj Tolia, PhD, a senior investigator at the National Institute of Allergy and Infectious Diseases at the NIH, and co-author Hirdesh Kumar, PhD, a postdoctoral researcher in Tolia's lab, solved the structure of the protein.

"I established that Tet(X7) is very similar to known structures but way more active, and we don't really know why because the part that interacts with the tetracycline rings is the same," Kumar said. "I'm now taking a molecular dynamics approach so we can see the protein in action. If we can understand why it is so efficient, we can design even better inhibitors."

Wencewicz and colleagues previously designed a chemical compound that preserves the potency of tetracyclines by preventing destructases from chewing up the antibiotics. In the most recent study, co-author Jana L. Markley, PhD, a postdoctoral researcher in Wencewicz's lab, evaluated that inhibitor against the bacterium from the patient in Pakistan and its powerful Tet(X7) destructase. Adding the compound made the bacteria two to four times more sensitive to all three of the latest generation of tetracyclines.

"Our team has a motto extending the wise words of Benjamin Franklin: 'In this world nothing can be said to be certain, except death, taxes and antibiotic resistance,'" Wencewicz said. "Antibiotic resistance is going to happen. We need to get ahead of it and design inhibitors now to protect our antibiotics, because if we wait until it becomes a crisis, it's too late."

Credit: 
Washington University School of Medicine

Scientists detect crab nebula using innovative gamma-ray telescope

image: The Schwarzschild-Couder telescope, located at the Fred Lawrence Whipple Observatory in Amado, Arizona, detected gamma-ray showers from the Crab Nebula in early 2020, proving the viability of the technology design for gamma-ray astrophysics.

Image: 
Photo: Amy C. Oliver, Center for Astrophysics | Harvard & Smithsonian

Scientists have detected gamma rays from the Crab Nebula, the most famous of supernova remnants, using a next-generation telescope that opens the door for astrophysicists to study some of the most energetic and unusual objects in the universe.

The prototype Schwarzschild-Couder Telescope (SCT)--developed by scientists at Columbia University in collaboration with researchers from other institutions--is part of an international effort, known as the Cherenkov Telescope Array (CTA), which aims to construct the world's largest and most powerful gamma-ray observatory, with more than 100 similar telescopes in the northern and southern hemispheres.

"That we were able to successfully detect the Crab Nebula demonstrates the viability of the novel Schwarzschild-Couder design," said Brian Humensky, associate professor of physics at Columbia, who worked with a team to design and build the telescope. "It's been a long journey, so it's enormously satisfying to see the telescope performing, and we're excited to see what we can do with it."

The Crab Nebula, so named because of its tentacle-like structure that resembles a crustacean, is the remnant of a massive star that self-destructed almost a millennium ago in an enormous supernova explosion. The estimated distance to what's left of this star from Earth is about 6,500 light-years.

Over time the light from the supernova faded away, leaving behind the remains of a powerful, rapidly spinning neutron star, or pulsar, that can still be seen within a cloud of gas, dust and highly energetic subatomic particles, which emit radiation across the electromagnetic spectrum. The most energetic of those particles radiate gamma rays.

While scientists have been using the SCT technology to observe the Crab Nebula since January 2020, the project has been underway for nearly a decade. At its heart is a high-speed, high-resolution camera and a dual-mirror system--more intricate than the one-mirror design traditionally used in gamma-ray telescopes--that work together to enhance the quality of light for greater imaging detail over a larger field of view across the sky.

"The camera triggers upon bursts of light that occur when a gamma ray collides with an air molecule, and records these signals at a rate of a billion frames per second," said Humensky, who collaborated with colleagues at Barnard College to build major components of SCT's mirror alignment system and develop its control software. "This allows us to reconstruct the gamma rays with extraordinary precision."

Humensky's involvement with the prototype SCT, unveiled last year at Fred Lawrence Whipple Observatory in Arizona, began in 2012, when the National Science Foundation funded the project. The Columbia team, including Barnard College postdoctoral research associate Qi Feng, and Ari Brill and Deivid Ribeiro, Columbia doctoral students in physics, helped achieve the initial optical focus.

Ribeiro has worked on the telescope since fall 2015, starting through Columbia's Bridge to the PhD program. "I've made seven trips to Arizona, beginning with a three-month stay to integrate the secondary mirror panels with the telescope structure," he said. "It's rewarding to be part of this team and to have collected some of the data that led to this first detection."

The sighting of the Crab Nebula, announced at the 236th meeting of the American Astronomical Society June 1, lays the groundwork for the use of the SCT in the future Cherenkov Telescope Array observatory. Slated for completion in 2026, the observatory, with its configuration of 120 telescopes of varying sizes split between Chile and Spain's Canary Islands, will detect sources of gamma rays 100 times faster than current instruments.

"The success of the prototype SCT creates an opportunity for the Cherenkov Telescope Array to address and hopefully answer some of the biggest questions in astronomy: What is dark matter? How are the most energetic cosmic rays created?" Humensky said. "It's exciting to look forward to."

Credit: 
Columbia University

COVID-19 news from Annals of Internal Medicine

Below please find a summary and link(s) of new coronavirus-related content published today in Annals of Internal Medicine. The summary below is not intended to substitute for the full article as a source of information. A collection of coronavirus-related content is free to the public at http://go.annals.org/coronavirus.

1. ACP Leaders suggest using "New Vision" to guide U.S. health care reform during and after COVID-19 pandemic
Paper discusses issues of socioeconomic, racial, and gender-based inequality amplified by the current pandemic

A group of American College of Physicians (ACP) leaders say that policy recommendations outlined in the organization's policy paper, "Better is Possible: The American College of Physicians Vision for the U.S. Health Care System," can help advise current actions during the COVID-19 pandemic. They suggest that the paper can guide future actions to improve access to care, reduce per capita health care costs, and reduce health care system complexity. The Ideas and Opinions article, "The Collision of COVID-19 and the U.S. Health System," is published today in Annals of Internal Medicine. Read the full text: https://www.acpjournals.org/doi/10.7326/M20-1851.

The authors say the COVID-19 pandemic underscores the need for improving access to care for all Americans, as universal coverage would help millions of uninsured Americans and provide a safety net for those facing financial burden. And of particular relevance right now, the current pandemic has also demonstrated how factors of race and ethnicity are contributing to an inequitable health care system, resulting in poor outcomes for these populations. ACP's New Vision for U.S. Health Care makes policy recommendations to address social factors and eliminate barriers to care for vulnerable and underserved populations, including women.

Credit: 
American College of Physicians

New research deepens understanding of Earth's interaction with the solar wind

image: PPPL physicist Derek Schaeffer in front of an image of a jet airplane creating an atmospheric shock wave

Image: 
Elle Starkman / PPPL Office of Communications

As the Earth orbits the sun, it plows through a stream of fast-moving particles that can interfere with satellites and global positioning systems. Now, a team of scientists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University has reproduced a process that occurs in space to deepen understanding of what happens when the Earth encounters this solar wind.

The team used computer simulations to model the movement of a jet of plasma, the charged state of matter composed of electrons and atomic nuclei that makes up all the stars in the sky, including our sun. Many cosmic events can produce plasma jets, from relatively small star burps to gigantic stellar explosions known as supernovae. When fast-moving plasma jets pass through the slower plasma that exists in the void of space, they create what is known as a collisionless shock wave.

These shocks also occur as Earth moves through the solar wind and can influence how the wind swirls into and around Earth's magnetosphere, the protective magnetic shield that extends into space. Understanding plasma shock waves could help scientists to forecast the space weather that develops when the solar wind swirls into the magnetosphere and enable the researchers to protect satellites that allow people to communicate across the globe.

The simulations revealed several telltale signs indicating when a shock is forming, including the shock's features, the three stages of the shock's formation, and phenomena that could be mistaken for a shock. "By being able to distinguish a shock from other phenomena, scientists can feel confident that what they are seeing in an experiment is what they want to study in space," said Derek Schaeffer, an associate research scholar in the Princeton University Department of Astrophysical Sciences who led the PPPL research team. The findings were reported in a paper published in Physics of Plasmas that followed up on the group's previous research.

The plasma shocks that occur in space, like those created by Earth traveling against the solar wind, resemble the shock waves created in Earth's atmosphere by supersonic jet aircraft. In both occurrences, fast-moving material encounters slow or stationary material and must swiftly change its speed, creating an area of swirls and eddies and turbulence.

But in space, the interactions between fast and slow plasma particles occur without the particles touching one another. "Something else must be driving this shock formation, like the plasma particles electrically attracting or repelling each other," Schaeffer said. "In any case, the mechanism is not fully understood."

To increase their understanding, physicists conduct plasma experiments in laboratories to monitor conditions closely and measure them precisely. In contrast, measurements taken by spacecraft cannot be easily repeated and sample only a small region of plasma. Computer simulations then help the physicists interpret their laboratory data.

Today, most laboratory plasma shocks are formed using a mechanism known as a plasma piston. To create the piston, scientists shine a laser on a small target. The laser causes small amounts of the target's surface to heat up, become a plasma, and move outward through a surrounding, slower-moving plasma.

Schaeffer and colleagues produced their simulation by modeling this process. "Think of a boulder in the middle of a fast-moving stream," Schaeffer said. "The water will come right up to the front of the boulder, but not quite reach it. The transition area between quick motion and zero [standing] motion is the shock."

The simulated results will help physicists distinguish an astrophysical plasma shock wave from other conditions that arise in laboratory experiments. "During laser plasma experiments, you might observe lots of heating and compression and think they are signs of a shock," Schaeffer said. "But we don't know enough about the beginning stages of a shock to know from theory alone. For these kinds of laser experiments, we have to figure out how to tell the difference between a shock and just the expansion of the laser-driven plasma."
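
One elementary check in that spirit, a textbook necessary condition rather than the diagnostics developed in the paper, is whether the laser-driven piston even outruns the ambient magnetized wave speed; the sketch below computes the Alfvén Mach number for made-up laboratory-like parameters.

```python
# Back-of-the-envelope Alfven Mach number for a laser-driven piston expanding into a
# magnetized ambient plasma. All parameter values are hypothetical, for illustration only.
import numpy as np

MU0 = 4e-7 * np.pi         # vacuum permeability [H/m]
M_PROTON = 1.6726e-27      # proton mass [kg]

def alfven_speed(b_tesla: float, n_ions_per_m3: float, ion_mass_kg: float = M_PROTON) -> float:
    """Alfven speed v_A = B / sqrt(mu0 * rho) for a plasma with ion density n."""
    rho = n_ions_per_m3 * ion_mass_kg
    return b_tesla / np.sqrt(MU0 * rho)

b_ambient = 100e-4         # 100 gauss ambient magnetic field, in tesla
n_ambient = 1e19           # ambient ion density [m^-3]
v_piston = 5e5             # piston expansion speed [m/s]

v_a = alfven_speed(b_ambient, n_ambient)
print(f"v_A = {v_a / 1e3:.0f} km/s, Alfven Mach number = {v_piston / v_a:.1f}")
# A Mach number well above 1 is necessary (though not sufficient) for shock formation.
```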

In the future, the researchers aim to make the simulations more realistic by adding more detail and making the plasma density and temperature less uniform. They would also like to run experiments to determine whether the phenomena predicted by the simulations can in fact occur in a physical apparatus. "We'd like to put the ideas we talk about in the paper to the test," says Schaeffer.

Credit: 
DOE/Princeton Plasma Physics Laboratory

Killing coronavirus with handheld ultraviolet light device may be feasible

A personal, handheld device emitting high-intensity ultraviolet light to disinfect areas by killing the novel coronavirus is now feasible, according to researchers at Penn State, the University of Minnesota and two Japanese universities.

There are two commonly employed methods to sanitize and disinfect areas from bacteria and viruses -- chemicals or ultraviolet radiation exposure. The UV radiation is in the 200 to 300 nanometer range and is known to destroy the virus, rendering it incapable of reproducing and infecting. Widespread adoption of this efficient UV approach is much in demand during the current pandemic, but it requires UV radiation sources that emit sufficiently high doses of UV light. While devices with these high doses currently exist, the UV radiation source is typically an expensive mercury-containing gas discharge lamp, which requires high power, has a relatively short lifetime, and is bulky.

The solution is to develop high-performance, UV light emitting diodes, which would be far more portable, long-lasting, energy efficient and environmentally benign. While these LEDs exist, applying a current to them for light emission is complicated by the fact that the electrode material also has to be transparent to UV light.

"You have to ensure a sufficient UV light dose to kill all the viruses," said Roman Engel-Herbert, Penn State associate professor of materials science, physics and chemistry. "This means you need a high-performance UV LED emitting a high intensity of UV light, which is currently limited by the transparent electrode material being used."

While finding transparent electrode materials operating in the visible spectrum for displays, smartphones and LED lighting is a long-standing problem, the challenge is even more difficult for ultraviolet light.

"There is currently no good solution for a UV-transparent electrode," said Joseph Roth, doctoral candidate in Materials Science and Engineering at Penn State. "Right now, the current material solution commonly employed for visible light application is used despite it being too absorbing in the UV range. There is simply no good material choice for a UV-transparent conductor material that has been identified."

Finding a new material with the right composition is key to advancing UV LED performance. The Penn State team, in collaboration with materials theorists from the University of Minnesota, recognized early on that the solution for the problem might be found in a recently discovered new class of transparent conductors. When theoretical predictions pointed to the material strontium niobate, the researchers reached out to their Japanese collaborators to obtain strontium niobate films and immediately tested their performance as UV transparent conductors. While these films held the promise of the theoretical predictions, the researchers needed a deposition method to integrate these films in a scalable way.

"We immediately tried to grow these films using the standard film-growth technique widely adopted in industry, called sputtering," Roth said. "We were successful."

This is a critical step towards technology maturation which makes it possible to integrate this new material into UV LEDs at low cost and high quantity. And both Engel-Herbert and Roth believe this is necessary during this crisis.

"While our first motivation in developing UV transparent conductors was to build an economic solution for water disinfection, we now realize that this breakthrough discovery potentially offers a solution to deactivate COVID-19 in aerosols that might be distributed in HVAC systems of buildings," Roth explains. Other areas of application for virus disinfection are densely and frequently populated areas, such as theaters, sports arenas and public transportation vehicles such as buses, subways and airplanes.

Credit: 
Penn State

Researchers identify key immune checkpoint protein that operates within T cells

Columbus, Ohio - A new study led by researchers at The Ohio State University Comprehensive Cancer Center - Arthur G. James Cancer Hospital and Solove Research Institute (OSUCCC - James) has identified a protein within certain immune cells that is required for optimal immune responses to cancer.

The findings, reported in the journal Science Advances, also suggest that the protein might be useful for predicting which cancer patients are less likely to respond to the form of therapy called immune checkpoint blockade.

The protein is called PCBP1, or poly(C)-binding protein 1. The researchers found that PCBP1 helps shape immune responses by ensuring that adequate numbers of activated immune T cells differentiate into cytotoxic T cells, which kill cancer cells. At the same time, PCBP1 prevents the development of too many regulatory T cells, which do not kill cancer cells.

"Our findings suggest that PCBP1 is a global intracellular immune checkpoint, and that targeting it would offer a way to influence antitumor responses during immune therapy," says principal investigator Zihai Li, MD, PhD, a professor in the Division of Medical Oncology at Ohio State and director of the Pelotonia Institute for Immuno-Oncology (PIIO) at the OSUCCC - James. Li is also a member of the OSUCCC - James Translational Therapeutics Research Program.

"Immune checkpoint blockade therapy has revolutionized cancer treatment, especially in melanoma, non-small cell lung, and head and neck cancer," says first author Ephraim Abrokwa Ansa-Addo, PhD, an assistant professor in the Division of Medical Oncology and also a member of the Translational Therapeutics Research Program. "But we need better ways to identify which patients will benefit from the therapy. PCBP1 may help us do that."

PCBP1 belongs to a family of molecules called RNA-binding proteins. It controls gene expression when immune T cells differentiate into either regulatory T cells or into cytotoxic T cells, which carry out immune responses against infection and cancer. (Cytotoxic T cells are a type of effector T cell.)

In activated T cells, PCBP1 prevents cytotoxic T cells from converting to regulatory T cells, thereby promoting immune responses against tumors.

For this study, researchers used cell lines, tumor models, animal models, and models of diabetes and graft-vs-host disease to achieve a better understanding of the role of PCBP1 in T cells. Graft-vs-host disease is a condition in which a donor's T cells (graft) view the patient's healthy cells (host) as foreign and then attack and damage those normal cells.

Key findings include:

In a non-cancer setting, higher PCBP1 activity promotes cytotoxic T-cell functions that inhibit tumor development and progression.

In a cancer setting such as the tumor microenvironment, higher PCBP1 activity prevents cytotoxic T cells from expressing factors such as PD-1, TIGIT and VISTA, which produce conditions less favorable to immune checkpoint blockade therapy.

In a cancer setting, lower PCBP1 in cytotoxic T cells triggers expression of PD-1 and other factors that suppress the T cells' cancer immune responses, producing conditions more favorable to immune checkpoint blockade therapy.

"Overall, our data indicate that PCBP1 shapes tolerance and immunity by distinctively regulating cytotoxic T-cell versus regulatory T-cell differentiation, and that it could be a marker for response to immune checkpoint blockade therapy," Li says.

Credit: 
Ohio State University Wexner Medical Center

Report on New Caledonia's coral reefs offers a glimmer of hope for the future

Image: Scientists on the Global Reef Expedition found the coral reefs to be in surprisingly good shape, even in the most unexpected locations. (Credit: © Living Oceans Foundation/Ken Marks)

ANNAPOLIS -- A new report from the Khaled bin Sultan Living Oceans Foundation (KSLOF) provides a promising assessment of the status of coral reefs in New Caledonia. Released today, the Global Reef Expedition: New Caledonia Final Report summarizes the Foundation's findings from a research mission to study the health and resiliency of the coral reefs of New Caledonia, part of KSLOF's larger efforts to study the reef crisis unfolding around the world. They found many of the coral reefs to be in surprisingly good health, even in unexpected places.

This research initiative was conducted as part of the Global Reef Expedition, a 5-year scientific mission that circumnavigated the globe to collect valuable baseline data on the state of the reefs and the threats they face. Of the 22 research missions the Foundation conducted in the western Atlantic, Pacific, and Indian Oceans, the mission to New Caledonia stood apart: its reefs were among the most beautiful and well-preserved the scientists encountered.

"The reefs of New Caledonia are simply spectacular. Incredible diversity. Remarkable morphology," said Dr. Sam Purkis, KSLOF's Chief Scientist as well as Professor and Chair of the Department of Marine Geosciences at the University of Miami's Rosenstiel School of Marine and Atmospheric Science. "But in New Caledonia, as elsewhere, the reefs are gravely threatened by local impacts and climate change. The Living Oceans Foundation achieved two important objectives in the country - first, they mapped, using satellite, many of the remotest reef systems in New Caledonia for the first time. Second, the field data collected by the Foundation set a baseline condition for these reefs which can be tracked into the future to understand change. We hope that future change takes the form of an improving condition of the reefs, as new conservation initiatives are sparked by the Living Oceans dataset."

Working closely with local experts, researchers from the Institut de Recherche pour le Développement (IRD), and marine scientists from around the world, scientists at the Khaled bin Sultan Living Oceans Foundation spent more than a month at sea conducting comprehensive surveys of New Caledonia's coral reefs and reef fish, as well as creating detailed seabed maps. In October and November of 2013, these scientists conducted over 1,000 surveys of corals and reef fish and mapped over 2,600 km² of shallow-water marine habitats at 10 locations throughout the country, including reefs in the Entrecasteaux Atolls, Cook Reef, Ile des Pins, and Prony Bay.

On the Global Reef Expedition mission to New Caledonia, scientists found most of the reefs to be relatively healthy, with abundant and diverse coral and fish communities. Reefs far from shore, or protected in Marine Protected Areas (MPAs), were in particularly good condition, but many nearshore reefs showed signs of fishing pressure with few large and commercially valuable fish. One notable exception was Prony Bay, which had the highest live coral cover observed in New Caledonia.

"One of our most surprising findings from New Caledonia was coral reefs thriving in unexpected locations, such as Prony Bay," said Alexandra Dempsey, the Director of Science Management at KSLOF and one of the report's authors, who was shocked to find such high coral cover in the bay's murky waters. This was unexpected given the nutrient and sediment runoff from nearby copper mines and the presence of hydrothermal vents in the bay. "Corals were surprisingly abundant in what would normally be sub-optimal conditions for coral growth. This gives us hope for the future of coral reefs. More research is needed, but this finding shows us that at least some corals can adapt to survive in high-stress environments."

New Caledonia is a global leader in marine conservation. Home to the second-largest MPA in the world, New Caledonia has already made great strides to protect its reefs and coastal marine resources. The report released today provides new information on the status of coral reefs and reef fish in New Caledonia, including baseline information on reefs inside Le Parc Naturel de la Mer de Corail, an MPA established in 2014. Although several years have passed since the research mission, these baseline data could be very helpful to marine managers in New Caledonia, helping them identify areas that may need additional protection and allowing ecosystem changes to be tracked through time.

"This report provides government officials, marine park managers, and the people of New Caledonia with relevant information and recommendations they can use to effectively manage their reefs and coastal marine resources," said Renée Carlton, a Marine Ecologist with the Khaled bin Sultan Living Oceans Foundation and one of the authors of the report. "We hope the data will inform ongoing marine conservation and management efforts to protect coral reefs and fisheries in New Caledonia, so that these reefs continue to thrive for generations to come."

Credit: 
Khaled bin Sultan Living Oceans Foundation

Tracking fossil fuel emissions with carbon-14

Image: Because fossil fuels and the materials used to produce cement are devoid of radiocarbon, associated emissions appear as areas of low Δ14C in the radiocarbon field that can be traced back to sources at the surface using atmospheric transport models. This map depicts areas where air samples were depleted of 14C and hence showed the influence of fossil fuel emissions. (Credit: Sourish Basu, CIRES)

Researchers from NOAA and the University of Colorado have devised a breakthrough method for estimating national emissions of carbon dioxide from fossil fuels using ambient air samples and a well-known isotope of carbon that scientists have relied on for decades to date archaeological sites.

In a paper published in the Proceedings of the National Academy of Sciences, they report the first-ever national-scale estimate of fossil-fuel-derived carbon dioxide (CO2) emissions obtained by observing CO2 and its naturally occurring radioisotope, carbon-14, in air samples collected by NOAA's Global Greenhouse Gas Reference Network.

Carbon-14, or 14C, is a very rare isotope of carbon created largely by cosmic rays, with a half-life of about 5,700 years. The carbon in fossil fuels has been buried for millions of years and is therefore completely devoid of 14C. Careful laboratory analysis can identify the degree of 14C depletion of the CO2 in discrete air samples, which reflects the contribution from fossil fuel combustion and cement manufacturing (whose CO2 is likewise 14C-free), otherwise known as the "fossil CO2" contribution. Knowing the location, date and time when the air samples were taken, the research team used a model of atmospheric transport to disentangle the CO2 variations due to fossil fuel combustion from those due to natural sources and sinks, and traced the man-made variations back to fossil CO2 sources at the surface.
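The arithmetic behind this attribution can be illustrated with a simple mass balance - a sketch only, not the authors' code or their full estimation framework, and the sample values below are hypothetical. Because fossil CO2 has a Δ14C of -1000 per mil (no 14C at all), the gap between a sample's Δ14C and that of clean background air scales with how much fossil-fuel CO2 has been mixed in:

```python
# Simplified 14C mass balance for the fossil-fuel CO2 ("ffCO2") added to an
# air sample. Illustrative sketch only; the published method also accounts
# for smaller 14C sources and sinks that are omitted here.
# Delta-14C values are in per mil, CO2 in ppm.

DELTA_FF = -1000.0  # per mil: fossil carbon is completely devoid of 14C


def fossil_co2_ppm(co2_obs_ppm, delta_obs, delta_background):
    """Fossil-fuel CO2 enhancement implied by a sample's 14C depletion."""
    # The more 14C-free CO2 in the sample, the further its Delta-14C falls
    # below the background value.
    return co2_obs_ppm * (delta_background - delta_obs) / (delta_background - DELTA_FF)


# Hypothetical sample: 400 ppm total CO2 with a Delta-14C of 30 per mil,
# measured against a clean-air background of 45 per mil.
print(round(fossil_co2_ppm(400.0, 30.0, 45.0), 1), "ppm of fossil-fuel CO2")
# -> roughly 5.7 ppm of the 400 ppm is attributable to fossil sources
```

The transport model then does the harder part: relating such sample-by-sample fossil CO2 enhancements back to emissions at the surface.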

A new method for evaluating inventories

"This is a new, independent, and objective method for evaluating emission inventories that is based on what we actually observe in the atmosphere," said lead author Sourish Basu, who was a CIRES scientist working at NOAA during the study. He is now a scientist at NASA's Goddard Space Flight Center in Maryland.

While the link between fossil CO2 emissions and atmospheric 14C has been known for many decades, the construction of a national-scale emission estimate based on atmospheric 14C required the simultaneous development of precise measurement techniques and an emissions estimation framework, largely spearheaded over the past 15 years by NOAA scientist John Miller and University of Colorado scientist Scott Lehman.

"Carbon-14 allows us to pull back the veil and isolate CO2 emitted from fossil fuel combustion," said Lehman, one of the paper's authors. "It provides us with a tracer we can track to sources on the ground. "We can then add these up and compare to other emissions estimates at various time and space scales"

Bottom-up vs. top-down

Accurately calculating emissions of carbon dioxide from burning fossil fuels has challenged scientists for years. The two primary methods in current use - "bottom-up" inventories and "top-down" atmospheric studies used in regional campaigns - each have their strengths and weaknesses.

"Bottom-up" estimates, such as those used in the EPA Inventory of U.S. Greenhouse Gas Emissions and Sinks, are developed by counting CO2 emissions from various processes and fuel types, and then scaling up emissions based on records of fossil fuel use. In contrast, "top-down" estimates are based on measured changes in the concentrations of emitted gases in the atmosphere and wind patterns connecting the surface source regions with the measurement locations.

Bottom-up inventories can provide more detail than top-down methods, but their accuracy depends on the ability to track all emission processes and their intensities at all times - an intrinsically difficult task with uncertainties that are not readily quantified. Top-down studies are limited by the density of atmospheric measurements and by our knowledge of atmospheric circulation patterns, but they implicitly account for all sectors of the economy that emit CO2.
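A toy contrast of the two approaches may make the distinction concrete. The numbers, fuel categories and the one-number "transport model" below are made up for illustration; they are not drawn from the EPA inventory or from the study's inversion system:

```python
# "Bottom-up": sum activity data (fuel burned) times emission factors.
# Activity in petajoules; emission factors in kt CO2 per PJ, roughly the
# magnitude of standard IPCC defaults (illustrative values only).
activity_pj = {"coal": 100.0, "natural_gas": 80.0, "petroleum": 150.0}
emission_factor_kt_per_pj = {"coal": 94.6, "natural_gas": 56.1, "petroleum": 73.3}

bottom_up_kt = sum(
    activity_pj[fuel] * emission_factor_kt_per_pj[fuel] for fuel in activity_pj
)

# "Top-down": start from an observed atmospheric enhancement (e.g., the
# 14C-derived fossil CO2 sketched above) and divide by a hypothetical
# model-derived sensitivity relating surface emissions to that enhancement.
observed_ffco2_ppm = 5.7          # hypothetical observed enhancement
sensitivity_ppm_per_kt = 0.00023  # hypothetical transport-model sensitivity

top_down_kt = observed_ffco2_ppm / sensitivity_ppm_per_kt

print(f"bottom-up total: {bottom_up_kt:,.0f} kt CO2")
print(f"top-down total : {top_down_kt:,.0f} kt CO2")
# The two independent routes should converge if both the inventory and the
# atmospheric estimate are unbiased - which is what the study's comparison tests.
```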

The team constructed annual and monthly top-down fossil CO2 emission estimates for the U.S. for 2010, the first year with sufficient atmospheric samples to provide robust results. As one point of comparison, they checked their numbers against bottom-up estimates from a recent U.S. Environmental Protection Agency (EPA) report of 2010 emissions. The team's estimate of the U.S. annual total for 2010 was 5 percent higher than the EPA's central estimate. The new estimate is also significantly higher than those from other inventories commonly used in global and regional CO2 research. On the other hand, the atmospheric results appear to agree with a recent update of the Vulcan U.S. emissions data product developed by researchers at Northern Arizona University.

As these were the first estimates constructed using the new observing system, scientists cautioned that they should be considered provisional. Now they are busy applying the method to measurements from subsequent years, in order to determine if the differences they see are robust over time.

One of the benefits of this approach, according to the scientists, is that with an expanded 14C measurement network, there is the potential to calculate emissions from different regions - information that would augment EPA's national totals. States such as California, and collections of states such as the members of the eastern Regional Greenhouse Gas Initiative, have set their own greenhouse gas mitigation targets, and the ability to independently assess regional emissions using top-down methods would help evaluate those reduction efforts.

"Independent verification of annual and regional totals and multi-year trends using independent methods like this would promote confidence in the accuracy of emissions reporting, and could help guide future emissions mitigation strategies," said NOAA scientist John Miller.

Credit: 
NOAA Headquarters

Asteroids Bennu and Ryugu may have formed directly from a collision in space

Scientists with NASA's first asteroid sample return mission, OSIRIS-REx, are gaining a new understanding of asteroid Bennu's carbon-rich material and signature "spinning-top" shape. The team, led by the University of Arizona, has discovered that the asteroid's shape and hydration levels provide clues to the origins and histories of this and other small bodies.

Bennu, the target asteroid for the OSIRIS-REx mission, and Ryugu, the target of the Japan Aerospace Exploration Agency's Hayabusa2 asteroid sample return mission, are composed of fragments of larger bodies that shattered upon colliding with other objects. The small fragments reaccumulated to form an aggregate body. Bennu and Ryugu may actually have formed in this way from the same original shattered parent body. Now, scientists are looking to discover what processes led to specific characteristics of these asteroids, such as their shape and mineralogy.

Bennu and Ryugu are both classified as "spinning-top" asteroids, which means they have a pronounced equatorial ridge. Until now, scientists thought that this shape formed as the result of thermal forces, called the YORP effect. The YORP effect increases the speed of the asteroid's spin, and over millions of years, material near the poles could have migrated to accumulate on the equator, eventually forming a spinning-top shape - meaning that the shape would have formed relatively recently.
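A rough way to see why spin-up can reshape a rubble-pile asteroid is to compare the outward centrifugal acceleration at the equator with the body's own surface gravity: as the rotation period approaches a critical value, loose material migrates toward the equator and can eventually be shed. The sketch below uses approximate published values for Bennu's mass, mean radius and ~4.3-hour rotation period; it is an order-of-magnitude illustration, not a calculation from the new paper:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Approximate published values for Bennu (illustrative precision only).
mass_kg = 7.33e10        # ~7.3 x 10^10 kg
radius_m = 245.0         # mean radius, ~245 m
period_s = 4.3 * 3600.0  # rotation period, ~4.3 hours

# Gravitational and centrifugal accelerations at the equator.
g_surface = G * mass_kg / radius_m**2
omega = 2.0 * math.pi / period_s
a_centrifugal = omega**2 * radius_m

# Spin period at which centrifugal acceleration would equal surface gravity.
period_critical_h = 2.0 * math.pi * math.sqrt(radius_m**3 / (G * mass_kg)) / 3600.0

print(f"surface gravity       : {g_surface:.1e} m/s^2")
print(f"equatorial centrifugal: {a_centrifugal:.1e} m/s^2")
print(f"critical spin period  : {period_critical_h:.1f} hours")
# Centrifugal acceleration is already a sizable fraction of Bennu's tiny
# gravity, and the critical period (about 3 hours) is not far from the
# current ~4.3-hour spin - so sustained YORP spin-up could plausibly move
# loose material toward the equator over long timescales.
```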

However, in a new paper published in Nature Communications, scientists from the OSIRIS-REx and Hayabusa2 teams argue that the YORP effect may not explain the shape of either Bennu or Ryugu. Both asteroids have large impact craters on their equators, and the size of these craters suggests that they are among Bennu's oldest surface features. Since the craters cover the equatorial ridges, the spinning-top shapes must also have formed much earlier.

"Using computer simulations that model the impact that broke up Bennu's parent body, we show that these asteroids either formed directly as top-shapes, or achieved the shape early after their formation in the main asteroid belt," said Ronald Ballouz, co-lead author and OSIRIS-REx postdoctoral research associate at the UArizona. "The presence of the large equatorial craters on these asteroids, as seen in images returned by the spacecraft, rules out that the asteroids experienced a recent re-shaping due to the YORP effect. We would expect these craters to have disappeared with a recent YORP-induced re-shaping of the asteroid."

In addition to their shapes, Bennu and Ryugu also both contain water-bearing surface material in the form of clay minerals. Ryugu's surface material is less water-rich than Bennu's, which implies that Ryugu's material experienced more heating at some point.

Assuming Bennu and Ryugu formed simultaneously, the paper explores two possible explanations for the different hydration levels of the two bodies based on the team's computer simulations. One hypothesis suggests that when the parent asteroid was disrupted, Bennu formed from material closer to the original surface, while Ryugu contained more material from near the parent body's original center. Another possible explanation for the difference in hydration levels is that the fragments experienced different levels of heating during the parent asteroid's disruption. If this is the case, Ryugu's source material is likely from an area near the impact point, where temperatures were higher. Bennu's material would have come from a region that didn't undergo as much heating, and was likely farther from the point of impact. Analysis of the returned samples and further observational analysis of the asteroids' surfaces will provide a clearer idea of the possible shared history of the two asteroids.

"These simulations provide valuable new insights into how Bennu and Ryugu formed," said Dante Lauretta, OSIRIS-REx principal investigator and UArizona professor of planetary sciences. "Once we have the returned samples of these two asteroids in the lab, we may be able to further confirm these models, possibly revealing the true relationship between the two asteroids."

Scientists anticipate that the samples will also provide new insights into the origins, formation and evolution of other carbonaceous asteroids and meteorites. The Japan Aerospace Exploration Agency's Hayabusa2 mission is currently making its way back to Earth and is scheduled to deliver its samples of Ryugu late this year. The OSIRIS-REx mission will perform its first sample collection attempt at Bennu on Oct. 20 and will deliver its samples to Earth on Sept. 24, 2023.

Credit: 
University of Arizona