
Exhaled biomarkers can reveal lung disease

CAMBRIDGE, MA -- Using specialized nanoparticles, MIT engineers have developed a way to monitor pneumonia or other lung diseases by analyzing the breath exhaled by the patient.

In a study of mice, the researchers showed that they could use this system to monitor bacterial pneumonia, as well as a genetic disorder of the lungs called alpha-1 antitrypsin deficiency.

"We envision that this technology would allow you to inhale a sensor and then breathe out a volatile gas in about 10 minutes that reports on the status of your lungs and whether the medicines you are taking are working," says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science at MIT.

More safety testing would be needed before this approach could be used in humans, but in the mouse study, no signs of toxicity in the lungs were observed.

Bhatia, who is also a member of MIT's Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science, is the senior author of the paper, which appears today in Nature Nanotechnology. The first author of the paper is MIT senior postdoc Leslie Chan. Other authors are MIT graduate student Melodi Anahtar, MIT Lincoln Laboratory technical staff member Ta-Hsuan Ong, MIT technical assistant Kelsey Hern, and Lincoln Laboratory associate group leader Roderick Kunz.

Monitoring the breath

For several years, Bhatia's lab has been working on nanoparticle sensors that can be used as "synthetic biomarkers." These markers are peptides that are not naturally produced by the body but are released from nanoparticles when they encounter proteins called proteases.

The peptides coating the nanoparticles can be customized so that they are cleaved by different proteases that are linked to a variety of diseases. If a peptide is cleaved from the nanoparticle by proteases in the patient's body, it is later excreted in the urine, where it can be detected with a strip of paper similar to a pregnancy test. Bhatia has developed this type of urine test for pneumonia, ovarian cancer, lung cancer, and other diseases.

More recently, she turned her attention to developing biomarkers that could be detected in the breath rather than the urine. This would allow test results to be obtained more rapidly, and it also avoids the potential difficulty of having to acquire a urine sample from patients who might be dehydrated, Bhatia says.

She and her team realized that by chemically modifying the peptides attached to the synthetic nanoparticles, they could enable the particles to release gases called hydrofluoroamines that could be exhaled in the breath. The researchers attached volatile molecules to the end of the peptides in such a way that when proteases cleave the peptides, they are released into the air as a gas.

Working with Kunz and Ong at Lincoln Laboratory, Bhatia and her team devised a method for detecting the gas from the breath using mass spectrometry. The researchers then tested the sensors in mouse models of two diseases -- bacterial pneumonia caused by Pseudomonas aeruginosa, and alpha-1 antitrypsin deficiency. In both of these diseases, activated immune cells produce a protease called neutrophil elastase, which causes inflammation.

For both of these diseases, the researchers showed that they could detect neutrophil elastase activity within about 10 minutes. In these studies, the researchers used nanoparticles that were injected intratracheally, but they are also working on a version that could be inhaled with a device similar to the inhalers used to treat asthma.

Smart detection

The researchers also demonstrated that they could use their sensors to monitor the effectiveness of drug treatment for both pneumonia and alpha-1 antitrypsin deficiency. Bhatia's lab is now working on designing new devices for detecting the exhaled sensors that could make them easier to use, potentially even allowing patients to use them at home.

"Right now we're using mass spectrometry as a detector, but in the next generation we've been thinking about whether we can make a smart mirror, where you breathe on the mirror, or make something that would work like a car breathalyzer," Bhatia says.

Her lab is also working on sensors that could detect more than one type of protease at a time. Such sensors could be designed to reveal the presence of proteases associated with specific pathogens, including perhaps the SARS-CoV-2 virus.

Credit: 
Massachusetts Institute of Technology

How governments resist World Heritage 'in Danger' listings

A study published today found that national governments have repeatedly resisted the placement of 41 UNESCO World Heritage sites--including the Great Barrier Reef--on the World Heritage in Danger list. This resistance persists despite the sites being just as threatened as, or more threatened than, those already on the in Danger list.

The study was co-authored by a team of scientists from Australia, the UK and the US.

World Heritage sites represent natural and cultural heritage of significance to all humanity. Their protection sits within the jurisdiction of individual countries. An in Danger listing is intended to raise awareness of threats to these sites and encourage investment in mitigation measures, such as extra protection.

Lead author Professor Tiffany Morrison from the ARC Centre of Excellence for Coral Reef Studies at James Cook University (Coral CoE at JCU) says national governments responsible for these World Heritage sites use political strategies of rhetoric and resistance to avoid a World Heritage in Danger listing.

"Avoiding an in Danger listing happens through partial compliance and by exerting diplomatic pressure on countries that are members of the World Heritage Committee," Prof Morrison said.

She says World Heritage in Danger listings are increasingly politicised. However, until now, little was known about what that politicisation entailed, and what to do about it.

The study found that the net number of in Danger listings has plateaued since 2000. At the same time, low-visibility political strategies--such as industrial lobbying and political trade-offs associated with the listings--have intensified.

"Our results also challenge the assumption that poor governance only happens in less technologically advanced economies. Rich countries often have poor governance too," Prof Morrison said.

"We show that the influence of powerful industries in blocking environmental governance is prevalent in many regions and systems."

The Great Barrier Reef, under the custodianship of the Australian Government, is just one of the threatened sites that continues to evade the World Heritage in Danger list.

Professor Terry Hughes, also from Coral CoE at JCU, says there is no doubt that coral reefs are in danger from man-made climate change.

"The study makes no recommendation on which World Heritage sites should be formally recognised as in Danger but points out that virtually all sites are increasingly impacted by anthropogenic climate change," Prof Hughes said.

"The Great Barrier Reef was severely impacted by three coral bleaching events in the past five years, triggered by record-breaking temperatures," he said.

World Heritage in Danger listings are frowned upon by high-value natural resource industries such as mining, forestry and environmental tourism. Prof Morrison says the in Danger listings restrict the social license of fossil fuel industries to operate.

"Industry coalitions therefore often lobby governments, UNESCO and World Heritage Committee member countries," she said.

"They claim an in Danger listing diminishes their nation's international reputation and restricts foreign investment, national productivity, and local employment. Some also challenge the World Heritage system itself and undermine reports by scientists, non-governmental organisations and the media."

These lobbying efforts heighten a government's sense of political threat by linking the listings to national economic performance, as well as to the individual reputations of politicians and senior bureaucrats.

"At the same time, UNESCO is acutely aware of these dynamics and concerned about threats to its own reputation," Prof Morrison said.

"Politicians and bureaucrats often work to conceal these dynamics, resulting in poor governance and continued environmental degradation."

Prof Morrison says revealing and analysing these dynamics is a step closer to moderating them.

The study provides new evidence on how interactions between UNESCO and 102 national governments from 1972 to 2019 have shaped the environmental governance and outcomes of 238 World Heritage ecosystems. It also provides examples of how concerned stakeholders can experiment, and are experimenting, with countervailing strategies that harness these politics.

"Given the global investment in environmental governance over the past 50 years, it is essential to address the hidden threats to good governance and to safeguard all ecosystems," the study concludes.

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Battery breakthrough gives boost to electric flight and long-range electric cars

image: Researchers at Berkeley Lab and Carnegie Mellon University have designed new solid electrolytes that light the path to wider electrification of transportation.

Image: 
Courtesy of Jinsoo Kim

In the pursuit of a rechargeable battery that can power electric vehicles (EVs) for hundreds of miles on a single charge, scientists have endeavored to replace the graphite anodes currently used in EV batteries with lithium metal anodes.

But while lithium metal extends an EV's driving range by 30-50%, it also shortens the battery's useful life due to lithium dendrites, tiny treelike defects that form on the lithium anode over the course of many charge and discharge cycles. What's worse, dendrites short-circuit the cells in the battery if they make contact with the cathode.

For decades, researchers assumed that hard, solid electrolytes, such as those made from ceramics, would work best to prevent dendrites from working their way through the cell. But the problem with that approach, many found, is that it didn't stop dendrites from forming or "nucleating" in the first place, like tiny cracks in a car windshield that eventually spread.

Now, researchers at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), in collaboration with Carnegie Mellon University, have reported in the journal Nature Materials a new class of soft, solid electrolytes - made from both polymers and ceramics - that suppress dendrites in that early nucleation stage, before they can propagate and cause the battery to fail.

The technology is an example of Berkeley Lab's multidisciplinary collaborations across its user facilities to develop new ideas to assemble, characterize, and develop materials and devices for solid state batteries.

Solid-state energy storage technologies such as solid-state lithium metal batteries, which use a solid electrode and a solid electrolyte, can provide high energy density combined with excellent safety, but the technology must overcome diverse materials and processing challenges.

"Our dendrite-suppressing technology has exciting implications for the battery industry," said co-author Brett Helms, a staff scientist in Berkeley Lab's Molecular Foundry. "With it, battery manufacturers can produce safer lithium metal batteries with both high energy density and a long cycle life."

Helms added that lithium metal batteries manufactured with the new electrolyte could also be used to power electric aircraft.

A soft approach to dendrite suppression

Key to the design of these new soft, solid electrolytes was the use of soft polymers of intrinsic microporosity, or PIMs, whose pores were filled with nanosized ceramic particles. Because the electrolyte remains a flexible, soft, solid material, battery manufacturers will be able to produce rolls of lithium foils with the electrolyte as a laminate between the anode and the battery separator. These lithium-electrode sub-assemblies, or LESAs, are attractive drop-in replacements for the conventional graphite anode, allowing battery manufacturers to use their existing assembly lines, Helms said.

To demonstrate the dendrite-suppressing features of the new PIM composite electrolyte, the Helms team used X-rays at Berkeley Lab's Advanced Light Source to create 3D images of the interface between lithium metal and the electrolyte, and to visualize lithium plating and stripping for up to 16 hours at high current. Continuously smooth growth of lithium was observed when the new PIM composite electrolyte was present, while in its absence the interface showed telltale signs of the early stages of dendritic growth.

These and other data confirmed predictions from a new physical model for electrodeposition of lithium metal, which takes into account both chemical and mechanical characteristics of the solid electrolytes.

"In 2017, when the conventional wisdom was that you need a hard electrolyte, we proposed that a new dendrite suppression mechanism is possible with a soft solid electrolyte," said co-author Venkat Viswanathan, an associate professor of mechanical engineering and faculty fellow at Scott Institute for Energy Innovation at Carnegie Mellon University who led the theoretical studies for the work. "It is amazing to find a material realization of this approach with PIM composites."

An awardee under the Advanced Research Projects Agency-Energy's (ARPA-E) IONICS program, 24M Technologies, has integrated these materials into larger format batteries for both EVs and eVTOL (electric vertical takeoff and landing) aircraft.

"While there are unique power requirements for EVs and eVTOLs, the PIM composite solid electrolyte technology appears to be versatile and enabling at high power," said Helms.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Neanderthals of Western Mediterranean did not become extinct because of changes in climate

image: Researchers sampled this 50-cm long stalagmite in the Pozzo Cucù cave, in the Castellana Grotte area (Bari) and they carried out 27 high-precision datings and 2,700 analyses of carbon and oxygen stable isotopes.

Image: 
Photo: O. Lacarbonara

Homo neanderthalensis did not become extinct because of changes in climate. At least, this was not the fate of the several Neanderthal groups that lived in the western Mediterranean 42,000 years ago. A research group at the University of Bologna came to this conclusion after a detailed paleoclimatic reconstruction of the last ice age, based on the analysis of stalagmites sampled from caves in Apulia, Italy.

The researchers focused on the Murge karst plateau in Apulia, where Neanderthals and Homo sapiens coexisted for at least 3,000 years, from approximately 45,000 to 42,000 years ago. The study was published in Nature Ecology & Evolution. Data extracted from the stalagmites showed that the climate changes occurring during that time span were not particularly significant. "Our study shows that this area of Apulia appears as a 'climate niche' during the transition from Neanderthals to Homo sapiens", explains Andrea Columbu, researcher and first author of this study. "It doesn't seem possible that significant climate changes happened during that period, at least not impactful enough to cause the extinction of Neanderthals in Apulia and, by the same token, in similar areas of the Mediterranean".

THE CLIMATE CHANGE HYPOTHESIS

The hypothesis that a changing climate was a factor in the Neanderthals' extinction (which happened, in Europe, nearly 42,000 years ago) has found considerable support within the scientific community. According to this theory, sharp and rapid climate shifts during the last ice age were a decisive factor in the Neanderthals' extinction because of the increasingly cold and dry weather.

Confirmation of these sharp changes can be found in the analysis of ice cores from Greenland and of other paleoclimatic archives of continental Europe. However, when it comes to some Mediterranean areas that Neanderthals had inhabited since roughly 100,000 years ago, the data tell a different story. The Western Mediterranean is rich in prehistoric findings, and yet, until now, no one had carried out a paleoclimatic reconstruction of these Neanderthal-occupied areas.

THE IMPORTANCE OF STALAGMITES

Where can we find answers about the climatic past of the Western Mediterranean? The research group at the University of Bologna turned to the Murge plateau in Apulia. "Apulia is key to our understanding of anthropological movements: we know that both Neanderthals and Homo sapiens lived there approximately 45,000 years ago", says Andrea Columbu. "Very few other areas in the world saw both species co-existing in a relatively small space. This makes the Murge plateau the perfect place to study the climate and the bio-cultural grounds of the transition from Neanderthals to Sapiens".

How is it possible to reconstruct the climate of such a remote period? Stalagmites have the answer. These rock formations rise from the floor of karst caves as water drips from the ceiling. "Stalagmites are excellent paleoclimatic and paleoenvironmental archives", explains Jo De Waele, research coordinator and professor at the University of Bologna. "Since stalagmites form through rainwater dripping, they provide unquestionable evidence of the presence or absence of rain. Moreover, they are made of calcite, which contains carbon and oxygen isotopes. The latter provide precise information about the state of the soil and how much it rained while the stalagmites were forming. We can then cross-reference these pieces of information with radiometric dating, which provides an extremely precise reconstruction of the phases of stalagmite formation".

A (RELATIVELY) STABLE CLIMATE

The pace at which stalagmites formed is the first significant result of this study. The researchers found that the Apulian stalagmites kept a consistent drip rate through the last and previous ice ages, meaning that no abrupt change in climate occurred during the millennia under investigation. A drought would have been visible in the stalagmites.

Among all the stalagmites that were analysed, one was particularly relevant. Researchers sampled this 50-cm long stalagmite in the Pozzo Cucù cave, in the Castellana Grotte area (Bari) and they carried out 27 high-precision datings and 2,700 analyses of carbon and oxygen stable isotopes. According to dating, this stalagmite formed between 106,000 and 27,000 years ago. This stalagmite represents the longest timeline of the last ice age in the western Mediterranean and in Europe. Moreover, this stalagmite did not show any trace of abrupt changes in climate that might have caused Neanderthals' extinction.

"The analyses we carried out show little variation in rainfall between 50,000 and 27,000 years ago; the extent of this variation is not enough to cause alterations in the flora inhabiting the environment above the cave", says Jo De Waele. "Carbon isotopes show that the bio-productivity of the soil remained broadly consistent during this period, which includes the 3,000-year-long coexistence between Sapiens and Neanderthals. This means that significant changes in flora, and thus in climate, did not happen".

THE TECHNOLOGY HYPOTHESIS

The results suggest that the dramatic climate changes of the last ice age had a different impact on the Mediterranean area than on continental Europe and Greenland. This may rule out the hypothesis that climate change was responsible for the Neanderthals dying out.

How, then, do we explain their extinction after a few millennia of coexistence with Homo sapiens? Stefano Benazzi, a palaeontologist at the University of Bologna and one of the authors of the paper, provides an answer. "The results we obtained corroborate the hypothesis, put forward by many scholars, that the extinction of Neanderthals had to do with technology", says Benazzi. "According to this hypothesis, Homo sapiens hunted using a technology far more advanced than the Neanderthals', and this was a primary reason for Sapiens' supremacy over the Neanderthals, who eventually became extinct after 3,000 years of co-existence".

THE AUTHORS OF THE STUDY

The study was published in Nature Ecology & Evolution with the title "Speleothem record attests to stable environmental conditions during Neanderthal-modern human turnover in southern Italy". The University of Bologna was represented by Andrea Columbu, Veronica Chiarini and Jo De Waele from the Department of Biological, Geological and Environmental Sciences, and Stefano Benazzi from the Department of Cultural Heritage.

Other scholars also participated in the study: from the University of Innsbruck (Austria), where the isotopic analyses were carried out, and from Melbourne University (Australia) and Xi'an Jiaotong University (China), which carried out the radiometric dating.

Grotte di Castellana, the Apulian Speleology Association and, for the major part, local speleology groups provided funding for this study.

Credit: 
Università di Bologna

First in-depth insights into parturition in rhinos

video: The calf is delivered front feet and head first.

Image: 
Salzburg Zoo

When exactly is a rhino offspring born? How long does the birth actually take? Does parturition proceed normally? Answers to these and similar questions are difficult to come by for experts in zoological gardens, since baseline knowledge of the reproductive cycle of all rhinoceros species, especially its final stage, parturition, is scarce. Scientists from the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW), together with zoo veterinarians, closely monitored 19 pregnant white rhinos in six European zoos, recorded timelines for pre-birth development, milk production, hormone levels and gestation length, and documented the onset of parturition, the different stages of labour and the foetal position at birth. These data significantly improve the knowledge base for birth management and obstetrics in rhinos and will help to reduce the number of stillbirths and perinatal problems in zoological gardens. The results are published in the scientific journal Theriogenology.

The reproductive investment of slow reproducing species such as the rhinoceros is tremendous. The reproduction cycle in rhinos covers 4 to 6 weeks of oestrous cycle, about 16 months gestation and up to 6 months of lactation, summing up to a total of 1.5 to 2.5 years - one of the longest reproduction cycles in terrestrial mammals. Delivering a live and healthy calf is of key importance to ensure that the resources the female invested are not lost in the very last minute. "Birth and perinatal period represent the eye of the needle of the reproductive cycle to establish a new generation and to secure the long term survival of all rhino species," says rhino specialist Prof Robert Hermes from the Leibniz-IZW's Department of Reproduction Management. "Even though the birth of a live calf is particularly important in species with long reproduction cycles, there have been no data available that allow the prediction of physiological events during normal parturition in rhinoceros."

Therefore, the IZW scientists and their zoo colleagues thoroughly monitored the Achilles heel of the reproduction cycle, the parturition. For the prediction of parturition, they recorded timelines for pre-birth udder development, genital swelling, milk production, behavioural unrest, the decrease of the pregnancy controlling hormone progesterone, and the gestation length in 19 pregnant white rhinoceros. Onset of parturition, different stages of labour, foetal position at birth and further important milestones during the perinatal period were recorded to describe normal parturition.

"The parturition of a white rhinoceros calf after 16 months and 3 weeks of gestation took on average 7 hours and 38 minutes," says Hermes. "Most of the rhinoceros labour is very subtle from the outside and may even remain unnoticed by the inexperienced observer. Yet, the last part of rhinoceros birth, the expulsion of the calf, happens on average in less than 25 minutes." When rhinoceros calves were born head first (which was the case in 84 percent of recorded births), the final expulsion took less than 10 minutes. When the calf was born with its hind feet first, final expulsion took up to 45 minutes. "Short expulsion of the foetus, which weighs between 40 and 70 kilograms, is presumably important to avoid attracting unwanted attention from predators," Hermes concludes. It took about one hour until the calf stood up and about 3.5 hours until it suckled milk for the first time. Exceptions from this norm may indicate birth complications that need obstetrical intervention.

Despite the fact that one out of six births in greater one-horned rhinoceros and white rhinoceros in zoological gardens results in stillbirth or perinatal death, only a few individual reports on obstructed labour (dystocia) and stillbirth were available. Predicting the onset of birth and the progress of parturition is fundamental for the recognition of dystocia and perinatal problems. "The results and data from our investigation will help caretakers of rhinoceros populations in human care to better predict and manage parturition, thereby reducing the current high stillbirth and perinatal death rates," Hermes says. "This is a pivotal step in establishing a guideline for better birth management and obstetrics in rhinoceros species."

Credit: 
Forschungsverbund Berlin

A look inside a battery

image: In a measuring cell developed in-house, the Oldenburg team examined lithium electrodes using electrochemical scanning microscopy.

Image: 
Bastian Krueger

What happens inside a battery at the microscopic level during charging and discharging? A team of scientists led by Prof. Dr. Gunther Wittstock of the University of Oldenburg's Chemistry Department recently presented, in the scientific journal ChemElectroChem, a new technique for live observation of processes that until now have been largely unobservable.

According to the researchers, this new technique could accelerate the search for suitable materials for innovative batteries, with the ultimate objective of developing eco-friendlier energy storage devices that are more durable and have a higher power density. Scientists from the battery research centre MEET (Münster Electrochemical Energy Technology) at the University of Münster are also on the team.

Batteries convert chemical energy into electrical energy. During charging, charged particles cross from the positively charged electrode, the cathode, to the negative anode. In many modern batteries and rechargeable batteries, the reactive metal lithium is an important component of the anode. During operation, ultra-thin layers form on its surface which protect both the electrode and the battery fluid from decomposition. Until now, however, it has been almost impossible to directly observe the changes that take place in these complex layers - just a few millionths of a metre (micrometres) thick - during charging and discharging cycles.

The team has developed a new measuring principle to obtain local, high-resolution information about the surface of metallic lithium electrodes during battery operation. "Over time, chemical processes on the electrode's surface can have a major impact on the durability and performance of a battery," said Wittstock.

The researchers used scanning electrochemical microscopy (SECM) for their analysis. This procedure involves scanning a measuring probe across the surface of a sample to collect chemical information at intervals of just a few micrometres. Special software then translates the measured data into a coloured image. "By repeating this process several times we can track changes on the sample's surface like in a flipbook," Wittstock explained.

Bastian Krueger, a PhD student in Wittstock's Physical Chemistry research group, developed a special measuring cell in which experimental conditions - such as current intensity - essentially corresponded to those in a real battery. The chemist tested various cell assemblies, which he produced using 3D printers and CNC micro-milling machines. Luis Balboa, another PhD student in the same group, carried out computer simulations to optimise the cell geometry so that it recreates realistic experimental conditions. The team from Münster contributed reference samples.

With this setup, the scientists were able to observe the processes on the lithium anode with an unprecedented degree of accuracy. They observed how, at high charging speeds, lithium from the battery fluid was deposited on the anode. These locally reinforced deposits can develop into so-called dendrites - branching extensions of lithium on the electrode. Such formations limit the durability of batteries and in extreme cases can even cause their destruction.

"The breakthrough in our study consists in the fact that for the first time ever we were able to carry out such processes at realistic current densities directly within the measuring apparatus and visually monitor their effects," Wittstock stressed. The technique could also be used on other types of electrodes, he added, explaining that the long-term objective was to study how different pre-treatment steps influence the formation of a protective boundary layer on electrodes.

Credit: 
University of Oldenburg

Using techniques from astrophysics, researchers can forecast drought up to ten weeks ahead

Researchers at the University of Sussex have developed a system which can accurately predict a period of drought in East Africa up to ten weeks ahead.

Satellite imagery is already used in Kenya to monitor the state of pastures and determine the health of the vegetation using a metric known as the Vegetation Condition Index. These data are conveyed to decision makers in arid and semi-arid regions of Kenya through drought early warning systems.

However, these systems, operated by the National Drought Management Authority (NDMA), only allow organisations and communities to intervene once the impacts of a drought have already occurred. By that point, such extreme weather has often already had a devastating effect on the livelihoods of local people.

Instead, a team of researchers from the University of Sussex and the NDMA have developed a new system called Astrocast.

Part-funded by the Science and Technology Facilities Council, the project allows humanitarian agencies and drought risk managers to be proactive when it comes to dealing with the impacts of extreme weather by forecasting changes before they occur.

In a research paper published in Remote Sensing of Environment, they explain how an interdisciplinary team of data scientists (astronomers and mathematicians) and geographers used techniques from astronomy, processing data directly from space telescopes and then applying advanced statistical methods to forecast extreme weather.

Dr Pedram Rowhani, Senior Lecturer in Geography and co-founder of Astrocast, said: "In many cases, the first signs of a drought can be seen on natural vegetation, which can be monitored from space.

"Our approach measures past and present Vegetation Condition Index (VCI), an indicator that is based on satellite imagery and often used to identify drought conditions, to understand trends and the general behaviour of the VCI over time, to predict what may happen in the future."
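The VCI mentioned here has a standard definition: the current NDVI value for a location, rescaled between the historical minimum and maximum NDVI for that same location and time of year. A minimal sketch (the input values below are made up for illustration, not taken from the study):

```python
def vegetation_condition_index(ndvi, ndvi_min, ndvi_max):
    """Standard VCI: current NDVI scaled against the historical
    minimum and maximum NDVI for the same pixel and time of year,
    giving a 0-100 score (low values indicate drought stress)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

# Hypothetical pixel: current NDVI 0.45, historical range 0.2-0.7
print(vegetation_condition_index(0.45, 0.2, 0.7))  # 50.0
```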

Joint first author on the paper and Lecturer in Machine Learning and Data Science, Dr Adam Barrett said: "After conversations in corridors with Dr Rowhani about AstroCast, I saw an opportunity to apply methodology I'd been developing in theoretical neuroscience to a project with potential for real humanitarian impact.

"With Sussex actively encouraging interdisciplinary working, we decided to combine skillsets. It's been eye-opening to see how our techniques can be applied to a real-world problem and improve lives."

There has been a growing demand within the humanitarian sector to develop systems that focus on advance warnings and encourage a more proactive approach to disasters.

The Kenyan NDMA already provides monthly drought bulletins for every county, which state detected changes in the vegetation and are used to make decisions about whether to declare a drought alert.

But with Astrocast forecasts, these bulletins could also include a prediction of what the VCI is likely to be in a few weeks' time, giving farmers and pastoralists valuable time to prepare.

Seb Oliver, Professor of Astrophysics and co-founder of Astrocast, said: "A large part of my astrophysics research requires processing data from astronomical space telescopes, like the Herschel Space Observatory. Earth observation satellites are not that different.

"We often use cutting-edge statistics and machine-learning approaches to interpret our astronomical data. In this case we've used machine-learning approaches, and we've been able to forecast the state of the vegetation up to ten weeks ahead with very good confidence.

"We imagine that our reports might be used to define a new warning flag allowing county leaders to make decisions earlier and so prepare better. But this information could also be used by humanitarian organisations like the Kenya Red Cross as well as other organisations like the Kenya Met Department.

"Earlier preparation is well known to be much more effective than reactive response."

Credit: 
University of Sussex

Data assimilation significantly improves forecasts of aerosol and gaseous pollutants across China

image: Evaluation indicators (a) CORR, (b) RMSE, and (c) MFE for Control (blue) and Assimilation experiments (red) as a function of forecast range regarding pollutants PM2.5, PM10, SO2, NO2, CO, and O3, respectively, from top-to-bottom.

Image: 
©Science China Press

Aerosols are important components of the atmosphere that degrade atmospheric visibility and harm human health; they also affect the climate through direct radiative forcing and through their interactions with clouds and precipitation. In recent years, regional aerosol pollution incidents have occurred frequently in China, so enhancing early-warning capability for air pollution is of great significance and has long been a concern of researchers. Air quality numerical models have become an indispensable tool, widely employed in air quality analysis and prediction to forecast the spatial-temporal evolution of atmospheric pollutants. Data assimilation (DA) technology combines observational information with the model background field to produce a theoretically optimal analysis field, improving prediction accuracy by optimizing the model's initial conditions. To date, however, most assimilation studies of pollutants have focused on the separate assimilation of gaseous pollutants or of total PM2.5 and PM10 mass; few have considered the chemical mechanisms of multiple aerosol components across multiple particle size sections.

Recently, Wang Daichun (a master's student), Dr. You Wei (corresponding author), and Associate Professor Zang Zengliang from the Institute of Meteorology and Oceanography, National University of Defense Technology, China, used a three-dimensional variational assimilation algorithm to establish a chemical DA system whose control variables include aerosol components such as elemental carbon, organic carbon, sulfate, nitrate, chloride, sodium salt, ammonium salt, and other inorganics, along with particle PM2.5 and PM10 mass and gaseous pollutants such as SO2, NO2, CO, and O3 mass concentrations. The system was then evaluated by simultaneously assimilating hourly mass concentration observations of PM2.5, PM10, SO2, NO2, CO, and O3 released by the China National Environmental Monitoring Centre. The results show that this assimilation system significantly improves analyses and forecasts of both particulate matter and gaseous pollutant mass concentrations. The study was published in Science China Earth Sciences under the title "A three-dimensional variational data assimilation system for a size-resolved aerosol model: Implementation and application for particulate matter and gaseous pollutant forecasts across China".
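The core idea of variational assimilation, blending a model background with observations according to their error statistics, can be sketched in scalar form. The numbers and the single-variable setup below are assumptions for illustration only; the operational system handles many control variables with full spatial error covariance matrices:

```python
# Toy scalar 3D-Var: the analysis x_a minimizes
#   J(x) = (x - xb)^2 / B + (y - H*x)^2 / R
xb, B = 80.0, 25.0   # background PM2.5 (ug/m3) and its error variance (assumed)
y, R = 60.0, 9.0     # observation and its error variance (assumed)
H = 1.0              # observation operator (direct measurement)

# Closed-form minimizer: x_a = xb + K*(y - H*xb), gain K = B*H / (H*B*H + R)
K = B * H / (H * B * H + R)
xa = xb + K * (y - H * xb)
print(round(xa, 2))  # analysis pulled from the background toward the observation
```

Because the observation error variance here is smaller than the background's, the analysis lands closer to the observation; this weighted compromise is then used as the improved initial field for the forecast model.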

The study revealed variable benefits from assimilation for different pollutants, as shown in Figure 1. DA significantly improves PM2.5, PM10, and CO forecasts, with positive effects lasting more than 48 h. The positive effects of DA on SO2 and O3 forecasts last up to 8 h, while the improvement for NO2 forecasts remains relatively poor. Analysis suggests that the benefit of DA for a given pollutant is related to that pollutant's life cycle: pollutants with long lifespans retain the benefit over a longer forecast range than short-lived pollutants such as NO2 and O3.

The study also showed that the influence of assimilation varies by region, as presented in Figure 2. The positive effects of DA on PM2.5 and PM10 forecasts last more than 48 h across most regions of China. DA significantly improves SO2 forecasts within 48 h over north China, and long-lasting CO assimilation benefits (48 h) are found in most regions apart from north and east China and the Sichuan Basin. DA improves O3 forecasts within 48 h across China, except in the southwest and northwest, with the benefits most evident in southern China; spatially, the NO2 DA benefits remain relatively poor.

These results enrich the study of aerosols and gaseous pollutants. They not only provide a reference for the monitoring, prediction, and control of air pollutants, but also carry scientific significance for managing pollution episodes and for the management and prediction of the atmospheric environment in China.

Credit: 
Science China Press

Geophysics: A first for a unique instrument

Geophysicists at Ludwig-Maximilians Universitaet (LMU) in Munich have measured Earth's spin and axis orientation with a novel ring laser, and provided the most precise determination of these parameters yet achieved by a ground-based instrument without the need for stellar range finding.

Buried amid the pastures and cropland near the town of Fürstenfeldbruck to the west of Munich is a scientific instrument that is 'one of a kind'. It's a ring laser named ROMY, which is essentially a rotation sensor. On its completion three years ago, the prestigious research journal Science hailed ROMY as "the most sophisticated instrument of its type in the world". The acronym refers to one of its uses - detecting rotational motions in seismology. But in addition to quantifying ground rotation caused by earthquakes, ROMY can sense minute alterations in the Earth's rotational velocity, as well as changes in the orientation of its axis. These fluctuations are caused not only by seismic events but also by ocean currents and shifts in the distribution of ice masses, among other factors. Now a group of geophysicists led by Professors Heiner Igel (LMU) and Ulrich Schreiber (Technical University of Munich) reports the results of the first continuous high-precision measurements of the Earth's rotational parameters in the journal Physical Review Letters. The authors refer to the data as a 'proof of concept' - and the results demonstrate that ROMY has passed its first real test with flying colors. "It's the most precise instrument for the measurement of ground rotations in the world," says Igel, Professor of Seismology at LMU. Accurate quantification of rotational motions is also important for determining the contribution of seismic noise to the data acquired by the gravitational wave detectors currently in operation (LIGO and Virgo). So ROMY's applications extend well beyond observational seismology on our planet.

With the aid of a grant from the European Research Council (ERC), Igel and Schreiber developed the concept for the ROMY ring laser. The construction of the observatory, which was largely financed by LMU Munich, was an extremely challenging undertaking. Even the concrete structure in which ROMY is housed had to be erected with millimeter precision. ROMY is made up of a set of four triangular ring lasers that form the faces of an inverted tetrahedron, each side of which is 12 m long. Two laser beams circulate in opposite directions around each face of the instrument. The beam traveling in the direction of rotation takes longer than its counterpart to complete each lap. This in turn causes its wavelength to be stretched, while that of the other beam is compressed. The difference in wavelength depends on the precise orientation of each face with respect to the direction and orientation of Earth's rotation. Data from three of the four rings suffice to determine all the parameters of planetary rotation.
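The wavelength difference described above is the Sagnac effect: the two counter-propagating beams interfere to produce a beat note whose frequency is proportional to the rotation rate. A rough order-of-magnitude estimate for one ring is easy to make; the 12 m side length comes from the article, but the HeNe laser wavelength and the face orientation (normal parallel to Earth's rotation axis) are assumptions for illustration, so the real beat frequency of each tilted ROMY face is somewhat lower:

```python
import math

# Order-of-magnitude Sagnac beat frequency for one ROMY ring
side = 12.0                        # m, side of one triangular ring (from article)
A = math.sqrt(3) / 4 * side ** 2   # enclosed area, m^2
P = 3 * side                       # beam path perimeter, m
lam = 632.8e-9                     # m, HeNe laser wavelength (assumed)
omega = 7.2921e-5                  # rad/s, Earth's rotation rate

# Sagnac formula: df = 4*A*(n . Omega) / (lambda * P); here n . Omega = Omega
df_max = 4 * A * omega / (lam * P)
print(f"{df_max:.0f} Hz")  # a few hundred hertz
```

Detecting earthquake-induced or polar-motion changes means resolving tiny shifts of this beat frequency, which is why the instrument's mechanical and thermal stability matters so much.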

The fact that the ring laser has more than met its design criteria is naturally a relief - and a source of great satisfaction - for Igel. "We are able to measure not only the orientation of the Earth's axis of rotation, but also its rate of spin," he explains. The method so far employed to measure these parameters with high accuracy relies on very long baseline interferometry (VLBI). This requires a worldwide network of radio telescopes, which use differences in the arrival times of radio emissions from distant quasars to determine their own positions. Owing to the involvement of multiple observatories, the VLBI data can only be analyzed after several hours. ROMY has considerable advantages over this approach: it outputs data virtually in real time, which allows it to monitor short-term changes in rotation parameters. The new study is based on continuous observations over a period of more than six weeks, during which ROMY detected changes in the mean orientation of the Earth's axis of less than 1 arc second.

In the future, with further improvements, ROMY's high-precision measurements will complement the data obtained by the VLBI strategy and serve as standard values for geodesy and seismology. The measurements are also of potential scientific interest in fields such as the physics of earthquakes and seismic tomography, says Igel. "In the context of seismology, we have already obtained very valuable data on earthquakes and on seismic waves caused by ocean currents," he adds.

Credit: 
Ludwig-Maximilians-Universität München

Cheese making relies on milk proteins to form structure

Philadelphia, July 20, 2020 - Cheese production relies on coagulation of milk proteins into a gel matrix after addition of rennet. Milk that does not coagulate (NC) under optimal conditions affects the manufacturing process, requiring a longer processing time and lowering the cheese yield, which, in turn, has an economic impact. In an article appearing in the Journal of Dairy Science, scientists from Lund University studied the protein composition of milk samples with different coagulation properties to learn more about why only some milk coagulates with rennet.

The authors of this study analyzed protein composition in NC and coagulating milk samples from 616 Swedish Red cows. They reported that the relative concentrations, genetic variants, and posttranslational modifications of the proteins all contribute to whether rennet could induce coagulation in each sample. The NC milk had higher relative concentrations of α-lactalbumin and β-casein and lower relative concentrations of β-lactoglobulin and κ-casein when compared with coagulating milk.

"The non-coagulating characteristics of milk relate to protein composition and genetic variants of the milk proteins," said first author Kajsa Nilsson, PhD, Lund University, Lund, Sweden. "Roughly 18 percent of Swedish Red cows produce noncoagulating milk, which is a high prevalence. Cheese-producing dairies would benefit from eliminating the NC milk from their processes, and breeding could reduce or remove this milk trait," said Nilsson.

These results can be used to further understand the mechanisms behind NC milk, develop breeding strategies to reduce this milk trait, and limit use of NC milk for cheese processing.

Credit: 
Elsevier

Photos may improve understanding of volcanic processes

video: A team of Penn State researchers studied Telica Volcano, a persistently active volcano in western Nicaragua, to both observe and quantify small-scale intra-crater change associated with background and eruptive activity

Image: 
Penn State

The shape of volcanoes and their craters provide critical information on their formation and eruptive history. Techniques applied to photographs -- photogrammetry -- show promise and utility in correlating shape change to volcanic background and eruption activity.

Changes in volcano shape -- morphology -- that occur with major eruptions are quantifiable, but background volcanic activity, manifesting as small-volume explosions and crater wall collapse, can also cause morphological changes that are not well quantified.

A team of Penn State researchers studied Telica Volcano, a persistently active volcano in western Nicaragua, to both observe and quantify small-scale intra-crater change associated with background and eruptive activity. Geologists consider Telica 'persistently' active because of its high levels of seismicity and volcanic degassing, and because it erupts at intervals of less than 10 years.

The team used direct observations of the crater, photographic observations from 1994 to 2017 and photogrammetric techniques on photos collected between 2011 and 2017 to analyze changes at Telica in the context of summit crater formation and eruptive processes. They used structure-from-motion (SfM), a photogrammetric technique, to construct 3D models from 2D images. They also used point cloud differencing, a method used to measure change between photo sampling periods, to compare the 3D models, providing a quantitative measure of change in crater morphology. They reported their results in Geochemistry, Geophysics, Geosystems.
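Point cloud differencing can be illustrated with a minimal nearest-neighbour sketch. This is a simplification (production workflows typically co-register the clouds first and use dedicated methods such as M3C2), and all data below are synthetic, standing in for SfM-derived crater surfaces:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic "crater" point clouds: the "after" cloud is the "before"
# cloud with 0.5 m of talus infill added over part of the floor.
before = rng.uniform(0, 10, (500, 3))
after = before.copy()
floor = after[:, 0] < 5          # region that received infill
after[floor, 2] += 0.5

# Cloud-to-cloud differencing: for each "after" point, the distance to the
# nearest "before" point (brute force; real pipelines use KD-trees)
d = np.linalg.norm(after[:, None, :] - before[None, :, :], axis=2)
nearest = d.min(axis=1)
print(round(float(nearest[floor].mean()), 2),   # changed region: nonzero
      round(float(nearest[~floor].mean()), 2))  # unchanged region: ~0
```

Mapping these per-point distances over the crater surface is what lets the team localize and quantify where material was added or removed between photo campaigns.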

"Photos of the crater were taken as part of a multi-disciplinary study to investigate Telica's persistent activity," said Cassie Hanagan, lead author on the study. "Images were collected from our collaborators to make observations of the crater's features such as the location and number of fumaroles or regions of volcanic degassing in the crater. For time periods that had enough photos, SfM was used to create 3D models of the crater. We could then compare the 3D models between time periods to quantify change."

Using the SfM-derived 3D models and point cloud differencing allowed the team to quantify how the crater changed through time.

"We could see the changes by visually looking at the photos, but by employing SfM, we could quantify how much change had occurred at Telica," said Peter La Femina, associate professor of geosciences in Penn State's Department of Geosciences. "This is one of the first studies to look at changes in crater morphology associated with background and eruptive activity over a relatively long time span, almost a 10-year time period."

Telica's morphological changes were then compared to the timing of eruptive activity to investigate the processes leading to crater formation and eruption.

Volcanoes erupt when pressure builds beyond a breaking point. At Telica, two mechanisms for triggering eruptions have been hypothesized. These are widespread mineralization within the underground hydrothermal system that seals the system and surficial blocking of the vent by landslides and rock fall from the crater walls. Both mechanisms could lead to increases in pressure and then eruption, according to the researchers.

"One question was whether or not covering the vents on the crater floor could cause pressure build up, and if that would cause an explosive release of this pressure if the vent were sufficiently sealed," said Hanagan.

Comparing the point cloud differencing results and the photographic observations indicated that vent infill by mass wasting from the crater walls was not likely a primary mechanism for sealing of the volcanic system prior to eruption.

"We found that material from the crater walls does fall on the crater floor, filling the eruptive vent," said La Femina. "But at the same time, we still see active fumaroles, which are vents in the crater walls where high temperature gases and steam are emitted. The fumaroles remained active even though the talus from the crater walls covered the vents. This suggests that at least the deeper magma-hydrothermal system is not directly sealed by landslides."

The researchers further note that crater wall material collapse is spatially correlated to where degassing is concentrated, and that small eruptions blow out this fallen material from the crater floor. They suggest these changes sustain a crater shape similar to other summit craters that formed by collapse into an evacuated magma chamber.

"What we found is that during the explosions, Telica is throwing out a lot of the material that came from the crater walls," said La Femina. "In the absence of magmatic eruptions, the crater is forming through this background process of crater wall collapse, and the regions of fumarole activity collapse preferentially."

Credit: 
Penn State

Plato was right. Earth is made, on average, of cubes

image: The research team measured and analyzed fragmentation patterns of rocks they collected as well as from previously assembled datasets.

Image: 
Courtesy of Gábor Domokos and Douglas Jerolmack

Plato, the Greek philosopher who lived in the 5th century B.C.E., believed that the universe was made of five types of matter: earth, air, fire, water, and cosmos. Each was described with a particular geometry, a platonic shape. For earth, that shape was the cube.

Science has steadily moved beyond Plato's conjectures, looking instead to the atom as the building block of the universe. Yet Plato seems to have been onto something, researchers have found.

In a new paper in the Proceedings of the National Academy of Sciences, a team from the University of Pennsylvania, Budapest University of Technology and Economics, and University of Debrecen uses math, geology, and physics to demonstrate that the average shape of rocks on Earth is a cube.

"Plato is widely recognized as the first person to develop the concept of an atom, the idea that matter is composed of some indivisible component at the smallest scale," says Douglas Jerolmack, a geophysicist in Penn's School of Arts & Sciences' Department of Earth and Environmental Science and the School of Engineering and Applied Science's Department of Mechanical Engineering and Applied Mechanics. "But that understanding was only conceptual; nothing about our modern understanding of atoms derives from what Plato told us.

"The interesting thing here is that what we find with rock, or earth, is that there is more than a conceptual lineage back to Plato. It turns out that Plato's conception about the element earth being made up of cubes is, literally, the statistical average model for real earth. And that is just mind-blowing."

The group's finding began with geometric models developed by mathematician Gábor Domokos of the Budapest University of Technology and Economics, whose work predicted that natural rocks would fragment into cubic shapes.

"This paper is the result of three years of serious thinking and work, but it comes back to one core idea," says Domokos. "If you take a three-dimensional polyhedral shape, slice it randomly into two fragments and then slice these fragments again and again, you get a vast number of different polyhedral shapes. But in an average sense, the resulting shape of the fragments is a cube."
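Domokos's slicing argument has a two-dimensional analogue that is easy to simulate: each generic straight cut of a convex polygon adds one fragment and four new vertices, so the average fragment converges to four vertices, a quadrangle, the 2D counterpart of the cube. The sketch below is my own illustration of that counting argument, not the paper's code:

```python
import math
import random

def clip(poly, a, b, c):
    """Keep the part of a convex polygon where a*x + b*y + c >= 0."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp >= 0:
            out.append(p)
        if fp * fq < 0:  # the edge crosses the cutting line
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

random.seed(0)
pieces = [[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]]  # unit square
for _ in range(2000):
    poly = pieces.pop(random.randrange(len(pieces)))
    # slice the chosen piece with a random line through its centroid
    cx = sum(v[0] for v in poly) / len(poly)
    cy = sum(v[1] for v in poly) / len(poly)
    th = random.uniform(0, math.pi)
    a, b = math.sin(th), -math.cos(th)
    c = -(a * cx + b * cy)
    pieces.append(clip(poly, a, b, c))
    pieces.append(clip(poly, -a, -b, -c))

avg = sum(len(p) for p in pieces) / len(pieces)
print(round(avg, 2))  # each cut adds 4 vertices and 1 piece, so avg -> 4
```

The full three-dimensional argument is analogous but harder: random planar slices of polyhedra drive the average fragment toward the combinatorics of a cube.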

Domokos pulled two Hungarian theoretical physicists into the loop: Ferenc Kun, an expert on fragmentation, and János Török, an expert on statistical and computational models. After discussing the potential of the discovery, Jerolmack says, the Hungarian researchers took their finding to Jerolmack to work together on the geophysical questions; in other words, "How does nature let this happen?"

"When we took this to Doug, he said, 'This is either a mistake, or this is big,'" Domokos recalls. "We worked backward to understand the physics that results in these shapes."

Fundamentally, the question they answered is what shapes are created when rocks break into pieces. Remarkably, they found that the core mathematical conjecture unites geological processes not only on Earth but around the solar system as well.

"Fragmentation is this ubiquitous process that is grinding down planetary materials," Jerolmack says. "The solar system is littered with ice and rocks that are ceaselessly smashing apart. This work gives us a signature of that process that we've never seen before."

Part of this understanding is that the components that break out of a formerly solid object must fit together without any gaps, like the pieces of a dropped dish on the verge of breaking. As it turns out, the only one of the so-called platonic forms--polyhedra with sides of equal length--whose copies fit together without gaps is the cube.

"One thing we've speculated in our group is that, quite possibly, Plato looked at a rock outcrop and, after processing or analyzing the image subconsciously in his mind, conjectured that the average shape is something like a cube," Jerolmack says.

"Plato was very sensitive to geometry," Domokos adds. According to lore, the phrase "Let no one ignorant of geometry enter" was engraved at the door to Plato's Academy. "His intuitions, backed by his broad thinking about science, may have led him to this idea about cubes," says Domokos.

To test whether their mathematical models held true in nature, the team measured a wide variety of rocks, hundreds that they collected and thousands more from previously collected datasets. No matter whether the rocks had naturally weathered from a large outcropping or been dynamited out by humans, the team found a good fit to the cubic average.

However, special rock formations exist that appear to break the cubic "rule." The Giant's Causeway in Northern Ireland, with its soaring vertical columns, is one example, formed by the unusual process of cooling basalt. These formations, though rare, are still encompassed by the team's mathematical conception of fragmentation; they are just explained by out-of-the-ordinary processes at work.

"The world is a messy place," says Jerolmack. "Nine times out of 10, if a rock gets pulled apart or squeezed or sheared--and usually these forces are happening together--you end up with fragments which are, on average, cubic shapes. It's only if you have a very special stress condition that you get something else. The earth just doesn't do this often."

The researchers also explored fragmentation in two dimensions, or on thin surfaces that function as two-dimensional shapes, with a depth that is significantly smaller than the width and length. There, the fracture patterns are different, though the central concept of splitting polygons and arriving at predictable average shapes still holds.

"It turns out in two dimensions you're about equally likely to get either a rectangle or a hexagon in nature," Jerolmack says. "They're not true hexagons, but they're the statistical equivalent in a geometric sense. You can think of it like paint cracking; a force is acting to pull the paint apart equally from different sides, creating a hexagonal shape when it cracks."

In nature, examples of these two-dimensional fracture patterns can be found in ice sheets, drying mud, or even the earth's crust, the depth of which is far outstripped by its lateral extent, allowing it to function as a de facto two-dimensional material. It was previously known that the earth's crust fractured in this way, but the group's observations support the idea that the fragmentation pattern results from plate tectonics.

Identifying these patterns in rock may help in predicting phenomena such as rock fall hazards or the likelihood and location of fluid flows, such as oil or water, in rocks.

For the researchers, finding what appears to be a fundamental rule of nature emerging from millennia-old insights has been an intense but satisfying experience.

"There are a lot of sand grains, pebbles, and asteroids out there, and all of them evolve by chipping in a universal manner," says Domokos, who is also co-inventor of the Gömböc, the first known convex shape with the minimal number--just two--of static balance points. Chipping by collisions gradually eliminates balance points, but shapes stop short of becoming a Gömböc; the latter appears as an unattainable end point of this natural process.

The current result shows that the starting point may be a similarly iconic geometric shape: the cube with its 26 balance points. "The fact that pure geometry provides these brackets for a ubiquitous natural process gives me happiness," he says.

"When you pick up a rock in nature, it's not a perfect cube, but each one is a kind of statistical shadow of a cube," adds Jerolmack. "It calls to mind Plato's allegory of the cave. He posited an idealized form that was essential for understanding the universe, but all we see are distorted shadows of that perfect form."

Credit: 
University of Pennsylvania

USTC finds ultimate precision limit of multi-parameter quantum magnetometry

Quantum magnetometry, one of the most important applications of quantum metrology, aims to measure magnetic fields with the highest possible precision. Although estimation of a single component of a magnetic field has been well studied over many decades, the highest precision achievable with entangled probe states for the estimation of all three components of a magnetic field has remained uncertain.

In particular, the open questions include how to balance the precision tradeoff among the different parameters, what the ultimate precision is, whether this precision limit can be achieved, and how to achieve it.

A team led by Prof. GUO Guangcan, Prof. LI Chuanfeng and Prof. XIANG Guoyong from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences, together with Prof. YUAN Haidong from the Chinese University of Hong Kong, obtained the ultimate precision for the estimation of all three components of a magnetic field with entangled probe states under the parallel scheme. The study was published online in Physical Review Letters.

The researchers found that the tradeoff arises from the incompatibility of the optimal probe states for the different parameters, and they presented an approach to quantify this tradeoff. Using this approach, they obtained the minimal tradeoff and the ultimate precision for multi-parameter quantum magnetometry under the parallel scheme.

Furthermore, they demonstrated that this ultimate precision limit can be achieved and they constructed the optimal probe states and measurements to achieve it.

The ultimate precision of quantum magnetometry under the parallel scheme is of fundamental interest and importance in quantum metrology. It can also be used directly as a benchmark for the performance of quantum gyroscopes and quantum reference frame alignment.

This approach connects the tradeoff directly to the constraints on the probe states and the generators, which can lead to many useful bounds in various scenarios of multi-parameter quantum estimation.

Credit: 
University of Science and Technology of China

Music on the brain

image: By auditory statistical learning, people become able to comprehend language and music.

Image: 
© 2020 Flickr/Wally Gobetz by CC-2.0

A new study looks at differences between the brains of Japanese classical musicians, Western classical musicians and nonmusicians. Researchers investigated specific kinds of neural behavior in participants as they were exposed to unfamiliar rhythms and nonrhythmic patterns. Trained musicians showed greater powers of rhythmic prediction compared to nonmusicians, with more subtle differences between those trained in Japanese or Western classical music. This research has implications for studies of cultural impact on learning and brain development.

"Music is ubiquitous and indispensable in our daily lives. Music can reward us, comfort us and satisfy us emotionally," said Project Assistant Professor Tatsuya Daikoku from the International Research Center for Neurointelligence at the University of Tokyo. "So it's no surprise the effect of music on the brain is well-researched. However, many studies focus on Western classical music, pop, jazz, etc., whereas ours is the first study that investigates neural mechanisms in practitioners of Japanese classical music, known as gagaku."

Many Japanese performance arts, such as in Noh or Kabuki theater, include music that does not necessarily follow a regular beat pattern as Western classical music typically does. That is, Japanese classical music sometimes expands or contracts beats without mathematical regularity. This time interval is often referred to as ma, which is an important notion throughout Japanese culture.

Daikoku and his research partner, Assistant Professor Masato Yumoto from the Graduate School of Medicine, explored how different groups of trained musicians and nonmusicians responded to different rhythm patterns. The idea was to see how musical training might influence statistical learning, the way our brains interpret and anticipate sequential information: in this case, rhythms.
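Statistical learning of this kind is often modeled as the brain estimating transition probabilities between successive events. The toy sketch below illustrates that idea only; it is not the study's MEG analysis, and the short/long interval labels ("S"/"L") form an invented rhythm stream:

```python
from collections import Counter, defaultdict

# A listener internalizing the statistics of a rhythm can be modeled as
# counting which event follows which. "S"/"L" = short/long inter-beat
# intervals in a hypothetical sequence.
sequence = "SLSSLSLSSLSLSSLS"

counts = defaultdict(Counter)
for cur, nxt in zip(sequence, sequence[1:]):
    counts[cur][nxt] += 1

# Conditional probabilities P(next | current); a brain that has learned
# these statistics can anticipate the next event better than chance.
probs = {cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
         for cur, c in counts.items()}
print(probs)
```

In this stream a long interval is always followed by a short one, so a listener who has learned the statistics can predict that transition perfectly; weaker regularities (such as what follows a short interval) are predicted only probabilistically.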

The researchers recorded participants' brain activity directly using a technique called magnetoencephalography, which looks at magnetic signals in the brain. From the data, Daikoku and Yumoto were able to ascertain that statistical learning of the rhythms took place in the left hemisphere of participants' brains. And importantly, there was a greater level of activity in those with musical training, be it in Japanese or Western classical music.

"We expected that musicians would exhibit strong statistical learning of unfamiliar rhythm sequences compared to nonmusicians. This has been observed in previous studies which looked at responses to unfamiliar melodies. So this in itself was not such a surprise," said Daikoku. "What is really interesting, however, is that we were able to pick out differences in the neural responses between those trained in Japanese or Western classical music."

These differences between Japanese and Western classical musicians are far more subtle, becoming apparent only in the higher-order neural processing of rhythmic complexity. Although neither group performed better or worse than the other, the finding implies that different cultural upbringings and systems of education can have a tangible effect on brain development.

"This research forms part of a larger puzzle we wish to explore -- that of differences and similarities between the languages and music of cultures and how they affect learning and development," said Daikoku. "We also look into music as a way to treat developmental disorders such as language impairment. Personally, I hope to see a rejuvenation of interest in Japanese classical music; perhaps this study will inspire those unfamiliar with such music to hear and cherish this key part of Japanese cultural history."

Credit: 
University of Tokyo

Cell death in porpoises caused by environmental pollutants

image: Cell death and risk assessment of finless porpoise fibroblasts by exposure to environmental pollutants.

Image: 
Reprinted with permission from Environmental Science & Technology. © 2020 American Chemical Society

A recent study published in Environmental Science & Technology identified the toxicological risks of environmental pollutants to finless porpoises (Neophocaena asiaeorientalis). Man-made chemicals synthesized for human activities threaten the health of marine mammals. These chemicals, including persistent organic pollutants (POPs), have long been known to accumulate at high levels in many dolphin species. The POPs levels of finless porpoises inhabiting the Seto Inland Sea are higher than those of other cetacean species distributed in the waters near Japan, and the effects of toxicity have been a concern. Nevertheless, ecotoxicological studies of wild dolphins are difficult due to legal and ethical considerations, and information is lacking. Researchers in the Center for Marine Environmental Studies (CMES), Ehime University, together with collaborators, have successfully isolated fibroblast cells from a finless porpoise stranded in the Seto Inland Sea, Japan, revealing the toxicological risks posed by pollutants of concern to the local population.

Cell culture and exposure to pollutants

Fibroblasts were collected from a stranded finless porpoise. Seventeen chemicals, including dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin, TCDD), industrial chemicals (polychlorinated biphenyls, PCBs), PCB metabolites (hydroxylated PCBs, OH-PCBs), flame retardants (polybrominated diphenyl ethers, PBDEs), insecticides (dichlorodiphenyltrichloroethane and its metabolites, DDTs), and methylmercury, were tested for cellular toxicity.

Effects of pollutants on fibroblasts

Most pollutants induced cell death at higher concentrations, and dioxin-like compounds (TCDD and dioxin-like PCBs) were more toxic than the other chemicals tested. The toxic potencies of OH-PCBs and their precursor PCBs differed by endpoint, suggesting that these compounds damage cells through different mechanisms. Dose-dependent cell damage was also observed with DDTs, which accumulate at relatively high concentrations in many whale species. Among the DDTs, p,p'-DDT was the most cytotoxic, whereas p,p'-DDE most notably reduced cell viability. Methylmercury also induced cellular necrosis at the highest test concentration (100 μM).

Risk assessment at the population level

To assess the risk to the porpoise population inhabiting the Seto Inland Sea, the research group estimated exposure-activity ratios (EARs). The EAR is an emerging screening concept: high-risk chemicals are identified by comparing the concentrations at which cytotoxicity was observed in vitro with the concentrations of those chemicals measured in animal bodies. Collectively, PCBs and DDTs were shown to pose high risks and could cause cytotoxicity, apoptosis, and reduced cell viability in the Seto Inland Sea porpoise population.
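The EAR screen described above is, at its core, a simple ratio of body burden to effect concentration. The sketch below illustrates the arithmetic only; all chemical names map to the study's pollutant classes, but every concentration value is hypothetical, chosen purely to show how chemicals would be flagged, and is not taken from the paper.

```python
# Illustrative exposure-activity ratio (EAR) screen.
# EAR = concentration in animal tissue / lowest concentration producing
# an in vitro effect; an EAR at or above 1 flags a high-risk chemical.
# All concentration values below are hypothetical examples.

effect_conc_um = {   # lowest concentration with observed cytotoxicity (uM)
    "PCBs": 10.0,
    "DDTs": 25.0,
    "Methylmercury": 100.0,
}

tissue_conc_um = {   # assumed concentration measured in porpoise tissue (uM)
    "PCBs": 12.0,
    "DDTs": 20.0,
    "Methylmercury": 0.5,
}

def exposure_activity_ratio(tissue: float, effect: float) -> float:
    """Return the EAR; higher values indicate greater toxicological risk."""
    return tissue / effect

for chem in effect_conc_um:
    ear = exposure_activity_ratio(tissue_conc_um[chem], effect_conc_um[chem])
    flag = "high risk" if ear >= 1.0 else "low risk"
    print(f"{chem}: EAR = {ear:.3f} ({flag})")
```

With these example numbers, PCBs would be flagged because the assumed tissue concentration exceeds the cytotoxic threshold, while methylmercury would not, mirroring how the study singled out PCBs and DDTs as the high-risk compounds.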

This study successfully evaluated the risks of environmental pollutants using fibroblasts isolated from a dead porpoise. There is an urgent need to comprehensively understand the risks of pollutants not only in this species but also in other marine mammals, and to implement measures that reduce the load of high-risk pollutants in the marine environment.

Credit: 
Ehime University