Tech

UNF archaeology uncovering lost Indigenous NE Florida settlement of Sarabay

image: UNF Archaeology Lab at the dig site

Image: 
University of North Florida

Jacksonville, Fla. - The University of North Florida archaeology team is now fairly confident it has located the lost Indigenous northeast Florida community of Sarabay, a settlement mentioned in both French and Spanish documents dating to the 1560s but never located until now.

The types and quantities of Indigenous pottery the team is finding, combined with the types and dates of European artifacts and with cartographic evidence, strongly support identifying this location as the late 16th/early 17th century Mocama settlement.

The researchers have opened large excavation blocks with many exciting new artifact finds and are currently searching for evidence of houses and public architecture. The students, led by Dr. Keith Ashley, UNF Archaeology Lab director and assistant professor, have recently recovered more than 50 pieces of early Spanish pottery as well as Indigenous pottery that dates to the late 1500s or early 1600s. They have also recovered bone, stone and shell artifacts as well as burned corn cob fragments.

Expanding upon UNF excavations conducted at the southern end of Big Talbot Island in 1998, 1999 and 2020, the UNF research team has completed what is likely the most extensive excavation ever undertaken at a Mocama-Timucua site in northeastern Florida.

This dig is part of the UNF Archaeology Lab's ongoing Mocama Archaeological Project, which focuses on the Mocama-speaking Timucua Indians who lived along the Atlantic coast of northern Florida at the time of European arrival in 1562. The Mocama were among the first Indigenous populations encountered by European explorers in the 1560s.

The team hopes to ultimately confirm the discovery of Sarabay by finding evidence of houses and public architecture. They will continue to explore and learn about Sarabay's physical layout during continuing fieldwork projects over the next three years.

Credit: 
University of North Florida

Femtosecond spectroscopy and first-principles calculations shed light on compositional dependence of halide perovskite nanocrystal properties

Researchers from Skoltech and Ludwig Maximilians-Universität (LMU) in Germany have studied the fundamental properties of halide perovskite nanocrystals, a promising class of optoelectronic materials. Using a combination of theory and experiment, they were able to show and explain an intricate connection between composition, light-induced lattice dynamics, and stability of the materials. The paper was published in the journal Nature Communications.

Perovskite nanocrystals (PNCs) are semiconductor nanocrystals that, thanks to their unique properties, have found a number of applications in optoelectronics, for instance in lasers and LEDs. PNCs have a much higher photoluminescence quantum yield than bulk materials. Moreover, at the nanoscale, quantum confinement can be achieved, providing an additional means of tuning the optical properties of such materials. The electronic properties of metal halide perovskites also make the optical properties of nanocrystals made from these materials more tolerant to defects than those of other semiconducting materials.

Sergey Levchenko, Assistant Professor at the Skoltech Center for Energy Science and Technology (CEST), and his colleagues used atomistic modelling to explain the results of femtosecond pump-probe spectroscopy, a method that makes it possible to observe lattice dynamics in real time. They studied the coherent lattice vibrational dynamics of hybrid halide PNCs, that is, how the atomic structure of the nanocrystals evolves after excitation with a laser pulse shorter than the period of the vibrational modes.

They found, among other things, that energy transfer between vibrational modes in iodine-based perovskite nanocrystals is much more pronounced than in bromine-based ones due to a difference in interaction between the inorganic framework and the organic moiety in organic-inorganic halide PNCs.

"These results pave the way to a rational control over fundamental properties of such PNCs, including energy transfer upon optical excitation and charge-carrier relaxation, via compositional changes," Levchenko says.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Science and performing arts against stereotypes

image: Èpica Foundation - La Fura dels Baus has designed an experimental platform in which actors and scientists interact to create new knowledge.

Image: 
Èpica Foundation.

Stereotypes are knowledge structures integrated into our representation of the world; they influence our decisions and are hard to change. A team from the Faculty of Psychology of the University of Barcelona (UB) and the Bellvitge Biomedical Research Institute (IDIBELL), in collaboration with the Èpica Foundation - La Fura dels Baus, analysed how a performing arts experience could help reduce bias against people with physical illnesses. The experience is a pioneering one, combining scientific training and theatre performance on the same working platform.

The study, published in the journal Frontiers in Psychology, shows that participation in a 14-day performing arts programme reduces these implicit cognitive biases. According to the researchers, the results point the way toward performing arts-based strategies for addressing this social problem in the general population. The study's authors include Josué García-Arch (IDIBELL-UB), first author, and Lluís Fuentemilla, both researchers at the IDIBELL Cognition and Brain Plasticity Unit and the Institute of Neurosciences of the UB (UBNeuro), together with Cèlia Ventura-Gabarró, from Pompeu Fabra University, and Pedro Lorente and Pep Gatell, from the Èpica Foundation - La Fura dels Baus.

The challenge of modifying stereotypes

Previous studies show that the representational structures from which stereotypes derive are malleable, but achieving change through an intervention is very difficult. "Studies in cognitive neuroscience indicate that a memory (or representational structure) can be altered if it is efficiently reactivated in the brain. Moreover, if the memory is accompanied by an emotional context, this can amplify the change produced by reactivation", notes Lluís Fuentemilla, coordinator of the study.

"On the other hand, --continues the researcher--, we also know that people tend to understand and integrate how others are if we are able to simulate them as if we were them".

Èpica Foundation designed a performing experience that combines these elements: the ability to simulate a concept artistically through the performing arts, developed in a context of intense emotion. The participants were sixteen amateur actors, selected for the activity, who had to prepare a theatre play about several illnesses. To this end, for two weeks they received advice from experts on cancer and degenerative disorders from the Germans Trias i Pujol Research Institute (IGTP), who helped them grasp each disease and understand it from a multi-dimensional perspective (physiological, psychological, social impact, etc.). For instance, after these exchanges with the researchers, they simulated the daily life of patients with degenerative diseases or the life of their caregivers. The activity ended with a public performance in front of more than 300 people. "This working protocol brings together many features that lead us to think that activating the knowledge structures underlying stereotypes could make them malleable: emotion, continuity and first-person experience. These features are hard to reproduce under laboratory conditions or in other kinds of programmes, since those settings limit participants' motivation and commitment to the task, making it harder to produce reliable changes in implicit bias at the individual level", notes Lluís Fuentemilla.

The effects of the intervention were measured with the implicit association test (IAT), an experimental task that measures how long it takes an individual to associate an item, for instance a word or an image, with a conceptual category: the longer the reaction time, the harder it is to establish a link between the concepts. "For instance, we know it takes longer to link an image of someone playing sport to the category "bad" than to the category "good", and the opposite happens with the image of someone wearing a hospital patient's robe. This difference in reaction time shows that we have internalised "good" and "bad" categories associated with "healthy" and "ill" people, a phenomenon that, as expected, we found in the actors before the workshop", notes Josué García-Arch, researcher on the project.
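As an illustration of the reaction-time logic, here is a minimal sketch of an IAT-style bias score. The data are hypothetical and the scaling only loosely follows the common D-score approach, not the exact scoring procedure used in the study.

```python
import statistics

def iat_bias_score(congruent_rts, incongruent_rts):
    # Mean reaction-time difference (incongruent minus congruent),
    # scaled by the pooled standard deviation of all trials,
    # loosely following the IAT D-score idea.
    mean_diff = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
    pooled_sd = statistics.pstdev(congruent_rts + incongruent_rts)
    return mean_diff / pooled_sd

# Hypothetical reaction times in milliseconds
congruent = [620, 650, 600, 640]    # e.g. image of an ill person paired with "bad"
incongruent = [780, 810, 790, 760]  # e.g. image of an ill person paired with "good"
score = iat_bias_score(congruent, incongruent)
# A positive score means slower responses for the incongruent pairing,
# i.e. a harder-to-make association.
```

A shrinking score after the intervention would correspond to the reduced bias the researchers report.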

Once the programme ended, the differences in reaction time had decreased among the participants, showing that these implicit associations had weakened as well. This reduction did not appear in two control experiments carried out with people of the same age, sex and education level. One of these groups received the same scientific and medical information but without the performing arts component, and showed no change in reaction time relative to the initial test. "This shows that the experience provided by the performing activity was essential to reducing the negative stereotype towards ill people", note the researchers.

A potential strategy to treat other prejudices

These results inform the development of strategies based on this performing experience to address stereotype-related problems in the general population. "Society's usual approach to stereotypes relies on education and awareness-raising to prevent these knowledge structures from forming in the first place. However, the problem is that this knowledge is embedded in many situations in our society, and education alone is not enough. The protocol presented in this study opens a way to reverse the problem using the performing arts", states the researcher.

Moreover, according to the authors of the study, this protocol could be used to address other problems. "In this case, we worked on the stereotype associated with illness, but the programme would let us address any stereotype, race or gender for instance, by making it the theme of the performance", notes Lluís Fuentemilla.

A unique platform to create new knowledge

This study emerged within the framework of the platform created by the Èpica Foundation, which brings together performing arts, science and technology to create new knowledge. The aim of the initiative is to build performances around researchers' questions and to generate results from the interactions between creators, scientists and the audience. "Knowledge can be acquired through language and through experience, and the performing arts offer both. In addition, in the language of FURA, which prioritises interaction with the audience, the experience is lived by actors, creators and the audience at the same time. In our space we can therefore create realities that the audience feels as real, so their reactions are more spontaneous than in a laboratory or in a test administered by the research groups at the workshop", note the members of the Èpica Foundation.

The UB-IDIBELL team has taken part in two workshops: Information vs Memory, in which they analysed the show's impact on memory, and Complex Systems, the origin of this project. The team is also part of the European Performing Science Night project, co-funded by the Marie Skłodowska-Curie actions and led by the Èpica Foundation, which will apply the same working methodology. The result will be a series of dissemination activities taking place in Badalona in September 2021 as part of the European Researchers' Night.

Credit: 
University of Barcelona

Absorbent aerogels show some muscle

image: A simple chemical process developed at Rice University creates light and highly absorbent aerogels based on covalent organic frameworks for environmental remediation or as membranes for batteries and other applications.

Image: 
Jeff Fitlow/Rice University

HOUSTON - (June 8, 2021) - A simple chemical process developed at Rice University creates light and highly absorbent aerogels that can take a beating.

Covalent organic frameworks (COFs), crystal structures with strong molecular bonds, can form a porous aerogel for use as a custom membrane in batteries or other devices or as an absorbent to remove pollutants from the environment.

Conventional COFs are usually powders. Chemical and biomolecular engineer Rafael Verduzco, lead authors and Rice graduate students Dongyang Zhu and Yifan Zhu, and their colleagues at Rice's Brown School of Engineering discovered a way to synthesize COF aerogels that can be made in any form and at any size, limited only by the reaction chamber.

The process, reported in the American Chemical Society journal Chemistry of Materials, employs COF monomers, a solvent and a catalyst. When mixed and heated to 80 degrees Celsius (176 degrees Fahrenheit), they form a uniform gel. Washing and drying the gel to remove the solvent leaves behind the scaffold-like aerogel, with pores between 20 and 100 microns.

"The big advantage of polymers is that you can dissolve them in a solvent, you can spray coat, spin coat and dip coat them, and they're easy and cheap to work with," Verduzco said. "But COFs are not. They're an insoluble powder and hard to do anything with, but they are really promising for applications because you can design or engineer them almost any way you want on the molecular level. They're like Lego blocks and you can pick the molecular shapes, sizes and characteristics you'd like to include in the final material.

"We were looking for ways to make COFs easier to work with, more like polymers, and we found that under particular reaction conditions they would form a gel," he said. "When you extract the solvent, you get this very light foam, or aerogel."

Verduzco said COF aerogels could be a valuable addition to industrial absorbents now in use for remediation because their porous structures can be customized.

The lab formulated six aerogels and found their remediation properties with various dyes, oils and gold nanoparticles were far better and faster than COF powders. In a test with iodine vapor, a product of nuclear fission, the aerogel absorbed 7.7 grams of iodine per gram of aerogel, significantly better than a COF powder of the same material.

The researchers found the aerogels could be washed and reused at least 10 times without deforming. "They're pretty soft but you can squish them by hand and they spring back," Verduzco said.

He sees even greater potential for COFs as membranes to separate components in advanced batteries, the subject of a recent review paper led by Dongyang Zhu in Advanced Functional Materials.

They could also mimic biological membranes. "Nobody's figured out how to efficiently separate a mixture of ions or molecules that are about the same size and shape, but with this class of materials, we can precisely control the pore sizes and shapes," Verduzco said.

"Biological membranes separate ions of the same size and charge through small changes in pore functionality that preferentially bind one ion or the other," he said. "I think we can start to make synthetic materials that have similar properties."

The lab is developing a library of COF aerogels to test in applications. "There's really a lot to explore here," Verduzco said.

Credit: 
Rice University

Optimizing immunization with Sanaria® PfSPZ-CVac malaria vaccine

ROCKVILLE, MD, USA - June 8, 2021 - The PfSPZ malaria vaccines of Sanaria Inc. are unique in vaccine development: they are composed of weakened (attenuated) forms of the live parasite cells that cause malaria. These parasite cells are eukaryotic cells, and no vaccine against any infectious disease is composed of such cells. Furthermore, there are no licensed vaccines against any infectious disease caused by a eukaryotic pathogen. Sanaria and its collaborators have therefore had to take a step-by-step empirical approach to optimizing immunization with PfSPZ vaccines to achieve a safe, effective, durable and broadly protective malaria vaccine.

Two recent landmark malaria vaccine studies have moved the optimization process forward and highlighted the strong protective efficacy of Sanaria® PfSPZ-CVac in malaria-naïve adults. In a study published in Nature Communications, 77% (10/13) of subjects vaccinated with a 3-dose regimen administered within a 4-week period were protected 12 weeks later when challenged with live malaria parasites of a strain genetically quite distant (heterologous) from the vaccine strain.

"This high-level efficacy of 3 months would be excellent for protecting travelers to malaria endemic regions against the disease," said Professor Peter Kremsner, Director of the Institute of Tropical Medicine, Travel Medicine and Human Parasitology at the University of Tübingen, Germany who led the study. "We have demonstrated excellent efficacy with just three co-administrations of PfSPZ-CVac and chloroquine, paving the way for a product that can be used for travelers worldwide."

The second study, published in PLOS Pathogens, revealed the profound negative effects of the presence of blood-stage malaria parasites on vaccine efficacy. Vaccinations given 7 days apart conferred no protection in study subjects, and this timing coincided with the emergence of parasites into the blood from the liver after previous doses. By changing the interval between doses to 5 days, a regimen first reported in Nature by the Tübingen team, PfSPZ-CVac protective efficacy dramatically increased to 75%. "This study demonstrates the capacity of the malaria parasite to manipulate immune responses of the human host in favor of its own survival and demonstrates how we can optimize the spacing of doses of PfSPZ-CVac to overcome this negative impact," said Dr. Sean Murphy, first author of the paper and Associate Professor at the Department of Laboratory Medicine and Pathology, University of Washington. "Given the high prevalence of malaria infection, these results also have profound implications for malaria vaccine immunization strategies in Africa."

Sanaria® PfSPZ-CVac is a chemo-attenuated, live whole parasite vaccine in which an anti-malarial drug is co-administered with the parasite cells (PfSPZ), to kill parasites. Efficacy in these studies was measured by controlled human malaria infection (CHMI) in which well-characterized, infectious malaria parasites were administered to vaccinated subjects. This is a highly rigorous measure of efficacy because a 100% infective dose of disease-causing parasites is administered and the heterologous challenge strain used in Tübingen is genetically more distant from the vaccine strain than parasites encountered naturally in Africa. Additionally, CHMI is not subject to seasonal variations, subject behaviors, or other unknown variables, as is the case in observational field trials.

Credit: 
Sanaria Inc.

Machine learning reduces microscope data processing time from months to just seconds

image: Group picture of Prof. Gabriel Gomila at the Institute for Bioengineering of Catalonia (IBEC)

Image: 
IBEC

Ever since the world's first microscope was invented in 1590 by Hans and Zacharias Janssen, a Dutch father and son, our curiosity about what goes on at the tiniest scales has driven the development of increasingly powerful devices. Fast forward to 2021: we not only have optical microscopy methods that let us see tiny particles at higher resolution than ever before, we also have non-optical techniques, such as scanning force microscopes, with which researchers can construct detailed maps of a range of physical and chemical properties.

IBEC's Nanoscale bioelectrical characterization group, led by UB Professor Gabriel Gomila, in collaboration with members of IBEC's Nanoscopy for nanomedicine group, has been analysing cells using a special type of microscopy called Scanning Dielectric Force Volume Microscopy, an advanced technique developed in recent years that produces maps of an electrical property called the dielectric constant. Each of the biomolecules that make up cells, that is, lipids, proteins and nucleic acids, has a different dielectric constant, so a map of this property is essentially a map of cell composition. The technique has an advantage over the current gold-standard optical method, which involves applying a fluorescent dye that can disrupt the cell being studied: it requires no potentially disruptive external agent.

However, applying the technique requires a complex post-processing step to convert the measured observables into physical magnitudes, which for eukaryotic cells demands huge amounts of computation time. In fact, it can take months to process a single image on a workstation computer, since the dielectric constant is analysed pixel by pixel using locally reconstructed geometrical models.

Months to seconds

In this new study, recently published in the journal Small Methods, the researchers adopted a new approach to speed up the processing of the microscope data: machine learning algorithms instead of conventional computational methods. The result was dramatic: once trained, the machine learning algorithm produced a dielectric biochemical composition map of the cells in just seconds. No external substances were added to the sample, a long-sought goal in composition imaging in cell biology. The researchers achieved these rapid results using a powerful type of algorithm called a neural network, which mimics the way neurons in the human brain operate.
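The general strategy described here, training a network to reproduce an expensive per-pixel physics calculation so that each pixel then costs only a cheap forward pass, can be sketched in plain Python. The "slow model" below is a made-up surrogate, not the authors' electrostatic solver, and the network is deliberately tiny.

```python
import math
import random

random.seed(0)

# Surrogate for the expensive per-pixel reconstruction (hypothetical stand-in
# for the physics-based solver the article says takes months per image).
def slow_model(x):
    return 2.0 + 3.0 * math.tanh(x)

# Training data: (observable, dielectric-like value) pairs from the slow model.
xs = [i / 50.0 - 1.0 for i in range(101)]
ys = [slow_model(x) for x in xs]

# Minimal one-hidden-layer network trained by stochastic gradient descent.
H = 8
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

lr = 0.05
for _ in range(2000):
    for x, y in zip(xs, ys):
        pred, h = forward(x)
        err = pred - y  # gradient of the squared-error loss 0.5 * err**2
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

# Once trained, each pixel needs only a cheap forward pass, not the slow solver.
max_err = max(abs(forward(x)[0] - slow_model(x)) for x in xs)
```

In the real pipeline the inputs would be multi-channel force-spectroscopy observables per pixel and the training examples would come from the slow solver itself; this sketch only illustrates why inference drops from months to seconds once the mapping has been learned.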

The study was first-authored by Martí Checa, who carried out the work as part of his PhD in Gomila's group at IBEC; he is now a postdoctoral researcher at the Catalan Institute of Nanoscience and Nanotechnology (ICN2). "It is one of the first studies to provide such a rapid label-free biochemical composition map of dry eukaryotic cells", Checa explains. In this proof-of-concept study, the researchers used dried cells to avoid the large effect that water, with its high dielectric constant, has on dielectric measurements. In a recently published follow-up study, they also analysed fixed cells in their natural in-liquid state, comparing the values obtained in the dry and liquid conditions in order to render an accurate map of the biomolecules that make up eukaryotic cells, the multi-structured cells of which animals, plants, fungi and other organisms are composed. "The next step in this research is to apply the method to electrically excitable living cells, such as neurons, where intense electrical activity occurs. We are excited to see what can be obtained with our technique in these systems", Prof. Gomila adds.

Biomedical applications

The researchers validated their methodology by comparing their findings to well-known facts about the composition of cells, such as the lipid-rich nature of the cell membrane or the high quantity of nucleic acids present in the nucleus. With this work, they have opened up the possibility of analysing large quantities of cells in record time.

This study is expected to provide an invaluable tool for biologists conducting basic research, as well as to open up potential medical applications. For example, changes in the dielectric properties of cells are currently being studied as possible biomarkers of illnesses such as cancer and neurodegenerative diseases.

"It is the first study to provide such a rapid nanoscale biochemical composition map from dielectric measurements of dry eukaryotic cells, which are classically seen as being extremely difficult to map due to their complex three-dimensional topography", declares Martí Checa, first author of the paper.

Credit: 
Institute for Bioengineering of Catalonia (IBEC)

Scientists develop the 'evotype' to unlock power of evolution for better engineering biology

image: Anima Techne

Image: 
Simeon Castle

A defining characteristic of all life is its ability to evolve. However, the fact that biologically engineered systems will evolve when used has, to date, mostly been ignored. This has resulted in biotechnologies with a limited functional shelf-life that fail to make use of the powerful evolutionary capabilities inherent to all biology.

Sim Castle, first author of the research, published in Nature Communications, and a PhD student in the School of Biological Sciences at Bristol, explained the motivation for the work: "The thing that has always fascinated me about biology is that it changes, it is chaotic, it adapts, it evolves. Bioengineers therefore do not just design static artefacts - they design living populations that continue to mutate, grow and undergo natural selection."

Realising that describing this change was key to harnessing evolution, the team developed the concept of the evotype to capture the evolutionary potential of a biosystem. Crucially, the evotype can be broken into three key parts: variation, function, and selection, with each of these offering a tuning knob for bioengineers to control the possible paths available to evolution.

Prof Claire Grierson, co-author and Head of the School of Biological Sciences at Bristol, added: "Learning how to effectively engineer with evolution is one of, if not the biggest, challenges facing bioengineers today. Our work provides a desperately needed framework to help describe the evolutionary potential of a biosystem and re-imagine biological engineering so that it works in harmony with life's ability to evolve."

Sim Castle further stated: "What was surprising was that many of the tools already available to bioengineers fitted nicely into our framework when considered from an evolutionary perspective. We therefore might not be too far from making evolution a core feature of future engineered biological systems."

Dr Thomas Gorochowski, senior author and a Royal Society University Research Fellow at Bristol, ended by saying: "Our concept of the evotype not only provides a means for developing biotechnologies that can harness evolution in new ways, but also opens exciting new avenues to think about and implement evolution in completely new contexts. Potentially, this could even lead to us designing new, self-adaptive technologies that evolve from scratch, rather than tinkering with biological ones that already do."

Credit: 
University of Bristol

Persistent Stereotypes Falsely Link Women's Self-Esteem to Their Sex Lives

New research published in the journal Psychological Science reveals a pervasive but unfounded stereotype: that women (but not men) who engage in casual sex have low self-esteem. This finding was consistent across six separate experiments with nearly 1,500 total participants.

"We were surprised that this stereotype was so widely held," said Jaimie Arona Krems, an assistant professor of psychology at Oklahoma State University and first author on the paper. "This stereotype was held by both women and men, liberals and conservatives, and across the spectrum in terms of people's levels of religiosity and sexism." But across the studies, Krems also observed that the stereotype was unfounded: There was virtually no relationship between participants' own self-esteem and sexual behavior.

In one study, Krems and her colleagues had participants read about a hypothetical man, woman, or unspecified person in their mid-20s who had casual sex (e.g., one-night stands), monogamous sex, or no reported sexual behavior. Participants were then asked to make some snap judgments about this individual's personality based on this information. Women who had casual sex were judged as having lower self-esteem. Participants did not connect men's self-esteem to their sexual behavior, however.

In another experiment, the researchers used a task based on the conjunction fallacy, made famous by Nobel Prize-winning psychologist Daniel Kahneman in now-classic research from the 1980s. Participants were asked whether a man or a woman who had casual sex was more likely to be (a) an English major or (b) an English major with low self-esteem. Most participants chose the second option even though it is statistically less likely to be true.

The team also discovered that this stereotype persisted even when participants were confronted with contrary information. "When we explicitly told participants that the women who had casual sex were enjoying it and were satisfied with their sexual behavior, participants still stereotyped them as having lower self-esteem than women in monogamous relationships who were unsatisfied with their sexual behavior," said Krems.

Previous research has suggested that people perceived to have low self-esteem are less likely to be hired for jobs, voted into political office, or sought as friends or romantic partners.

"Although not grounded in reality, the stereotype documented in this work may have harmful effects. Stereotypes like this can have serious consequences in the real world," said Krems.

Credit: 
Association for Psychological Science

Peace accord in Colombia has increased deforestation of biologically-diverse rainforest

CORVALLIS, Ore. - Since the end of the long-running conflict in Colombia, large areas of forest have been rapidly converted to agricultural uses, suggesting the peace agreement poses a threat to the conservation of the country's rainforest, a new study from Oregon State University shows.

In 2016, Colombia officially signed a peace agreement ending the country's six-decade civil war, which mainly took place within the Andes-Amazon region, an extremely biodiverse rainforest and a critical biological corridor.

Some deforestation was expected after the peace accord was reached, but an analysis of 30 years of land transfers - a term used to describe changes in control and use of a parcel of land - showed a 40% increase in conversion from forest to agriculture in the post-conflict period.

"When the peace accord was finally signed in 2016, that was the moment to re-open conversations about the land," said the study's lead author, Paulo J. Murillo-Sandoval, who conducted the research as part of his doctoral dissertation at OSU. "The peace accord is 300 pages long and the word forest appears just three times. The forest was not taken into account."

The findings, which were just published in the journal Global Environmental Change, underscore the potential for negative environmental impacts when control over land changes hands and the need to build inclusive forest conservation planning into future peace accords, said David Wrathall, an associate professor at OSU and co-author of the paper.

"There is an environmental cost to peace that was not previously understood. This work identifies an incredible policy need, not just in Colombia but in other areas of the world affected by armed conflicts, such as the Congo or Liberia," said Wrathall, a geographer in OSU's College of Earth, Ocean and Atmospheric Sciences. "Inclusive conservation governance has to be included in peace plans. People who live in the forests during conflict have to be empowered to make decisions about conservation after peace."

The conflict in Colombia dates back nearly 60 years. It finally came to an end in 2016 with an historic peace accord between the Colombian government and the Revolutionary Armed Forces of Colombia, known as FARC. But the peace accord had no strong mechanism for managing changes to land use and the environment.

Murillo-Sandoval, who earned his doctorate in geography from OSU's College of Earth, Ocean and Atmospheric Sciences last year, grew up in Colombia and witnessed the transition from conflict to peace unfold. His research was motivated by a desire to understand how the last 30 years of conflict, peace negotiations and the post-conflict period had affected land use, particularly in the Andes-Amazon Transition Belt.

The Andes-Amazon Transition Belt, the region where the Andes Mountains transition to the Amazon basin, is a unique corridor of tropical rainforest rich in biodiversity. It is also a region that has been subject to extensive deforestation and fragmentation of the natural habitat.

Murillo-Sandoval used satellite imagery and sophisticated computer mapping and modeling techniques to create and compare annual land maps from 1988 to 2019. He focused on the most common types of land use: urban; agriculture; forest; grassland; secondary forest, meaning areas where forests were cut and have regrown; and water. Because the maps had high spatial detail and a pixel size of 30 meters, the researchers were able to track changes for land parcels of one hectare - about 10,000 square meters - or larger.
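As a quick sanity check on those resolution figures (a sketch, not the authors' analysis code): with 30-meter pixels, each pixel covers 900 square meters, so a one-hectare parcel spans roughly eleven pixels.

```python
# Arithmetic behind the one-hectare minimum mapping unit described above.
pixel_size_m = 30                    # Landsat-class pixel edge length
pixel_area_m2 = pixel_size_m ** 2    # 900 square meters per pixel
hectare_m2 = 10_000                  # 1 hectare = 10,000 square meters

pixels_per_hectare = hectare_m2 / pixel_area_m2
print(round(pixels_per_hectare, 1))  # → 11.1
```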

He and his colleagues found that during the conflict period, land use remained relatively stable. But in the post-conflict period, the conversion of forest to agriculture increased by 40%. The conversion of forest to agricultural land has occurred almost exclusively in less populated regions.

The researchers also analyzed the relationship between land use changes and sites of armed conflict where people were killed. They found that forest cover decreased by 19% at sites within one kilometer of fighting during the conflict. In the post-conflict period, forest cover decreased by 30% in locations where fighting occurred.

"We are using the word 'cause.' It's not just a correlation. We designed the study to test whether incidents of conflict within forests during the civil war caused deforestation after the peace agreement was signed," Wrathall said. "We found that conflict itself causes deforestation."

The land changes are likely due to slow implementation of conservation governance in the region; the emergence of illegal land markets by people with wealth and power; and illicit land uses such as illegal cattle ranching and, to a much smaller degree, coca farming.

"During the conflict, FARC acted as a government, likely providing some stability for the region's land and also keeping people out of areas where conflict was occurring," Wrathall said. "After the peace accord was reached, the forests were safer but also had little or no government oversight, creating an opportunity for people with money and power to grab land."

"Peace is not just for peace's sake. It's also a political and economic decision," he said. "What we see is that peace creates an opportunity for the powerful to make decisions over land."

One bright spot in the findings was an increase in secondary forests, which are areas of re-growth of forests following other uses of the land. That may be a result of land abandonment as people left farms for bigger cities following the end of the conflict, Murillo-Sandoval said.

"Forest recovery in the Amazon can occur very fast if the land is left alone," he said.

Co-authors include Jamon Van Den Hoek, Robert Kennedy and Emma Gjerdseth of OSU; Camilo Correa-Ayram of the Instituto de Investigacion de Recursos

Credit: 
Oregon State University

Mapping a successful recovery

image: A containment pond collects polluted runoff from the mine for treatment.

Image: 
Dave Herbst

Mining involves moving a lot of rock, so some mess is expected. However, mining operations can continue to affect ecosystems long after activity has ended. Heavy metals and corrosive substances leach into the environment, preventing wildlife and vegetation from returning to the area.

Fortunately, this damage can be reversed. A team of scientists, including UC Santa Barbara's Dave Herbst, investigated how river ecosystems respond to remediation efforts. The team combined decades of data from four watersheds polluted by abandoned mines. It took creative thinking to simplify the complex dynamics of nearly a dozen toxins on the myriad species in each river.

Ultimately, the team's clever methodology showed that restoration can remedy some of the biggest problems of mining contamination. Their findings, published in the journal Freshwater Science, revealed both strategies that worked well and recovery patterns shared across the four waterways. The results also suggest that regulations need to consider all contaminants together, rather than setting standards on an individual basis.

"There is a big problem that we have with legacy mine sites, not only in the U.S. but worldwide," said Herbst, a research biologist at the university's Sierra Nevada Aquatic Research Laboratory (SNARL) in Mammoth Lakes. "They are widespread, persistent and long-lasting problems. But the good news is that, with the investment and effort of programs like CERCLA Superfund, we can fix those problems."

Herbst's work focused on Leviathan Creek, a Sierran stream 25 miles southeast of Lake Tahoe that is the site of a restoration effort under CERCLA (the Comprehensive Environmental Response, Compensation, and Liability Act), also known as Superfund. The area was mined not for precious metals, but to extract sulfur for making sulfuric acid to process minerals from other sites. The presence of sulfur-bearing minerals made for water that was naturally a bit acidic, but open-pit mining exposed these minerals to the elements. The result was stronger acid that leached trace metals like aluminum, cobalt and iron from the rock into the environment. The combined effects of increased acidity and toxic metals devastated the local aquatic ecosystem.

Sorting out standards

Each mining site produces a unique blend of pollutants. What's more, different rivers harbor different species of aquatic invertebrate, with hundreds of different types in each stream, Herbst said. This variability made comparisons a challenge.

So the researchers set to work establishing standards and benchmarks. They decided to track the effect of pollution and remediation on mayflies, stoneflies and caddisflies. These groups are critical to the aquatic food web and display a variety of tolerances to different toxins. Rather than compare closely related species, the scientists grouped together animals with shared characteristics -- like physical traits and life histories.

Next the team had to make sense of all the pollutants. They quickly realized it wouldn't be enough to track the toxicity of individual metals separately, as is often done in the lab. It's the combined impact that actually affects the ecosystem. Furthermore, scientists often measure toxicity based on a lethal dosage. And yet pollution can devastate ecology at much lower concentrations, Herbst explained. Chronic effects, like reduced growth and reproduction, can eliminate species from an area over time without actually killing any individuals.

Given the variety of toxins, the researchers decided on another standard for toxicity: the criterion unit. They defined 1 criterion unit (CU) as the concentration of a toxin that produced adverse effects on growth and reproduction of test organisms. Although the variety of responses makes the CU an approximation, it proved to be a surprisingly robust metric.

The concentration in 1 CU varies from substance to substance. For instance, the researchers used a value of 7.1 micrograms of cobalt per liter of water as a toxic threshold for aquatic life. So, 7.1 μg/L equals 1 CU of cobalt. Meanwhile, 150 μg/L of arsenic kept invertebrates from living their best lives, so 150 μg/L was set as 1 CU of arsenic.

This approach enabled the scientists to compare and combine the effects of completely different toxins, approximating how total toxicity acts in nature. So, 7.1 μg/L of cobalt by itself, or 150 μg/L of arsenic by itself, or even a combination of 3.55 μg/L of cobalt plus 75 μg/L of arsenic all produce a cumulative criteria unit (CCU) of 1, which spells similar problems for aquatic critters however it is reached.
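The CCU arithmetic is simple enough to sketch in a few lines of code. The cobalt and arsenic thresholds below are the ones quoted in the article; the mixture mirrors the half-and-half example.

```python
# Cumulative criteria units (CCU): each toxin's measured concentration is
# divided by its 1-CU adverse-effect threshold, and the fractions are summed.
CU_THRESHOLDS_UG_PER_L = {
    "cobalt": 7.1,     # 1 CU of cobalt = 7.1 micrograms per liter
    "arsenic": 150.0,  # 1 CU of arsenic = 150 micrograms per liter
}

def cumulative_criteria_units(measured_ug_per_l):
    """Sum the per-toxin fractions of their 1-CU thresholds."""
    return sum(
        concentration / CU_THRESHOLDS_UG_PER_L[toxin]
        for toxin, concentration in measured_ug_per_l.items()
    )

# Half a CU of cobalt plus half a CU of arsenic reaches the 1-CCU mark
# at which invertebrate diversity begins to unravel.
mix = {"cobalt": 3.55, "arsenic": 75.0}
print(cumulative_criteria_units(mix))  # → 1.0
```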

This combined effect proved critical to understanding the real-world implications of mining pollution because animals are exposed to many toxins at once. "You need to consider these metals together, not individually, when evaluating the toxicity threshold in a field setting," Herbst said.

So despite the variety of metals at different locations, by expressing toxicity in cumulative criteria units, the scientists could compare across rivers. When total toxicity tops 1 CCU, invertebrate diversity unravels.

Judging their efforts

The team now had their subjects (aquatic invertebrates) and a simple way to measure pollution (the cumulative criteria unit). They also had over 20 years of field data from four watersheds where Superfund clean-ups have been underway. They used unpolluted streams near each river as a baseline to judge how well restoration was proceeding.

The authors found these projects were able to restore rivers to near natural conditions in 10 to 15 years. It was a wonderful surprise. "Regardless of the fact that there were different mining pollutants, different ways of remediating the problem and different sizes of stream, all the projects came to successful outcomes," Herbst said.

Much of the recovery happened in the first few years of treatment, he added. Since conditions are at their worst in the beginning, even a small effort will make a big difference.

"The other surprising part was the degree of commonality in the responses despite differing contaminants and remediation practices," Herbst said. The rate of recovery, order in which species returned (based on shared traits), and even the long-term timeframe was similar across all four rivers. These promising results and shared paths suggest that even daunting environmental problems can be solved with proper effort and investment.

Lessons and loose ends

Remediation at the four sites in California, Colorado, Idaho and Montana is ongoing. Many interventions, like treating acidic water with lime, require continuous attention. However, efforts like replacing contaminated soil, setting up microbial bioreactors and revegetating excavated and riparian areas will hopefully make remediation self-sustaining.

And a self-sustaining solution is the goal, because these sites can become inaccessible at certain times of year, leading to variable levels of pollution. For instance, snow prevents access to the Leviathan mine in winter, so remediation can occur only between spring and fall. The spring snowmelt also dissolves more metals, creating worse conditions than during drier times at the beginning of autumn.

Herbst plans to revisit the seasonal aspects of remediation in future research. As for now, he thinks that other abandoned mines should implement remediation and monitoring practices to evaluate the success of restoration.

These exciting discoveries would have been impossible without long-term monitoring at the four locations. "You seldom get monitoring studies of restoration projects that last more than a couple of years," Herbst said, "which is really a shame because most of them don't show any kind of response over that short a period of time."

And the only reason Herbst and his colleagues had these datasets was that they invested the time and resources themselves. "A lot of it is due to the dedication of individual researchers to these projects," he said. "There are other players that come and go along the way, but as long as there's some dedicated researcher collecting this data then it will be there in the future for us to base decisions on."

Aside from the importance of long-term monitoring, the message Herbst hopes the EPA and industry embrace is that we can't apply water quality standards for toxic metals individually. "We must be applying them collectively according to how they're acting together," he said.

Even if individual contaminants are under the required limits, their combined effect could be well over what wildlife can handle. The concept of cumulative criteria units provides a really simple way to account for this: If eight toxins in a stream are all at half of their CU value, they still add up to 4 CCUs.

Bottom line: There is reason to celebrate. "We're able to demonstrate through this research that these programs can be successful even for the biggest of problems," Herbst said, "which is exactly what Superfund projects are intended to fix."

Credit: 
University of California - Santa Barbara

UIC research paves way for next-generation of crystalline material screening devices

image: An illustration of the continuous-flow microfluidic device for rapid screening of crystals of active pharmaceutical ingredients. As crystals grow, automated data acquisition and parallel processing allow for high-throughput screening, which can help to engineer better medicines.

Image: 
Meenesh Singh/UIC

Researchers at the University of Illinois Chicago have developed a novel continuous-flow microfluidic device that may help scientists and pharmaceutical companies more effectively study drug compounds and their crystalline shapes and structures, which are key components for drug stability.

The device consists of a series of wells in which a drug solution - made up of an active pharmaceutical ingredient, or API, dissolved in solvent, such as water - can be mixed with an anti-solvent in a highly controlled manner. When mixed together, the two solutions allow for the API crystals to form a nucleus and grow. With the device, the rates and ratios at which the drug solution is mixed with the anti-solvent can be altered in parallel by scientists, creating multiple conditions for crystal growth. As the crystals grow in different conditions, data on their growth rates, shapes and structures is gathered and imported into a data network.

With the data, scientists can more quickly identify the best conditions for manufacturing the most stable crystalline form with a desirable crystal morphology -- a crystal with a plate-like shape instead of a crystal with a rod-like shape -- of an API and scale up the crystallization of stable forms.

The UIC researchers led by Meenesh Singh, in collaboration with the Enabling Technologies Consortium, have validated the device using L-histidine, the active ingredient in medications that can potentially treat conditions like rheumatoid arthritis, allergic diseases and ulcers. The results are reported in Lab on a Chip, a journal of the Royal Society of Chemistry.

"The pharmaceutical industry needs a robust screening system that can accurately determine API polymorphs and crystallization kinetics in a shorter time frame. But most parallel and combinatorial screening systems cannot control the synthesis conditions actively, thereby leading to inaccurate results," said Singh, UIC assistant professor of chemical engineering at the College of Engineering. "In this paper, we show a blueprint of such a microfluidic device that has parallel-connected micromixers to trap and grow crystals under multiple conditions simultaneously."

In their study, the researchers found that the device was able to screen polymorphs, morphology and growth rates of L-histidine in eight different conditions. The conditions included variations in molar concentration, percentage of ethanol by volume and supersaturation - important variables that influence crystal growth rate. The overall screening time for L-histidine using the multi-well microfluidic device was about 30 minutes, which is at least eight times shorter than a sequential screening process.
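The time savings follow directly from running conditions side by side. The sketch below (the specific condition values are hypothetical, not from the study) enumerates an eight-condition grid like the one screened on the chip.

```python
# Illustrative sketch of an 8-condition screen: two levels each of molar
# concentration, ethanol fraction and supersaturation give 2 x 2 x 2 = 8
# combinations, all run simultaneously on the device.
from itertools import product

molar_conc_mol_per_l = [0.2, 0.4]  # hypothetical values
ethanol_pct_by_vol = [10, 30]      # hypothetical values
supersaturation = [1.5, 2.5]       # hypothetical values

conditions = list(product(molar_conc_mol_per_l, ethanol_pct_by_vol, supersaturation))
print(len(conditions))  # → 8

# One parallel run takes ~30 minutes; screening the same grid one condition
# at a time would take at least 8 runs of the same length.
minutes_sequential = 30 * len(conditions)
print(minutes_sequential)  # → 240
```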

The researchers also compared the screening results with a conventional device. They found that the conventional device significantly overestimated the fraction of stable form and showed high uncertainty in measured growth rates.

"The multi-well microfluidic device paves the way for next-generation microfluidic devices that are amenable to automation for high-throughput screening of crystalline materials," Singh said. Better screening devices can improve API process development efficiency and enable timely and robust drug manufacturing, he said, which could ultimately lead to safer drugs that cost less money.

Credit: 
University of Illinois Chicago

X-ray flash imaging of laser-induced bubbles and shockwaves in water

image: An infrared laser pulse (shown as a dark red oscillating wave) has been tightly focused into pure water where a plasma (green cloud) is formed; a shock wave and a bubble (hemispheres) are then created. An acoustic signal recorded with a microphone is used to determine the deposited energy, a diverging X-ray beam (purple cone) is used to image a hologram captured by a detector.

Image: 
Markus Osterhoff

Everyone is familiar with tiny gas bubbles gently rising up in sparkling water. But the bubbles that were created by intense focused lasers in this experiment were ten times smaller and contained water vapour at a pressure around a hundred thousand times higher. Under these conditions, the bubble expands at supersonic speed and pushes a shockwave, consisting of a spherical shell of highly compressed water, ahead of itself. Now the research team led by the University of Göttingen, together with the Deutsches Elektronen-Synchrotron (DESY) and the European X-Ray Free-Electron Laser (European XFEL), has created such an event and then, with an innovative technique that they developed using holographic flash imaging and nanofocused X-ray laser pulses, captured data and images. The research was published in Nature Communications.

The team first created tiny bubbles with a radius of a few thousandths of a millimetre by focusing an infrared laser pulse in water to create "cavitation" (a phenomenon in which small vapour-filled cavities, i.e. bubbles, form in a liquid). The researchers observed the expanding bubble with synchronized but carefully controlled delayed X-ray pulses. "In contrast to visible light, where refraction and scattering blur the image, X-ray imaging not only resolves the shape but also the density profile of the interior of both the bubble and the shockwave," explains Malte Vassholz, PhD student at the University of Göttingen and lead author of the publication. Vassholz goes on to say, "This enabled us to generate X-ray holograms of the tiny bubbles and record a large data stream with thousands of events which we then analyzed by a specially devised 'decoding algorithm' to obtain the density of the gas in the bubble and the shockwave around it." Thanks to the well-controlled time delay between the seeding laser pulse that created the effect and the X-ray pulse that measured it, the team could then record a 'movie' of the process.

The results of this experiment already challenge current scientific understanding and will help other scientists develop better models. Professor Tim Salditt, Professor of X-Ray Physics at the University of Göttingen, explains, "Even though water is the most important liquid on Earth, there is still much to learn about this mysterious and elusive substance. Thanks to the unique properties of the X-ray laser radiation generated at the European XFEL, and our new single shot holography method, we can now observe what really goes on in vapour and liquid water under extreme conditions."

This research technique provides insights for processes relevant in other applications: "Cavitation can be an undesirable effect in fluids in pumps or propellers for instance, but it can be harnessed for use in laser processing of materials or to modify chemical reactions," explains Dr Robert Mettin, an expert researching cavitation for many years at the Faculty of Physics, Göttingen University. "In laser surgery, shockwaves and compressed gases in tiny bubbles are created intentionally in tissue, by laser pulses," adds Salditt. "In the future, such processes could be 'filmed' in detail, using the methodology which we have developed, at a microscopic level and at high temporal resolution."

Original publication: Vassholz et al., "Pump-probe X-ray holographic imaging of laser-induced cavitation bubbles with femtosecond FEL pulses," Nature Communications 2021. DOI: 10.1038/s41467-021-23664-1. Text also available at: https://www.nature.com/articles/s41467-021-23664-1

An animated explainer video on our website https://www.uni-goettingen.de/en/3240.html?id=6287 shows how the experiment was carried out by researchers at the University of Göttingen, Deutsches Elektronen-Synchrotron (DESY) and the European X-Ray Free-Electron Laser (European XFEL). An infrared laser beam is tightly focused into a water-filled container, igniting a plasma spark; the subsequent shock wave and cavitation bubble are imaged by an X-ray flash. From this, the density inside the bubble and the surrounding shock wave is computed.

Credit: 
University of Göttingen

Cell Reports publishes data supporting the importance of ion channel, Kv7.2/7.3 as a target in ALS

CAMBRIDGE, Mass. - QurAlis Corporation, a biotech company developing breakthrough precision medicines for ALS and other genetically validated neurodegenerative diseases, today announced the publication of an article in Cell Reports titled "Human Amyotrophic Lateral Sclerosis Excitability Phenotype Screen: Target Discovery and Validation" by QurAlis founders Kasper Roet, Ph.D., Clifford Woolf, M.D., Ph.D., and Kevin Eggan, Ph.D. The founders pioneered a high-content, live-cell imaging screen that used ALS patient-derived motor neurons in combination with a compound library generated by Pfizer to identify drug targets for hyperexcitability-induced neurodegeneration in ALS patients.

The publication describes a live-cell screening strategy that targets abnormal electrophysiological properties to reveal drug targets modulating the intrinsic hyperexcitability of ALS motor neurons. This unbiased screen using human ALS motor neurons identified Kv7.2/7.3 as a strongly overrepresented drug target. Dysfunction of Kv7.2/7.3 in ALS motor neurons had previously been identified by Drs. Kevin Eggan, Clifford Woolf, Brian Wainger and Evangelos Kiskinis, which led QurAlis to develop a precision medicine program around a selective Kv7.2/7.3 ion channel opener to treat ALS patients. The validity of Kv7.2/7.3 as a drug target for ALS patients was also recently strongly supported by the results of a clinical study published in JAMA Neurology showing that Kv7 modulation can decrease spinal and cortical motor neuron excitability, both of which have been linked to patient survival. Through bioinformatic deconvolution, the screen - a concerted effort of scientists from Boston Children's Hospital (including shared first author Dr. Xuan Huang), the Harvard Stem Cell Institute and the Pfizer Centers for Therapeutic Innovation - also identified AMPA receptors and D2 dopamine receptors as novel targets that contribute to ALS motor neuron excitability.

“These research results strengthen our hypothesis that the QurAlis selective Kv7.2/7.3 opener, QRL-101 (QRA-244), has the potential to be an effective therapy for patients suffering from hyperexcitability induced motor neuron degeneration,” said Kasper Roet, Ph.D., CEO and Founder of QurAlis. “Previous research has identified Kv7.2/7.3 as an ALS drug target and the unbiased nature of this screen further emphasizes the importance of Kv7.2/7.3 in ALS motor neuron dysfunction.”

“It is widely believed that by reducing motor neuron hyperexcitability in ALS patients, we may be able to slow the progression of the disease,” said Leonard van den Berg, M.D., Ph.D., Chairman of the European Network to Cure ALS. “This study shows that excitability phenotypic screening using patient derived motor neurons is a novel and powerful method for the identification of drug targets that act on abnormal excitability and offers the potential to produce more effective therapies with fewer side effects.”

Credit: 
LaVoieHealthScience

UMass Amherst researchers create intelligent electronic microsystems from green material

image: This illustration captures the essence of the newly developed electronic microsystem.

Image: 
UMass Amherst

A research team from the University of Massachusetts Amherst has created an electronic microsystem that can intelligently respond to information inputs without any external energy input, much like a self-autonomous living organism. The microsystem is constructed from a novel type of electronics that can process ultralow electronic signals and incorporates a device that can generate electricity "out of thin air" from the ambient environment.

The groundbreaking research was published June 7 in the journal Nature Communications.

Jun Yao, an assistant professor in the electrical and computer engineering (ECE) department and an adjunct professor in biomedical engineering, led the research with his longtime collaborator, Derek R. Lovley, a Distinguished Professor of microbiology.

Both of the key components of the microsystem are made from protein nanowires, a "green" electronic material that is renewably produced from microbes without producing "e-waste." The research heralds the potential of future green electronics made from sustainable biomaterials that are more amenable to interacting with the human body and diverse environments.

This breakthrough project is producing a "self-sustained intelligent microsystem," according to the U.S. Army Combat Capabilities Development Command Army Research Laboratory, which is funding the research.

Tianda Fu, a graduate student in Yao's group, is the lead author. "It's an exciting start to explore the feasibility of incorporating 'living' features in electronics. I'm looking forward to further evolved versions," Fu said.

The project represents a continuing evolution of the team's recent research. Previously, the team discovered that electricity can be generated from humidity in the ambient environment with a protein-nanowire-based Air Generator (or 'Air-Gen'), a device that continuously produces electricity in almost all environments found on Earth. The Air-Gen invention was reported in Nature in 2020.

Also in 2020, Yao's lab reported in Nature Communications that the protein nanowires can be used to construct electronic devices called memristors that can mimic brain computation and work with ultralow electrical signals that match the biological signal amplitudes.

"Now we piece the two together," Yao said of the creation. "We make microsystems in which the electricity from Air-Gen is used to drive sensors and circuits constructed from protein-nanowire memristors. Now the electronic microsystem can get energy from the environment to support sensing and computation without the need of an external energy source (e.g. battery). It has full energy self-sustainability and intelligence, just like the self-autonomy in a living organism."

The system is also made from environmentally friendly biomaterial - protein nanowires harvested from bacteria. Yao and Lovley developed the Air-Gen from the microbe Geobacter, discovered by Lovley many years ago, which was then utilized to create electricity from humidity in the air and later to build memristors capable of mimicking human intelligence.

"So, from both function and material," says Yao, "we are making an electronic system more bio-alike or living-alike."

"The work demonstrates that one can fabricate a self-sustained intelligent microsystem," said Albena Ivanisevic, the biotronics program manager at the U.S. Army Combat Capabilities Development Command Army Research Laboratory. "The team from UMass has demonstrated the use of artificial neurons in computation. It is particularly exciting that the protein nanowire memristors show stability in aqueous environment and are amenable to further functionalization. Additional functionalization not only promises to increase their stability but also expand their utility for sensor and novel communication modalities of importance to the Army."

Credit: 
University of Massachusetts Amherst

Keeping a closer eye on seabirds with drones and artificial intelligence

DURHAM, N.C. - Using drones and artificial intelligence to monitor large colonies of seabirds can be as effective as traditional on-the-ground methods, while reducing costs, labor and the risk of human error, a new study finds.

Scientists at Duke University and the Wildlife Conservation Society (WCS) used a deep-learning algorithm--a form of artificial intelligence--to analyze more than 10,000 drone images of mixed colonies of seabirds in the Falkland Islands off Argentina's coast.

The Falklands, also known as the Malvinas, are home to the world's largest colonies of black-browed albatrosses (Thalassarche melanophris) and second-largest colonies of southern rockhopper penguins (Eudyptes c. chrysocome). Hundreds of thousands of birds breed on the islands in densely interspersed groups.

The deep-learning algorithm correctly identified and counted the albatrosses with 97% accuracy and the penguins with 87%. All told, the automated counts were within 5% of human counts about 90% of the time.
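An agreement figure like "within 5% of human counts about 90% of the time" can be computed by comparing automated and human counts plot by plot. The sketch below is illustrative only - the function name and the sample numbers are made up, not the study's data.

```python
def agreement_rate(auto_counts, human_counts, tol=0.05):
    """Fraction of survey plots where the automated count falls within
    a relative tolerance `tol` of the corresponding human count."""
    pairs = list(zip(auto_counts, human_counts))
    hits = sum(abs(a - h) <= tol * h for a, h in pairs)
    return hits / len(pairs)

# Hypothetical example: two of three plots agree to within 5%.
rate = agreement_rate([98, 105, 200], [100, 100, 150])
```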

"Using drone surveys and deep learning gives us an alternative that is remarkably accurate, less disruptive and significantly easier. One person, or a small team, can do it, and the equipment you need to do it isn't all that costly or complicated," said Madeline C. Hayes, a remote sensing analyst at the Duke University Marine Lab, who led the study.

Monitoring the colonies, which are located on two rocky, uninhabited outer islands, has until now been done by teams of scientists who count the number of each species they observe on a portion of the islands and extrapolate those numbers to get population estimates for the full colonies. Because the colonies are large and densely interspersed and the penguins are much smaller than the albatrosses (and, thus, easy to miss), counts often need to be repeated. It's a laborious process, and the presence of the scientists can disrupt the birds' breeding and parenting behaviors.

To conduct the new surveys, WCS scientists used an off-the-shelf consumer drone to collect more than 10,000 individual photos, which Hayes converted into a large-scale composite visual using image-processing software.

She then analyzed the image using a convolutional neural network (CNN), a type of AI that employs a deep-learning algorithm to analyze an image and differentiate and count the objects it "sees" in it - in this case, two different species of seabirds. These counts were added together to create comprehensive estimates of the total number of birds found in the colonies.
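The detect-and-tally step the article describes can be sketched as sliding a grid of tiles over the composite image, classifying each tile, and summing per-species counts. This is a minimal illustration, not the study's pipeline: the `classify_tile` callback, the tile size, and the species labels are all assumptions standing in for a trained CNN.

```python
from collections import Counter

TILE = 64  # hypothetical tile edge length in pixels

def count_birds(composite, classify_tile):
    """Slide a non-overlapping tile grid over the composite image and
    tally the species label the classifier reports for each tile.
    `classify_tile(image, y, x, size)` stands in for a trained CNN and
    returns a species name or "background"."""
    h, w = len(composite), len(composite[0])
    counts = Counter()
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            label = classify_tile(composite, y, x, TILE)
            if label != "background":
                counts[label] += 1
    return counts

# Toy usage with a dummy classifier on a 128x128 "composite":
composite = [[0] * 128 for _ in range(128)]
dummy = lambda img, y, x, size: "albatross" if x == 0 else "penguin"
counts = count_birds(composite, dummy)
```

In practice the per-tile classifier would itself be a CNN, and overlapping detections would need de-duplication, but the aggregation logic is the same.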

"A CNN is loosely modeled on the human neural network, in that it learns from experience," said David W. Johnston, director of the Duke Marine Robotics and Remote Sensing Lab. "You train the computer to pick up on different visual patterns, like those made by black-browed albatrosses or southern rockhopper penguins in sample images, and over time it learns how to identify the objects forming those patterns in other images such as our composite photo."

Johnston, who is also associate professor of the practice of marine conservation ecology at Duke's Nicholas School of the Environment, said the emerging drone- and CNN-enabled approach is widely applicable "and greatly increases our ability to monitor the size and health of seabird colonies worldwide, and the health of the marine ecosystems they inhabit."

Guillermo Harris, senior conservationist at WCS, co-authored the study. He said, "Counting large seabird colonies of mixed species at remote locations has been an ongoing challenge for conservationists. This technology will contribute to regular population assessments of some species, helping us better understand whether conservation efforts are working."

Crafting and training the CNN can seem intimidating, Hayes noted, but "there are tons of online resources to help you, or, if you don't want to deal with that, you can use a free, pre-built CNN and customize it to do what you need. With a little patience and guidance, anyone could do it. In fact, the code to recreate our models is available online to help other researchers kickstart their work."

Credit: 
Duke University