Tech

New model simulates the tsunamis caused by iceberg calving

Johan Gaume, an EPFL expert in avalanches and geomechanics, has turned his attention to ice. His goal is to better understand the correlation between the size of an iceberg and the amplitude of the tsunami that results from its calving. Gaume, along with a team of scientists from other research institutes, has just unveiled a new method for modeling these events. Their work appears in Communications Earth & Environment, a new journal from Nature Research.

These scientists are the first to simulate the phenomena of both glacier fracture and wave formation when the iceberg falls into the water. "Our goal was to model the explicit interaction between water and ice - but that has a substantial cost in terms of computing time. We therefore decided to use a continuum model, which is very powerful numerically and which gives results that are both conclusive and consistent with much of the experimental data," says Gaume, who heads EPFL's Snow Avalanche Simulation Laboratory (SLAB) and is the study's corresponding author. The other institutes involved in the study are the University of Pennsylvania, the University of Zurich, the University of Nottingham, and Switzerland's WSL Institute for Snow and Avalanche Research.

Improving calving laws

The scientists' method can also provide insight into the specific mechanisms involved in glacial rupture. "Researchers can use the results of our simulations to refine the calving laws incorporated into their large-scale models for predicting sea-level rises, while providing detailed information about the size of icebergs, which represent a sizeable amount of mass loss," says Gaume.

Calving occurs when chunks of ice on the edge of a glacier break off and fall into the sea. The mechanisms behind the rupture generally depend on how high the water is. If the water level is low, the iceberg breaks off from the top of the glacier. If the water level is high, the iceberg is longer and breaks off from the bottom, before eventually floating to the surface owing to buoyancy. These different mechanisms create icebergs of different sizes - and therefore waves of different amplitudes. "Another event that can trigger a tsunami is when an iceberg's center of gravity changes, causing the iceberg itself to rotate," says Gaume. "We were able to simulate all these processes."

In Greenland, the scientists placed a series of sensors at Eqip Sermia, a 3-km-wide outlet glacier of the Greenland ice sheet that ends in a fjord with a 200 m ice cliff. Back in 2014, an iceberg measuring some 1 million m3 (the equivalent of about 300 Olympic-sized swimming pools) broke off the front of the glacier and produced a 50 m-high tsunami; the wave was still 3 m high when it reached the first populated shoreline some 4 km away. The scientists tested their modeling method both on large-scale field datasets from Eqip Sermia and on empirical data on tsunami waves obtained in a laboratory basin at the Deltares institute in the Netherlands.

Projects in the pipeline

Glacier melting has become a major focus area of research today as a result of global warming. One of the University of Zurich scientists involved in the study kicked off a new research project this year with funding from the Swiss National Science Foundation. This project will investigate the dynamics of Greenland's fastest-moving glacier, Jakobshavn Isbrae, by combining data from individual field experiments in Greenland with the results of simulations run using the SLAB model. "Our method will also be used to model chains of complex processes triggered by gravitational mass movements, such as the interaction between a rock avalanche and a mountain lake," says Gaume.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Solar energy collectors grown from seeds

image: Rice University chemical engineering graduate student Siraj Sidhik holds a container of 2D perovskite "seeds" (left) and a smaller vial containing a solution of dissolved seeds that can be used to produce thin films for use in highly efficient optoelectronic devices like high-efficiency solar panels.

Image: 
Photo by Jeff Fitlow/Rice University

HOUSTON - (June 21, 2021) - Rice University engineers have created microscopic seeds for growing remarkably uniform 2D perovskite crystals that are both stable and highly efficient at harvesting electricity from sunlight.

Halide perovskites are hybrid organic-inorganic materials made from abundant, inexpensive ingredients, and Rice's seeded growth method addresses both performance and production issues that have held back halide perovskite photovoltaic technology.

In a study published online in Advanced Materials, chemical engineers from Rice's Brown School of Engineering describe how to make the seeds and use them to grow homogenous thin films, highly sought materials comprised of uniformly thick layers. In laboratory tests, photovoltaic devices made from the films proved both efficient and reliable, a previously problematic combination for devices made from either 3D or 2D perovskites.

"We've come up with a method where you can really tailor the properties of the macroscopic films by first tailoring what you put into solution," said study co-author Aditya Mohite, an associate professor of chemical and biomolecular engineering and of materials science and nanoengineering at Rice. "You can arrive at something that is very homogeneous in its size and properties, and that leads to higher efficiency. We got almost state-of-the-art device efficiency for the 2D case of 17%, and that was without optimization. We think we can improve on that in several ways."

Mohite said achieving homogenous films of 2D perovskites has been a huge challenge in the halide perovskite photovoltaic research community, which has grown tremendously over the past decade.

"Homogeneous films are expected to lead to optoelectronic devices with both high efficiency and technologically relevant stability," he said.

Rice's seed-grown, high-efficiency photovoltaic films proved quite stable, preserving more than 97% of their peak efficiency after 800 hours under illumination without any thermal management. In previous research, 3D halide perovskite photovoltaic devices have been highly efficient but prone to rapid degradation, and 2D devices have lacked efficiency but were highly stable.

The Rice study also details the seeded growth process -- a method that is within the reach of many labs, said study co-author Amanda Marciel, a William Marsh Rice Trustee Chair and assistant professor of chemical and biomolecular engineering at Rice.

"I think people are going to pick up this paper and say, 'Oh. I'm going to start doing this,'" Marciel said. "It's a really nice processing paper that goes into depth in a way that hasn't really been done before."

The name perovskite refers both to a specific mineral discovered in Russia in 1839 and to any compound with the crystal structure of that mineral. For example, halide perovskites can be made by mixing lead, tin and other metals with bromide or iodide salts. Research interest in halide perovskites skyrocketed after their potential for high-efficiency photovoltaics was demonstrated in 2012.

Mohite, who joined Rice in 2018, has researched halide perovskite photovoltaics for more than five years, especially 2D perovskites -- flat, almost atomically thin forms of the material that are more stable than their thicker cousins due to an inherent moisture resistance.

Mohite credited study co-lead author Siraj Sidhik, a Ph.D. student in his lab, with the idea of pursuing seeded growth.

"The idea that a memory or history -- a genetic sort of seed -- can dictate material properties is a powerful concept in materials science," Mohite said. "A lot of templating works like this. If you want to grow a single crystal of diamond or silicon, for example, you need a seed of a single crystal that can serve as template."

While seeded growth has often been demonstrated for inorganic crystals and other processes, Mohite said this is the first time it's been shown in organic 2D perovskites.

The process for growing 2D perovskite films from seeds is identical in several respects to the classical process of growing such films. In the traditional method, precursor chemicals are measured out like the ingredients in a kitchen -- X parts of ingredient A, Y parts of ingredient B, and so on -- and these are dissolved in a liquid solvent. The resulting solution is spread onto a flat surface via spin-coating, a widely used technique that relies on centrifugal force to evenly spread liquids across a rapidly spun disk. As the solvent evaporates, the mixed ingredients crystallize into a thin film.

Mohite's group has made 2D perovskite films in this manner for years, and though the films appear perfectly flat to the naked eye, they are uneven at the nanometer scale. In some places, the film may be a single crystal in thickness, and in other places, several crystals thick.

"You end up getting something that is completely polydisperse, and when the size changes, the energy landscape changes as well," Mohite said. "What that means for a photovoltaic device is inefficiency, because you lose energy to scattering when charges encounter a barrier before they can reach an electrical contact."

In the seeded growth method, seeds are made by slow-growing a uniform 2D crystal and grinding it into a powder, which is then dissolved in solvent in place of the individual precursors. The seeds contain the same ratio of ingredients as the classical recipe, and the resulting solution is spin-coated onto disks exactly as it would be in the original method. The evaporation and crystallization steps are also identical. But the seeded solution yields films with a homogeneous, uniform surface, much like that of the material from which the seeds were ground.

When Sidhik initially succeeded with the approach, it wasn't immediately clear why it produced better films. Fortunately, Mohite's lab adjoins Marciel's, and while she and her student, co-lead author Mohammad Samani, had not previously worked with perovskites, they did have the perfect tool for finding and studying any bits of undissolved seeds that might be templating the homogeneous films.

"We could track that nucleation and growth using light-scattering techniques in my group that we typically use to measure sizes of polymers in solution," Marciel said. "That's how the collaboration came to be. We're neighbors in the lab, and we were talking about this, and I was like, 'Hey, I've got this piece of equipment. Let's see how big these seeds are and if we can track them over time, using the same tools we use in polymer science.'"

The tool was dynamic light scattering, a mainstay technique in Marciel's group. It revealed that solutions reached an equilibrium state under certain conditions, allowing a portion of some seeds to remain undissolved in solution.
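
For readers unfamiliar with the technique, dynamic light scattering does not measure particle size directly; it extracts a diffusion coefficient from fluctuations in the scattered light and converts it to a size. The standard conversion, implied by the method though not spelled out in the release, is the Stokes-Einstein relation,

\[ R_h = \frac{k_B T}{6 \pi \eta D}, \]

where R_h is the hydrodynamic radius, k_B is Boltzmann's constant, T the temperature, \eta the solvent viscosity and D the measured diffusion coefficient. Larger undissolved seed fragments diffuse more slowly, so they show up as a larger R_h, which is what allowed the team to track them over time.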

The research showed those bits of seed retained the "memory" of the perfectly uniform slow-grown crystal from which they were ground, and Samani and Marciel found they could track the nucleation process that would eventually allow the seeds to produce homogeneous thin films.

Mohite said the collaboration produced something that is often attempted and rarely achieved in nanomaterials research -- a self-assembly method to make macroscopic materials that live up to the promise of the individual nanoparticles of which they are composed.

"This is really the bane of nanomaterials technology," Mohite said. "At an individual, single element level, you have wonderful properties that are orders of magnitude better than anything else, but when you try to put them together into something macroscopic and useful, like a film, those properties just kind of go away because you cannot make something homogeneous, with just those properties that you want.

"We haven't yet done experiments on other systems, but the success with perovskites begs the question of whether this type of seeded approach might work in other systems as well," he said.

Credit: 
Rice University

'Flashed' nanodiamonds are just a phase

image: The mechanism proposed by Rice University chemists for the phase evolution of fluorinated flash nanocarbons, showing successive stages with longer and larger energy input. Carbon and fluorine atoms first form a diamond lattice, then graphene and finally polyhedral concentric carbon.

Image: 
Illustration by Weiyin Chen/Rice University

HOUSTON - (June 21, 2021) - Diamond may be just a phase carbon goes through when exposed to a flash of heat, but that makes it far easier to obtain.

The Rice University lab of chemist James Tour is now able to "evolve" carbon through phases that include valuable nanodiamond by tightly controlling the flash Joule heating process they developed 18 months ago.

Best of all, they can stop the process at will to get the product they want.

In the American Chemical Society journal ACS Nano, the researchers led by Tour and graduate student and lead author Weiyin Chen show that adding organic fluorine compounds and fluoride precursors to elemental carbon black turns it into several hard-to-get allotropes when flashed, including fluorinated nanodiamonds, fluorinated turbostratic graphene and fluorinated concentric carbon.

With the flash process introduced in 2020, a strong jolt of electricity can turn carbon from just about any source into layers of pristine turbostratic graphene in less than a second. ("Turbostratic" means the layers are not strongly bound to each other, making them easier to separate in a solution.)

The new work shows it's possible to modify, or functionalize, the products at the same time. The duration of the flash, between 10 and 500 milliseconds, determines the final carbon allotrope.
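
The link between flash duration and final product comes down to how much energy the pulse deposits in the sample. As a rough textbook picture, not a formula from the paper, the Joule heating energy delivered during a flash is

\[ E = \int_0^{t_f} I(t)^2\, R(t)\, \mathrm{d}t, \]

where I is the current driven through the carbon, R is its resistance and t_f is the flash duration; stretching t_f from 10 toward 500 milliseconds deposits more energy, which in the proposed mechanism pushes the fluorinated carbon from the diamond lattice toward graphene and, eventually, polyhedral concentric shells.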

The difficulty lies in how to preserve the fluorine atoms, since the ultrahigh temperature causes the volatilization of all atoms other than carbon. To overcome the problem, the team used a Teflon tube sealed with graphite spacers and high-melting-point tungsten rods, which can hold the reactant inside and avoid the loss of fluorine atoms under the ultrahigh temperature. The improved sealed tube is important, Tour said.

"In industry, there has been a long-standing use for small diamonds in cutting tools and as electrical insulators," he said. "The fluorinated version here provides a route to modifications of these structures. And there is a large demand for graphene, while the fluorinated family is newly produced here in bulk form."

Nanodiamonds are microscopic crystals -- or regions of crystals -- that display the same carbon-atom lattice that macro-scale diamonds do. When first discovered in the 1960s, they were made under heat and high pressure from detonations.

In recent years, researchers have found chemical processes to create the same lattices. A report from Rice theorist Boris Yakobson last year showed how fluorine can help make nanodiamond without high pressure, and Tour's own lab demonstrated using pulsed lasers to turn Teflon into fluorinated nanodiamond.

Nanodiamonds are highly desirable for electronics applications, as they can be doped to serve as wide-bandgap semiconductors, important components in current research by Rice and the Army Research Laboratory.

The new process simplifies the doping part, not only for nanodiamonds but also for the other allotropes. Tour said the Rice lab is exploring the use of boron, phosphorus and nitrogen as additives as well.

At longer flash times, the researchers got nanodiamonds embedded in concentric shells of fluorinated carbon. Even longer exposure converted the diamond entirely into shells, from the outside in.

"The concentric-shelled structures have been used as lubricant additives, and this flash method might provide an inexpensive and fast route to these formations," Tour said.

Co-authors of the paper are Rice graduate students John Tianci Li, Zhe Wang, Wala Algozeeb, Emily McHugh, Kevin Wyss, Paul Advincula, Jacob Beckham and Bo Jiang, research scientist Carter Kittrell and alumni Duy Xuan Luong and Michael Stanford. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.

The Air Force Office of Scientific Research and the Department of Energy supported the research.

Credit: 
Rice University

The science of tsunamis

The word "tsunami" brings immediately to mind the havoc that can be wrought by these uniquely powerful waves. The tsunamis we hear about most often are caused by undersea earthquakes, and the waves they generate can travel at speeds of up to 250 miles per hour and reach tens of meters high when they make landfall and break. They can cause massive flooding and rapid widespread devastation in coastal areas, as happened in Southeast Asia in 2004 and in Japan in 2011.

But significant tsunamis can be caused by other events as well. The partial collapse of the volcano Anak Krakatau in Indonesia in 2018 caused a tsunami that killed more than 400 people. Large landslides, which send immense amounts of debris into the sea, also can cause tsunamis. Scientists naturally would like to know how and to what extent they might be able to predict the features of tsunamis under various circumstances.

Most models of tsunamis generated by landslides are based on the idea that the size and power of a tsunami are determined by the thickness, or depth, of the landslide and the speed of the "front" as it meets the water. In a paper titled "Nonlinear regimes of tsunami waves generated by a granular collapse," published online in the Journal of Fluid Mechanics, UC Santa Barbara mechanical engineer Alban Sauret and his colleagues, Wladimir Sarlin, Cyprien Morize and Philippe Gondret at the Fluids, Automation and Thermal Systems (FAST) Laboratory at the University of Paris-Saclay and the French National Centre for Scientific Research (CNRS), shed more light on the subject. (The article also will appear in the journal's July 25 print edition.)

This is the latest in a series of papers the team has published on environmental flows, and on tsunami waves generated by landslides in particular. Earlier this year, they showed that the velocity of a collapse -- i.e., the rate at which the landslide is traveling when it enters the water -- controls the amplitude, or vertical size, of the wave.

In their most recent experiments, the researchers carefully measured the volume of the granular material, which they then released, causing it to collapse as a cliff would, into a long, narrow channel filled with water. They found that while the density and diameter of the grains within a landslide had little effect on the amplitude of the wave, the total volume of the grains and the depth of the liquid played much more crucial roles.

"As the grains enter the water, they act as a piston, the horizontal force of which governs the formation of the wave, including its amplitude relative to the depth of the water," said Sauret. (A remaining challenge is to understand what governs the speed of the piston.) "The experiments also showed that if we know the geometry of the initial column [the material that flows into the water] before it collapses and the depth of the water where it lands, we can predict the amplitude of the wave."

The team can now add this element to the evolving model they have developed to couple the dynamics of the landslide and the generation of the tsunami. A particular challenge is to describe the transition from an initial dry landslide, when the particles are separated by air, to an underwater granular flow, when the water has an important impact on particle motion. As that occurs, the forces acting on the grains change drastically, affecting the velocity at which the front of grains that make up the landslide enters the water.

Currently, there is a large gap in the predictions of tsunamis based on simplified models that consider the field complexity (i.e., the geophysics) but do not capture the physics of the landslide as it enters the water. The researchers are now comparing the data from their model with data collected from real-life case studies to see if they correlate well and if any field elements might influence the results.

Credit: 
University of California - Santa Barbara

Profiling gene expression in plant embryos one nucleus at a time

Following fertilization, early plant embryos arise through a rapid initial diversification of their component cell types: a series of coordinated cell divisions quickly sculpts the embryo's body plan. This developmental program is orchestrated by transcriptional activation of the plant genome. However, the underlying cellular differentiation programs have long remained obscure because plant embryos are hard to isolate. Previous attempts at creating datasets of plant embryonic differentiation programs could not overcome two main obstacles: either the information gathered lacked cell specificity, or the datasets were contaminated with material from surrounding non-embryonic tissues. Now, a team of PhD students from GMI Group Leader Michael Nodine's lab has developed a method to profile gene expression at the single-cell level in Arabidopsis embryos.

The authors, led by Ping Kao in collaboration with Michael Schon from the Nodine group, couple fluorescence-activated nuclei sorting with single-nucleus RNA sequencing (snRNA-seq): they sort individual nuclei from early plant embryos and sequence the messenger RNA within each nucleus. This provides insight into the various transcription profiles, or transcriptomes, of the cells in the plant embryo. With this approach, the team was able to surmount the obstacles that undermined previous attempts at creating gene expression atlases of plant embryos.
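
To give a sense of what happens after sequencing, the sketch below shows a typical downstream analysis of a single-nucleus count matrix using the open-source Scanpy toolkit. This is not the authors' pipeline, and the file name and parameter values are placeholders; it only illustrates how nuclei are normally grouped into candidate cell types before marker genes are used to name them.

import scanpy as sc  # widely used single-cell/single-nucleus analysis toolkit

# Hypothetical input: a nuclei-by-genes count matrix exported from the
# snRNA-seq pipeline (the file name is a placeholder, not from the study).
adata = sc.read_h5ad("embryo_snRNAseq_counts.h5ad")

# Basic quality filtering: drop near-empty nuclei and rarely detected genes.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalize each nucleus to the same total count, then log-transform.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Restrict to highly variable genes, reduce dimensions, build a neighbor graph.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)

# Cluster nuclei and visualize; clusters are then assigned to embryonic
# cell types by checking known marker genes.
sc.tl.leiden(adata, resolution=0.5)
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")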

To explain their team's unique approach, Michael Schon from Nodine's group readily finds a striking parallel: "If you put a hamburger in a blender, it still has all the same components in all the same ratios, but you lose critical information about the burger's spatial organization. Elements essential to 'burger-ness' exist in the organization of the burger's parts, and these elements are lost in a 'burger smoothie'." Schon elaborates: "Earlier methods consisted of grinding entire plant embryos into a 'plant embryo smoothie'; these were still useful in telling us the molecular components of the plant embryo and their ratios, but important information about the organism's organization was lost. Our transcriptome atlas is an effort to restore the information that most likely got 'averaged-out' in previous attempts."

Using this approach, the Nodine group was able to show gene expression patterns that could clearly distinguish the early Arabidopsis embryonic cell types. Consequently, the current work opens the door for uncovering the molecular basis of pattern formation in plant embryos. "This is the beginning of an exciting era of developmental biology and approaches like ours promise to help reveal how emerging cell types are defined at the beginning of plant life", concludes Michael Nodine with a confident smile of satisfaction.

Credit: 
Gregor Mendel Institute of Molecular Plant Biology

Common perovskite superfluoresces at high temperatures

A commonly studied perovskite can superfluoresce at temperatures that are practical to achieve and at timescales long enough to make it potentially useful in quantum computing applications. The finding from North Carolina State University researchers also indicates that superfluorescence may be a common characteristic for this entire class of materials.

Superfluorescence is an example of a quantum phase transition - when individual atoms within a material all move through the same phases in tandem, becoming a synchronized unit.

For example, when atoms in an optical material such as a perovskite are excited, they can individually release their energy by radiating light - they fluoresce. Each atom starts moving through these phases randomly, but given the right conditions, the atoms can synchronize in a macroscopic quantum phase transition. That synchronized unit can then interact with external electric fields more strongly than any single atom could, creating a superfluorescent burst.
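
In the standard Dicke picture of superradiance and superfluorescence - textbook background, not a derivation from this study - the payoff of synchronization can be stated compactly: N emitters that lock together radiate a burst that scales as

\[ I_{\mathrm{peak}} \propto N^2, \qquad \tau_{\mathrm{burst}} \sim \frac{\tau_{\mathrm{spont}}}{N}, \]

i.e. the burst is far brighter and far shorter than N independent atoms fluorescing on their own, which is why the synchronized unit couples to external fields so much more strongly than any single atom.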

"Instances of spontaneous synchronization are universal, occurring in everything from planetary orbits to fireflies synchronizing their signals," says Kenan Gundogdu, professor of physics at NC State and corresponding author of the research. "But in the case of solid materials, these phase transitions were thought to only happen at extremely low temperatures. This is because the atoms move out of phase too quickly for synchronization to occur unless the timing is slowed by cooling."

Gundogdu and his team observed superfluorescence in the perovskite methyl ammonium lead iodide, or MAPbI3, while exploring its lasing properties. Perovskites are materials with a crystal structure and light-emitting properties useful in creating lasers, among other applications. They are inexpensive, relatively simple to fabricate, and are used in photovoltaics, light sources and scanners.

"When trying to figure out the dynamics behind MAPbI3's lasing properties, we noticed that the dynamics we observed couldn't be described simply by lasing behavior," Gundogdu says. "Normally in lasing one excited particle will emit light, stimulate another one, and so on in a geometric amplification. But with this material we saw synchronization and a quantum phase transition, resulting in superfluorescence."

But the most striking aspects of the superfluorescence were that it occurred at 78 Kelvin and had a phase lifetime of 10 to 30 picoseconds.

"Generally superfluorescence happens at extremely cold temperatures that are difficult and expensive to achieve, and it only lasts for femtoseconds," Gundogdu says. "But 78 K is about the temperature of dry ice or liquid nitrogen, and the phase lifetime is two to three orders of magnitude longer. This means that we have macroscopic units that last long enough to be manipulated."

The researchers think that this property may be more widespread in perovskites generally, which could prove useful in quantum applications such as computer processing or storage.

"Observation of superfluorescence in solid state materials is always a big deal because we've only seen it in five or six materials thus far," Gundogdu says. "Being able to observe it at higher temperatures and longer timescales opens the door to many exciting possibilities."

The work appears in Nature Photonics and is supported by the National Science Foundation (grant 1729383). NC State graduate students Gamze Findik and Melike Biliroglu are co-first authors. Franky So, Walter and Ida Freeman Distinguished Professor of Materials Science and Engineering, is co-author.

Credit: 
North Carolina State University

Health disadvantages of LGB communities increase among younger generations

image: Sociologist Hui Liu hopes the findings will demonstrate that advancements in civil rights and social acceptance for the LGBTQ+ community have not yet translated into health equity.

Image: 
Creative commons via WikiMedia

While the LGBTQ+ community has seen significant advancements in legal rights, political representation and social acceptance over recent years, mental and physical health disparities still exist for queer Americans - and are even worse among younger generations, says a new study from Michigan State University.

In the first-ever population-based national study comparing the mental and physical health of lesbian, gay and bisexual (LGB) Americans with that of their straight counterparts, MSU sociologist Hui Liu and research partner Rin Reczek, professor of sociology at Ohio State University, found that LGB Millennials face larger health disadvantages relative to their straight peers than older LGB generations do, though disparities persist throughout older generations as well.

"Because younger LGB generations have grown up in a more progressive era, we expected that they may experience lower levels of lifetime discrimination and thus have lower levels of health disadvantage than older LGB generations. However, our results showed the opposite to be true," Liu said.

The study, funded by the National Institutes of Health and published in the journal Demography, examined five key indicators of physical and mental health - psychological distress, depression, anxiety, self-rated physical health and activity limitation - of nearly 180,000 study participants across Millennial, Generation X, Baby Boomer and pre-Boomer generational cohorts.

Surprisingly, Liu and Reczek found that health disadvantages for LGB individuals increased among more recent generational cohorts, with LGB Millennials suffering more health disadvantages than LGB Gen-Xers or Baby Boomers. Moreover, bisexual respondents experienced even worse health disparity trends across generations than their gay and lesbian peers.

For example, the study found that gay and lesbian Baby Boomers are 150% more likely to experience both anxiety and depression compared to straight peers; bisexual Boomers are also about 150% more likely to experience anxiety than their straight peers but over twice as likely to experience depression.

Comparatively, for gay and lesbian Millennials, the likelihood of feeling anxious and depressed is almost 200% and 250% higher than that of their straight peers, respectively, while bisexual Millennials are almost 300% and 380% more likely than their straight peers to experience anxiety and depression, respectively.

"Older LGB people have experienced significant interpersonal and institutional discrimination throughout their lives, so they may perceive the current era to be relatively better than the past, and therefore may experience improved well-being as a result of this perception," Liu said.

She also suggests that the findings could be explained by the fact that older LGB people have had more time to develop better coping skills than their younger peers, and that more Millennials identify as LGB than older generations.

Liu is hopeful that this study will demonstrate that advancements in civil rights and social acceptance for the LGBTQ+ community have not yet translated into health equity.

"These health disparities may be a result of more insidious and deeply embedded factors in U.S. society that are not eradicated simply with changes in marriage or discrimination laws," Liu said.

"Instead, more drastic societal changes at both the interpersonal and institutional levels must take place. Public policies and programs should be designed and implemented to eliminate health and other major disadvantages among LGBTQ+ Americans."

Credit: 
Michigan State University

Ben-Gurion U. scientists invent an artificial nose for continuous bacterial monitoring

BEER-SHEVA, Israel, June 21, 2021 - A team of scientists at Ben-Gurion University of the Negev (BGU) has invented an artificial nose capable of continuous bacterial monitoring, a capability that has never previously been achieved and that could be useful in multiple medical, environmental and food applications.

The study was published in Nano-Micro Letters.

"We invented an artificial nose based on unique carbon nanoparticles ("carbon dots") capable of sensing gas molecules and detecting bacteria through the volatile metabolites the emit into the air," says lead researcher Prof. Raz Jelinek, BGU vice president for Research & Development, member of the BGU Department of Chemistry and the Ilse Katz Institute for Nanoscale Science and Technology, and the incumbent of the Carole and Barry Kaye Chair in Applied Science.

The patent-pending technology has many applications including identifying bacteria in healthcare facilities and buildings; speeding lab testing and breath-based diagnostic testing; identifying "good" vs. pathogenic bacteria in the microbiome; detecting food spoilage and identifying poisonous gases.

"BGU has a remarkable track record of sensor development, which has infinite possibilities for real-life application," says Americans for Ben-Gurion University (A4BGU) Chief Executive Officer Doug Seserman. "Our renowned multi-disciplinary research efforts continue to ignite innovation, addressing some of the world's most pressing issues."

The artificial nose senses and distinguishes vapor molecules by recording the changes in capacitance of interdigitated electrodes (IDEs) coated with carbon dots (C-dots). The resulting C-dot-IDE platform constitutes a versatile and powerful vehicle for gas sensing in general, and bacterial monitoring in particular. Machine learning can be applied to train the sensor to identify different gas molecules, individually or in mixtures, with high accuracy.
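
As a rough illustration of the machine-learning step - not the BGU team's code, and with entirely hypothetical file names and features - a classifier of this kind could be trained on capacitance-response features with scikit-learn:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: one row per gas exposure, with features derived from
# the capacitance response of several C-dot-coated IDEs (e.g., peak change,
# response time, recovery slope for each electrode coating).
X = np.load("capacitance_features.npy")   # shape (n_exposures, n_features); placeholder file
y = np.load("gas_labels.npy")             # e.g., "ethanol", "ammonia", "bacterial headspace"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A generic ensemble classifier; the release does not specify which model was used.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))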

Credit: 
American Associates, Ben-Gurion University of the Negev

Poaching affects behavior of endangered capuchin monkeys in Brazilian biological reserve

image: In a habitat with high hunting pressure, the risk of predation influences the habits of these monkeys more than the availability of food.

Image: 
Irene Delval/IP-USP

A study conducted in the Una Biological Reserve in the state of Bahia, Brazil, shows that in a habitat with high hunting pressure the risk of predation has such a significant impact on the behavior of the Yellow-breasted capuchin monkey Sapajus xanthosternos that it even avoids areas offering an abundant supply of plant biomass and invertebrates, its main sources of food.

An article reporting the findings of the study is published in the American Journal of Primatology.

"Many theories in the field of primatology assume that pressure to find food is more important that predation pressure. In this study we were able to show that predation pressure in Una counts for more in deciding where to be than where food is most abundant. These animals spend less time where food is plentiful because they perceive a higher risk of predation there. Another very important point is that this risk isn't posed only by natural predators but also by human predators - by poachers. Because of hunting pressure, they spend less time in places where the most food is available," said Patrícia Izar, last author of the article. Izar is a professor in the Department of Experimental Psychology at the University of São Paulo's Institute of Psychology (IP-USP).

The study grew out of research conducted in Una by Priscila Suscke, first author of the article, for her PhD. According to the researchers, the reserve contains "a mosaic of habitats" with three predominant types of vegetation: mature forest, secondary forest, and an agroforestry system known as cabruca, in which cacao trees introduced to replace the understory thrive in the shade of the native forest.

"It's not that food doesn't influence use of the area, but that in these different forest landscape environments in the Una Biological Reserve each environment contributes different amounts of food, and each poses a different level of risk [in terms of predation and poaching]," Suscke said."Our analysis of the factors influencing the monkeys' use of these three environments showed that the group avoided the area with the largest food supply because of the risk involved."

The study was supported by São Paulo Research Foundation - FAPESP via a PhD scholarship awarded to Suscke and a Regular Research Grant awarded to Izar. "All my research on primates for the past 20 years has been basically funded by FAPESP, although I've had support from other agencies," Izar said.

Data collection

To collect field data, Izar and three trained observers watched the group of capuchin monkeys, which varied between 32 and 37 individuals. They followed the group simultaneously and began collecting data only when interobserver agreement accuracy reached 85%. The training period lasted about three months. All observations were recorded with the aid of a GPS unit, so that all reported occurrences were georeferenced.

"In estimating the area actually used by the animals for survival, which was smaller than the area of the conservation unit, we took into account all georeferenced points, including foraging and sleeping sites," explained geographer Andrea Presotto, second author of the article and a professor in the Department of Geography and Geosciences at Salisbury University in the United States.

The researchers observed foraging behavior using fruit left on aluminum trays anchored to the ground and traps in the form of shallow pits into which invertebrates fell and out of which they were unable to climb.

Other behaviors besides feeding, such as resting, traveling, interacting with other monkeys, keeping watch, and so on, were recorded every 15 minutes for each individual. To reflect risk perception, the researchers noted alarm calls and vigilance behavior in each habitat, also georeferenced. The animals' reactions to the alarms were the basis for an analysis of perceived predation risk and its influence on their behavior.

"The study cross-referenced the data collected on foraging behavior, reactions to predators, and interactions with the environment, in conjunction with objective measurements of that environment, as well as food supply, and what we call absolute predation risk, based on the density of predators in the area," Izar said.

Landscape of fear

Presotto used the field data to produce maps for five spatial predation risk variables: hunting pressure, pressure from terrestrial or aerial predators, vigilance, and silence, each in relation to the three forest environments. This so-called "landscape of fear" approach consists of a visual model that helps explain how fear can change the use of an area by animals as they try to reduce their vulnerability to predation.

"The intensity of each variable was calculated in the GIS [geographic information system] using the kernel density method to estimate the number of occurrences in a specific area. For example, whenever an attack by an aerial predator was observed, the point was recorded using the GPS unit. The model told us where such occurrences occurred most," Presotto said.

The maps and statistical model produced by Presotto to display predation risk variables confirmed the group's initial hypotheses. "Evidence of hunting by humans was most abundant in the cabruca, but it was also found in the transition zones between mature and secondary forest and cabruca areas. Moreover, the monkeys were silent more frequently in the cabruca than in the other two landscapes. Perceived risk from terrestrial predators was strongest in secondary forest, and from aerial predators in cabruca and mature and secondary forest areas, especially transition zones. The monkeys were vigilant more frequently in the cabruca and a large secondary forest area," said Presotto, who is mounting a georeferenced database on the topic.

Reactions vary to different types of predator, Suscke noted. "What matters is perceived predation risk - how the prey perceives where in the landscape there's less or more risk of being predated," she said. "As we refined our analysis, we found that different predators affect the prey's perception and behavior differently, and we were able to make separate metrics for aerial and terrestrial predators, as well as poachers. We were also able to show the importance of hunting in determining the pattern of use of the area by these monkeys, and above all that the risk of being hunted negatively affected their use of the area."

The researchers are also studying capuchin monkeys in two other locations: Fazenda Boa Vista (Piauí state), and Carlos Botelho State Park (São Paulo state). "Because we have comparative studies, we can say that the monkeys in Una Biological Reserve display a higher predation risk perception in terms of more frequently occurring alarm behaviors, such as falling silent or freezing, and that these appear to be specific to hunting," Izar said, recalling that capuchin monkeys are naturally very noisy. "Our article points to yet another negative effect of anthropic pressure on animal behavior."

Monkeys are not pets

For Suscke, the article also points to thoughts about public policy. "Poaching has a major negative effect. Conservation units have been created for many years, and this is a commendable policy, but our findings point to the importance of proper surveillance in order to take good care of them," she said. "It's also important to educate the public, given the existence of recreational hunting as well as poaching, both opportunistically for food and systematically for animal trafficking. It's not uncommon to see monkeys kept as pets. In these cases, the poachers usually capture the mother and sell the infant. The Yellow-breasted capuchin is a critically endangered species, so the issue is hard to resolve and must be the object of tougher policies."

Izar stresses that the list of wild animals that can legally be sold as pets recently issued by Brazil's National Environmental Council (CONAMA) is a threat to primates. "The pressure on them is very strong in Brazil, and the Brazilian Primatology Society has launched a campaign entitled 'Monkeys Aren't Pets'. We know that legalization of the commercial breeding of wild animals leads to an increase in illegal trafficking of animals captured in their natural habitats, because commercially bred animals are much more expensive," she said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Creating cooler cities

If you've ever been in a city's central core in the middle of summer, you know the heat can be brutal--and much hotter than in the surrounding region.

Temperatures in cities tend to be several degrees warmer than in their surrounding rural areas, a phenomenon called the Urban Heat Island (UHI) effect. Cities on virtually every inhabited continent have been observed to be 2-4ºC warmer than the nearby countryside. This happens because urban infrastructure, especially pavements, absorbs far more heat than natural vegetated surfaces. The resulting heat pollution drives up air conditioning and water costs, while also posing a public health hazard.

One mitigation strategy called gray infrastructure involves the modification of impermeable surfaces (walls, roofs, and pavements) to counter their conventional heating effect. Typical urban surfaces have a solar reflectance (albedo) of 0.20, which means they reflect just 20 percent of sunlight and absorb as much as 80 percent. By contrast, reflective concrete and coatings can be designed to reflect 30-50 percent or more. Cities like Los Angeles have already used reflective coatings on major streets to combat heat pollution, although the solution can be expensive to implement city-wide.
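
A quick back-of-the-envelope example (illustrative numbers, not from the paper) shows why albedo matters. The solar flux a surface absorbs is

\[ q_{\mathrm{abs}} = (1 - \alpha)\, q_{\mathrm{in}}, \]

where \alpha is the albedo and q_in the incident irradiance. At a typical clear-sky peak of about 1,000 W/m2, a conventional surface with \alpha = 0.20 absorbs roughly 800 W/m2, while a reflective surface with \alpha = 0.40 absorbs about 600 W/m2 - a quarter less heat taken up by every square meter of pavement at midday.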

Researchers at the University of Pittsburgh Swanson School of Engineering used a Computational Fluid Dynamics model to find ways to decrease cost and increase usage of cooler surfaces. The paper, published in the journal Nature Communications, examined the possibility of applying cooler surfaces to just half the surfaces in a city.

"This could be an effective solution if the surfaces selected were upstream of the dominant wind direction," said lead author Sushobhan Sen, postdoctoral associate in the Department of Civil and Environmental Engineering. "A 'barrier' of cool surfaces preemptively cools the warm air, which then cools the rest of the city at a fraction of the cost. On the other hand, if the surfaces are not strategically selected, their effectiveness can decline substantially."

This research gives urban planners and civil engineers an additional way to build resilient and sustainable infrastructure using limited resources.

"It's important for the health of the planet and its people that we find a way to mitigate the heat produced by urban infrastructure," said coauthor Lev Khazanovich, the department's Anthony Gill Chair Professor of Civil and Environmental Engineering. "Strategically placed reflective surfaces could maximize the mitigation of heat pollution while using minimal resources."

Credit: 
University of Pittsburgh

Robot-assisted surgery: Putting the reality in virtual reality

Cardiac surgeons may be able to better plan operations and improve their surgical field view with the help of a robot. Controlled through a virtual reality parallel system as a digital twin, the robot can accurately image a patient through ultrasound without the hand cramping or radiation exposure that hinder human operators. The international research team published their method in IEEE/CAA Journal of Automatica Sinica.

"Intra-operative ultrasound is especially useful, as it can guide the surgery by providing real-time images of otherwise hidden devices and anatomy," said paper author Fei-Yue Wang, Director of the State Key Laboratory of Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. "However, the need for highly specialized skills is always a barrier for reliable and repeatable acquisition."

Wang noted that the availability of onsite sonographers can be limited, and that many procedures requiring intra-operative ultrasound also often require X-ray imaging, which could expose the operator to harmful radiation. To mitigate these challenges, Wang and his team developed a platform for robotic intra-operative trans-esophageal echocardiography (TEE), an imaging technique widely used to diagnose heart disease and guide cardiac surgical procedures.

"Our result has indicated the use of robot with a simulation platform could potentially improve the general usability of intra-operative ultrasound and assist operators with less experience," Wang said.

The researchers employed parallel control and intelligence to pair an operator with the robot in a virtual environment that accurately represents the real environment. Equipped with a database of ultrasound images and a digital platform capable of reconstructing anatomy, the robot could navigate the target areas for the operator to better visualize and plan potential surgical corrections in computational experiments.

"Such a system can be used for view definition and optimization to assist pre-planning, as well as algorithm evaluations to facilitate control and navigation in real-time," Wang said.

Next, the researchers plan to further integrate the currently proposed parallel real/virtual system with specific clinical needs to assist the translational research of such imaging robots.

"The ultimate goal is to integrate the virtual system and the physical robot for in-vivo clinical tests, so as to propose a new diagnosis and treatment protocol using parallel intelligence in medical operations," Wang said.

Credit: 
Chinese Association of Automation

Greenhouse gas data deep dive reaches new level of 'reasonable and true'

image: Researcher Yushu Xia (pictured) and others from the University of Illinois and Argonne National Laboratory have mapped nitrous oxide emissions from corn fertilizers to the county level, allowing greater precision in life cycle analysis for corn ethanol.

Image: 
Yushu Xia, University of Illinois

URBANA, Ill. - For the most accurate accounting of a product's environmental impact, scientists look at the product's entire life cycle, from cradle to grave. It's a grand calculation known as a life cycle assessment (LCA), and greenhouse gas emissions are a key component.

For corn ethanol, most greenhouse gas emissions can be mapped to the fuel's production, transportation, and combustion, but a large portion of the greenhouse gas calculation can be traced right back to the farm. Because of privacy concerns, however, scientists can't access individual farm management decisions such as fertilizer type and rate.

Nitrogen fertilizer data are an important piece of the calculation because a portion of these fertilizers wind up in the atmosphere in the form of nitrous oxide, a highly potent greenhouse gas. Corn nitrogen fertilizer data are publicly available at the national and state levels, but scientists argue this level of resolution masks what's really being applied on farms across the country and could lead to inaccurate LCAs for corn ethanol.

In a new study from the University of Illinois and the U.S. Department of Energy's Argonne National Laboratory, researchers developed the first county-level nitrogen application datasets for corn, dramatically improving the accuracy of greenhouse gas calculations for the crop.

"Having good data is really important to foster both a shared discussion and greater confidence in LCAs. We've seen some abuses of life cycle analysis using really crude numbers, downscaling big averages that can really vary a lot. So even though the county level still isn't as precise as we would like, it's a big accomplishment to get to that scale," says Michelle Wander, professor in the Department of Natural Resources and Environmental Sciences at Illinois and co-author on the study.

Hoyoung Kwon, principal environmental scientist in the Systems Assessments Center at Argonne and co-author on the study, says the protocol and findings will help the agricultural and bioeconomy community better understand the impacts of high-resolution nitrogen fertilizer data on corn-based biofuel LCAs.

"Nitrous oxide makes up about half of the total greenhouse gases associated with corn farming," Kwon says. "Now we can differentiate nitrous oxide emission associated with corn farming on the county level, and can show how much these emissions vary with location and farming practice."

Yushu Xia, who led the analysis and recently finished her doctoral program with Wander, used two approaches to determine county-level nitrogen fertilizer and manure usage.

The first, which Xia calls the top-down approach, was a bit like putting a puzzle together using different-sized pieces. At the county level, she found data for nitrogen fertilizer and manure inputs, but the numbers were aggregated across all crops, not corn specifically. The state-level dataset included fertilized corn area, so it was a matter of matching county with state. The state dataset also included nitrogen inputs, but aggregated them across fertilizer types. Data validation, or double-checking state and county information, therefore became another puzzle.

"For the top-down approach, we used data derived from fertilizer sales, information compiled by the Association of American Plant Food Control Officials. So we assume these numbers are relatively accurate; somebody actually bought that nitrogen. Yushu went through painstaking effort, basically using that crop data layer like a jigsaw puzzle to figure out how much corn is where and in what rotation over time. And then also for the manure: How many animals are there? Where are they? What kind of animal waste and how much? It's literally a budgeting effort to try to find out what's reasonable and true," Wander says.

Xia's second approach took corn yield, crop rotations, and soil properties from the county level and estimated nitrogen inputs based on the amount of nitrogen it would take to achieve that yield. Comparing the results of the two approaches told Xia that farmers are applying nitrogen in excess of what's needed.

"Nationally, the weighted averages of corn nitrogen inputs based on corn planted area exceeded nitrogen needs by 60 kilograms per hectare, with a nitrogen surplus found in 80% of all U.S. corn producing counties," Xia says.

Excess application was most pronounced in the Midwest, followed by the Northern Plains. The Southeast and Northwest had comparatively low nitrogen application rates and surplus levels. Western states were more variable overall.

Xia says the technique can be useful beyond nitrous oxide emissions estimations.

"Our approach can also be used to estimate nitrogen leaching, ammonia emissions, other greenhouse gas emissions, or the water and carbon footprint. These data improvements can really help to create and utilize better ecosystem models and life cycle analysis."

Kwon indicates the new approach could potentially be used by policymakers at the national level.

"The EPA's national greenhouse gas inventory report currently uses state-level nitrogen fertilizer data to generate national estimates of nitrous oxide emissions from fertilizer. If they apply these high-resolution county-level data, they can refine those numbers on a national scale."

The results could also help farmers make more informed management decisions.

"Fertilizer prices are sky high right now, so since our results suggest some farmers are over-applying up to a third of their nitrogen, they could probably back off a bit and save some money," Wander says.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Separating natural and man-made pollutants in the air

image: (a) Dominant land cover types in the study domain (74°E, 27°N--80°E, 30°N) are urban areas, croplands and desert shrublands. (b) Timeline of lockdown policies, where BAU refers to business-as-usual conditions (for details see Table S1). (c) Mean NO2 column density (TROPOMI) during February 1 to March 20, 2020. Representative location of an urban area (Delhi) and rural background (Fatehabad) is shown in black box, in addition to other prominent emission sources (power-plant at Dadri and Harduaganj and an industrial cluster at Panipat) marked in a triangle. Figures generated using 'Cartopy' version 0.16 and 'Rasterio' version 1.2 modules of Python 3.6 (https://www.python.org/downloads/release/python-360/).

Image: 
Misra P. et al., 2021, Scientific Reports, Springer Nature, doi.org/10.1038/s41598-021-87673-2

COVID-19 has changed the world in unimaginable ways. Some have even been positive, with new vaccines developed in record time. Even the extraordinary lockdowns, which have had severe effects on movement and commerce, have had beneficial effects on the environment and therefore, ironically, on health. Studies from all around the world, including China, Europe and India, have found major drops in the level of air pollution. However, to fully understand the impact of anthropogenic causes, it is important to separate them from natural events in the atmosphere like wind flow.

To demonstrate this point, a new study by researchers at the Research Institute for Humanity and Nature, Japan, uses satellite data and mathematical modeling to explain just how great the lockdown effect on nitrogen oxides has been in Delhi, India, one of the world's most polluted cities, and its surrounding area. This study was carried out under the activity "Mission DELHIS (Detection of Emission Change of air pollutants: Human Impact Studies)", part of the RIHN project Aakash (meaning "sky" in Hindi, a word of Sanskrit origin) (https://www.chikyu.ac.jp/rihn_e/covid-19/topics.html#topics6).

"Nitrogen oxides are good chemical tracers for testing model hypothesis, because besides their health effects, they have a short lifetime. Therefore, it is unlikely wind will bring nitrogen oxides from far away." explains Professor Sachiko Hayashida, who led the study.

Nitrogen oxides naturally change due to dynamic and photochemical conditions in the atmosphere, and are emitted from the Earth's surface by both natural and anthropogenic activities. Therefore, Hayashida argues, looking simply at their concentration levels in the atmosphere provides only a crude impression of man-made contributions.

"COVID-19 pandemic has given us an opportunity of social experiment, when we can discriminate the anthropogenic effects on nitrogen oxides from the natural ones caused by atmospheric conditions and natural emissions, because only anthropogenic emissions decreased due to the lockdown. These confounders affect policy to control air quality" she says.

Strict lockdown was enforced in Delhi for two months in 2020, from the end of March to the end of May. This period coincides with the transition in atmospheric conditions, such as actinic flux, from low in spring to high in early summer, and also from stagnant winds to high ventilation across the entire northern India region.

The researchers analyzed seasonal and inter-annual changes using multi-year satellite data to predict what the levels would be had there been no lockdown. They estimated top-down emissions using a steady-state continuity equation. The study's findings clearly show that the natural conditions could not explain the dramatic drop in 2020 nitrogen oxide levels. Not even close.
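
In its simplest textbook form - a sketch of the idea rather than the exact formulation used in the paper - a steady-state continuity budget for a short-lived gas reads

\[ E \;\approx\; \frac{\Omega}{\tau} \;+\; \nabla \cdot \left( \Omega\, \mathbf{w} \right), \]

where E is the surface emission rate, \Omega the satellite-observed NO2 column density, \tau its effective chemical lifetime and \mathbf{w} the horizontal wind. Because the lifetime is only a few hours, the loss term \Omega/\tau dominates near sources, which is why the columns over Delhi chiefly reflect local emissions and why the method can separate emission changes from changes in winds and photochemistry.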

"Our calculations suggested that 72% of nitrogen oxides emissions in urban centres are the resulted solely from traffic and factories," said Hayashida.

Interestingly, levels recovered after the lockdown more quickly in rural areas than in urban ones, an effect attributed to agricultural activities such as crop-residue burning, which resumed almost immediately. Unlike factories, agriculture continued during the lockdown, albeit at a reduced pace, because the restrictions on it were less stringent.

Hayashida says that her team's approach should have an impact on how we study harmful chemical species emitted into the atmosphere.

"Our findings show the importance of analyzing top-down emissions and not just atmospheric concentrations. We expect our approach to guide effective policy on air pollution," she said.

Credit: 
Research Institute for Humanity and Nature

There's a good reason online retailers are investing in physical stores

Researchers from Colorado State University, Amazon, and Dartmouth College have published a new paper in the Journal of Marketing that examines the role of physical stores in selling "deep" products.

The study, titled "How Physical Stores Enhance Customer Value: The Importance of Product Inspection Depth," is authored by Jonathan Zhang, Chunwei Chang, and Scott Neslin.

While some traditional offline retailers are struggling and closing stores (e.g., Macy's, Walgreens), online retailers are opening them (e.g., Amazon, Warby Parker). This conflicting trend raises the question: what is the physical store's role in today's multichannel environment?

The research team posits that products differ in the inspection depth - "deep" or "shallow" - that customers require to purchase them. Deep products require ample inspection for the customer to make an informed decision. The researchers propose that physical stores provide the physical engagement opportunity customers need to purchase deep products.

To test this thesis, the researchers conducted three studies. The first used transaction data from a national multichannel outdoor-product retailer. Two lab experiments demonstrated the same effect.

The large-scale transactional data, involving 50,000 customers, show that using a "deep products in-store" promotional strategy to migrate new customers from a "low-value state" to a "high-value state" increases average spending per trip by 40%, long-term sales by 20%, and profitability by 22%.

The lab experiments show that:

-When new customers are onboarded to purchase a "deep product in-store" as their first purchase from a new retailer, their re-patronage intention for this retailer increases by 12% compared with all other product/channel combinations.

-When new customers are directed to purchase a "deep product in-store" as their first purchase from a new retailer, they are more likely to: 1) buy deep products online in the future, indicating that they generalize trust across channels; and 2) buy adjacent categories online, indicating that they generalize trust across categories.

The last decade has witnessed a marked increase in the opening of physical stores by online retailers, despite myriad changes in the retailing environment, which suggests that these findings are not ephemeral. Zhang says, "The general lesson of our research is for retailers to create a concrete, tangible, and multi-sensory experience for customers buying products that require this physical engagement. This sets the stage for favorable experiential learning and increased customer value." Retailers can do this in numerous ways:

First, when retailers find that a customer is buying deep products online but their spending is decreasing in value, they can provide a promotion for deep products in-store. This can increase customer value.

Second, retailers need to enhance physical engagement for deep products through merchandising and training sales personnel to walk customers through the engagement - e.g., by helping customers try and use deep products in-store.

Third, retailers cannot infer product inspection depth solely from predefined product categories, because inspection depth varies widely within a category. Rather, management should infer inspection depth using the proposed measures or independent expert judges.

Fourth, retailers should use a deep/offline onboarding strategy for new customers. That is, they should use acquisition channels that encourage the first purchase to be deep/offline.

Zhang adds, "We also discuss related issues such as using stores versus showrooms; fielding full or limited staff; selling private label goods; designing loyalty and buy online, pickup in-store (BOPIS) programs; and leveraging technology to create physical engagement in online settings."

Credit: 
American Marketing Association

Carcinogen-exposed cells provide clues in fighting treatment-resistant cancers

image: New research from Massachusetts General Hospital explores how the mutation-independent effect of environmental carcinogens leads to the recruitment of CD8+ T cells, the dominant antitumor cell type. Here, T cells (red and green) attack carcinogen-exposed breast cancer cells (light blue).

Image: 
Mei Huang, PhD

BOSTON - Researchers from Massachusetts General Hospital (MGH) have discovered a biological mechanism that transforms cells exposed to carcinogens from environmental factors like smoking and ultraviolet light into immunogenic cells that can be harnessed therapeutically to fight treatment-resistant cancers. As reported in Science Advances, that mechanism involves spurring the release of small proteins known as chemokines which, in turn, recruit antitumor immune cells (CD8+ T cells) to the tumor site to block metastasis, potentially enhancing the effectiveness of a new generation of immunotherapies.

"Immunotherapeutics have shown tremendous promise in recent years, but the fact is their response rate for many types of cancers is very low," says senior author Shadmehr (Shawn) Demehri, MD, PhD, an investigator in the Center for Cancer Immunology and the Cutaneous Biology Research Center at MGH. "We showed how cells exposed to certain carcinogens become immunogenic, that is, become targets for the immune attack, and how that exposure might be exploited to treat such major forms of cancer as breast and other epithelial cancers."

CD8+ T cells are known to effectively attack cells exposed to environmental carcinogens. But in the past, research has focused mainly on the mutations caused by these exposures in a patient's heritable DNA as the reason for the immune attack. In their laboratory work with mice, the MGH team demonstrated for the first time another consequence of carcinogen exposure that can have significant immunologic implications: the nongenetic alteration of cells by such harmful environmental factors as smoking, ultraviolet light and pollution.

"This finding is particularly important because it could open the door to therapeutic interventions that aren't practical with a DNA approach, since no clinician wants to introduce even more genetic mutations into cancer cells just to make them more immunogenic," explains Demehri. "We learned if there was another immunogenic element associated with carcinogen exposure independent of or even complementary to the presence of the mutation, then you could deliver that factor into a 'cold' tumor to make it 'hot,' meaning it would become immunogenic and responsive to immunotherapies."

That factor is a chemokine known as CCL21, which MGH researchers found to be expressed in breast cancer cells in mice that were exposed to DMBA, a carcinogen similar to that found in cigarette smoke. "Through its signaling, CCL21 recruits CD8+ T cells which infiltrate the tumor, as previous work has shown, and are associated with a significant reduction in the relative risk of distant metastasis," says lead author Kaiwen Li, MD, an investigator in MGH's Center for Cancer Immunology, MGH's Cutaneous Biology Research Center and the Department of Urology at Sun Yat-sen Memorial Hospital at China's Sun Yat-sen University. "Not only does CCL21 induce an antitumor immune response to prevent metastasis, but it also prevents other immune cells known as Tregs (immunosuppressive regulatory T cells) present in tumors from inhibiting the work of the CD8+ T cells."

As an example of how this unique mechanism could be used therapeutically, the MGH team reported that an injection of CCL21 into the tumor might be able to transform cold breast cancers into hot tumors responsive to current immunotherapies.

"We hope that researchers will use these findings to open a much wider field of investigation into cancer immunology," emphasizes Demehri. "Specifically, studies are needed to identify the full array of cytokines and chemokines that are induced by environmental carcinogens in various types of cancers with the goal of harnessing the most potent mediators of antitumor immunity."

Credit: 
Massachusetts General Hospital