Tech

Controlling your home by the power of thought

video: A rhesus monkey (Macaca mulatta) during training in the "Reach Cage". When a target lamp lights up, it instructs the monkey to touch that target, but the monkey must wait until another light at the back of the room (facing him) turns off (the start signal). A sensor in the corresponding target detects the touch.

Image: 
Karin Tilch

Walking across the room to switch on a light - such a simple everyday activity involves enormously complex computations by the brain, as it requires interpretation of the scene, control of gait and planning of upcoming movements, such as the arm movement to the light switch. Neuroscientists at the German Primate Center (DPZ) - Leibniz Institute for Primate Research have now investigated in which brain areas movements toward distant targets - targets that require both arm and walking movements - are encoded, and how these movements are planned in the brain before execution. For this purpose, they created a novel experimental environment, the "Reach Cage". First results with rhesus monkeys show that distant movement targets, which the animals have to walk to, are encoded in the same areas of the brain as nearby targets, even before the animal starts walking. This means that movement goals near and far from the body can be read out from the same brain areas, regardless of whether the goal requires walking. These findings could be harnessed to develop brain-machine interfaces that control smart homes (eLife).

Our highly developed nervous system enables versatile and coordinated movement sequences in complex environments. We only notice the impact on our daily life when we are no longer able to perform certain actions, for example, as a result of a paralysis caused by a stroke. A novel approach to put the patient back in control would be brain-computer interfaces that are able to read signals from the brain. Such signals can be used as control signals not only for neuroprosthetic devices, which aim at directly replacing the lost motor function, but also for any computerized devices such as smartphones, tablets or a smart home.

The development of brain-computer interfaces builds on decades of basic research on the planning and control of movements in the cerebral cortex of humans and animals, especially non-human primates. Until now, scientists have performed such experiments mostly to investigate the planning of controlled hand and arm movements to nearby targets within immediate reach. However, those experiments are too constrained to study action planning in large, realistic environments, such as one's home. For example, turning on the light switch on the opposite wall involves different types of overlapping movements requiring coordination of multiple parts of the body.

Experimental constraints have so far prevented scientists from studying the neural circuits involved in action planning during whole-body movements, since animals must be able to move freely during the brain recordings. Observing a combination of walking and reaching movements, as required for distant targets, called for a completely new experimental environment that was not previously available. The so-called "Reach Cage" provides a test environment that allows researchers to record and interpret movement behavior, and to link it to the related brain activity, while the animals are able to move freely under highly controlled conditions.

For the experiment, two rhesus monkeys were trained to touch targets close to or distant from their body. For distant targets, a walking movement was required to bring the target within reach. Illumination of individual targets instructed the animals which target they should touch. Using multiple video cameras, the movements were recorded in 3D with high temporal and spatial precision. So-called deep-learning algorithms were used to automatically extract the 3D movements of the head, shoulder, elbow and wrist from the video images. Simultaneously, brain activity was recorded wirelessly so that the animals were not restricted in their movements at any time. By measuring the activity of hundreds of neurons from 192 electrodes in three different brain regions, it is now possible to draw conclusions about how movements are planned and executed in parallel.
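The press release does not name the tracking software, but the core geometric step behind multi-camera 3D motion capture is linear triangulation: once a network has located a body part (say, the wrist) in each camera image, its 3D position is recovered from the calibrated camera geometry. The short sketch below illustrates only that step; the projection matrices, point coordinates and function names are illustrative assumptions, not part of the DPZ pipeline.

import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one keypoint seen by several calibrated cameras.

    proj_mats : list of 3x4 camera projection matrices
    points_2d : list of (x, y) pixel coordinates of the same keypoint, one per camera
    Returns the estimated 3D position as a length-3 array.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point X:
        # x * (P[2] @ X) = P[0] @ X   and   y * (P[2] @ X) = P[1] @ X
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The homogeneous solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy example with two cameras: identity intrinsics, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
true_point = np.array([0.5, 0.2, 4.0, 1.0])
obs = [(P @ true_point)[:2] / (P @ true_point)[2] for P in (P1, P2)]
print(triangulate([P1, P2], obs))  # approximately [0.5, 0.2, 4.0]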

Over the course of the training the monkeys performed reaching and walking movements with increasing confidence and optimized their behavior to reach high precision even when the targets were at a greater distance. "In the video analysis we can track the movements very accurately. The wirelessly recorded brain signals are so precise and clear that the activity of individual neurons can be studied and linked to behavior", says Michael Berger.

The results show that motor planning areas of the brain process information about the goal of specific movements even if the goal is at the other end of the room and a whole-body movement is first required to get there. Alexander Gail, head of the Sensorimotor Group, adds: "Such knowledge is not only important to understand the deficits of patients who have difficulty in planning and coordinating actions. The new insights also might turn out particularly useful when developing brain-computer interfaces for controlling smart homes for which goals, such as doors, windows or lights, are distributed throughout a complex environment."

Credit: 
Deutsches Primatenzentrum (DPZ)/German Primate Center

Lyin' eyes: Butterfly, moth eyespots may look the same, but likely evolved separately

image: Doses of the blood thinner heparin altered the eyespot patterns in io and polyphemus moths. The fact that the two species responded differently to heparin suggests that wing pattern develops in distinct ways, even among moths that belong to the same family.

Image: 
Andrei Sourakov/Florida Museum

GAINESVILLE, Fla. --- The iconic eyespots that some moths and butterflies use to ward off predators likely evolved in distinct ways, providing insights into how these insects became so diverse.

A new study manipulated early eyespot development in moth pupae to test whether this wing pattern develops similarly in butterflies and moths. The results suggest that the underlying development of eyespots differs even among moth species in the same family, hinting that moths and butterflies evolved these patterns independently.

Influencing how eyespots form can lead to a better understanding of the respective roles genetics and the environment play in moth and butterfly wing patterns, said lead author Andrei Sourakov.

"Moths stumbled on a very successful evolutionary design over 200 million years ago," said Sourakov, collections coordinator of the Florida Museum's McGuire Center for Lepidoptera and Biodiversity. "That's a long time for evolution to take place. It's easy to assume that things that look the same are the same. But nature constantly finds a way of answering the same question with a different approach."

Sourakov and co-author Leila Shirai, a biologist at the University of Campinas in Brazil, analyzed eyespot development in io and polyphemus moths, two species in the Saturniidae family. The eyespots in the two species responded differently to the study's treatments, though the findings suggest the same signaling pathways were active. The researchers also found moths' wing pattern development, which begins when they are caterpillars, slows just after they enter their pupal stage, a finding that echoes previous butterfly research.

Homing in on the signaling pathways involved in eyespot development - the molecular cascade that produces pigmentation and pattern in moths and butterflies - is central to determining the similarities and differences between moth and butterfly development, Sourakov said. Looking at DNA isn't enough. Instead, scientists need to determine what happens after a gene is expressed to see if seemingly identical wing patterns truly are the same.

"Genetically controlled variation can look identical to environmentally induced variation," Sourakov said. "Variation isn't really produced by genes themselves, but by the intermediate product of the gene - in this case, molecular pathways."

Sourakov and Shirai's research expands on a 2017 study by Sourakov that showed molecules in the blood thinner heparin influenced eyespot development in moths.

In the new study, heparin triggered various changes in moth eyespots, including smudging and a shift in proportion. Despite similar molecular interactions, however, the changes were inconsistent between the io and polyphemus moths, potentially due to the different ways their wing patterns are mapped out by genes.

Sourakov and Shirai were able to detect that wing development was likely paused just after pupation by delivering varying doses of heparin to caterpillars and pupae at different developmental stages. They also found that eyespot tissue transplanted to a different region of the wing during pupation could induce patterning.

Natural history collections are key resources in revealing which wing patterns took hold genetically and became visible in populations, Sourakov said.

"Collections are where it all starts and where it all ends, frankly," he said. "We can generally look at collections as a window into evolution, helping us understand which changes are just lab results and which ones can actually be observed in nature. Variation in genetics and physical characteristics is the toolbox for the evolution of diversity, and diversity is what we study at the museum. Collections help us understand that."

Credit: 
Florida Museum of Natural History

FSU researchers study Gulf of Mexico in international collaboration

When the Deepwater Horizon oil rig suffered a blowout in 2010 and began spilling oil into the Gulf of Mexico, scientists got to work understanding the effects of that disaster.

But limited data on the typical conditions in the Gulf made understanding the potential changes from the spill more difficult. To make sure scientists weren't caught unaware in the future, Florida State University and partner universities investigated current baseline conditions in the southern Gulf to create a series of maps and guides that detail the distribution of carbon, nitrogen and the carbon-14 isotope.

These elements are all important ecological factors that contribute to the natural habitat supporting untold numbers of plants, fish and other marine life.

The study was published in the journal PLOS ONE.

"The Gulf of Mexico is a productive system that is important both in ecology -- by providing unique habitats for various species-- and economy, for industries such as tourism, fishing and the oil industry," said Samantha Bosman, a research assistant in the Department of Earth, Ocean and Atmospheric Science and the paper's lead author. "The ecosystem may not go back to its pre-spill or pre-disturbance conditions, so having a baseline makes it easier to determine how much has changed after the disturbance. That helps you determine if conditions are returning to what was observed prior to a spill or if they are changing to a 'new normal.'"

Florida State researchers worked with colleagues from the University of South Florida, Eckerd College and the National Autonomous University of Mexico to complete fieldwork for the project in 2015 and 2016.

"This joint collaboration of Mexican and U.S. scientists brought together people with unique skill sets and significant local knowledge," said Jeff Chanton, a Robert O. Lawton professor of oceanography in the Department of Earth, Ocean and Atmospheric Science and a co-author of the paper. "They were able to access the environmental health of the southern Gulf, which is subject to significant oil and gas recovery."

The researchers surveyed the southern Gulf of Mexico, an area that had been home to the Ixtoc 1 well, which suffered a blowout and massive oil spill in 1979. Along with measuring the typical distribution of elements, the study looked for evidence of oil within the sediment that could have come from that spill, but they didn't find any signs of that disturbance remaining.

The oil industry and fishing industry exist side-by-side in the Gulf of Mexico. Millions of people live around its coast. An understanding of the baseline conditions in the ecosystem will help scientists looking at the impacts and recovery of the environment in the event of any future oil spills.

One particular area of interest to scientists was the composition of sediment on the seafloor. To understand typical conditions in the region, the researchers measured how much carbon, nitrogen and carbon-14 were in the sediment.

Before this research, there was limited data on the sediment composition in the southern Gulf. Understanding the composition provides scientists greater insight into when fossil fuels might have entered the environment.

For example, scientists can measure the carbon-14 found in organic material. Younger material has higher levels of the isotope, and older materials have lower levels. Right after an oil spill, scientists should find very low levels of carbon-14. As the oil degrades and the ecosystem recovers, the level will increase.

"In the event of an oil spill, that's a big slug of carbon emitted to the surface of the Earth," Chanton said. "And everything in the surface of the Earth has carbon-14 in it because it's pretty modern. Sediments have a modern date. When you add petroleum or some petroleum product to the sediments, they look older, and that's because they're being diluted with fossil fuel."

But that analysis is most useful when scientists know what the typical measurement is for a particular location, allowing them to understand when conditions have returned to normal.

As their sampling sites moved from near the coastline to further out to sea, the researchers found that the amount of carbon increased but the amount of carbon-14 decreased. This information showed them the sediment they were pulling up was older.

"The better you know the pre-existing conditions, the better off you are when something happens," Chanton said.

Credit: 
Florida State University

Unveiling the structure of SARS-CoV-2

While the novel coronavirus has ground much of daily life to a halt, researchers around the world are working overtime to find solutions. Since January, structural biologists have been busy modeling the virus' vital proteins, which could lead to therapeutic breakthroughs. Now, these scientists' efforts are detailed in a feature article in Chemical & Engineering News, the weekly newsmagazine of the American Chemical Society.

As soon as the genomic sequence of SARS-CoV-2, the virus that causes the COVID-19 disease, was mapped, researchers were off to the races in synthesizing its proteins and determining their structures. Compared with the highly complex genome encoded in human DNA, the new coronavirus has a much shorter sequence and stores its genetic information in a single strand of RNA, writes Senior Editor Laura Howes. The encoded proteins help the virus attach to human cells and replicate, and knowledge of their structures is necessary for developing small molecules and other therapeutics to disrupt the proteins. The first structural models were uploaded to the Protein Data Bank, an international database for 3D structural data of large biomolecules, within five weeks of the earliest reported cases of COVID-19.

When it came to discovering the protein structures of SARS-CoV-2, researchers with expertise in other coronaviruses had the advantage. The viruses behind severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are both similar to the novel coronavirus, and scientists used them as a basis for identifying protein sequences and shapes. Advances in technology have also greatly helped structural biologists in this effort, especially when it comes to imaging the proteins. X-ray crystallography, a key tool in structural biology, is largely automated, allowing for rapid and accurate data gathering. However, X-ray crystallography doesn't work for every protein. The relatively new cryo-electron microscopy has emerged as a standout method for capturing the proteins in SARS-CoV-2, reconstructing an array of 2D images into a clear 3D model. With much progress made in a few short months, researchers have abundant information to work with, but they caution that a market-ready treatment or vaccine will take time.

Credit: 
American Chemical Society

Car sharing minus the driver

In 15 years, the share of self-driving passenger vehicles on Moscow's roads will exceed 60%. However, this change will not have a significant impact if personal vehicle travel is not reduced and car sharing services are not expanded. For the first time, HSE University researchers have assessed the effects of self-driving cars on the city. In their study, Alexei Zomarev and Maria Rozhenko lay out predictions for 2030 and 2035. https://foresight-journal.hse.ru/en/2020-14-1/350698855.html

Scenarios for the Future

In the coming decade, self-driving vehicles will come into wider use. Researchers are looking more and more at not just driverless cars themselves, but at the potential of their shared use--so-called shared autonomous vehicles (SAV).

This potential affects two contemporary services: ride sharing (in which travelers share a vehicle for travel along a similar route) and car sharing (short-term car rental). When self-driving cars begin to be used for these services, these services will become one--minus the driver.

New technology will affect people's mobility, people's employment, road safety, the environment, living conditions, and the accessibility of road transport. Taking these factors into account, Alexei Zomarev and Maria Rozhenko created models of how Moscow will look in the near future.

Based on official city strategies and available data regarding the number of passengers per car, auto sales, the efficiency of road networks, and so on, the researchers created four scenarios for 2030 and 2035:

Stagnation

Shared Use

Robotization

Absolute Mobility

The scenarios are characterized by different rates of integrating driverless technology into city transport, as well as different possible states of the public vehicle market, including both traditional and self-driving vehicles.

The Stagnation and Robotization scenarios result from a low level of shared service development, while the Absolute Mobility and Shared Use scenarios are characterized by a high level.

The researchers explain their choices of 2030 and 2035 for their model thus: 2022 marks the beginning of the implementation of driverless taxi services, and 2024 is when self-driving cars will be permitted for private use. The years 2030 and 2035 are therefore optimal times for analyzing changes in transport behavior. Moreover, due to poor source data quality, 2035 represents the outer limit for conducting official forecasts.

Stagnation and Robotization

The pace at which self-driving vehicles will take over Moscow is different in these scenarios, but the speed at which shared car services become used more widely is equally slow: the fraction of shared vehicles on the road per day is insignificant.

In the Stagnation Scenario, self-driving vehicles in 2030 will amount to about 10% of vehicles on the road, and by 2035 this number will be 34%. In the Robotization Scenario, the share of self-driving cars will increase from 18% in 2030 to 61% in 2035. However, the benefits reaped from driverless technology in these scenarios will not reach their full potential.

Due to weaker technological development, the rate of automobile accidents in the Stagnation Scenario will decrease to a lesser extent than in other scenarios. Most cars in the city will not be self-driving, and the human factor will therefore play a larger role. In the Robotization Scenario, insufficient quality control over autonomous transport--its condition, location, and IT system security--will pose a serious obstacle.

This will increase tension on the roads due to growing congestion. By 2035, in both scenarios, the number of cars will reach up to 6 million (compared to 4.7 million in 2019), and the number of car owners will grow accordingly. While there are currently 293 cars per 1,000 Muscovites, this number will reach 464 by 2035.

Road congestion will increase by 13%, and the time drivers spend in traffic jams will increase by 5-10%. The shortage of parking spaces will grow by 1.7 million.

While the cost per trip will decrease overall, total spending will nonetheless go up due to service fees and the higher prices of self-driving cars.

Absolute Mobility and Shared Use

Unlike the previous scenarios, these scenarios are characterized by a large proportion of shared vehicles. In the Shared Use Scenario, the rate of integration of driverless technology into car sharing is low; sharing services instead rely mainly on human drivers. In the Absolute Mobility Scenario, on the other hand, the rate is high: most transport will be carried out by self-driving vehicles, and the effects of car sharing will reach their optimal potential by 2035:

The number of passengers per car will double, to 2.3 passengers;

The average time spent per trip in Moscow will be 55 minutes, which is comparable to the average time spent per trip using a personal vehicle;

Daily car usage will increase from today's average of 6 trips per day to almost 14 trips per day;

Up to 32 people will use one SAV per day, a figure consistent with the two preceding estimates (see the quick check below).
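The three figures above fit together, as a quick check shows (the arithmetic is ours, added for orientation, not the authors' calculation):

$$ 14\ \text{trips per SAV per day} \times 2.3\ \text{passengers per trip} \approx 32\ \text{passengers per SAV per day}. $$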

All this will reduce city residents' need for personal vehicles, save time, and eliminate the need to worry about parking (if parking privileges for shared cars are preserved).

Access to shared services will not be determined by one's socio-economic status or health, though the shift will give rise to social tensions as these services reshape the labor market.

Self-driving cars will lead to job losses for 200,000 people, including drivers, couriers, traffic controllers, and traffic police.

The city will breathe easier: the number of cars will be reduced as much as possible--to 1.6 million. Traditional public transport will have fewer passengers, and some routes will be eliminated.

'Scenarios with a high proportion of shared vehicles will enable a smaller fleet of vehicles to satisfy a greater demand for passenger transportation,' the study authors conclude. According to their estimates, under the most favorable forecast for 2030, 58% of trips per day will be made in shared vehicles, and by 2035, this number will increase to 77%. These numbers correspond with global estimates of the effects of sharing services in megacities.

In Order to Improve Something, Restrict It

Whether or not the scenarios become a reality depends on the measures taken by the authorities to regulate the auto market. The current transport policy, according to the researchers, is inefficient. It allows weakly controlled growth in personal vehicle ownership, and if city governments do not put restrictions in place, driverless cars will get stuck in the same traffic jams as cars with drivers.

The number of self-driving vehicles should also not be allowed to grow unchecked. However, to stop the degradation of the urban environment, the city should invest in them as well as shared transport. Car-sharing companies should be encouraged to purchase self-driving vehicles, and city residents should be encouraged to forego owning personal vehicles in favor of using shared forms of transport.

In order to achieve this, the researchers propose a comprehensive set of measures, distributed in accordance with each scenario.

For the Shared Use and Absolute Mobility scenarios, both of which are characterized by a high usage of shared vehicles, the number of personal vehicles can be reduced if:

- Car ownership becomes more expensive due to the introduction of a transport tax;

- Legislation is passed to limit car ownership rights to those who own or rent a long-term parking spot within walking distance of their home.

To make the Robotization Scenario a reality, it is important to increase the share of self-driving cars. This can be achieved by prohibiting the use of vehicles that are more than 10 years old or by increasing the vehicle tax.

Tax rates can be determined in accordance with the level of a vehicle's autonomy: the more autonomous it is, the lower the tax. In addition, the introduction of so-called e-pricing -- fares for traveling within the city depending on the time of day and travel zone -- can help reduce personal vehicle traffic.

Fiscal changes and new requirements for owners of personal vehicles are far from the only thing needed, but these are the changes city residents will feel most.

Initiatives will turn into social costs, including 'forced changes to transportation behavior models', increased travel costs, and the need to adapt to new technology.

Therefore, transport policy, among other things, should be proactive. It should anticipate adverse effects and keep citizens as informed as possible. Future benefits need to be clearly justified, and measures should be 'introduced gradually and announced in advance, several years before the decisions go into effect.'

Credit: 
National Research University Higher School of Economics

Children don't know how to get proper nutrition information online

audio: Children looking for health information online could end up more prone to obesity. A new study shows a lack of digital health literacy can lead children to misinterpret portions, adopt recommendations intended for adults, or take guidance from noncredible sources.

Image: 
Journal of Nutrition Education and Behavior

Philadelphia, May 6, 2020 - Children looking for health information online could end up more prone to obesity. A new study in the Journal of Nutrition Education and Behavior, published by Elsevier, shows a lack of digital health literacy can lead children to misinterpret portions, adopt recommendations intended for adults, or take guidance from noncredible sources.

Researchers recruited 25 children ages 9-11 years old from a summer youth camp, with their parents' permission. Parents said the children use the internet for an hour or two several days a week, both at home and in school.

"We ran this study to see whether children could find the correct answers to obesity-related health questions online, plus see how they go about searching for such information," explained lead study author Paul Branscum, PhD, RD, of Miami University, Oxford, OH, USA.

What Professor Branscum and his colleague found surprised him. Even with the internet at their fingertips, only three children could correctly say how many food groups there are and name them, and none of the children could correctly say how much of each food group they should eat.

Each question was first posed to the children without using the internet to see how much they already knew on their own. On one question, "How much physical activity or exercise should you get each day?" the number of correct responses actually went down after they used the internet. Eight children changed their answers from correct responses to incorrect ones when they didn't recognize the difference between guidance for adults and children.

"What also surprised me that I hadn't expected at all was how often children went straight to Google Images to find the answers to certain questions," Professor Branscum said. "Some kids would do the search then not even look at the search results but click on the Images tab and just use that information, looking through the images to get their answer."

Researchers gave one parent per child a standard print survey known as the Health Literacy Skills Instrument. It tested their own nutritional knowledge, as studies have shown parents' nutritional literacy can impact children. All of them rated as either "basic" or "proficient" on a three-point scale.

Professor Branscum says this research points to real vulnerabilities in our nutrition education system and possible future problems in our public health system, as lack of knowledge in our children today can lead to health problems in our adults tomorrow.

He plans to continue his work in this field by developing a program to teach children these digital literacy skills, including how to tell which sources are credible, look for child-specific recommendations, understand portion sizes, and persevere in searching until they find the information they are looking for.

Credit: 
Elsevier

New rules for the physical basis of cellular organelle composition

image: Researchers found that the formation of organelles called condensates heavily depends on multiple compounds present in the cell.

Image: 
the researchers

New findings about critical cellular structures have upended common assumptions about their formation and composition and provided new insight into how molecular machines are built in living cells.

Organelles are the cell's organ-like compartments, which are involved in many cellular functions, including the formation of critical cellular machinery, and which also have implications for disease and pathology. A large class of organelles can form without the need for membrane boundaries, and these are increasingly referred to as condensates because they are widely believed to form through liquid condensation, like dew drops on grass. But since these organelles have no walls, researchers still do not understand the rules that govern which molecules get into condensates and which are excluded.

One prediction of the liquid condensation model is that these structures form when the concentration of proteins and other biomolecules becomes high enough to cause them to "condense" from within the surrounding cellular milieu. "It's like adding salt to water. Salt goes into solution; but if you add enough, at some point it stops and salt crystals drop out," said Joshua A. Riback, a postdoctoral researcher in chemical and biological engineering at Princeton University and the article's co-first author, together with former graduate researcher Lian Zhu. "But when we looked at it, we found this is not the case."

The findings, by researchers at Princeton and the St. Jude Children's Research Hospital in Memphis, were published online May 6 in the journal Nature.

Working with colleagues led by Clifford Brangwynne, a professor of chemical and biological engineering at Princeton and an Investigator at the Howard Hughes Medical Institute, Riback and Zhu found that the formation of the condensates also heavily depended on multiple compounds present in the cell. Researchers previously believed that condensates formed when enough of a single biomolecule, such as a protein or RNA, accumulated in cells. But the answer is more interesting than that.

"The ratio of different types of biomolecules is very important," Riback said. "It's called compositional dependence."

Or in Brangwynne's analogy: "It's like cooking: did I add too much salt to this recipe? Well, it depends on how many onions are in the pot already!"

One reason for the compositional dependence is the way that proteins and RNA interact at the molecular level. Condensates require a certain number of interactions, which depends on the types of biomolecules and their compositions. The researchers found that interactions between different types of molecules, or heterotypic interactions, were essential for driving the formation of these structures. Too low or too high a proportion of one biomolecule limits the number of heterotypic interactions that can form.
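A toy calculation (our illustration, not the model used in the paper) makes the point concrete: if a fraction x of a condensate's interacting molecules is of type A and a fraction 1 - x is of type B, and only A-B (heterotypic) contacts stabilize the condensate, then the density of such contacts scales roughly as

$$ n_{AB} \propto x(1 - x), $$

which is largest at a balanced composition (x = 1/2) and falls toward zero when either component dominates, so too little or too much of one biomolecule starves the structure of the heterotypic interactions that drive its assembly.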

The researchers demonstrated the importance of this composition dependence for the assembly of critical molecular machines in cells. One example is the creation of ribosomes -- critical for the production of all proteins in the cell -- which form in liquid condensates called nucleoli. Riback said the formation of ribosomal subunits is similar to folding origami. When the shape is complete, the ribosomal subunit no longer has enough available regions which can form connections that make it stick to the surrounding liquid within nucleoli. So, it is ejected, allowing it to go out and perform its function throughout the cell.

"As the RNA folds into the origami swan, it can no longer contribute," Riback said. "So if it is properly folded, it is expelled."

Richard Kriwacki, a co-principal researcher for the project, said the findings provide important insight for cellular biology.

"This study highlights how the process of phase separation enables a complex membraneless organelle such as the nucleolus to respond to changing cellular conditions by linking its protein composition to its own functional output, ribosomes, which are the molecular machines that synthesize proteins," said Kriwacki, a member of the St. Jude faculty and co-leader of its Cancer Biology Program. "Our data suggest that, as nucleolar protein synthesis varies in cells, phase separation helps control nucleolar structure, dynamics and function."

The researchers performed the experiments by tagging proteins with fluorescent markers and using the fluorescence to see how varying the protein concentration affected condensates' formation. Riback said the inspiration for the experiment came after they tried to increase the size of condensates by triggering cells to over-express certain types of proteins. When this changed the composition and stability of the condensates, they began to examine the cause.

"I wanted to understand how proteins formed condensates," he said. "It turned out to be a lot more complicated in cells then in the test tube."

Credit: 
Princeton University, Engineering School

Researchers find a new way to make functional materials based on polymers of metal clusters

image: Figure a: Visualization of a linear polymer of the 34-atom silver-gold clusters with the inter-cluster metal-metal bonding in the horizontal direction (gold: orange, silver: green, ligand molecules (ethynyladamantane) are shown by grey sticks). Figure b: Shows the packing of metal atoms in the cluster polymer in a view rotated 90 degrees about the horizontal axis.

Image: 
Peng Yuan/Xiamen University

Researchers at the universities of Jyvaskyla (Finland) and Xiamen (China) have discovered a novel way to make functional macroscopic crystalline materials out of nanometer-size 34-atom silver-gold intermetallic clusters. The cluster material has a highly anisotropic electrical conductivity, being a semiconductor in one direction and an electrical insulator in other directions. Synthesis of the material and its electrical properties were investigated in Xiamen and the theoretical characterization of the material was carried out in Jyvaskyla. The research was published online in Nature Communications on May 6, 2020.

The metal clusters were synthesized by means of wet chemistry, adding gold and silver salts and ethynyladamantane molecules to a mixture of methanol and either chloroform or dichloromethane. All syntheses produced the same 34-atom silver-gold clusters with an identical atomic structure, but surprisingly, the use of the dichloromethane/methanol solvent initiated a polymerization reaction after cluster formation in solution and the growth of human-hair-thick single crystals consisting of aligned polymeric chains of the clusters.

The crystals behaved as a semiconducting material in the direction of the polymer and as an electrical insulator in the cross directions. This behavior arises from metal-metal atomic bonding in the polymer direction while in the cross directions the metal clusters are isolated from each other by a layer of the ethynyladamantane.

Theoretical modeling of the cluster material by computer-intensive simulations using density functional theory predicted that the material has an energy gap of 1.3 eV for electronic excitations. This was confirmed by measurements of optical absorption and of electrical conductivity in a layout where single crystals were mounted as part of a field-effect transistor, which showed a p-type semiconductor property of the material. Electrical conductivity along the polymer direction was about 1,800 times higher than in the cross directions.
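For orientation, the predicted gap can be related to the optical measurement with the standard photon-energy conversion (our arithmetic, not a figure from the paper):

$$ \lambda = \frac{hc}{E_g} \approx \frac{1240\ \text{eV·nm}}{1.3\ \text{eV}} \approx 950\ \text{nm}, $$

which places the expected absorption onset of the material in the near-infrared.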

"We were quite surprised by the observation that the polymer formation can be controlled by simple means of changing the solvent molecules. We discovered this probably by good luck, but we hope that this result can be applied in future to design hierarchical nanostructured materials with desired functionality", says Professor Nanfeng Zheng from Xiamen University, who led the experimental work.

"This work shows an interesting example on how macroscopic material properties can be designed in the bottom-up synthesis of nanomaterials. Theoretical modeling of this material was quite challenging due to a large-scale model we had to build to account for the correct periodicity of the polymer crystal. To this end, we benefited very much of having access to some of the largest supercomputers in Europe", says Academy Professor Hannu Hakkinen from the University of Jyvaskyla, who led the theoretical work.

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

Fly ash geopolymer concrete: Significantly enhanced resistance to extreme alkali attack

image: Geopolymer concrete blocks, heat cured at 200 degrees Celsius and then immersed in an extreme alkali medium for 14 days at 80 degrees Celsius (a and b), resist the attack significantly better than blocks heat-cured at 600 degrees Celsius and subjected to the same treatment (c and d) in this series of scanning electron microscope images. The blocks show the presence of a gel-like substance, characteristic of alkali attack from the 3M NaOH solution. The heat-curing significantly reduced the intensity of the attack but could not prevent it.
Fly ash generated by coal power generation can be repurposed into superior-grade geopolymer concrete. However, a critical durability problem has been low resistance to alkali attack.

UJ researchers have found that high temperature heat-treatment at 200 degrees Celsius can halve this harmful mechanism in fly ash geopolymer concretes.

Image: 
Dr Abdolhossein Naghizadeh, University of Johannesburg.

Fly ash generated by coal-fired power stations is an environmental headache, creating groundwater and air pollution from vast landfills and ash dams. Some of the waste product can be repurposed into geopolymer concrete, such as pre-cast heat-cured elements for structures.

However, a critical durability problem has been low resistance to extreme alkali attack. Researchers at the University of Johannesburg have found that high temperature heat-treatment (HTHT) can reduce this harmful mechanism in fly ash geopolymer concrete by half.

"In a previous study, we found that fly ash geopolymer concrete can be vulnerable under extreme alkaline conditions. The recommendation from the study, was that this material should not be employed in structures that are exposed to highly alkaline mediums, such as some chemical storage facilities.

"The findings of our new study show that the alkali resistance of geopolymer concrete can be significantly improved by exposing it to an evaluated temperature, optimally 200 degrees Celsius," says Dr Abdolhossein Naghizadeh.

The study forms part of Naghizadeh's doctoral research at the Department of Civil Engineering Science at the University of Johannesburg.

Extreme alkali medium

In the research published in Case Studies in Construction Materials, blocks of fly ash geopolymer mortars were variously heat-cured at 100, 200, 400 or 600 degrees Celsius for 6 hours. These were then immersed in water, a medium alkali medium or an extreme alkali medium, and stored at 80 degrees Celsius for 14 days or 28 days, depending on the performance measurement.

(The prolonged heat-curing for 28 days was conducted to compare the results with those obtained by the other studies, which employed the same curing regime. This long-term curing is suitable for research purposes, but not recommended for actual construction. The medium alkali medium was a 1M NaOH solution. The extreme alkali medium was a 3M NaOH solution.)
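In everyday units (our conversion, not part of the study's reporting), NaOH has a molar mass of about 40 g/mol, so the two media correspond to roughly

$$ 1\,\text{M NaOH} \approx 40\ \text{g of NaOH per litre of solution}, \qquad 3\,\text{M NaOH} \approx 120\ \text{g per litre}. $$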

"The hardened blocks heat-cured at 200 degrees, and then immersed in the extreme alkali medium (the "200/3M" blocks), maintained about 50% residual strength at 22.6 MPa upon alkali attack. The blocks heat-cured at the other temperatures maintained much lower residual strengths at 10.3 - 14.6 MPa," says Naghizadeh.

"The 200/3M blocks immersed in extreme alkali medium displayed only limited fine cracking indicating low expansion, compared to the others which displayed severe cracking. Leaching of silicone and aluminium was lowest for the 200/3M blocks.

"X-ray diffraction showed that crystalline minerals, albite and sillimanite, formed in the binder phase of 200/3M blocks. Scanning electron microscope images of the 200/3M binders show the presence of a gel-like substance, characteristic of alkali attack. The heat-curing significantly reduced in the intensity of the attack but could not prevent it," he says.

"The High Temperature Heat Treatment (HTHT) at 200 degrees created this effect by inhibiting the dissolution of unreacted fly ash particles within the hardened geopolymer concrete matrix. However, the HTHT also reduced the compressive strength for these blocks by 26.7%."

Best used as precast

Fly ash geopolymer binders exhibit remarkable durability properties. Among these are high resistance to alkali-silica reaction; superior acid resistance; and high resistance to fire, low carbonation and limited sulfate attack, says Naghizadeh.

Fly ash geopolymer cement is suitable mostly for precast concrete manufactured at a factory or workshop. The reason is that strength development in geopolymer cement mixtures is generally slow under ambient temperatures.

This makes heat-curing necessary or essential for early strength gain. The practical methods established for heat-curing pre-cast Ordinary Portland cement (OPC) can be adapted for this.

This makes fly ash geopolymers suitable for precast concrete elements such as beams or girders for buildings and bridges, railway sleepers, wall panels, hollow core slabs, and concrete pipes.

For regular fly ash geopolymer concrete, a 24-hour period of heating at 60-80 degrees Celsius would be enough to achieve sufficient strength. This curing regime (temperature and duration) is common in the cement industry and is also used for some Portland cement concretes.

Although the use of geopolymer cement is growing every year, its application is still very small compared to OPC. Geopolymer has been employed as the binder in residential structures, bridges, and runways mostly in European countries, China, Australia, and the USA.

A new generation cement

Since the middle of the 19th century, OPC has been used extensively to produce concrete. Its durability performance is well understood and its long-term behaviour can be predicted.

However, a new generation of cement is emerging as a suitable alternative to OPC in certain applications. These geopolymer cements (or geopolymer binders) have a nature and microstructure totally different from OPC.

A starting material used for geopolymer binder needs to be rich in alumina and silicate contents. On this criterion, multiple industrial waste or by-products qualify - such as rice husk ash, palm oil fuel ash and coal power plant fly ash.

However, fly ash has two advantages for use as a geopolymer cement, says Naghizadeh.

Firstly, fly ash is available in millions of tons globally, also in developing countries. Repurposing fly ash as construction material can potentially reduce some of its environmental impacts. Currently, it is disposed of in vast ash dams and landfills close to coal-fuelled power plants, which generate air and ground-water pollution.

The second advantage for fly ash as starting material for geopolymer cement is its chemical composition. Typically, fly ash is rich enough in reactive silicon and aluminium oxides, which results in a better geopolymerization.

This in turn yields a binder with superior mechanical, physical and durability properties compared to the geopolymer concretes made using other waste products containing alumino-silicates.

More complex mix design

When designing a building, the engineer needs to ensure that the concrete used in the structure will have the expected strength for the service life. However, the physical and mechanical properties of concrete and other construction materials can change over time. Such changes can influence the material performance over the service life span of the construction.

Generally, an OPC concrete mixture includes cement, water and aggregate. The civil engineer develops an OPC mix design using specific proportions of these three ingredients for the intended structure.

"For fly ash-based geopolymer concrete activated by sodium silicate and sodium hydroxide, mix design is more complex than for OPC," says Naghizadeh.

"More parameters are involved: the amounts of fly ash, sodium silicate, sodium hydroxide, water, and aggregate; as well as the concentration of sodium hydroxide; the proportion and quality of glass within the alkali."

Fly ash from ash dams

In South Africa, research about using fly ash as a geopolymer cement is limited, says Prof Stephen Ekolu. Ekolu is a co-author of the study and former Head of the School of Civil Engineering and the Built Environment at the University of Johannesburg.

"The existing research about fly ash geopolymer concrete uses fly ash supplied directly from power stations. Further research is needed about using fly ash from landfills and ash dams, technically referred to as "bottom ash" to produce geopolymer cement.

"The biggest research questions are issues of material quality, mix design, and developing the technology to allow curing at ambient conditions rather than the current practice of curing at elevated temperatures. Once these three scientific issues have been resolved, fly-ash and indeed most other forms of geopolymer cements can be better placed as OPC replacements worldwide," says Ekolu.

Not a concrete extender

Currently, a small amount of fly ash is used as a common cement extender. In South Africa that amount is 10% of the 36 million tons produced annually. It is mixed with clinker to produce Pozzolanic Portland Cement (PPC).

Though fly ash is used as a common OPC extender, fly ash-based geopolymer concrete (FA-GC) is not combined with OPC-based concrete.

The reason is that the hydration process of OPC is completely different from the geopolymerization reaction of FA-GC. Also, OPC-based concrete and geopolymer concrete each requires a different curing condition.

Different production than OPC

The major phases in OPC production are the calcination and grinding processes.

Unlike OPC, geopolymer production does not require these phases. Fly ash-based geopolymer binders consist of two components: the fly ash and an alkali activator. Usually, fly ash is used as produced in the power station, with no need for further treatment.

Alkali activator solutions such as sodium silicate and sodium hydroxide are also extensively produced in the industry. These are used for multiple purposes, such as detergent and textile production.

"Greener" concrete

"The long-term durability of geopolymer cement under different environmental conditions needs further research. Also, the construction industry globally lacks technical knowledge of the production of geopolymers. To employ geopolymer binders, engineers, technicians and construction workers need training to design and produce geopolymer concrete mix designs with the required properties," says Naghizadeh.

"There is no doubt that production of Portland cement needs to be limited in future, due to its huge environmental impacts. This includes about 5-8% global anthropogenic carbon-dioxide emissions into the atmosphere, which contributes to climate change," adds Ekolu.

Several studies, including those from the University of Johannesburg, have shown that fly ash geopolymer can exhibit properties superior or similar to those of Portland cement. This makes it a suitable alternative to replace Portland cement in certain applications.

Moreover, the availability of fly ash worldwide, especially in developing countries, provides an opportunity to produce more economical concrete that is "greener" than Ordinary Portland Cement, given the potential to repurpose a problematic waste product.

Credit: 
University of Johannesburg

Shedding new light on nanolasers using 2D semiconductors

image: Cun-Zheng Ning, a professor of electrical engineering in the Ira A. Fulton Schools of Engineering at Arizona State University, and collaborators from Tsinghua University in China discovered a process of physics that enables low-power nanolasers to be produced in 2D semiconductor materials. Understanding the physics behind lasers at nanoscale and how they interact with semiconductors can have major implications for high-speed communication channels for supercomputers and data centers.

Image: 
Graphic by Rhonda Hitchcock-Mast/ASU

In his latest line of research, Cun-Zheng Ning, a professor of electrical engineering in the Ira A. Fulton Schools of Engineering at Arizona State University, and his peers explored the intricate balance of physics that governs how electrons, holes, excitons and trions coexist and mutually convert into each other to produce optical gain. The results of the work, which was led by Tsinghua University Associate Professor Hao Sun, were recently published in the Nature Research journal Light: Science & Applications.

"While studying the fundamental optical processes of how a trion can emit a photon [a particle of light] or absorb a photon, we discovered that optical gain can exist when we have sufficient trion population," Ning says. "Furthermore, the threshold value for the existence of such optical gain can be arbitrarily small, only limited by our measurement system."

In Ning's experiment, the team measured optical gain at density levels four to five orders of magnitude -- 10,000 to 100,000 times -- smaller than those in conventional semiconductors that power optoelectronic devices, like barcode scanners and lasers used in telecommunications tools.

Ning has been driven to make such a discovery by his interest in a phenomenon called the Mott transition, an unresolved mystery in physics about how excitons form trions and conduct electricity in semiconductor materials to the point that they reach the Mott density (the point at which a semiconductor changes from an insulator to a conductor and optical gain first occurs).

But the electrical power needed to achieve Mott transition and density is far more than what is desirable for the future of efficient computing. Without new low-power nanolaser capabilities like the ones he is researching, Ning says it would take a small power station to operate one supercomputer.

"If optical gain can be achieved with excitonic complexes below the Mott transition, at low levels of power input, future amplifiers and lasers could be made that would require a small amount of driving power," Ning says.

This development could be game-changing for energy-efficient photonics, or light-based devices, and provide an alternative to conventional semiconductors, which are limited in their ability to create and maintain enough excitons.

As Ning observed in previous experiments with 2D materials, it is possible to achieve optical gain earlier than previously believed. Now he and his team have uncovered a mechanism that could make it work.

"Because of the thinness of the materials, electrons and holes attract each other hundreds of times stronger than in conventional semiconductors," Ning says. "Such strong charge interactions make excitons and trions very stable even at room temperatures."

This means the research team could explore the balance of the electrons, holes, excitons and trions as well as control their conversion to achieve optical gain at very low levels of density.

"When more electrons are in the trion state than their original electron state, a condition called population inversion occurs," Ning says. "More photons can be emitted than absorbed, leading to a process called stimulated emission and optical amplification or gain."

SOLVING NANOLASER MYSTERIES, ONE STEP OF FUNDAMENTAL SCIENCE AT A TIME

While this new discovery added a piece to the Mott transition puzzle -- it uncovered a new mechanism that researchers can exploit to create low-power 2D semiconductor nanolasers -- Ning says that they are not yet sure if this is the same mechanism that led to the production of their 2017 nanolasers.

Work is still ongoing in resolving the remaining mysteries.

Similar trion experiments were conducted in the 1990s with conventional semiconductors, Ning says, "but the excitons and trions were so unstable that both experimental observation and, especially, utilization of this optical gain mechanism for real devices were extremely difficult."

"Since the excitons and trions are much more stable in the 2D materials, there are new opportunities to make real-world devices out of these observations."

This interesting development by Ning and his research team is only at the fundamental science level. However, fundamental research can lead to exciting things.

"Basic science is a worldwide endeavor and everyone benefits if the best people from everywhere can be involved. ASU has provided an open and free environment, especially for international collaborations with top research groups in China, Germany, Japan and worldwide," Ning says.

His team has more work left to do to study how this new mechanism of optical gain works at different temperatures -- and how to use it to create the nanolasers purposefully.

"The next step is to design lasers that can operate specifically using the new mechanisms of optical gain," Ning says.

With the physics foundations laid, they could eventually be applied to create new nanolasers that could change the future of supercomputing and data centers.

"The long-term dream is to combine lasers and electronic devices in a single integrated platform, to enable a supercomputer or data center on a chip," Ning says. "For such future applications, our present semiconductor lasers are still too large to be integrated with electronic devices."

Credit: 
Arizona State University

Filtering out toxic chromium from water

Hexavalent chromium continues to contaminate water sources around the world, with one US company fined just this February for having put employees at risk. Hexavalent chromium is considered to be extremely toxic, especially when inhaled or ingested, and its use is regulated in Europe and in many countries around the world. It is thought to be genotoxic, leading to DNA damage and the formation of cancerous tumors.

Now, chemists at EPFL are developing energy efficient processes for removing contaminants, this time hexavalent chromium, from water. The results are published today in the Journal of Materials Chemistry A.

"Providing access to clean water is one of the most important challenges of our time," says lead author Wendy Queen of EPFL's Laboratory of Functional Materials. "The development of energy efficient processes able to rapidly remove water contaminants play an important role in our effort to globally improve human health and environmental well-being."

Queen and colleagues are developing sponge-like materials that can collect specific substances from solution. Their materials are actually crystals, called metal-organic frameworks (MOF), and the scientists are tailoring these crystalline structures to capture a particular substance.

The materials are extremely porous and the contact surface area contained in one gram of these MOFs can be as large as that of a football field. The target substance then enters these pores and sticks to the internal surface area in a process called adsorption.

The scientists have previously shown that their materials can efficiently adsorb other substances dissolved in solution, like gold, mercury and lead. For instance, 1 gram of MOF can capture almost 1 gram of gold. Queen, in collaboration with EPFL scientist Berend Smit and EPFL PhD student Bardiya Valizadeh, has demonstrated the extraction of hexavalent chromium from water. With hexavalent chromium, a relatively light substance, the MOFs can extract approximately 208 milligrams per gram of MOF. Also, if you shine light on the MOF, it transforms the highly toxic hexavalent chromium into relatively nontoxic trivalent chromium.
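A quick molar comparison (our arithmetic with standard atomic masses, assuming the capacities are reported as milligrams of metal per gram of MOF) shows why the figure for the much lighter chromium is in the same ballpark as the roughly 1 gram of gold captured per gram of MOF:

$$ \frac{1000\ \text{mg Au}}{197\ \text{g/mol}} \approx 5.1\ \text{mmol per gram of MOF}, \qquad \frac{208\ \text{mg Cr}}{52\ \text{g/mol}} \approx 4.0\ \text{mmol per gram of MOF}. $$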

Further developments are required in order to implement the technology for decontaminating water outside of the laboratory.

"The great thing about our sponges is they are relatively easy and cheap to make," explains Queen. "The next step is to test our sponges at larger scales."

For example, the scientists expect 1 kg of MOFs to cost roughly 15 CHF to make, whereas 1 kg of gold is worth roughly 55,000 CHF.
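Taken together with the gold figures above, a quick and purely illustrative comparison shows why the economics look favourable; this sketch ignores processing, regeneration and recovery losses.

    # Illustrative value comparison using the figures quoted in the text.
    mof_cost_chf_per_kg = 15.0        # estimated cost to make 1 kg of MOF
    gold_value_chf_per_kg = 55_000.0  # approximate value of 1 kg of gold
    gold_captured_per_kg_mof = 1.0    # "1 gram of MOF can capture almost 1 gram of gold"

    value_ratio = (gold_captured_per_kg_mof * gold_value_chf_per_kg) / mof_cost_chf_per_kg
    print(f"Recovered gold worth ~{value_ratio:,.0f}x the MOF's production cost")  # ~3,667x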

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Severe coral loss leaves reefs with larger fish but low energy turnover

Research on the Great Barrier Reef has found severe coral loss to be associated with substantial increases in the size of large, long-living herbivorous fish. However, decreased recycling of this fish biomass could leave the ecosystem vulnerable to crashing. The research is published in the British Ecological Society journal Functional Ecology.

By comparing reef surveys from 2003-2004 with surveys from 2018, an international team of researchers led by James Cook University found that severe coral loss, of up to 83% in some areas, was associated with increases in fish biomass, productivity and consumed biomass. In other words, the reef now stores more energy in the form of fish weight, is able to produce more fish weight, and more of that fish weight is being eaten by predators.

Renato Morais, lead author of the study, said: "It's as if the herbivorous fish community has been scaled up, with larger fish growing and providing more food for predators when they die. However, this does not come without a cost."

Superficially, the increased biomass may seem positive from a human perspective, with the presence of bigger fish after a coral reef collapse suggesting a stable population. However, the researchers warn that reduced turnover, or recycling of biomass, in the reef could mean that this trend benefiting large fish might not last long.

"The fish have not multiplied. Instead, there are more bigger fish and less smaller ones." Said Mr Morais "This suggests that many of these long-living herbivorous fishes, such as surgeon fish which can live up to 40 years, could have been there before the corals died, only growing bigger. Eventually, these older fish will die and, if not replaced by young ones, productivity could collapse."

The increased growth of large fish like surgeonfish, parrotfish and rabbitfish is likely to have been made possible by the accessibility and quality of algal turf, the preferred food of these herbivores. These algal 'lawns' grow abundantly over the skeletons of dead coral. A recovery of the coral could result in a reduction in this food source and collapse of these herbivores.

Erosion of dead coral structures and subsequent loss of refuges for fish could also cause a population crash. Although the researchers did observe a decline in coral structure in the reef area studied, it is possible that it had not reached a level where fish biomass would start to decline.

Between 2014 and 2017 the reefs around Lizard Island, where the research took place, were subjected to two back-to-back mass bleaching events and two severe cyclones that decimated the coral populations. Combined, these events led to an 80% decline in coral cover throughout the islands.

Previous research has mainly taken a static look at the impacts of coral loss on fish. In this study the researchers looked at the cumulative effects of coral mortality over time using metrics often absent from coral reef studies: fish growth, mortality and energy turnover.
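The turnover idea can be expressed as a simple ratio. The sketch below is not the authors' model; it only illustrates the general ecological notion that turnover can be thought of as productivity relative to standing biomass, so a reef can gain biomass and productivity while recycling its energy more slowly. All numbers are hypothetical.

    # Illustrative only: one common way to express biomass turnover is
    # productivity divided by standing biomass. All numbers are hypothetical.

    def turnover(productivity_kg_per_ha_per_yr: float, biomass_kg_per_ha: float) -> float:
        """Fraction of standing biomass produced (and potentially replaced) per year."""
        return productivity_kg_per_ha_per_yr / biomass_kg_per_ha

    before = turnover(productivity_kg_per_ha_per_yr=50.0, biomass_kg_per_ha=100.0)
    after = turnover(productivity_kg_per_ha_per_yr=60.0, biomass_kg_per_ha=180.0)
    print(f"before coral loss: {before:.2f}/yr, after: {after:.2f}/yr")
    # Both biomass and productivity rise, yet turnover falls from 0.50/yr to 0.33/yr.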

To record the data, the researchers carried out fish and benthic (sea floor) surveys at Lizard Island on Australia's Great Barrier Reef in 2003-2004, and again 14 years later in 2018. The benthic surveys quantified live coral cover and algal turf cover. Fish surveys recorded 12 common reef fish families.

Because the study comprises two snapshot assessments taken 15 years apart, the researchers were limited to looking at long-term trends. "If there were changes to the energetic balance of the fish assemblages at that reef that happened between surveys but did not have a lasting effect, they would have gone unnoticed," explained Mr Morais.

Mr Morais also cautions that the findings apply to the one reef the researchers surveyed, and other reefs could behave differently, although a number of features suggest similar changes may have taken place elsewhere. For instance, increases in herbivore populations are common on reefs that have lost much of their coral. Collecting the same data on other reefs will help to establish whether this is the case.

The researchers are looking to follow this reef to see whether new energetic shifts occur. "Any further shifts will depend on what happens to the reef," said Mr Morais. "Will there be a recovery of corals? Or will this degraded state be maintained? Then, will these large and old herbivorous fishes be replaced by younger ones? There are many aspects in this story to be investigated."

Credit: 
British Ecological Society

Does posting edited self photos on social media increase risk of eating disorders?

New research revealed a consistent and direct link between posting edited photos on Instagram and risk factors for eating disorders. Specifically, digitally editing pictures to improve personal appearance before posting photos to Instagram increased weight and shape concerns in college students.

The study, which is published in the International Journal of Eating Disorders, also found that posting photos (edited or unedited) contributed to greater anxiety and reinforced urges to restrict food intake and exercise compared with not posting photos.

"As more people turn to social media to stay connected, it's critically important to let others see you as you are. Compared with edited photos, we saw no decrease in the number of likes or comments for unedited photos on Instagram; knowing this could reduce harmful pressures to change how you look," said co-author Pamela K. Keel, PhD, of Florida State University.

Credit: 
Wiley

Cognitive therapy can help treat anxiety in children with autism

Cognitive behavioural therapy and other psychosocial interventions are effective for treating anxiety in school-aged children with autism spectrum disorder, according to an analysis of all relevant studies published in 2005-2018. The findings are published in Campbell Systematic Reviews.

The analysis included 24 studies: 22 of the studies used a cognitive behavioural therapy intervention, one used peer-mediated theatre therapy, and one examined the benefits of Thai traditional massage.

Overall, the interventions showed a statistically significant moderate to high effectiveness for treating anxiety compared with treatment-as-usual.

"These are exciting results as they actually show evidence that some of the things that can be done at home or at school to reduce anxiety in school-aged children actually work," said co-author Petra Lietz, Principal Research Fellow of the Australian Council for Educational Research.

Credit: 
Wiley

Interleukin-12 electroporation may sensitize 'cold' melanomas to immunotherapies

Bottom Line: Combining intratumoral electroporation of interleukin-12 (IL-12) DNA (tavokinogene telseplasmid, or TAVO) with the immune checkpoint inhibitor pembrolizumab (Keytruda) led to clinical responses in patients with immunologically quiescent advanced melanoma, according to results from a phase II trial.

Journal in Which the Study was Published: Clinical Cancer Research, a journal of the American Association for Cancer Research

Author: Adil Daud, MD, clinical professor at the University of California San Francisco (UCSF) and director of melanoma clinical research at the UCSF Helen Diller Family Comprehensive Cancer Center

Background: "Immune checkpoint inhibition has become a common first-line treatment for melanoma in recent years," said Daud. "However, approximately 40 percent of melanomas are considered to be 'cold,' meaning that they lack sufficient infiltration of immune cells within the tumor and therefore have poor responses to this therapy." It is estimated that only 12 to 15 percent of "cold" melanomas respond to immune checkpoint inhibition, added Daud. "The big question in the field is how to turn these 'cold' melanomas into 'hot' ones that will respond to immune checkpoint inhibition."

How the Study was Conducted: In this single-arm phase II trial, Daud and colleagues examined the impact of treating patients with "cold" melanomas with a combination of the immune checkpoint inhibitor pembrolizumab and a DNA plasmid encoding IL-12 (TAVO). IL-12 is a cytokine that triggers the recruitment of immune cells. To help reduce toxicities associated with systemic IL-12 administration, Daud and colleagues used electroporation to deliver TAVO directly into melanoma lesions.

The trial enrolled 23 adult patients with unresectable or metastatic melanoma who had accessible lesions and who were predicted to respond poorly to pembrolizumab, based on the proportion of checkpoint-positive immune cells in their tumors. Patients underwent TAVO electroporation on days 1, 5, and 8 of every six-week cycle, and they received pembrolizumab every three weeks. Patients remained on treatment until confirmed disease progression, up to two years.

Results: Responses were observed in nine of 22 evaluable patients, for an objective response rate of 41 percent. Thirty-six percent of patients experienced a complete response. The median progression-free survival was 5.6 months, and the median overall survival was not reached after a median follow-up of 19.6 months. In addition to regression of electroporated lesions, regression was also observed in 29.2 percent of untreated lesions. Responses in untreated lesions may be due to proliferation and circulation of cancer-specific immune cells throughout the body, explained Daud.
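For readers who want to check the headline figures, the small calculation below reproduces the reported rates from the patient counts; the complete-response count of eight is inferred from the quoted 36 percent and is an assumption, not a number stated in the text.

    # Reproducing the reported response rates from the patient counts.
    evaluable = 22          # evaluable patients, as stated
    responders = 9          # objective responses, as stated
    complete_responses = 8  # inferred from the quoted 36 percent (assumption)

    print(f"Objective response rate: {responders / evaluable:.0%}")         # ~41%
    print(f"Complete response rate:  {complete_responses / evaluable:.0%}")  # ~36%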

Grade 3 or higher adverse effects were limited and included pain, chills, sweats, and cellulitis, in addition to the toxicities typically observed with pembrolizumab alone, Daud noted.

By examining pre- and post-treatment tissue samples, Daud and colleagues found that the combination treatment increased the number of immune cells in the tumor microenvironment, compared with baseline levels. This increase was observed for both responders and nonresponders; however, nonresponders also had greater numbers of immunosuppressive cells. Gene expression analyses revealed that the combination treatment led to upregulation of immune-activating genes in patients' tumor cells. Furthermore, treatment enhanced the number of proliferating immune cells in peripheral blood of both responders and nonresponders, indicating activation of a systemic immune response.

Author's Comments: "Combining pembrolizumab with TAVO electroporation improved responses for these patients who were predicted to have very poor responses to single-agent immune checkpoint inhibition," said Daud.

"By using electroporation to deliver TAVO locally, we were able to avoid many of the toxicities associated with systemic IL-12 administration, while still attaining clinical responses and inducing immune-cell infiltration in treated and untreated melanoma lesions," Daud added.

Based on these findings, ongoing work from Daud and colleagues aims to understand how to induce responses in the patients who did not respond to the TAVO and pembrolizumab combination. Additionally, Daud and colleagues are currently conducting a phase II study of intratumoral TAVO plus pembrolizumab in patients who have progressed on pembrolizumab or nivolumab. Daud is also interested in examining the impact of the combination therapy for other "cold" tumor types, including breast cancer.

Study Limitations: "While our results are promising, a key limitation to this approach is that approximately 60 percent of patients still did not respond," explained Daud. An additional limitation is that electroporation would not be an option for patients with inaccessible lesions.

Credit: 
American Association for Cancer Research