Tech

Enabling longer space missions

The 50th anniversary of the Apollo 11 moon landing has reignited interest in space travel. However, almost any mission beyond the moon, whether manned or unmanned, will require the spacecraft to remain fully operational for at least several years. The Hall thruster is a propulsion system that is often used by craft involved in long missions. A recent study by Andrey Shashkov and co-workers at the Moscow Institute of Physics and Technology, Russia, has shown how the operating lives of these systems can be further extended; their work was recently published in EPJ D.

The speed or direction of a spacecraft operating in a vacuum can be changed using an ion drive, which creates thrust by accelerating cations. The Hall thruster is a type of ion drive in which the acceleration is provided by an electric field rather than chemical fuel. It is recommended only for use in space missions longer than 3-5 years; currently, these typically involve satellites. When these thrusters do stop working, it is generally because of surface erosion caused by the propellant; the pattern of erosion depends on the locations within the thruster channel where ions are formed and then accelerated: the ionisation and acceleration regions (IARs).

Shashkov and his colleagues used computer modelling to investigate how changing the rate of gas flow and the size of the magnetic field affects the location of these regions. They then tested their findings by measuring the parameters on a laboratory-scale Hall thruster unit in a vacuum. Importantly, they found that it was possible to keep the IARs at the same, optimal locations. Stationary IARs are known to prolong the life of Hall thrusters, suggesting that these drives could be used in spacecraft on even longer missions: many times further than the moon.

Credit: 
Springer

Wave climate projections predict risks to Aussie coastlines

A team of researchers led by Griffith University has mapped out how much waves are likely to change around the globe under climate change and found that if we can limit warming to 2 degrees, signals of wave climate change are likely to stay within the range of natural climate variability.

However, if we don't limit warming and continue with business as usual, around 48 per cent of the world's coast is at risk of wave climate change, with changes in either wave height, period or direction.

In a study published in Nature Climate Change, researchers from Griffith's Cities Research Institute and Griffith Centre for Coastal Management conducted a comprehensive assessment of existing, community-driven, multi-method global wave climate projections.

Wave climate describes the wave conditions that occur naturally in the ocean and how they are distributed in time and space around the world. The three main parameters used to describe wave climate are height, length and direction.

The southern coasts of Australia are among the areas predicted to be at risk of increasing wave heights if we don't limit warming to 2 degrees.

Under a high-emission scenario, Griffith's lead author Joao Morim, Dr Cartwright and Dr Fernando Andutta identified widespread ocean regions with robust changes in annual mean significant wave height and mean wave period of 5-15%, and shifts in mean wave direction of 5-15°.

The study has found that there is agreement amongst wave climate projections for approximately 50% of the world's coastline, with ~40% revealing robust changes in at least two variables.

Furthermore, the researchers found that uncertainty in current projections is dominated by climate model-driven uncertainty, and that single-method modelling studies can fail to capture up to ~50% of the total associated uncertainty.

The analysis considered numerous projections for a time slice at the end of the century (2081-2100) compared with the present-day climate to quantify the results.

Dr Morim said given the increasing evidence for historical changes in wave conditions, the team was interested in how projected future changes in atmospheric circulation would alter the characteristics of wind-waves around the world.

As part of the Coordinated Ocean Wave Climate Project, 10 international groups, using a range of different statistical and dynamical wave models, used outputs from several climate models, under different future climate scenarios, to determine how waves may change in the future.

"While we identified some differences between different studies, we found that if the 2°C Paris Agreement target is kept, signals of wave climate change are unlikely to exceed the magnitude of natural climate variability," Dr Morim said.

"However, under a business-as-usual future climate scenario, we found agreement in the projected future changes in wave heights, lengths and/or directions along 50% of the world's coasts.

"These changes varied by region, with regional differences in increase/decrease in wave height and length of up to 10 and 5% respectively, and rotation of wave direction of up to 17 degrees."

The authors also found that less than 5% of the global coastline is at risk of future increasing wave heights. These regions were the southern coasts of Australia, and segments of the Pacific coast of South and Central America.

Regions where wave heights remain unchanged but wave lengths or periods are projected to increase will experience greater forces exerted on the coast or associated infrastructure.

One way this might be felt is via waves running further up a beach, increasing wave driven flooding. Similarly, waves travelling from a slightly altered direction in projected climate scenarios (suggested to occur over 20% of global coasts) can alter longshore transport of sediment along a coast.

"This is the first time there has been a compilation and reanalysis of the existing wave climate projections to identify two components: the agreement among the different projections, and where there is agreement, what changes should we expect to see," Dr Cartwright said.

"Where there's agreement among the models and what the extent of the change is, looking at different parametres such as wave height and direction in the offshore wave climate - that is critical information in terms of what might happen at the coastline."

Credit: 
Griffith University

Amazon rainforest absorbing less carbon than expected

image: View from the top of a measurement tower, where researchers monitor critical forest canopy processes such as photosynthesis, plant water fluxes, leaf characteristics, and growth.

Image: 
Joao M. Rosa, AmazonFACE

Agriculture, forestry, and other types of land use account for 23% of human-caused greenhouse gas emissions, yet at the same time natural land processes absorb the equivalent of almost a third of carbon dioxide emissions from fossil fuels and industry, according to the Intergovernmental Panel on Climate Change, which issued the first-ever comprehensive report on land and climate interactions earlier this month. How long will the Amazon rainforest continue to act as an effective carbon sink?

An international team of scientists, including climate scientists from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), investigated this question and found that accounting for phosphorus-deficient soils reduced projected carbon dioxide uptake in the Amazon by an average of 50%, compared with current estimates based on previous climate models that did not take phosphorus deficiency into account. The Amazon Basin is critical to mitigating climate change because its trees absorb around a quarter of the CO2 released each year from the burning of fossil fuels.

The paper, "Amazon forest response to CO2 fertilization dependent on plant phosphorus acquisition," was published August 5 in the journal Nature Geoscience.

"Most predictions of the Amazon rainforest's ability to resist climate change are based on models that have outdated assumptions; one of those is that a sufficient supply of nutrients such as phosphorus exist in soils to enable trees to take in additional CO2 as global emissions increase," said Berkeley Lab research scientist and study co-author Jennifer Holm. "But in reality the ecosystem is millions of years old, highly weathered, and therefore depleted of phosphorus in many parts of the Amazon.

Researchers involved in the AmazonFACE project monitored tree growth and leaf development aboveground, and tracked root growth and activity within soils belowground, at a study site north of Manaus, Brazil, where ambient CO2 concentration is planned to be artificially elevated to enable a realistic investigation of how future CO2 concentrations will affect the ecosystem.

"Our improved models now take into account these complexities, and could serve to help paint a more realistic portrayal of how the Amazon, and the tropics in general, will be impacted by climate change and the ability of trees to remove greenhouse gases from the atmosphere," Holm said.

Credit: 
DOE/Lawrence Berkeley National Laboratory

A battery-free sensor for underwater exploration

To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface -- an underwater "internet of things." But how to supply constant power to scores of sensors designed to stay for long durations in the ocean depths?

MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods -- and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference's "best paper" award.

The system makes use of two key phenomena. One, called the "piezoelectric effect," occurs when vibrations in certain materials generate an electrical charge. The other is "backscatter," a communication technique commonly used for RFID tags, which transmits data by reflecting modulated wireless signals off a tag and back to a reader.

In the researchers' system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver -- or it doesn't reflect one at all. Alternating between reflecting and not reflecting in that way encodes the bits of the transmitted data: For a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.

"Once you have a way to transmit 1s and 0s, you can send any information," says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. "Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting."

The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobytes per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.

Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn's largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.

"How can you put a sensor under the water on Titan that lasts for long periods of time in a place that's difficult to get energy?" says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. "Sensors that communicate without a battery open up possibilities for sensing in extreme environments."

Preventing deformation

Inspiration for the system hit while Adib was watching "Blue Planet," a nature documentary series exploring various aspects of sea life. Oceans cover about 72 percent of Earth's surface. "It occurred to me how little we know of the ocean and how marine animals evolve and procreate," he says. Internet-of-things (IoT) devices could aid that research, "but underwater you can't use Wi-Fi or Bluetooth signals ... and you don't want to put batteries all over the ocean, because that raises issues with pollution."

That led Adib to piezoelectric materials, which have been used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But that effect is also reversible: Applying voltage causes the material to deform. If the material is placed underwater, that deformation produces a pressure wave that travels through the water. They're often used to detect sunken vessels, fish, and other underwater objects.

"That reversibility is what allows us to develop a very powerful underwater backscatter communication technology," Adib says.

Communication relies on controlling whether the piezoelectric resonator naturally deforms in response to an incoming wave. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and underwater listening device, called a hydrophone (receiver), are placed some distance away.

Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.

However, when the sensor wants to send a 1 bit, the process changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a little voltage to the piezoelectric resonator. That voltage reorients the material's structure in a way that stops it from deforming, so it instead reflects the wave. Sensing a reflected wave, the receiver decodes a 1.
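
To make the signalling concrete, here is a minimal Python sketch of how a receiver could turn the presence or absence of a reflected pulse into bits. It illustrates only the principle described above; it is not the MIT team's decoder, and the threshold and energy values are invented.

```python
# Hypothetical sketch of piezo-acoustic backscatter decoding.
# For each probe pulse sent by the transmitter, the hydrophone records the
# echo energy in a fixed time slot. A reflected pulse (the node applied a
# voltage and stiffened) decodes as 1; an absorbed pulse (the node deformed
# and harvested the energy) decodes as 0.

def decode_backscatter(echo_energies, threshold):
    """Map per-slot echo energies to bits: reflection -> 1, absorption -> 0."""
    return [1 if energy > threshold else 0 for energy in echo_energies]

# Example: five probe pulses; the node reflected the 1st, 3rd and 4th.
echo_energies = [0.82, 0.05, 0.77, 0.91, 0.04]   # arbitrary illustrative units
print(decode_backscatter(echo_energies, threshold=0.5))  # [1, 0, 1, 1, 0]
```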

Long-term deep-sea sensing

The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.

"When you're tracking a marine animal, for instance, you want to track it over a long range and want to keep the sensor on them for a long period of time. You don't want to worry about the battery running out," Adib says. "Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places."

Another interesting application is monitoring brine pools, large areas of brine that sit in pools in ocean basins, and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice, and could aid in studying melting ice and marine life interaction with the pools. "We could sense what's happening down there, without needing to keep hauling sensors up when their batteries die," Adib says.

Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They're also hoping to test if the system can transmit sound and low-resolution images.

Credit: 
Massachusetts Institute of Technology

Studying animal cognition in the wild

image: This is researcher Karline Janmaat collecting data on the behaviour of one chimpanzee female using a voice recorder and GPS.

Image: 
© Ammie Kalan

Different types of cognitive abilities can lead to a variety of knowledge that can help an animal to find, access, and guard food and mates. One approach to gaining insight into the evolution of such cognitive abilities is to infer cognitive performance from observed behaviour across closely related species and compare it. By linking differences in cognitive performance with differences in current socio-ecological circumstances, hypotheses about the evolutionary pressures that contributed to the selection of these abilities can be tested. This can then provide answers to the question of why a trait, such as the ability to plan for the next day, evolved. Drawing inferences about cognitive abilities from behaviour is, however, not straightforward. In her latest paper Janmaat describes a set of different approaches, addressing where and how one can make such inferences in a variety of species, with a focus on primates.

Advantages of fieldwork and challenges faced in captive-based research

While addressing the question of where and how cognition can be studied, the paper describes the advantages of field-based science as well as the challenges faced in captive-based experimental science. "My aim is not to devalue captive-based or experimental research. My aim is to make captive-based and experimental scientists think critically about the challenges of their approach and hopefully become more open to, or familiar with, the potential and advantages of observational fieldwork." Janmaat was motivated to write the paper by a finding suggesting that the majority of primate cognition work is conducted with captive primates and that fieldwork studies are rarely cited in reviews about primate cognition, pointing to an imbalanced distribution and diffusion of knowledge between the two fields. Janmaat especially stresses the value of fieldwork for obtaining insight into evolutionary function and into the conditions in which animals employ their cognitive skills, and thus their potential benefits. "If we really want to understand the evolution of cognition and human intelligence, we need to study not only whether our closest relatives do or do not use certain cognitive skills, but also the conditions in which animals employ these skills in daily survival in their habitat," says Janmaat.

In addition, she addresses the issue of the plasticity of the brain and stresses the need to study variation in cognition by conducting comparable studies in both captive and wild settings. This is because in the wild certain cognitive abilities, such as spatial mapping, are more likely to develop to their full extent, owing to a particularly high variety of social and sensory input and large-scale movement, while other abilities, such as causal understanding, may be more developed in captive animals that have more "free time" on their hands. Lastly, Janmaat stresses the urgency of field studies as natural habitats, especially those of tropical forest primates, are rapidly disappearing. "This rapid decline of rainforest environments and the primate populations that depend on them creates a high level of urgency to study primates in their natural habitats," Janmaat says.

A five-step guideline

After summarizing the advantages of observational field-based studies, Janmaat addresses the question of how we can study cognition in the wild, especially when working with highly endangered animals. She provides a five-step guideline for future data collection designs for scholars who want to investigate cognitive abilities in wild animals through observation and a synthesis of traditional and novel methods. She guides the reader through these steps with concrete examples from her own work on foraging cognition, such as future planning or intuitive statistics, in chimpanzee females in Taï National Park, Ivory Coast, which she has been studying for the past 10 years. In animal behaviour studies, priority is often given to recording the behaviours that the target animal performs, such as its visit to a food source, how long it eats, how many other animals are present, or whom it grooms. Recording when it does not perform certain behaviours (for example, when it does not approach or inspect a tree, pick up a tool or groom an individual, or when it fails to find food or mates) can provide information about an animal's expectations, and thus cognition, that is as valuable as recording when it does perform the behaviour. For example, recording when a chimpanzee decides not to approach or inspect a food tree in which she fed the year before can provide as much insight into her expectations of what she will find at that tree as recording when she does approach.

By identifying crucial contexts, collecting data on a suite of behaviours (what animals do not do, or only do when certain conditions are met), controlling for interfering variables through observational control (recording what animals fail to find [when there is no fruit]), and combining this technique with well-thought-out statistical models based on decades of biological knowledge, Janmaat describes how we can draw conclusions about the cognitive abilities of wild animals. "What is important to always remember is that every approach has benefits and challenges. Consequently, using complementary approaches is more likely to yield novel insights into primate cognition and move the field in exciting new directions. Perhaps then, we can even make the 'impossible' possible," Janmaat says.
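
As a loose illustration of the idea that what an animal does not do carries information, the hypothetical Python sketch below logs both approaches and non-approaches to previously used food trees, so that approach rates can later be compared across conditions. The field names and records are invented for illustration and do not represent Janmaat's actual protocol.

```python
# Hypothetical observation log: each record notes whether a followed female
# approached a food tree she fed in last year, and whether the tree carried
# fruit. Recording the non-approaches ("observational control") is what lets
# us ask whether her decisions track expectations about the tree's state.

observations = [
    {"tree_id": "T014", "fed_last_year": True,  "fruit_present": True,  "approached": True},
    {"tree_id": "T027", "fed_last_year": True,  "fruit_present": False, "approached": False},
    {"tree_id": "T031", "fed_last_year": False, "fruit_present": True,  "approached": False},
    {"tree_id": "T045", "fed_last_year": True,  "fruit_present": True,  "approached": True},
]

def approach_rate(records, **conditions):
    """Share of matching records in which the tree was approached."""
    subset = [r for r in records
              if all(r[key] == value for key, value in conditions.items())]
    return sum(r["approached"] for r in subset) / len(subset) if subset else None

print(approach_rate(observations, fed_last_year=True))   # ~0.67
print(approach_rate(observations, fed_last_year=False))  # 0.0
```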

Credit: 
Max Planck Institute for Evolutionary Anthropology

A new path to cancer therapy: developing simultaneous multiplexed gene editing technology

image: The operating principles and immunotherapy mechanisms of simultaneous multiplexed gene editing technology.

Image: 
Korea Institute of Science and Technology (KIST)

Dr. Mihue Jang and her group at the Center for Theragnosis of the Korea Institute of Science and Technology (KIST, President Byung-gwon Lee) announced that they have developed a new gene editing system that could be used for anticancer immunotherapy. The system simultaneously suppresses proteins that interfere with the immune system, expressed on the surface of lymphoma cells*, and activates cytotoxic T lymphocytes**. The work is based on joint research conducted with Prof. Seokmann Hong and his group at Sejong University (President Deg-hyo Bae).

*Lymphoma cell: a broad term for cancers of the blood system, referring to malignant tumors in the blood, hematopoietic organs, lymph nodes, and/or lymphoid organs

**Cytotoxic T lymphocyte (CTL): a type of T lymphocyte that directly kills tumor cells or cells infected with viruses by secreting cytotoxic substances

Gene editing technology eliminates the underlying causes of disease and treats it by removing specific genes or editing genes to restore their normal function. In particular, CRISPR gene editing technology*** is now commonly used in immunotherapy to correct the genes of immune cells so that they attack cancer cells selectively.

***CRISPR gene editing technology: Based on the proteins involved in bacterial immune reactions, this technology selectively corrects genes through the simultaneous action of Cas9 proteins with gene-cutting function and single guide RNA (sgRNA), which enables genome sequence selectivity.

Dr. Mihue Jang of KIST previously improved the CRISPR gene editing system to enable it to penetrate the cell membrane without external carriers (ACS Nano 2018, 12, 8, 7750-7760). However, many different genes regulate immune activity, and technology for inducing safe and convenient immunotherapy is not yet sufficiently developed. The collaborative research team of Dr. Jang's group at KIST and Prof. Hong's group at Sejong University developed a technology applicable to immunotherapy by further improving the CRISPR gene editing system to allow the transfer of genes to lymphoma cells without external carriers as well as the correction of several genes at the same time.

Existing gene editing technologies have been used to transfer genes into immune cells such as T lymphocytes, mainly by viral transduction or electroporation. The viral transduction method often induces undesired immune responses and has a high chance of inserting genes into the wrong genome sequences. The electroporation method requires separate and expensive equipment, has great difficulty correcting large numbers of cells at one time, and shows low cell viability.

The technology jointly developed by the KIST and Sejong University research teams simultaneously targets PD-L1 and PD-L2, among the inhibitory immune checkpoints**** known for suppressing the immune system. Its treatment efficacy was confirmed: with these immune checkpoints suppressed and no longer interfering with the immune system, cytotoxic T lymphocytes directly attack cancer cells, increasing the anticancer immune response.

****Inhibitory immune checkpoints: a kind of mechanism that regulates the antigen recognition of T cell receptors (TCR) in the immune response process and interferes with the destruction of cancer cells

Dr. Mihue Jang of KIST said, "This newly developed gene editing system can be applied to various types of immune cells, and is thus expected to be used in the development of treatments for various diseases, including not only cancers but also autoimmune and inflammatory diseases."

Credit: 
National Research Council of Science & Technology

World's thinnest, lightest signal amplifier enables bioinstrumentation with reduced noise

image: Electrocardiac signals obtained using flexible organic differential amplifier:
(A) Conventional single-ended amplifier
(B) Differential amplifier developed in this study
(C) Electrocardiac signals obtained from a walking subject. In the electrocardiac signals obtained using a conventional single-ended amplifier, large noise caused by walking is included in the waveform. In contrast, such noise is removed from the waveform obtained using the developed flexible organic differential amplifier.

Image: 
Osaka University

Key research achievements

The research group in this study developed the world’s thinnest and lightest signal amplifier for bioinstrumentation and demonstrated high-precision monitoring of electrocardiac signals with reduced noise levels.

A thin and flexible organic differential amplifier was realized that can precisely monitor weak biosignals without the user feeling any discomfort from the device attached to the body.

Easy and high-precision bioinstrumentation was achieved by adding functions for reducing disturbance noise, such as noise caused by walking.

Using the developed differential amplifier, the creation of new valuable applications, such as advanced bioinstrumentation at home, is anticipated.

Summary

A research group led by Professor Tsuyoshi Sekitani and Associate Professor Takafumi Uemura of The Institute of Scientific and Industrial Research, Osaka University, succeeded in developing the world’s thinnest and lightest differential amplifier for bioinstrumentation.

Conventionally, bioinstrumentation circuits for health care and medical use have consisted of hard electronic devices, such as silicon transistors. However, when soft biological tissues, such as skin, come into contact with hard electronic devices, they tend to become inflamed. Therefore, monitoring biosignals in everyday life over a long period of time has proved difficult. The research group developed a flexible bioinstrumentation circuit that eliminates the discomfort caused by a device attached to the user's body by integrating flexible electronic devices called organic transistors (*1) on a thin and flexible plastic film with a thickness of 1 μm (one-millionth of a meter). The developed circuit is a signal-processing circuit called a differential amplifier (*2).

Compared with conventional single-ended amplifiers (*3), the flexible differential amplifier developed in this study can not only amplify very weak biopotential but also reduce disturbance noise (*4). This group demonstrated that the differential amplifier can be applied to human instrumentation and realize real-time monitoring of electrocardiac signals, which are important biosignals, with reduced noise levels.

This achievement is expected to lead to the monitoring of various weak biosignals (e.g. brain waves and cardiac sounds of a fetus) in everyday life in addition to electrocardiac signals without subjecting users to the discomfort caused by devices attached to the body.

Background of research

In Japan, with its declining birthrate and aging population, the application of flexible electronics such as organic transistors in the medical and health care fields has been actively promoted. Sensors and electronic circuits with a high compatibility with biological tissues such as skin and organs are realized by using soft organic materials.

Among these sensors and electronic circuits, flexible amplifiers with organic transistors integrated into them eliminate the discomfort felt by users caused by the devices attached to the body. The research and development of such amplifiers as sensors to continuously monitor very weak biosignals is currently ongoing. However, conventional organic amplifiers mainly have a single-ended structure that cannot distinguish the target biosignals from disturbance noise, making it difficult to monitor biosignals with a low noise level (Fig. 1). A differential amplifier is a circuit that can measure signals with the noise components removed. However, the variation in the quality of manufactured organic transistors is large compared with that of silicon transistors; thus, there have been no reports on flexible differential amplifiers that realize precise noise reduction.

The research group succeeded in developing a flexible organic differential amplifier with a noise reduction function by devising a compensation technique that reduces the dispersion of current flowing in the organic transistors inside the amplifier to 2% or less. The amplifier was fabricated on a parylene film with a thickness of 1 μm. The amplifier does not break when the film is bent and can be attached to human skin without causing any discomfort (Fig. 2). When this flexible differential amplifier was used to monitor electrocardiac signals, the signals were amplified 25-fold and the noise was reduced to one-seventh or less. The group demonstrated that noise caused by external power sources as well as large body-motion noise caused by walking is removed during the monitoring of electrocardiac signals (Fig. 1).
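
The noise-rejection principle can be shown with a toy numerical sketch (not the group's circuit model): a single-ended amplifier amplifies the biosignal and common-mode disturbance together, whereas a differential amplifier amplifies only the difference between its two inputs, so disturbance that appears equally on both largely cancels. The 25-fold gain below matches the figure reported above; everything else is illustrative.

```python
import math

# Toy comparison of single-ended vs. differential amplification of a weak
# biosignal contaminated by common-mode disturbance noise (e.g. body motion).
# Signal shapes and amplitudes are illustrative only.

GAIN = 25  # the study reports roughly 25-fold amplification of electrocardiac signals

def single_ended(signal, noise, gain=GAIN):
    # Amplifies everything on its single input, noise included.
    return [gain * (s + n) for s, n in zip(signal, noise)]

def differential(plus_input, minus_input, noise, gain=GAIN):
    # Amplifies only the difference between the two inputs; noise common to
    # both inputs cancels out.
    return [gain * ((p + n) - (m + n)) for p, m, n in zip(plus_input, minus_input, noise)]

t = [i / 100 for i in range(100)]
ecg = [0.001 * math.sin(2 * math.pi * 5 * x) for x in t]      # ~1 mV signal
motion = [0.005 * math.sin(2 * math.pi * 1 * x) for x in t]   # larger disturbance

out_single = single_ended(ecg, motion)
out_diff = differential(ecg, [0.0] * len(ecg), motion)        # reference electrode at 0 V

print(max(map(abs, out_single)))  # dominated by the amplified disturbance
print(max(map(abs, out_diff)))    # ~25x the clean 1 mV signal
```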

Impact of research achievements on society (significance of research achievements)

Smart watches and other wearable devices for monitoring biosignals, such as electrocardiac signals, in everyday life are already on the market. Bioinstrumentation is expected to become easier and more comfortable in various situations through the use of high-precision flexible bioinstrumentation circuits that do not subject users to any discomfort from devices attached to their body. For example, bioinstrumentation of people performing physically strenuous exercise, such as during sports, becomes possible owing to the improved wearability and adhesion between the device and skin. The real-time, long-term bioinstrumentation data thus obtained will promote early detection of diseases and improve the efficiency of treatment, monitoring of the elderly and patients, and monitoring of exercise load. These achievements will further help solve various problems in Japan's aging society through reduced medical expenses and improved quality of life (QOL).

Credit: 
Osaka University

City parks lift mood as much as Christmas, Twitter study shows

image: New research from the University of Vermont shows that in cities, big green spaces are very important for people's sense of well-being, and can provide a boost in mood.

Image: 
Fancycrave.com, Pexels

Feeling unhappy and cranky? The treatment: take a walk under some trees in the park.

That may not be the exact prescription of your doctor, but a first-of-its-kind study shows that visitors to urban parks use happier words and express less negativity on Twitter than they did before their visit--and that their elevated mood lasts, like a glow, for up to four hours afterwards.

The effect is so strong--a team of scientists from the University of Vermont discovered--that the increase in happiness from a visit to an outpost of urban nature is equivalent to the mood spike on Christmas, by far the happiest day each year on Twitter.

With more people living in cities, and growing rates of mood disorders, this research may have powerful implications for public health and urban planning.

The new study was published August 20 in People and Nature, an open-access journal of the British Ecological Society.

GREEN MATTERS

For three months, a team of scientists from the University of Vermont studied hundreds of tweets per day that people posted from 160 parks in San Francisco. "We found that, yes, across all the tweets, people are happier in parks," says Aaron Schwartz, a UVM graduate student who led the new research, "but the effect was stronger in large regional parks with extensive tree cover and vegetation." Smaller neighborhood parks showed a smaller spike in positive mood and mostly-paved civic plazas and squares showed the least mood elevation.

In other words, it's not just getting out of work or being outside that brings a positive boost: the study shows that greener areas with more vegetation have the biggest impact. It's notable that one of the words that shows the biggest uptick in use in tweets from parks is "flowers."

"In cities, big green spaces are very important for people's sense of well-being," says Schwartz; meaning that efforts to protect and expand urban natural areas extend far beyond luxury and second-tier concerns--"we're seeing more and more evidence that it's central to promoting mental health," says Taylor Ricketts, a co-author on the new study and director of the Gund Institute for Environment at UVM.

In recent years, "a big focus in conservation has been on monetary benefits--like: how many dollars of flood damage did we avoid by restoring a wetland?" Ricketts says. "But this study is part of a new wave of research that expands beyond monetary benefits to quantify the direct health benefits of nature. What's even more innovative here is our focus on mental health benefits --which have been really underappreciated and understudied."

MEASURING HAPPINESS

The new study relied on the hedonometer. This online instrument--invented by a team of scientists at UVM and The MITRE Corporation, including Chris Danforth and Peter Dodds, professors at UVM's Complex Systems Center and co-authors on the new study--has been gathering and analyzing billions of tweets for more than a decade, resulting in numerous scientific papers and extensive global media coverage. The instrument uses a body of about 10,000 common words that have been scored by a large pool of volunteers for what the scientists call their "psychological valence," a kind of measure of each word's emotional temperature.

The volunteers ranked words they perceived as the happiest near the top of a 1-9 scale; sad words near the bottom. Averaging the volunteers' responses, each word received a score: "happy" itself ranked 8.30, "hahaha" 7.94, and "parks" 7.14. Truly neutral words, "and" and "the" scored 5.22 and 4.98. At the bottom, "trapped" 3.08, "crash" 2.60, and "jail" 1.76. "Flowers" scored a pleasant 7.56.

Using these scores, the team collects some fifty million tweets from around the world each day--"then we basically toss all the words into a huge bucket," says Dodds--and calculate the bucket's average happiness score.
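
A minimal Python sketch of that averaging step, using only the example word scores quoted above; this is not the hedonometer's actual code, and the real instrument draws on a lexicon of about 10,000 scored words.

```python
# Toy hedonometer: average the 1-9 valence scores of scored words in a text.
# The scores below are the examples quoted in the article.

valence = {
    "happy": 8.30, "hahaha": 7.94, "flowers": 7.56, "parks": 7.14,
    "and": 5.22, "the": 4.98, "trapped": 3.08, "crash": 2.60, "jail": 1.76,
}

def happiness(text):
    """Mean valence of the scored words in a text (unscored words are skipped)."""
    scores = [valence[w] for w in text.lower().split() if w in valence]
    return sum(scores) / len(scores) if scores else None

print(happiness("the flowers in the parks made me happy"))  # ~6.59
print(happiness("trapped in jail after the crash"))          # ~3.11
```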

PARK POSITION

To make the new study, the UVM team fished tweets out of this huge stream--from 4,688 users who publicly identify their location--that were geotagged with latitude and longitude in the city of San Francisco. This allowed the team to know which tweets were coming from which parks. "Then, working with the U.S. Forest Service, we developed some new techniques for mapping vegetation of urban areas--at a very detailed resolution, about a thousand times more detailed than existing methods," says Jarlath O'Neil-Dunne, director of UVM's Spatial Analysis Laboratory in the UVM Rubenstein School of Environment and Natural Resources and a co-author on the new study. "That's what really enabled us to get an accurate understanding of how the greenness and vegetation of these urban areas relates to people's sentiment there."

"This is the first study that uses Twitter to examine how user sentiment changes before, during, and after visits to different types of parks," says Schwartz, a doctoral student in the Rubenstein School and Gund Institute graduate fellow. "The greener parks show a bigger boost."

Overall, the tweets posted from these urban parks in San Francisco were happier by a dramatic 0.23 points on the hedonometer scale over the baseline. "This increase in sentiment is equivalent to that of Christmas Day for Twitter as a whole in the same year," the scientists write.

THE CAUSE OF AFFECT

"Being in nature offers restorative benefits on dimensions not available for purchase in a store, or downloadable on a screen," says UVM's Chris Danforth, a professor of mathematics and fellow in the Gund Institute. He notes that a growing body of research shows an association between time in nature and improved mood, "but the specific causal links are hard to nail down."

The team of UVM scientists considers several possible mechanisms through which urban nature may improve mental health, including Green Mind Theory, which suggests that the negativity bias of the brain, "which may have been evolutionarily advantageous--is constantly activated by the stressors of modern life," the team writes.

"While we don't address causality in our study, we do find that negative language--like 'not,' 'no,' 'don't,' 'can't,'--decreased in the period immediately after visits to urban parks," says Danforth, "offering specific linguistic markers of the mood boost available outside." Conversely, the study shows that the use of first-person pronouns--"I" and "me"--drops off dramatically in parks, perhaps indicating "a shift from individual to collective mental frame," the scientists write.

Of course, Twitter users are not a representative sample of all people--just who are the "twitter-afflicted" (as Adam Gopnik wrote in a recent issue of the New Yorker) who pick up their phone to tweet from a park? Still, Twitter users are a broad demographic, earlier research shows, and this approach to near-real-time remote sensing via Twitter posts--not based on self-reporting--gives a new window for scientists onto the shifting moods of very large groups.

The nature of happiness has been pondered by philosophers for centuries and studied by psychologists for decades, but this new study suggests it might be as clear as that: in nature, people tend to be more happy--and that's a finding "that may help public health officials and governments make plans and investments," says UVM's Aaron Schwartz.

Credit: 
University of Vermont

Alzheimer's drug reverses brain damage from adolescent alcohol exposure in rats

DURHAM, N.C. -- A drug used to slow cognitive decline in adults with Alzheimer's disease appears to reverse brain inflammation and neuron damage in rats exposed to alcohol during adolescence.

In a study described in the journal Scientific Reports, Duke Health researchers sought to understand how intermittent binge drinking changes the hippocampus -- a region long known to be critical for learning and memory, and also linked to anxiety -- and whether the drug, donepezil, could reverse those changes. Rats were used as a model for teens and young adults who binge drink a few times a week.

"Research has begun to show that human adolescents who drink early and consistently across the adolescent years have some deficits in brain function that can affect learning and memory, as well as anxiety and social behaviors," said senior author Scott Swartzwelder, Ph.D., professor of psychiatry at Duke.

"The changes can be subtle, but who wants even subtle deficits in their brain function or how they think and feel?" Swartzwelder said. "Studies in animal models show that adolescent alcohol exposure can change the ways nerve cells communicate with each other, and the level of plasticity in brain circuits -- compromising the ability of the brain to change and adapt. These changes can be seen in adulthood - long after the alcohol exposure has ended"

Because researchers can't ethically have young people drink alcohol to study its effects, they use the developing brains of rats to understand the effects of "intermittent alcohol exposure," which produces blood-alcohol levels consistent with those reached by human adolescent drinkers.

The scientists observed that in addition to causing brain inflammation, adolescent alcohol exposure inhibited the birth of new neurons in the hippocampus, Swartzwelder said, and may even accelerate neuronal death -- making it easier to lose existing cells and more difficult to produce new ones.

Once the rats reached adulthood, they were given donepezil, a cognition-enhancing drug that is marketed under the brand name Aricept. After four days of treatment, the researchers studied the animals' brains, looking closely at the hippocampus. The rats that received donepezil in adulthood after adolescent alcohol exposure showed less inflammation and better ability to produce new neurons compared to rats that did not receive the donepezil treatment.

"We don't know if the reversal of these alcohol effects by donepezil is permanent, but it at least transiently reverses them," Swartzwelder said.

Swartzwelder said the study helps clarify the subtle health risk of heavy drinking among young adults, which has been difficult to ascertain.

"It's obvious that not everyone who drinks during adolescence grows up and completely fails at life," Swartzwelder said. "You might not notice the deficits in obvious ways every day, but you run the risk of losing your edge. Sometimes a small impairment of brain function can have a broad ripple effect in someone's life."

Importantly, the research demonstrates the potential to repair some types of damage caused by adolescent alcohol exposure, he said. But beyond that, it could also lead to a more specific understanding of the cellular mechanisms that make the developing brain particularly vulnerable to substances such as alcohol.

Credit: 
Duke University Medical Center

Smart sink could help save water

image: A Stanford experiment with a fake autonomous sink showed that a real smart sink could help conserve water.

Image: 
Kurt Hickman/Stanford News Service

Barely hidden from his study participants, William Jou, a former graduate student in mechanical engineering at Stanford University, pulled off a ruse straight out of The Wizard of Oz. Except, instead of impersonating a great and powerful wizard, Jou pretended to be an autonomous sink. He did this to test whether a sink that adapts to personal washing styles could reduce water use.

A faucet with anything close to the brains of a mechanical engineering student doesn't yet exist. So, Jou and his colleagues in the lab of Erin MacDonald, assistant professor of mechanical engineering, made the next best thing: a faucet that seemed to automatically adjust to a user's preferences, but was actually controlled by Jou.

The results of their sly experiment, detailed in a paper presented Aug. 20 at the International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, support the idea that thoughtfully designed smart sinks could help conserve water by regulating water use and nudging users to develop more water-conscious habits.

"We looked at the faucet because that's where a lot of water usage in the home occurs, but when you compare your sink to other products in the house - a thermostat or refrigerator - you see that there haven't been updates to how the sink works in a very long time," said MacDonald, who is senior author of the paper. "There have been small updates but nothing that really harnesses the power of technology."

Participants in this experiment had to wash dishes three times, with Jou secretly controlling the temperature and flow of the sink during the second washing only. With Jou involved, participants used about 26 percent less water compared to their first washing. In the third round, they still used 10 percent less water compared to the first round, even though the sink was back to being brainless. This shift in water use happened without participants knowing the experiment was about water conservation.

"Water conservation is particularly relevant given our location in California," said Samantha Beaulieu, a graduate student and co-author of the paper. "We also wanted to see if people's habits were adjustable; if interacting with this faucet could then change how people interact with a manual faucet. The results we found seem to indicate that's possible."

Pay no attention to the grad student

In order to create a situation where people would trust - and hopefully enjoy - a sink that makes water decisions for them, Jou closely monitored the participants' washing styles during their first round of cleaning so he could emulate them in the second round.

"As the algorithm, I'm trying to use that information to leverage their cognitive style or user behavior style to see if I can help them use less water while still keeping them happy," described Jou, who is lead author of the paper. "Whereas a lot of products today are made for general use, this is a product that's learning about you and adapting to what your style is."

In surveys after the experiment, 96 percent of participants who interacted with the smart sink (there was a control group that washed dishes three times without Jou) said they thought there was potential for smart faucets to save water. Many of them even expressed interest in buying such a product.

"Most people were pretty amazed by the sink," said Beaulieu. "A lot of people left the experiment asking what the algorithm was or asking how it worked or how to see more. We basically told them we'd have to wait until the end of the experiment to answer those questions."

While the results from and reaction to washing with Jou's assistance were impressive, the researchers were particularly heartened by how such a brief interaction with the "autonomous" function changed participants' water use.

"We didn't even plan on having that third step until very late in the research, when we were pilot testing," said MacDonald. "I never would have thought that having just one experience with 'William the Algorithm' people would retain the training and wash their dishes differently."

The sink of the future

The researchers imagine a future where hospital sinks encourage employees to wash their hands properly and our personal sink and shower preferences can be transferred to hotels and friends' houses. Schools and neighborhoods could organize competitions to save water and raise water conservation awareness. Through additional features, the sink could even detect leaks.

That being said, creating this one sink required years of work and didn't even include an algorithm. In addition to implementing artificial intelligence, making a version for mass production that's actually autonomous would require sensors that could differentiate between users and between scenarios - such as washing a pot versus a fork versus hands. Still, the researchers are optimistic that studies like these could lay the groundwork to support those developments.

"We're all human beings - we have good days and bad days. A product like this could have a large impact because it's growing and learning with you as you change," said Jou. "This faucet is working toward saving water but it's also keeping its user happy. In the long term, products like this might be our future."

Credit: 
Stanford University

Engineers make transistors and electronic devices entirely from thread

image: Figure 1 - manufacture of thread based transistors (TBTs)
a) Linen thread
b) Attachment of source (S) and drain (D) thin gold wires
c) Drop casting of carbon nanotubes on the surface of the thread
d) Application of electrolyte infused gel (ionogel) gate material
e) Attachment of the gate wire (G)
f) Cross-sectional view of TBT.

Electrolytes
EMI: 1-ethyl-3methylimidazolium
TFSI: bis(trifluoromethylsulfonyl)imide

Image: 
Nano Lab, Tufts University

A team of engineers has developed a transistor made from linen thread, enabling them to create electronic devices made entirely of thin threads that could be woven into fabric, worn on the skin, or even (theoretically) implanted surgically for diagnostic monitoring. The fully flexible electronic devices could enable a wide range of applications that conform to different shapes and allow free movement without compromising function, the researchers say.

In a study published in ACS Applied Materials and Interfaces, the authors describe engineering the first thread-based transistors (TBTs) which can be fashioned into simple, all-thread based logic circuits and integrated circuits. The circuits replace the last remaining rigid component of many current flexible devices, and when combined with thread-based sensors, enable the creation of completely flexible, multiplexed devices.

The field of flexible electronics is expanding rapidly, with most devices achieving flexibility by patterning metals and semiconductors into bendable "wavy" structures or using intrinsically flexible materials such as conducting polymers. These "soft" electronics are enabling applications for devices that conform and stretch with the biological tissue in which they are embedded, such as skin, heart or even brain tissue.

However, compared to electronics based on polymers and other flexible materials, thread-based electronics have superior flexibility, material diversity, and the ability to be manufactured without the need for cleanrooms, the researchers say. The thread-based electronics can include diagnostic devices that are extremely thin, soft and flexible enough to integrate seamlessly with the biological tissues that they are measuring.

The Tufts engineers previously developed a suite of thread-based temperature, glucose, strain, and optical sensors, as well as microfluidic threads that can draw in samples from, or dispense drugs to, the surrounding tissue. The thread-based transistors developed in this study allow the creation of logic circuits that control the behavior and response of those components. The authors created a simple small-scale integrated circuit called a multiplexer (MUX) and connected it to a thread-based sensor array capable of detecting sodium and ammonium ions - important biomarkers for cardiovascular health, liver and kidney function.

"In laboratory experiments, we were able to show how our device could monitor changes in sodium and ammonium concentrations at multiple locations," said Rachel Owyeung, a graduate student at Tufts University School of Engineering and first author of the study. "Theoretically, we could scale up the integrated circuit we made from the TBTs to attach a large array of sensors tracking many biomarkers, at many different locations using one device."

Making a TBT (see Figure 1) involves coating a linen thread with carbon nanotubes, which create a semiconductor surface through which electrons can travel. Attached to the thread are two thin gold wires - a "source" of electrons and a "drain" where the electrons flow out (in some configurations, the electrons can flow in the other direction). A third wire, called the gate, is attached to material surrounding the thread, such that small changes in voltage through the gate wire allow a large current to flow through the thread between the source and drain - the basic principle of a transistor.

A critical innovation in this study is the use of an electrolyte-infused gel as the material surrounding the thread and connected to the gate wire. In this case, the gel is made up of silica nanoparticles that self-assemble into a network structure. The electrolyte gel (or ionogel) can be easily deposited onto the thread by dip coating or rapid swabbing. In contrast to the solid-state oxides or polymers used as gate material in classical transistors, the ionogel is resilient under stretching or flexing.

"The development of the TBTs was an important step in making completely flexible electronics, so that now we can turn our attention toward improving design and performance of these devices for possible applications," said Sameer Sonkusale, professor of electrical and computer engineering at Tufts University School of Engineering and corresponding author of the study. "There are many medical applications in which real-time measurement of biomarkers can be important for treating disease and monitoring the health of patients. The ability to fully integrate a soft and pliable diagnostic monitoring device that the patient hardly notices could be quite powerful."

Credit: 
Tufts University

Applying machine learning in intelligent weather consultation

image: This is the weather forecasting process.

Image: 
Haochen Li

Weather forecasting is a typical problem of coupling big data with physical-process models, according to Prof. Pingwen Zhang, academician of the Chinese Academy of Sciences, Director of the National Engineering Laboratory for Big Data Analysis and Application Technology, and Director of the Center for Computational Science & Engineering, Peking University. Prof. Zhang is the corresponding author of a collaborative study by Peking University and the Institute of Atmospheric Physics, Chinese Academy of Sciences.

Generally speaking, weather forecasting is a largely successful practice in the geosciences and, nowadays, it is inseparable from numerical weather prediction (NWP). However, because the outputs of NWP and observations contain different systematic errors, a "weather consultation" is an indispensable part of the process towards further improving the accuracy of forecasts.

"In fact, the theory-driven physical model and data-driven machine learning are complementary tools. Combining these two approaches, an intelligent weather consultation system can be built to assist the current manual process of weather consultation," says Prof. ZHANG. "One of the challenges linked with this is to build appropriate feature engineering for both types of information to make full use of the data."

To solve these problems, Prof. Zhang and his team have proposed the "model output machine learning" (MOML) method for simulating weather consultation, and this research has recently been published in Advances in Atmospheric Sciences.

MOML is a post-processing method based on machine learning, which matches NWP forecasts against observations through a regression function. To test the new approach for gridded temperature forecasts, the 2-m surface air temperature in the Beijing area was used. The MOML method, with different feature engineering schemes, was compared against the ECMWF model forecast and the modified model output statistics (MOS) method. MOML showed better numerical performance than the ECMWF model and MOS, especially in winter, when the accuracy with MOML increased by 27.91% and 15.52%, respectively.

Weather consultation data are unique, mainly comprising information contained in both NWP model data and observational data. These have different data structures and features, which makes feature engineering a complicated task, and the quality of the feature engineering directly affects the final result. Zhang's group proposed several feature engineering schemes following extensive numerical experiments. These schemes ensure computational efficiency and were employed in meteorological studies for the first time. Prof. Zhang points out that the MOML method allows the observational data to participate directly in the calculation, and uses both the high- and low-frequency information in the data to make the forecast results more accurate. The MOML method proposed in this study could be applied to forecasting the weather during the upcoming 2022 Winter Olympics, hopefully providing more accurate, intelligent and efficient weather forecasting services for this international event.
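
As a rough illustration of this kind of post-processing, the sketch below fits a linear regression from synthetic NWP output features to observed 2-m temperature and then applies it to correct a new forecast. It is a simplified stand-in, not the MOML implementation, and it omits the feature engineering schemes the study describes.

```python
import numpy as np

# Simplified stand-in for regression-based post-processing of NWP output:
# fit observed 2-m temperature as a linear function of model-output features
# on a training period, then correct new forecasts with the fitted map.

rng = np.random.default_rng(0)

# Synthetic training data: raw model temperature plus one extra model-output
# feature (here, forecast humidity), and observations with a systematic bias.
n = 200
t_model = rng.normal(0.0, 8.0, n)            # raw NWP 2-m temperature forecast (degC)
humidity = rng.uniform(0.2, 0.9, n)          # illustrative second feature
t_obs = 0.9 * t_model - 2.0 + 3.0 * humidity + rng.normal(0.0, 0.5, n)

X = np.column_stack([t_model, humidity, np.ones(n)])   # features + intercept
coeffs, *_ = np.linalg.lstsq(X, t_obs, rcond=None)     # least-squares regression

def postprocess(t_model_new, humidity_new):
    """Apply the fitted correction to a new raw forecast."""
    return coeffs @ np.array([t_model_new, humidity_new, 1.0])

print(postprocess(-5.0, 0.6))   # corrected forecast (about -4.7 degC given the synthetic bias)
```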

Machine learning and deep learning offer diverse tools for weather forecasts in the era of big data, but there are also many challenges in practical applications.

"It is an important future research direction to incorporate weather forecast data and coupled models into a hybrid computing framework to explore and study the structure and features of observational and NWP data, and propose data-driven machine learning algorithms suitable for weather forecasting," Prof. Zhang concludes.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

New tool makes web browsing easier for the visually impaired

Researchers have developed a new voice assistant that allows people with visual impairments to get web content as quickly and as effortlessly as possible from smart speakers and similar devices.

In a new study led by Alexandra Vtyurina, a student in the University of Waterloo's Faculty of Mathematics, in collaboration with Microsoft researchers and University of Washington Assistant Professor Leah Findlater, the team found a way to merge the best elements of voice assistants and screen readers into a tool that makes free-form web searches easier. The tool is called Voice Exploration, Retrieval, and Search (VERSE).

"People with visual impairments often rely on screen readers, and increasingly voice-based virtual assistants, when interacting with computer systems," said Vtyurina, a PhD candidate in Waterloo's David R. Cheriton School of Computer Science, who undertook the study during her internship at Microsoft Research. "Virtual assistants are convenient and accessible but lack the ability to deeply engage with content, such as read beyond the first few sentences of an article, list alternative search results and suggestions. In contrast, screen readers allow for deep engagement with accessible content, and provide fine-grained navigation and control, but at the cost of reduced walk-up-and-use convenience.

"Our prototype, VERSE, adds screen reader-like capabilities to virtual assistants, and allows other devices, such as smartwatches to serve as input accelerators to smart speakers."

The primary input method for VERSE is voice: users can say "next", "previous", "go back" or "go forward". VERSE can also be paired with an app running on a smartphone or smartwatch. These devices then serve as input accelerators, similar to keyboard shortcuts. For example, rotating the crown on a smartwatch advances VERSE to the next search result, section, or paragraph, depending on the navigation mode, as sketched below.
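The interaction model can be illustrated with a short sketch. This is a hypothetical simplification, not the VERSE implementation: the mode names, data structure and functions below are assumptions used only to show how voice commands and a smartwatch crown could drive the same navigation actions at different granularities.

```python
# Hypothetical sketch of VERSE-style navigation: voice commands and crown
# rotation both move through items at the current navigation granularity.
from dataclasses import dataclass

MODES = ("results", "sections", "paragraphs")  # assumed navigation modes

@dataclass
class NavState:
    mode: str = "results"
    position: int = 0

def handle_voice(state: NavState, command: str) -> NavState:
    # "next"/"previous" step through items in the current mode.
    if command == "next":
        state.position += 1
    elif command == "previous":
        state.position = max(0, state.position - 1)
    return state

def handle_crown(state: NavState, clicks: int) -> NavState:
    # Crown rotation acts as an input accelerator: each click advances one
    # item at the current granularity (result, section, or paragraph).
    state.position = max(0, state.position + clicks)
    return state

state = NavState(mode="paragraphs")
state = handle_voice(state, "next")
state = handle_crown(state, 2)
print(state)  # NavState(mode='paragraphs', position=3)
```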

In the study, 53 web searchers with visual impairments were surveyed. More than half of the respondents reported using voice assistants multiple times a day, and on a wide range of devices, such as smart speakers, phones, and smart TVs.

The data collected from the survey were used to inform the design of a VERSE prototype, after which a user study was conducted to gather feedback.

"At the outset, VERSE resembles other virtual assistants, as the tool allows people to ask a question and have it answered verbally with a word, phrase or passage," said Vtyurina. "VERSE is differentiated by what happens next. If people need more information, they can use VERSE to access other search verticals, for example, news, facts, and related searches, and can visit any article that appears as a search result.

"For articles, VERSE showcases its screen reader superpowers by allowing people to navigate along words, sentences, paragraphs, or sections."

Credit: 
University of Waterloo

Laboratory studies identify a potential way to treat human cancers with ARID1A mutations

image: ARID1A mutations

Image: 
Alicja Tomaszewski

A new study shows that tumor cells depleted of ARID1A -- a protein that acts as a cancer suppressor -- become highly sensitive to anticancer poly ADP ribose polymerase (PARP) inhibitor drugs after radiation treatment. The research, led by Johns Hopkins Kimmel Cancer Center researchers, could advance efforts to treat many human cancers with loss of ARID1A that are resistant to current standard treatments, the study team suggests.

Previous research shows that ARID1A mutations that prevent cellular production of ARID1A are found in about 50% of ovarian clear cell carcinomas, 35% of uterine endometrioid endometrial adenocarcinomas and 30% of ovarian endometrioid carcinomas. ARID1A mutations are also frequently found in liver, stomach, bladder and pancreas cancers. PARP inhibitor drugs block the enzyme PARP, which cells use to signal and recruit machinery to repair damage to their DNA, causing tumor cells to die. Cancers with mutations causing defective DNA repair systems tend to be more dependent on PARP than cancers with intact repair systems.

A report on the research was published online June 13 in Clinical Cancer Research.

"ARID1A mutations are highly prevalent in theses human cancers. The goal of the study was to understand what ARID1A normally does in cells and which cellular functions are affected if it is lost due to inactivating mutations," says le-Ming Shih, M.D., Ph.D., Richard W. TeLinde Distinguished Professor in the Department of Gynecology and Obstetrics at the Johns Hopkins University School of Medicine, and co-director of the Women's Malignancies Disease Program at the Johns Hopkins Kimmel Cancer Center. "This allows us to develop effective treatments to eradicate cancer cells with ARID1A mutations or functional inactivation."

Radiation treatment kills tumor cells by damaging the DNA irreparably. If ARID1A-deficient tumors are shown to be responsive to irradiation plus PARP inhibitors in future clinical trials, it could provide a new opportunity to treat several cancer types that do not have many effective therapeutic interventions, such as ovarian clear cell carcinoma, advanced endometrial or stomach cancers, he says.

In their experiments with mouse and human endometrial cancer or normal cells, the researchers deleted ARID1A from cells to determine whether ARID1A-depleted cells were sensitive to fractionated radiation. Radiation induces DNA damage and is commonly used to treat liver, gastric, bladder and gynecologic cancers. The researchers found that without ARID1A, some of the cells could not recover from radiation damage and died, says Shih.

Those findings prompted the researchers to hypothesize that cells without ARID1A cannot efficiently repair DNA. However, since treatment with radiation alone does not lead to a complete response, the researchers subsequently tested a collection of drugs targeting key enzymes in the DNA repair pathways to find one that might work hand in hand with radiation to make the tumor cells even more vulnerable. Their effort turned up one class of drugs: PARP inhibitors.

The researchers then set up an experiment by establishing tumors composed of human cancer cells in mice and treating the mice with radiation alone, PARP inhibitor alone, a combination of both or no treatment. They found that the ARID1A-deficient tumors shrank significantly when the tumor-bearing mice were treated with combined irradiation and PARP inhibitor but not when treated with either irradiation alone or PARP inhibitor alone. This anti-tumor effect lasted for a prolonged time after treatment. Such a phenomenon was not evident in ARID1A-proficient tumors or normal tissues.

Although radiation treatment causes DNA breaks, cancer cells with intact DNA repair systems are able to fix some of the damage, allowing cancer cells to continue to divide, the researchers say. Because DNA repair is so important in biology, redundant DNA repair systems exist. Two major mechanisms are nonhomologous end joining (NHEJ) and homologous recombination. In ARID1A-mutated cells, this study found that the NHEJ repair pathway was affected, and these cancer cells relied on homologous recombination to maintain tumor growth. Their findings also explained why ARID1A-deficient tumors only partially respond to radiation therapy. When the investigators added a PARP inhibitor to suppress homologous recombination repair after radiation, DNA repair capacity in irradiated ARID1A mutated cancer cells was thwarted, since both the NHEJ and homologous recombination mechanisms were defective.

"Radiation-induced DNA breaks, which cannot be repaired efficiently in ARID1A deficient tumors, prime these tumor cells for enhanced vulnerability to PARP inhibitors," Shih explains.

Several PARP inhibitors are clinically available and approved to treat BRCA mutation-related ovarian and breast cancers. PARP inhibitors have been tested against ARID1A-deficient tumors before, says Shih, but with limited success. The new study may help explain why, he adds.

Given the clinical availability of PARP inhibitors and established clinical benefit of local irradiation, it is expected that the next step will be to determine this combination's safety and dose through a phase I clinical trial, according to co-authors Akila Viswanathan, M.D., M.P.H., M.Sc., interim director of the Johns Hopkins Department of Radiation Oncology and Molecular Radiation Sciences, and Stéphanie Gaillard, M.D., Ph.D., director of the Gynecologic Clinical Trial Center at Johns Hopkins.

Credit: 
Johns Hopkins Medicine

Drawing inspiration from natural marvels to make new materials

image: Professor LaShanda Korley (left) mimicked the architecture of the bristle worm's jaw system by adding a zinc-coordinated supramolecular polymer into a covalently crosslinked polyethylene glycol network.

Image: 
Kathy F. Atkinson

A tiny bristle worm, wriggling around the ocean, can extend its jaw outside its mouth to ensnare its prey. The worm's shape-shifting jaw, stiff at the base and flexible at the end, is made of a single material containing the mineral zinc and the amino acid histidine, which together govern the joint's mechanical behavior through what is known as metal coordination chemistry.

Scientists like LaShanda Korley, Distinguished Associate Professor of Materials Science and Engineering and Chemical and Biomolecular Engineering at the University of Delaware, want to recreate these chemistries and build similar structures in synthetic materials. By doing so, they can develop new, improved materials for use in sensors, healthcare applications, and much more. Chemistries like these are ubiquitous in nature. The iron-protein interaction in human blood, for example, can be a determinant of disease.

In a paper published in the July 2019 edition of the European Polymer Journal, Korley, joined by materials science and engineering doctoral student Chase Thompson and post-doctoral associate Sourav Chatterjee, described how they built a network of materials, made of zinc and polymers, that mimicked the mechanical gradient of a bristle worm's jaw.

This project, the culmination of more than five years of work, was funded by a grant from the National Science Foundation. The goal is to utilize natural material systems to understand how to control the interplay of structural features, especially mechanical properties, by combining dynamic and permanent structures, said Korley.

"The idea is: Can you put together two things that don't really like each other and utilize this idea of dynamics as a way to control how energy is released in the system, which is related to the mechanical behavior?" she said.

The team mimicked the architecture of the bristle worm's jaw system by adding a zinc-coordinated supramolecular polymer into a covalently crosslinked polyethylene glycol network. With the right concentrations, they found that they could govern the material's mechanical properties. "The permanent network that we use to house these dynamic interactions is a good platform for achieving these gradient structures," said Thompson. Next, he plans to investigate ways to influence shape memory and other properties of these materials.

Korley utilizes inspiration from nature to design a variety of materials. She is the principal investigator of PIRE: Bio-Inspired Materials and Systems, a five-year, $5.5 million grant from the National Science Foundation.

Through this project, Korley and collaborators at Case Western Reserve University, the University of California, San Diego, the University of Chicago, Switzerland's University of Fribourg and the UK's University of Strathclyde are studying and developing materials that can change toughness in response to their environment, are safer and more effective biological implants, transmit nerve-like electrical signals, and can respond to the environment to initiate biological processes, all for use in soft robotic applications.

For example, researchers are studying ways to make materials that are strong like spider silk and materials that change their shape in response to humidity, such as pine cones, which open in dry conditions and close in moist ones. They are also utilizing the unique materials properties they uncover to develop new 3D printed materials.

The study of soft materials and polymers, long a strength at UD, is growing, in part thanks to Korley's expertise. Korley and Thomas H. Epps, III, the Thomas and Kipp Gutshall Senior Career Development Professor in Chemical and Biomolecular Engineering and Materials Science and Engineering, have also formed a new research center, the Center for Research in Soft Matter and Polymers (CRISP). Korley and Epps are collaborating with researchers at Chemours and recently published a review article on structure-property relationships in polymeric surface coatings in the journal ACS Applied Polymer Materials.

Korley's research enterprise also involves outreach to undergraduate students, who can benefit greatly from research experience that complements their classroom work.

"Research gives you a platform to take that fundamental training from the classroom and be able to apply it to a problem," she said. "In the lab, students learn to think through problems, display and communicate their work, and be leaders and team players. We have all of those aspects in our courses, but I think that there's a holistic way that undergraduate research can train students to do that."

Korley is equally passionate about outreach activities that introduce girls in high school to science and engineering. Students from her lab have been involved in tutoring at Serviam Girls Academy in New Castle, Delaware.

"The biggest thing for me is to make an impact, to be collaborative, to really engage with the broader community," she said. "That's important to me."

Credit: 
University of Delaware