Tech

Next generation of soft robots inspired by a children's toy

video: The actuators can be reset to their initial configuration by applying a vacuum, allowing them to take off repeatedly.

Image: 
(Video courtesy of David Melancon and Benjamin Gorissen/Harvard SEAS)

Buckling, the sudden loss of structural stability, is usually the stuff of engineering nightmares. Mechanical buckling means catastrophic failure for every structural system from rockets to soufflés. It's what caused the Deepwater Horizon oil spill in 2010, among numerous other disasters.

But, as anyone who has ever played with a toy popper knows, buckling also releases a lot of energy. When the structure of a popper buckles, the energy released by the instability sends the toy flying through the air. Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Harvard's Wyss Institute for Biologically Inspired Engineering have harnessed that energy and used buckling to their advantage to build a fast-moving, inflatable soft actuator.

The research is published in Science Robotics.

"Soft robots have enormous potential for a wide spectrum of applications, ranging from minimally invasive surgical tools and exoskeletons to warehouse grippers and video game add-ons," said Benjamin Gorissen, a postdoctoral fellow at SEAS and co-first author of the paper. "But applications for today's soft actuators are limited by their speed."

Fluidic soft actuators tend to be slow to power up and move because they need a lot of fluid to work and the flow, whether gas or liquid, is restricted by tubes and valves in the device.

"In this work, we showed that we can harness elastic instabilities to overcome this restriction, enabling us to decouple the slow input from the output and make a fast-jumping fluidic soft actuator," said David Melancon, a graduate student at SEAS and co-first author of the paper.

"This actuator is a building block that could be integrated into a fully soft robotic system to give soft robots that can already crawl, walk and swim the ability to jump," said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. "By incorporating our jumper into these designs, these robots could navigate safely through uncharted landscapes."

Bertoldi is also an Associate Faculty member of the Wyss Institute.

The researchers relied on the same type of buckling that propels toy poppers, known as shell buckling. The team designed the actuators with two spherical caps -- essentially two poppers -- nestled together like Russian nesting dolls and connected at the base. Upon inflation, pressure builds up between the two caps. The thinner outer cap expands upward while the thicker inner cap buckles and collapses, hitting the ground and catapulting the device into the air.

While the device seems simple, understanding the fundamental physics at play was paramount to controlling and optimizing the robot's performance. Most previous research into shell buckling has studied how to avoid it, but Gorissen, Melancon and the rest of the team wanted to amplify the instability.

As fate would have it, one of the pioneers of shell buckling research sits just two floors down from Bertoldi's team in Pierce Hall. Professor Emeritus John W. Hutchinson, who joined the Harvard faculty in 1964, developed some of the first theories to characterize and quantify the buckling of shell structures.

"Our research offers a different perspective on some of [Hutchinson's] theories, which enables us to apply them to a different research domain," said Gorissen.

"It was nice to be able to get feedback from one of the pioneers in the field," said Melancon. "He developed the theory to prevent failure and now we're using it to trigger buckling."

Using established theories as well as more recent numerical simulation tools, the researchers were able to characterize and tune the pressure-volume relationship between the two shells to develop a soft robot capable of quickly releasing a specific amount of energy over and over again. The approach can be applied to any shape and any size. It could be used in everything from small medical devices that puncture veins to large exploratory robots that traverse uneven terrain.
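
The role of the pressure-volume relationship can be pictured with a toy model: a non-monotonic p(V) curve produces a snap-through jump when slow inflation passes the local pressure maximum. The cubic curve and all coefficients below are hypothetical, chosen purely to illustrate the instability, not the actual shell mechanics in the paper.

```python
import numpy as np

# Toy snap-through model: a non-monotonic pressure-volume curve
# p(V) = (V - 2)^3 - 3(V - 2) + 4. The cubic form and coefficients are
# hypothetical, chosen only to illustrate the instability.
def pressure(v):
    return (v - 2.0) ** 3 - 3.0 * (v - 2.0) + 4.0

v = np.linspace(0.0, 4.0, 4001)
p = pressure(v)
dp = np.gradient(p, v)

# Under slow inflation the state follows the rising branch of p(V).
# At the local pressure maximum (dp/dV crossing from + to -) the shell
# can no longer balance the applied pressure and snaps to the far
# branch, releasing the stored energy abruptly.
cross = np.where((dp[:-1] > 0) & (dp[1:] <= 0))[0]
v_snap, p_snap = v[cross[0]], p[cross[0]]
print(f"snap-through near V = {v_snap:.2f}, p = {p_snap:.2f}")
```

Tuning the two shells, in this picture, amounts to shaping the curve so the jump releases a chosen amount of energy, repeatably.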

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Nanobowls serve up chemotherapy drugs to cancer cells

image: In this illustration, a nanobowl (purple semicircle) supports the structure of a liposome (blue membrane bilayer) to help keep a chemotherapy drug (red) from leaking out.

Image: 
Nano Letters 2020, DOI: 10.1021/acs.nanolett.0c00495

For decades, scientists have explored the use of liposomes -- hollow spheres made of lipid bilayers -- to deliver chemotherapy drugs to tumor cells. But drugs can sometimes leak out of liposomes before they reach their destination, reducing the dose received by the tumor and causing side effects in healthy tissues. Now, researchers report in ACS' Nano Letters a way to stabilize liposomes by embedding a stiff nanobowl in their inner cavity.

Scientists have tried various approaches to prevent liposomes from leaking, such as coating their surfaces with polymers or crosslinking lipids in their bilayers. However, these modifications can alter the properties of liposomes so that they interact differently with cells. Chao Fang, Jonathan Lovell and colleagues wanted to find a new way to stabilize liposomes that keeps their surfaces intact. They decided to try nanobowls -- concave nanostructures with an opening that would allow drugs to escape once the liposomal bilayers break up inside a cancer cell. They reasoned that by assembling the lipid bilayer around the nanobowl, the rigid structure would mechanically support the liposome.

The team prepared silica nanobowls, modifying their surfaces with a positively charged chemical group and assembling a negatively charged lipid bilayer around each structure. Then, they loaded the chemotherapy drug doxorubicin into the water-filled center. The resulting nanobowl-stabilized liposomes were less leaky than regular liposomes in serum or under shear stress, as would be encountered in blood vessels, but still released doxorubicin when taken up by cancer cells in a dish. In an experiment with mice that had transplanted, metastatic breast tumors, animals injected with the nanobowl-liposomes lived longer than those receiving regular liposomes. The nanobowl-treated mice also had smaller tumors compared with the group receiving conventional liposomes, and the cancer had not spread to their lungs, in contrast to the other group. The simple, effective method should be "easy for wide application and holds potential for clinical translation," the researchers say.

Credit: 
American Chemical Society

A new algorithm predicts the difficulty in fighting fire

image: A forest after the extreme spread of a fire.

Image: 
University of Córdoba

Fires are one of the greatest threats to forest heritage. According to data from the Ministry of Agriculture, more than 17,000 fires occur per year in Spain on average, affecting 113,000 hectares and causing enormous economic and landscape losses.

The Forest Fire Laboratory at the University of Cordoba, along with the Rocky Mountain Research Station, of the Forest Service of the US Department of Agriculture, developed a new algorithm to improve response capacity to fire. The new tool predicts the difficulty in fighting a forest fire, which could help to optimize resources and prioritize extinguishing tasks.

When facing an uncontrolled fire blazing through hundreds of hectares, many questions arise that need urgent answers: Where should we start? Which areas present the least difficulty? Which areas are already lost? How can we prioritize management tasks?

This algorithm answers these questions and has turned mathematics into a real ally for firefighting by means of a fraction. The numerator, with a maximum value of 30, captures the fire's behavior; the denominator, with a maximum of 50, captures the territory's fire suppression capacity. If the numerator approaches 30 while the denominator sits at low values, the fight against the fire is practically lost; if the denominator rises toward 50, the land is considered to have the infrastructure needed to carry out fire suppression tasks effectively. From the result of this fraction, the tool produces an extinction difficulty index that advises on whether extinction operations can be performed safely and effectively. When the algorithm predicts low or moderate difficulty, authorities can confidently establish firefighting strategies and control active fronts; in cases of high to extreme difficulty, responders can avoid actions that jeopardize safety, as well as those that would deplete fire suppression resources when the likelihood of extinguishing the fire is slim to none.
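
The fraction described above can be sketched in a few lines. The function names, the clamping of inputs, and the difficulty bands below are assumptions for illustration only, not the published index.

```python
# Sketch of the extinction difficulty index as a fraction: the numerator
# (max 30) captures the fire's behavior, the denominator (max 50) the
# territory's suppression capacity. Function names, clamping, and the
# difficulty bands are assumptions for illustration, not the published index.

def suppression_difficulty_index(fire_behavior: float,
                                 suppression_capacity: float) -> float:
    """Return fire behavior (0-30) divided by suppression capacity (1-50)."""
    numerator = min(max(fire_behavior, 0.0), 30.0)
    denominator = min(max(suppression_capacity, 1.0), 50.0)  # avoid zero
    return numerator / denominator

def difficulty_band(index: float) -> str:
    """Map the index to an advisory band (thresholds are hypothetical)."""
    if index < 0.25:
        return "low"
    if index < 0.6:
        return "moderate"
    if index < 1.0:
        return "high"
    return "extreme"

# An energetic fire on land with little infrastructure scores as extreme;
# a mild fire on well-prepared land scores as low.
print(difficulty_band(suppression_difficulty_index(28, 10)))
print(difficulty_band(suppression_difficulty_index(3, 50)))
```

In practice such an index would be computed per zone and overlaid on a map to prioritize extinguishing tasks, as the researchers propose.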

The tool, which has been validated against two fires -- one in Andalusia (Segura, Jaén, 2017) and one in the Cascade Mountains of the Okanagan-Wenatchee National Forest in Washington State, US (the Jolly Fire, 2017) -- takes into account a series of variables that complete and update previous prediction models. "The new algorithm incorporates new parameters, such as the presence of irregular ravines and hillsides, and does not center solely on the spread of surface fire, but also on fire in treetops and on eruptive spread in ravines and canyons, which can become a tremendous source of energy for a fire when it occurs, spreading widely," points out Professor Francisco Rodríguez y Silva, one of the authors of the research. Added to these variables are others, such as road density, firebreak areas, the frequency of unloading from aircraft, the potential behavior of the fire, and the ability of specialized responders to access the area and perform suppression actions with the combined firefighting resources.

For now, they are working on setting this algorithm up in a mobile application that would allow for estimations in real time. However, the best way would be to "use this methodology to draw up maps of extinction difficulty before a fire occurs and use it to predict the places where we need to be more proactive by planning and investing in our surroundings in terms of prevention", explains the researcher.

Previous understanding of the surroundings and mathematical computations could help in this way to reduce uncertainty and plan ahead for fires. The aim is to begin gaining on the fire even before the first flames appear.

Credit: 
University of Córdoba

Virus prevalence associated with habitat

image: Dr Charlotte Eve Davies of Swansea University releasing a Panulirus argus lobster back to the wild after sampling: studies by her research team showed that there can be a link between virus prevalence and habitat for this species

Image: 
Charlotte Eve Davies/Swansea University

Virus prevalence associated with habitat: study of Caribbean lobsters sheds light on disease and fragile ecosystems.

Levels of virus infection in lobsters seem to be related to habitat and other species, new studies of Caribbean marine protected areas have shown.

The findings will support efforts to safeguard Caribbean spiny lobsters, which are a vital food source for communities across the region and world.

They also boost our understanding of how viruses spread - disease dynamics - and of the ecology of fragile environments such as tropical reef lagoons and seagrass ecosystems.

The spiny lobster, which has the scientific name Panulirus argus, is found across the Caribbean. Coastal habitats such as seagrass meadows and beds of algae act as nurseries for the juveniles before they move to the coral reefs when they are fully grown.

The research, led by Dr Charlotte Davies, now of Swansea University, took place with colleagues at the National Autonomous University of Mexico's Reef Systems Unit.

They focused on a threat to this species called Panulirus argus virus 1 (PaV1). Discovered in 2000, PaV1 is the first known naturally occurring virus in lobsters.

The virus is a particular threat to juvenile lobsters, so tackling it is vital in protecting the species.

The team examined lobsters in two marine protected areas in the Mexican Caribbean: the Sian Ka'an Biosphere Reserve and the National Reef Park of Puerto Morelos, where the virus has been present since 2001.

They carried out a systematic assessment of virus prevalence across both sites, once a year for two years in Sian Ka'an and seasonally over four years in Puerto Morelos. Each site was separated into zones with differing features, such as water depth, sediment and extent of vegetation.

Previous research had suggested that virus prevalence may have a correlation with habitat, so investigating this in more detail was at the heart of the research, as well as looking at biodiversity of surrounding invertebrate (lobster food) communities.

The team found:

The rate of infection overall was highest amongst smaller juvenile lobsters, confirming findings from previous studies, and true prevalence could be as high as 32% across populations.

In Sian Ka'an, they found that significantly more lobsters with PaV1 lived in the highly vegetated seagrass meadows, compared to the coral reefs - indicating that there may be something in the seagrass which is preventing the virus from spreading. Recent research elsewhere has shown that coastal seagrass meadows can trap some pathogens, greatly reducing the number that reach the open ocean and benefiting humans and marine life.

However, in Puerto Morelos, where the lobsters are smaller and the ecosystem is very different, variations in habitat in the lagoon did not significantly influence the prevalence of the virus, showing that results may be site-specific.

Dr Charlotte Eve Davies of Swansea University, lead researcher on the project, said:

"What influences the spread of a virus? Our question was whether habitat or food species play a role, in relation to Caribbean spiny lobsters.

Our studies showed that the overall habitat - physical surroundings and other species - can significantly influence the prevalence of this virus depending on the location and ecosystem.

Our findings can help safeguard this important food resource for Caribbean communities. They also increase our understanding of this virus and give us a better picture of the wider ecosystem in this fragile environment."

Credit: 
Swansea University

NUS researchers create novel device that harnesses shadows to generate electricity

image: The novel Shadow-effect Energy Generator developed by researchers from the National University of Singapore uses the contrast in illumination between the lit and shadowed areas to generate electricity.

Image: 
Royal Society of Chemistry

Shadows are often associated with darkness and uncertainty. Now, researchers from the National University of Singapore (NUS) are giving shadows a positive spin by demonstrating a way to harness this common but often overlooked optical effect to generate electricity. This novel concept opens up new approaches in generating green energy under indoor lighting conditions to power electronics.

A team from the NUS Department of Materials Science and Engineering as well as NUS Department of Physics created a device called a shadow-effect energy generator (SEG), which makes use of the contrast in illumination between lit and shadowed areas to generate electricity. Their research breakthrough was reported in scientific journal Energy & Environmental Science on 15 April 2020.

"Shadows are omnipresent, and we often take them for granted. In conventional photovoltaic or optoelectronic applications where a steady source of light is used to power devices, the presence of shadows is undesirable, since it degrades the performance of devices. In this work, we capitalised on the illumination contrast caused by shadows as an indirect source of power. The contrast in illumination induces a voltage difference between the shadowed and illuminated sections, resulting in an electric current. This novel concept of harvesting energy in the presence of shadows is unprecedented," explained research team leader Assistant Professor Tan Swee Ching, who is from the NUS Department of Materials Science and Engineering.

Mobile electronic devices such as smartphones, smart glasses and e-watches require an efficient and continuous power supply. As these devices are worn both indoors and outdoors, wearable power sources that can harness ambient light could improve their versatility. While commercially available solar cells can perform this role outdoors, their energy-harvesting efficiency drops significantly under indoor conditions, where shadows are persistent. This new approach, which scavenges energy from both illumination and shadows at low light intensities to maximise the efficiency of energy harvesting, is both exciting and timely.

To address this technological challenge, the NUS team developed a low-cost, easy-to-fabricate SEG to perform two functions: (1) to convert the illumination contrast from partially cast shadows into electricity, and (2) to serve as a self-powered proximity sensor that monitors passing objects.

Generating electricity using the 'shadow-effect'

The SEG comprises a set of SEG cells arranged on a flexible and transparent plastic film. Each SEG cell is a thin film of gold deposited on a silicon wafer. Carefully designed, the SEG can be fabricated at a lower cost compared to commercial silicon solar cells. The team then conducted experiments to test the performance of the SEG in generating electricity and as a self-powered sensor.

"When the whole SEG cell is under illumination or in shadow, the amount of electricity generated is very low or none at all. When a part of the SEG cell is illuminated, a significant electrical output is detected. We also found that the optimum surface area for electricity generation is when half of the SEG cell is illuminated and the other half in shadow, as this gives enough area for charge generation and collection respectively," said co-team leader Professor Andrew Wee, who is from the NUS Department of Physics.

Based on laboratory experiments, the team's four-cell SEG is twice as efficient as commercial silicon solar cells under the effect of shifting shadows. The energy harvested by the SEG from shadows created under indoor lighting conditions is sufficient to power a digital watch (i.e. 1.2 V).

In addition, the team also showed that the SEG can serve as a self-powered sensor for monitoring moving objects. When an object passes by the SEG, it casts an intermittent shadow on the device and triggers the sensor to record the presence and movement of the object.

Towards lower cost and more functionalities

The six-member team took four months to conceptualise, develop and perfect the performance of the device. In the next phase of research, the NUS team will experiment with other materials, besides gold, to reduce the cost of the SEG.

The NUS researchers are also looking at developing self-powered sensors with versatile functionalities, as well as wearable SEGs attached to clothing to harvest energy during normal daily activities. Another promising area of research is the development of low-cost SEG panels for efficient harvesting of energy from indoor lighting.

Credit: 
National University of Singapore

Just read my face, baby

image: Are you good at reading your partner's emotions? Your perceptiveness may very well strengthen your relationship. Yet when anger or contempt enter the fray, little is to be gained and the quality of your relationship tanks, researchers find.

Image: 
University of Rochester illustration / Michael Osadciw

A new study by a team of psychologists from the University of Rochester and the University of Toronto tried to figure out under what circumstances the ability to read another person's emotions--what psychologists call "empathic accuracy"--is beneficial for a relationship and when it could be harmful. The study examined whether the accurate perception of a romantic partner's emotions has any bearing on the quality of a relationship and a person's motivation to change when a romantic partner asks for a change in behavior or attitude.

While prior research on empathic accuracy had yielded mixed findings, the new study shows that couples who accurately perceive appeasement emotions, such as embarrassment, have better relationships than those accurately perceiving dominance emotions, such as anger or contempt. The perception may be on the part of the person requesting the change, or the person receiving the request.

Lead author Bonnie Le, an assistant professor in the University of Rochester's Department of Psychology, says the team zeroed in on how accurately deciphering different types of emotions affects relationship quality.

"If you accurately perceive threatening displays from your partner, it can shake your confidence in a relationship," says Le, who conducted the research while a postdoctoral fellow at the University of Toronto's Rotman School of Management.

Key findings

Couples who accurately perceive appeasement emotions -- either as the person requesting the change or the person receiving the request -- have better relationships.

Couples where either partner feels negative emotions, regardless of whether those emotions are accurately perceived by the partner, have poorer relationships.

Accuracy in reading another person's emotions does not increase the motivation to heed a partner's request for change.

Why is the ability to change important for a partnership?

Even in the best relationships, partners invariably experience conflict. One way to tackle conflict, researchers argue, is to ask a partner to change by, for example, spending less money, losing weight, making changes to a couple's sex life, or resetting life goals. Yet, requesting such personal (and sometimes threatening) change can elicit negative emotions and put a strain on a relationship. That's why figuring out how best to navigate emotionally charged situations is crucial to maintaining a healthy relationship.

"If you are appeasing with your partner--or feel embarrassed or bashful--and your partner accurately picks up on this, it can signal to your partner that you care about their feelings and recognize a change request might be hurtful," Le says. "Or if your partner is angry or contemptuous--what we call dominance emotions--that signals very different, negative information that may hurt a partner if they accurately perceive it."

The team--besides Rochester's Le--is made up of Stéphane Côté of the University of Toronto's Rotman School of Management; and Jennifer Stellar and Emily Impett, both from the University of Toronto Mississauga. They discovered that the type of negative emotion detected matters: if you read in your partner's expression softer emotions--such as sadness, shame, or embarrassment--you generally enjoy a strong relationship. One possible reason is that these so-called "appeasement emotions" are read as signals of concern for the partner's feelings.

In contrast, and contrary to the researchers' original hypothesis, simply feeling anger or contempt--emotions that signal blame and defensiveness--rather than accurately reading those emotions in your partner, may be socially destructive for a relationship. The team found that if even just one partner felt angry, or displayed contempt, the quality of the relationship tanked, regardless of whether the other partner's ability to read emotions was spot on, or completely missed the mark.

Coauthor Côté says the team doesn't exactly know why anger functions in this way. "We think reading emotions allows partners to coordinate what they do and say to each other, and perhaps that is helpful when appeasement emotions are read, but not when anger emotions are read. Anger seems to overpower any effect of reading emotions, which is consistent with lots of research findings on how anger harms relationships."

Yet, regardless of how well a person was able to decipher a partner's emotions, accuracy did not increase motivation to heed the partner's request for change.

Direct communication is key

For the study, the researchers asked 111 couples who had been dating for an average of three years to discuss in a lab setting an aspect that they wanted their partner to change, such as particular behaviors, personal characteristics, or how they controlled their temper. The research team then switched the roles of those making the request and those who were asked to change. Afterward, the participants rated their own emotions and perceptions of their partner's emotions, their relationship quality, and their motivation to heed those change requests.

"Expressing and perceiving emotions is, of course, important for making connections and deriving satisfaction in a relationship," says Le. "But in order to really propel your partner to change, you may need to use more direct communication about exactly what kind of change you are hoping for."

Research has shown that direct communication, whether positive or negative, is more likely to lead to change in the long run. That said, the emotional tone you take when you ask your partner for a change is important, notes Le:

"It's not bad to feel a little bashful or embarrassed when raising these issues because it signals to the partner that you care and it's valuable for your partner to see that. You acknowledge that what you raise may hurt their feelings. It shows that you are invested, that you are committed to having this conversation, and committed to not hurting them. And the extent to which this is noted by your partner may foster a more positive relationship."

Credit: 
University of Rochester

Scientists find evidence of link between diesel exhaust, risk of Parkinson's

image: Dr. Jeff Bronstein from University of California - Los Angeles Health Sciences

Image: 
UCLA Health

A new UCLA study in zebrafish identified the process by which air pollution can damage brain cells, potentially contributing to Parkinson's disease.

Published in the peer-reviewed journal Toxicological Sciences, the findings show that chemicals in diesel exhaust can trigger the toxic buildup of a protein in the brain called alpha-synuclein, which is commonly seen in people with the disease.

Previous studies have revealed that people living in areas with heightened levels of traffic-related air pollution tend to have higher rates of Parkinson's. To understand what the pollutants do to the brain, Dr. Jeff Bronstein, a professor of neurology and director of the UCLA Movement Disorders Program, tested the effect of diesel exhaust on zebrafish in the lab.

"It's really important to be able to demonstrate whether air pollution is actually the thing that's causing the effect or whether it's something else in urban environments," Bronstein said.

Testing the chemicals on zebrafish, he said, lets researchers tease out whether air pollution components affect brain cells in a way that could increase the risk of Parkinson's. The freshwater fish works well for studying molecular changes in the brain because its neurons interact in a way similar to those of humans. In addition, the fish are transparent, allowing scientists to easily observe and measure biological processes without killing the animals.

"Using zebrafish allowed us to see what was going on inside their brains at various time-points during the study," said Lisa Barnhill, a UCLA postdoctoral fellow and the study's first author.

Barnhill added certain chemicals found in diesel exhaust to the water in which the zebrafish were kept. These chemicals caused a change in the animals' behavior, and the researchers confirmed that neurons were dying off in the exposed fish.

Next, they investigated the activity in several pathways in the brain known to be related to Parkinson's disease to see precisely how the pollutant particles were contributing to cell death.

In humans, Parkinson's disease is associated with the toxic accumulation of alpha-synuclein proteins in the brain. One way these proteins can build up is through the disruption of autophagy -- the process of breaking down old or damaged proteins. A healthy brain continuously makes and disposes of the proteins it needs for communication between neurons, but when this disposal process stops working, the cells continue to make new proteins and the old ones never get cleared away.

In Parkinson's, alpha-synuclein proteins that would normally be disposed of pile up in toxic clumps in and around neurons, eventually killing them and interfering with the proper functioning of the brain. This can result in various symptoms, such as tremors and muscle rigidity.

Before exposing the zebrafish to diesel particles, the researchers examined the fishes' neurons for the tell-tale pouches that carry out old proteins, including alpha-synuclein, as part of the autophagy disposal operation and found that the process was working properly.

"We can actually watch them move along, and appear and disappear," Bronstein said of the pouches.

After diesel exposure, however, they saw far fewer of the garbage-toting pouches than normal. To confirm that this was the reason brain cells were dying, they treated the fish with a drug that boosts the garbage-disposal process and found that it did save the cells from dying after diesel exposure.

To confirm that diesel could have the same effect on human neurons, the researchers replicated the experiment using cultured human cells. Exposure to diesel exhaust had a similar effect on those cells.

"Overall, this report shows a plausible mechanism of why air pollution may increase the risk of Parkinson's disease," Bronstein said.

Credit: 
University of California - Los Angeles Health Sciences

Doctors should be cautious when using current warning system for patients' worsening health

The current system for checking on a patient's health and how likely it is to worsen while in hospital is based on weak evidence, and using poor scores may harm patients, suggests research published by The BMJ today.

Early warning scores (EWS) are used widely in hospitals to assess and identify clinical deterioration in adult patients, based on measuring vital signs such as heart rate, oxygen levels, and blood pressure.
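
As a rough illustration of how such a score works, the sketch below maps each vital sign to a penalty band and sums the penalties. All thresholds are hypothetical, chosen only to show the mechanism, not those of any validated score.

```python
# Illustrative sketch of a generic early warning score: each vital sign is
# mapped to a 0-3 penalty band and the penalties are summed. All thresholds
# below are hypothetical, chosen only to show the mechanism -- real scores
# use clinically validated bands.

def band(value: float, cutoffs: list[tuple[float, int]]) -> int:
    """Return the penalty for the first upper cutoff the value falls under."""
    for upper, penalty in cutoffs:
        if value <= upper:
            return penalty
    return 3  # above every cutoff: maximum penalty

def early_warning_score(resp_rate: float, heart_rate: float,
                        spo2: float, systolic_bp: float) -> int:
    score = 0
    score += band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2)])
    score += band(heart_rate, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2)])
    score += band(-spo2, [(-95, 0), (-93, 1), (-91, 2)])  # lower SpO2 is worse
    score += band(systolic_bp, [(90, 3), (100, 2), (110, 1), (219, 0)])
    return score

# Normal vitals accumulate no penalties; a deteriorating patient's score rises.
print(early_warning_score(16, 70, 97, 120))   # healthy-looking vitals
print(early_warning_score(26, 135, 90, 85))   # markedly abnormal vitals
```

A score crossing a hospital-defined threshold would typically trigger escalation, such as more frequent observations or review by a rapid response team.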

They were introduced as a way of tackling adverse events and unnecessary deaths in hospitals, but so far, there has been little investigation into how the system was developed and validated.

Researchers led by the University of Oxford therefore set out to provide an overview and critical appraisal of early warning scores by reviewing existing studies on the issue.

They analysed the results of 95 studies describing the development or external validation of an early warning score for adult hospital inpatients.

They found that most early warning scores were developed for use in the United Kingdom (29%) and the United States (38%).

Death was the most frequent prediction outcome for development studies (44%) and validation studies (79%), with different time horizons - the most frequent were 'in-hospital' and 24 hours.

The most common predictors used in the scores were respiratory rate (88% of studies), heart rate (83%), oxygen saturation, temperature, and systolic blood pressure (all 71%). Age (38%) and sex (9%) were less frequently included.

Overall analysis of the results, however, showed that most studies were carried out using poor methods, had inadequate reporting, and all of the studies were at risk of bias.

Specifically, key details of the analysis populations were often not reported in development studies (41% of these studies) or validation studies (39%).

The researchers found that handling of statistical issues, such as missing data, was inadequate in many of the previous studies, while many also failed to report important details, such as sample size, number of events, population characteristics, and details of statistical methods.

They acknowledge some study limitations, such as the fact that they had assessed fewer development studies than might have been expected because they only included scores published in peer-reviewed journals.

Nevertheless, they conclude that many early warning scores in clinical use have methodological weaknesses and reporting shortcomings.

"Early warning scores might not perform as well as expected and therefore they could have a detrimental effect on patient care," they say. "Future work should focus on following recommended approaches for developing and evaluating early warning scores, and investigating the impact and safety of using these scores in clinical practice."

Credit: 
BMJ Group

Chemical recycling makes useful product from waste bioplastic

A faster, more efficient way of recycling plant-based "bioplastics" has been developed by a team of scientists at the Universities of Birmingham and Bath.

The team has shown that their chemical recycling method not only speeds up the process, but also turns the waste into a new product - a biodegradable solvent - which can be sold for use in a wide variety of industries, including cosmetics and pharmaceuticals.

Bioplastics, made from polylactic acid (PLA), are becoming increasingly common in products such as disposable cups, packaging materials and even children's toys. Typically, once they reach the end of their useful life, they are disposed of in landfill or composted, biodegrading over periods of up to several months.

In a new study, researchers have shown that a chemical process using methanol and a zinc-based catalyst developed at the University of Bath can break down real consumer plastics and produce the green solvent, called methyl lactate. Their results are published in the journal Industrial & Engineering Chemistry Research.
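
The stoichiometry behind this conversion is simple enough to check by hand: each lactate repeat unit of the PLA chain (C3H4O2) reacts with one methanol molecule to give one methyl lactate molecule (C4H8O3). A back-of-the-envelope sketch, assuming complete conversion with no side products (real reactions will fall short of this ceiling):

```python
# Standard atomic weights in g/mol
M = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(c, h, o):
    """Molar mass of a CxHyOz species from atom counts."""
    return c * M["C"] + h * M["H"] + o * M["O"]

m_repeat = molar_mass(3, 4, 2)    # PLA lactate repeat unit, ~72.06 g/mol
m_product = molar_mass(4, 8, 3)   # methyl lactate, ~104.10 g/mol

# Theoretical mass of methyl lactate obtainable per gram of PLA:
# one product molecule per repeat unit, the extra mass coming from methanol.
yield_per_gram = m_product / m_repeat
print(f"theoretical ceiling: {yield_per_gram:.2f} g methyl lactate per g PLA")
```

In other words, the methanol incorporated into the product means the theoretical mass of solvent slightly exceeds the mass of plastic fed in.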

The team tested their method on three separate PLA products - a disposable cup, some 3D printer waste, and a children's toy. They found the cup was most easily converted to methyl lactate at lower temperatures, but even the bulkier plastic in the children's toy could be converted at higher temperatures. "We were excited to see that it was possible to obtain high quantities of the green solvent regardless of the samples' characteristics, such as colorants, additives, size and even molecular weight," said lead author Luis Román-Ramírez of the University of Birmingham's School of Chemical Engineering.

Lead researcher Professor Joe Wood, at the University of Birmingham, says: "The process we've designed has real potential to contribute to ongoing efforts to reduce the amount of plastic going into landfill or being incinerated, by creating new valuable products from waste.

"Our technique breaks down the plastics into their chemical building blocks before 'rebuilding' them into a new product, so we can guarantee that the new product is of sufficiently high quality for use in other products and processes."

The chemical process has been tested at scales of up to 300 ml, so the next steps would include scaling up the reactor further before it can be used in an industrial setting. The research was funded by the Engineering and Physical Sciences Research Council.

Credit: 
University of Birmingham

Fish feed foresight

As the world increasingly turns to aqua farming to feed its growing population, there's no better time than now to design an aquaculture system that is sustainable and efficient.

Researchers at UC Santa Barbara, the University of Tasmania and the International Atomic Energy Agency examined the current practice of catching wild fish for forage (to feed farmed fish) and concluded that using novel, non-fishmeal feeds could help boost production while treading lightly on marine ecosystems and reserving more of these small, nutritious fish for human consumption.

"The annual catch of wild fish has been static for almost 40 years, but over the same period the production from aquaculture has grown enormously," said Richard Cottrell, lead author of a paper that appears in the journal Nature Food.

Approximately 16 million of the 29 million tonnes of forage fish -- such as herrings, sardines and anchovies -- caught globally each year are currently used for aquaculture feed. To meet the growing demand for fish in a sustainable manner, other types of fish feed must be used, the researchers said.

"We looked at a range of scenarios to predict future aquaculture production and, depending on consumer preferences, we found growth between 37-98% is likely," said Cottrell, a postdoctoral scholar at UCSB's National Center for Ecological Analysis & Synthesis (NCEAS), who conducted this work at the University of Tasmania.

Fortunately, nutritional sources exist that could ease the growing demand for forage fish. Based on microalgae, insect protein and oils, these novel feeds could, in many cases, at least partially substitute for fishmeal and oil in the feeds of many species without negative impacts on feed efficiency or omega-3 profiles.
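
The arithmetic implied by the figures quoted above (16 of the 29 million tonnes of forage fish caught each year go to aquaculture feed) can be sketched directly. The substitution fractions below are hypothetical scenario inputs for illustration, not values from the study:

```python
FORAGE_FOR_FEED_MT = 16.0   # million tonnes/yr of forage fish used as feed
TOTAL_CATCH_MT = 29.0       # million tonnes/yr of forage fish caught globally

def freed_for_food(substitution_fraction):
    """Tonnage redirected to direct human consumption if novel feeds
    replace this fraction of forage-fish-based feed inputs."""
    return FORAGE_FOR_FEED_MT * substitution_fraction

for frac in (0.25, 0.50, 0.75):
    freed = freed_for_food(frac)
    feed_share = (FORAGE_FOR_FEED_MT - freed) / TOTAL_CATCH_MT
    print(f"{frac:.0%} substitution frees {freed:.1f} Mt; "
          f"feed then takes {feed_share:.0%} of the catch")
```

Even the modest 25% scenario would, on these numbers, free several million tonnes of small pelagic fish per year, which is the sense in which "even limited adoption" matters.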

"Previous work has identified that species such as carps and tilapias respond well, although others such as salmon are still more dependent on fish-based feeds to maintain growth and support metabolism," said Cottrell, who with his colleagues analyzed results from 264 scientific studies of farmed fish feeding experiments. As the nutrition and the manufacturing technologies improve for these novel feeds, they could allow for substantial reductions in the demand for wild-caught fishmeal in the future, he added.

"Even limited adoption of novel fish feeds could help to ensure that this growth (in aquaculture production) is achieved sustainably," Cottrell said, "which will be increasingly important for food security as the global population continues to rise."

As we lean more on ocean-based food, the practices in place for producing it must come under scrutiny, and be improved where possible, according to UCSB marine ecologist and co-author Ben Halpern, director of NCEAS.

"Sorting out these questions about feed limitations and opportunities is nothing short of essential for the sector, and ultimately the planet," he said. " Without sustainable feed alternatives, we will not be able to sustainably feed humanity in the future."

This study is one of several examinations of the potential for novel feed ingredients to replace wild caught forage fish in aquaculture.

"Our future research will continue to look at the wider consequences and trade-offs of shifting toward novel feed ingredients, including assessing the impacts on both marine and terrestrial environments, as well as balancing these with social and economic outcomes," said University of Tasmania associate professor and study co-author Julia Blanchard.

Credit: 
University of California - Santa Barbara

Study: Ancient ocean oxygen levels associated with changing atmospheric carbon dioxide

image: Deep ocean floor sediment cores hold chemical clues to Earth's past.

Image: 
Texas A&M University

Why do carbon dioxide levels in the atmosphere wax and wane in conjunction with the warm and cold periods of Earth's past? Scientists have been trying to answer this question for many years, and thanks to chemical clues left in sediment cores extracted from deep in the ocean floor, they are starting to put together the pieces of that puzzle.

Recent research suggests that there was enhanced storage of respired carbon in the deep ocean when atmospheric carbon dioxide concentrations were lower than today's levels. But new research led by a Texas A&M University scientist has reached back even further, for the first time revealing insights into atmospheric carbon dioxide levels in the 50,000 years before the last ice age.

"One of the biggest unknowns about past climate is the cause of atmospheric carbon dioxide variability over global warm-cold cycles," said Franco Marcantonio, lead author of the study and professor and Jane and Ken R. Williams '45 Chair in the Department of Geology and Geophysics at Texas A&M. "Here we investigated the 'how' of varying carbon dioxide with the 'where' -- namely, the Eastern Equatorial Pacific Ocean, which is an important region of the world ocean where, today, significant carbon dioxide is exhaled into the atmosphere and the greatest rates phytoplankton growth are found."

The National Science Foundation-funded research was recently published in Scientific Reports, a Nature Research journal.

To examine ancient carbon dioxide levels, Marcantonio and a team of researchers analyzed an ocean floor sediment core extracted from the deep Eastern Equatorial Pacific Ocean. The 10-meter-long core spans about 180,000 years, and the chemistry of its sediment layers provides scientists with a window into past climates. The chemical measurements serve as a proxy for oxygen levels of the deep sea.
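
A quick back-of-the-envelope check on what those numbers imply for temporal resolution, assuming (as a simplification) uniform sedimentation over the record:

```python
core_length_cm = 1000.0   # the 10-meter core, in centimeters
core_span_kyr = 180.0     # ~180,000 years of record

# Average accumulation rate: centimeters of sediment per thousand years.
rate_cm_per_kyr = core_length_cm / core_span_kyr
print(f"average sedimentation rate: {rate_cm_per_kyr:.1f} cm per 1,000 years")
```

At roughly 5-6 cm per thousand years, a centimeter of sediment averages a couple of centuries of deposition, which is why such cores can resolve glacial-interglacial cycles but not decadal events.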

Measuring minute traces of uranium and thorium isotopes, the team was able to associate periods of increased storage of respired carbon (and low deep-sea oxygen levels) with periods of decreased global atmospheric carbon dioxide levels during the past 70,000 years.

"By comparing our high-resolution sediment record of deep-sea oxygenation in the Eastern Equatorial Pacific with other areas of the Pacific and Southern Ocean, we find that the Pacific Ocean, like the Southern Ocean, is a location for deep-ocean respired carbon storage during periods of decreased global atmospheric CO2 concentrations," he said. "Importantly, we put constraints on the location in the water column of the extent of the respired stored carbon pool during cold periods.

"Understanding the past dynamics of Earth's carbon cycle is of fundamental importance to informing and guiding societal policy-making in a warming world with increasing levels of atmospheric carbon dioxide."

Co-authors of the study were Ryan Hostak, a former Texas A&M graduate student who earned his master's degree in geology in 2019; Jennifer E. Hertzberg, who received her Ph.D. in oceanography from Texas A&M in 2015 and is now a postdoctoral researcher in the Department of Earth, Ocean and Atmospheric Sciences at Old Dominion University; and Matthew W. Schmidt, associate professor of Ocean, Earth and Atmospheric Sciences at Old Dominion. Marcantonio and his colleagues designed the study, he and Hostak performed the isotope analyses, and the team interpreted the data.

"By performing similar studies in sediment covering a wider swath of the deep Pacific Ocean, we'll be able to spatially map the extent of this past deep pool of respired carbon," Marcantonio said, looking forward to future research.

Credit: 
Texas A&M University

4D electric circuit network with topology

image: (a) The 4D circuit lattice realized on a 2D plane. A pair of Weyl points with the same chirality are localized on the three-dimensional boundary. (b) The bulk band structures and the boundary Weyl states (red lines). (c) Schematic of the chirality of Weyl states.

Image: 
©Science China Press

In recent years, topology has emerged as an important tool to classify and characterize properties of materials. It has been found that many materials exhibit a number of unusual topological properties, which are unaffected by deformations, e.g., stretching, compressing, or twisting. These topological properties include quantized Hall currents, large magnetoresistance, and surface excitations that are immune to disorder. It is hoped that these properties could be utilized for future technologies, such as low-power electronics, ultrafast detectors, high-efficiency energy converters, or quantum computing.

More recently, topology has also been applied to synthetic materials, e.g., photonic crystals or networks of electric circuits. These synthetic materials have several benefits compared to their natural counterparts. For example, the topology of their excitations (i.e., their excitation bands) can be precisely controlled and manipulated. In addition, due to their long-ranged lattice connectivity, synthetic materials can realize topological excitations in dimensions greater than three. Hence, synthetic materials, and in particular electric circuit networks, offer the possibility to realize a number of interesting topological properties that are not accessible in real materials.

Rui Yu from Wuhan University, Yuxin Zhao from Nanjing University, and Andreas Schnyder from the Max Planck Institute in Stuttgart have now demonstrated this potential by explicitly constructing an electric circuit network that simulates a four-dimensional (4D) topological insulator with a classical time-reversal symmetry [Fig. 1(a)]. Topological insulators are materials which are insulating in the bulk volume but highly conducting at the surface, due to gapless surface excitations. Similarly, the simulated 4D topological insulator has an excitation gap in the bulk volume, within which there exists a pair of surface excitations [Fig. 1(b)]. These 3D surface excitations have a linear dispersion and, more interestingly, are of Weyl type with the same handedness, i.e., they have internal degrees of freedom that spin following the same left- or right-handed rule with respect to their propagation direction [Fig. 1(c)]. They are of topological origin and are unlike any surface excitation found in conventional materials. Topology dictates that these 3D Weyl excitations must come in pairs and that they are robust to disorder and deformations. The authors have performed detailed numerical simulations of the topological circuit network and have shown that the 3D Weyl excitations can be readily observed in frequency-dependent measurements.

The authors' work demonstrates that topological excitations can be easily realized on commercially available circuit boards or integrated-circuit wafers composed of inductors and capacitors. It paves the way for realizing arbitrary types of topological surface excitations, for example, so-called Dirac or Majorana excitations of dimension two, three, or even higher. The electric-circuit implementation of topological excitations has the advantage of being simple, easily reconfigurable, and allowing a high degree of control. This will make it possible to study in the future topological phase transitions, non-linear effects, out-of-equilibrium phenomena, and quantum open systems (e.g., non-Hermitian systems).

Credit: 
Science China Press

2D molecular crystals modulating electronic properties of organic semiconductors

image: Schematics of the bottom gate, top contact OFETs based on 1D/2D composite single crystals and schematic diagram of I-V curves before and after doping.

Image: 
©Science China Press

Organic field-effect transistors (OFETs) are the heart of plastic electronics. Doping has proven to be an effective way to improve the performance of OFETs. There are two major ways of doping organic semiconductors (OSCs).

The first strategy is bulk doping. Bulk doping involves the solution phase blending or vapor phase co-deposition of the dopants with the host OSCs. However, bulk doping introduces structural defects and energetic disorders in the host material, which reduces the mobility of the organic semiconductors.

The second strategy is surface doping. Surface doping is achieved by depositing dopants on the surface of the host OSC. Compared with bulk doping, the dopants are not incorporated into the lattice of the host, and thus the structural defects and energetic disorders induced by common bulk doping are avoided.

As a result, surface doping is considered a useful strategy for the nondestructive doping of OSCs. Up to now, dopants of various structures have been applied in surface doping. However, most of the dopants are polycrystalline thin films whose thicknesses are not well controlled; therefore, the performance improvements of OSCs are restricted.

Two-dimensional molecular crystals (2DMCs) are periodically arranged monolayer or few-layered organic molecules held together by weak interactions (e.g., hydrogen bonds, π-π interactions, van der Waals forces) in a 2D plane. They are continuous ultrathin films with long-range order.

Moreover, the thickness of the 2DMCs can be tuned at monolayer level, enabling highly controllable doping of OSCs at mono-layer precision. As a result, 2DMCs are potentially favorable materials as dopants for surface doping.

Very recently, Dr. Rongjin Li and colleagues at Tianjin University reported a highly effective and highly controllable surface doping strategy based on 1D/2D composite single crystals to boost the mobility and modulate the threshold voltage of OFETs.

Taking advantage of the molecular scale thickness of the 2DMC dopants, tight attachment to the surface of the host OSCs is ensured and efficient doping is achieved. More importantly, the molecular scale thickness of the 2DMC with controllable layers ensures precise doping of the host material at monolayer precision.

In their study, 1D organic single-crystalline microribbons of TIPS-pentacene were adopted as an example OSC. Compared with the pristine material, the average mobility of OFETs based on 1D/2D composite single crystals increased from 1.31 cm2/(V·s) to 4.71 cm2/(V·s), an increase of 260%. Meanwhile, a substantial reduction of the threshold voltage from -18.5 V to -1.8 V was achieved. The maximum mobility of 5.63 cm2/(V·s) was higher than the vast majority of mobilities reported for TIPS-pentacene to date. Moreover, high on/off ratios of up to 10^8 were retained. Surface doping by 2DMCs thus provides a highly efficient and highly controllable strategy to modulate the optoelectronic properties of OSCs for various applications.
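
The quoted percentage is easy to verify from the two mobility values (a quick sanity check, not code from the paper):

```python
mu_before = 1.31   # cm2/(V*s), average mobility of pristine TIPS-pentacene OFETs
mu_after = 4.71    # cm2/(V*s), average mobility of 1D/2D composite devices

# Relative increase, expressed as a percentage of the original value.
pct_increase = (mu_after - mu_before) / mu_before * 100
print(f"mobility increase: {pct_increase:.0f}%")
```

The ratio 4.71/1.31 is about 3.6, i.e., a roughly 260% increase over the pristine value, consistent with the figure in the text.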

Credit: 
Science China Press

Molybdenum telluride nanosheets enable selective electrochemical production of hydrogen peroxide

image: (a) SEM image of MoTe2 nanoflakes. (b) (Lower panel) Polarization curves of MoTe2 nanoflakes, bulk MoTe2 powders and graphene nanosheets alone and (upper panel) corresponding ring currents (dash line) and H2O2 percentage (solid line). (c) Derived mass activity of MoTe2 nanoflakes in comparison with those of Pt/Pd-Hg alloys and Au-based catalysts estimated from literature. (d) Polarization curves, ring currents and H2O2 percentage of MoTe2 nanoflakes at the initial state and after certain numbers of cycles during the accelerated durability test.

Image: 
©Science China Press

H2O2 is an important commodity chemical and potential energy carrier, widely used in various environmental, medical and household applications. At present, about 99% of H2O2 is produced via the energy-intensive anthraquinone oxidation process. Centralized production in this way yields highly concentrated H2O2 that often has to be distributed to and diluted at the site of use, bringing additional complexity and challenges. In addition, H2O2 can also be produced from the direct reaction between H2 and O2 in the presence of Pd-based catalysts. The potential explosion hazard of this approach, however, hinders its practical application.

Electrochemical oxygen reduction via a two-electron pathway represents a novel and decentralized strategy to produce H2O2. It relies on the development of active and selective electrocatalysts. The state-of-the-art candidates are Pt-Hg and Pd-Hg alloys. Despite their relatively high mass activity and selectivity in acids, these precious metal alloys are unlikely to be used on a large scale due to their prohibitive costs and toxicity (because of the inclusion of Hg). More recently, carbon-based materials have emerged and demonstrated appreciable activity and selectivity for H2O2 production in alkaline solution. Unfortunately, their potential is also limited, since H2O2 is subject to rapid decomposition in alkaline media. For practical applications, H2O2 is more widely used in acidic media with stronger oxidation ability. As a result, it is highly desirable to pursue high-performance electrocatalysts for selective H2O2 production in acids.

In a new research published in the Beijing-based National Science Review, scientists from Soochow University (Suzhou, China), the University of Chinese Academy of Sciences (Beijing, China), Nanjing Normal University (Nanjing, China) and Trinity College Dublin (Dublin, Ireland) worked together, and reported for the first time that molybdenum telluride (MoTe2) nanoflakes had a remarkable performance for H2O2 production in acids.

MoTe2 nanoflakes were prepared via the well-established liquid phase exfoliation method from bulk MoTe2. X-ray diffraction and Raman analyses evidenced that the product had a hexagonal 2H phase. Scanning electron microscopy and transmission electron microscopy imaging revealed that the exfoliated MoTe2 nanoflakes had a lateral size distribution from 50 to 350 nm. Moreover, the authors used aberration-corrected scanning transmission electron microscopy to elucidate the atomic structure of the MoTe2 nanoflakes, and observed that their exposed edges, though not atomically sharp, were mostly along the zigzag directions, with abundant coordinatively unsaturated Mo and Te sites.

When investigated as the electrocatalyst in O2-saturated 0.5 M H2SO4 solution, MoTe2 nanoflakes mixed with graphene nanosheets exhibited a positive onset potential of 0.56 V versus the reversible hydrogen electrode and outstanding H2O2 selectivity of up to 93%. The mass activity was also calculated by normalizing the catalytic current with respect to the catalyst mass. The authors found that the value was in the range of ~10-10^2 A g-1 between 0.3 and 0.45 V for MoTe2, which, although not as high as that of the state-of-the-art Pt-Hg and Pd-Hg alloys, was superior to Au alloys and carbon-based materials. Prof. Yanguang Li, who led the electrochemical experiments, noted that "the mass activity of exfoliated MoTe2 nanosheets at 0.4 V was 27 A g-1 -- approximately 7-10 times greater than those of Au-Pd alloys and N-doped carbon." In addition to their impressive activity and selectivity, MoTe2 nanoflakes also exhibited decent stability, with negligible performance loss even after the accelerated durability test and an overnight aging experiment.
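
Both figures of merit follow from standard rotating ring-disk electrode (RRDE) relations: the H2O2 percentage is computed from the disk and ring currents and the ring's collection efficiency, and the mass activity is simply the catalytic current per gram of catalyst. The sketch below uses made-up illustrative currents and loadings, not data from the paper:

```python
def h2o2_selectivity(i_disk, i_ring, N):
    """H2O2 percentage from the standard RRDE relation:
    200 * (I_ring/N) / (I_disk + I_ring/N), currents as magnitudes."""
    return 200.0 * (i_ring / N) / (i_disk + i_ring / N)

def mass_activity(current_a, catalyst_mass_g):
    """Catalytic current normalized by catalyst loading, in A/g."""
    return current_a / catalyst_mass_g

# Hypothetical numbers chosen to land near the values quoted in the text.
sel = h2o2_selectivity(i_disk=1.0e-3, i_ring=2.2e-4, N=0.25)
ma = mass_activity(current_a=2.7e-3, catalyst_mass_g=1.0e-4)
print(f"selectivity ~{sel:.0f}%, mass activity {ma:.0f} A/g")
```

The factor of 200 (rather than 100) in the selectivity formula reflects that H2O2 production consumes two electrons per O2 while the competing full reduction to H2O consumes four.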

In order to understand the experimental result, the authors conducted density functional theory calculations to simulate the adsorption energies of key reaction intermediates on the catalyst surface. They found that the zigzag edge of 2H MoTe2 had suitable binding for HOO* and weak binding for O*, and would therefore promote the reduction of O2 to H2O2 but retard its further reduction to H2O. Prof. Yafei Li, who led the theoretical work, said: "MoTe2 was really unique for its capability for two-electron oxygen reduction, which was not found in other transition metal dichalcogenides including MoS2 and MoSe2."

"Our study here unveiled the unexpected potential of MoTe2 nanoflakes as a non-precious metal based electrocatalyst for H2O2 production in acids, and might open a new pathway toward the catalyst design for this challenging electrochemical reaction." Prof. Yanguang Li finally commented on their interesting discovery.

Credit: 
Science China Press

Hierarchical self-assembly of atomically precise nanoclusters

image: Scheme illustrating the 1D-3D assemblies of Ag29(SSR)12 nano-building blocks, including Ag29-0D cluster dots in the presence of PPh3, Ag29-1D linear chains (1D array) in the presence of Cs+ and DMF, Ag29-2D grid networks (2D array) in the presence of Cs+ and NMP, and Ag29-3D superstructures (3D array) in the presence of Cs+ and TMS.

Image: 
©Science China Press

Metal nanoclusters have emerged as a class of modular nanomaterials owing to their atomically precise structures, fascinating properties, and potential applications. Cluster-based supramolecular assembly represents one of the most dynamic areas and has recently emerged as a new "growth point" in nanocluster science. Such assemblies originate from different types of inter-cluster interactions, such as chemical bonding, hydrogen bonding, electrostatic and van der Waals forces, and π···π and C-H···π interactions. On the one hand, these cluster-based aggregates typically display enhanced performance (e.g., stability and fluorescence) relative to their constituent cluster building blocks, owing to the synergy of the cluster-linker-cluster assembly system. On the other hand, the precise structures of nanoclusters allow an atomic-level understanding of inter-cluster interaction modes, and such knowledge further guides the controllable construction of hierarchically assembled cluster-based nanomaterials. However, up to the present, the controllable assembly of cluster nano-building blocks into different arrays remains challenging.

In a new paper published in the National Science Review, scientists at Anhui University (China) and Nanjing University (China) reported the hierarchical self-assembly of atomically precise nanoclusters. On the basis of the Ag29(SSR)12 cluster nano-building framework (where SSR is 1,3-benzenedithiol), Professor Manzhou Zhu and his coworkers selectively constructed cluster-based 1D linear chains, 2D grid networks, and 3D superstructures in the presence of different solvent-conjoined Cs+ cations. Crystal structures of these cluster-based assemblies have been successfully determined. Moreover, the hierarchical self-assembly of these Ag29(SSR)12 nano-building blocks has been observed not only in the crystalline state but also in the amorphous state, with the help of aberration-corrected HAADF-STEM (high-angle annular dark-field scanning transmission electron microscopy).

"Such Ag29-based assemblies manifest distinguishable optical absorptions and emissions in both solutions and crystallized films; such differences originate from their different surface structures and crystalline packing modes," they state, "Furthermore, the surface areas of these cluster-based assemblies are evaluated, the maximum value of which occurs when the cluster nano-building blocks are assembled into 2D arrays. The 2D-array assembly endows the best gas storage capability of these cluster-based frameworks."

"This work presents an exciting example of the hierarchical assembly of atomically precise nanoclusters by simply controlling the adsorbed molecules on the cluster surface," they add, "and we believe that this work will shed light on more future works touching upon the supramolecular chemistry of metal nanoclusters."

Credit: 
Science China Press