Tech

Stress reduction as a path to eating less fast food

COLUMBUS, Ohio - Overweight low-income mothers of young kids ate fewer fast-food meals and high-fat snacks after participating in a study - not because researchers told them what not to eat, but because the lifestyle intervention being evaluated helped lower the moms' stress, research suggests.

The 16-week program was aimed at preventing weight gain by promoting stress management, healthy eating and physical activity. The methods to get there were simple steps tucked into lessons on time management and prioritizing, many demonstrated in a series of videos featuring mothers like those participating in the study.

"We used the women's testimonies in the videos and showed their interactions with their families to raise awareness about stressors. After watching the videos, a lot of intervention participants said, 'This is the first time I've realized I am so stressed out' - because they've lived a stressful life," said Mei-Wei Chang, lead author of the study and associate professor of nursing at The Ohio State University.

"Many of these women are aware of feeling impatient, and having head and neck pain and trouble sleeping - but they don't know those are signs of stress."

An analysis of the study data showed that the women's lowered perceived stress after participating in the intervention was the key factor influencing their eventual decrease in consumption of high-fat and fast foods.

"It's not that these women didn't want to eat healthier," Chang said. "If you don't know how to manage stress, then when you are so stressed out, why would you care about what you eat?"

The research is published in a recent issue of the journal Nutrients.

The 338 participants, overweight or obese moms between the ages of 18 and 39, were recruited from the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), which serves low-income mothers and children up to age 5. Those eligible for the program must have an annual household income no higher than 185 percent of the federal poverty line.

Chang said these women are likely to face a number of challenges that could cause them stress: financial difficulties, living in run-down neighborhoods, frequent moves, unstable romantic relationships and households bustling with little kids. It's also common for this population to retain 10 or more pounds of pregnancy weight after childbirth, risking life-long obesity and potential problems for themselves and their new babies if they become pregnant again.

During the trial, the 212 participants randomized into the intervention group watched a total of 10 videos in which women like them gave unscripted testimonials about healthy eating and food preparation, managing their stress and being physically active. Participants also dialed in to 10 peer support group teleconferences over the course of the study.

Chang and colleagues previously reported that as a group, the women in the intervention arm of the study were more likely to have reduced their fat consumption than women in a comparison group who were given print materials about lifestyle change.

This newer analysis showed that the intervention's lessons alone did not directly affect that change in diet. When the researchers assessed the potential role of stress as a mediator, the indirect effect of the intervention - reducing participants' perceived stress - was associated with less consumption of high-fat foods, including fast food. A 1-point reduction in the scale measuring stress was linked to a nearly 7% reduction in how frequently the women ate high-fat foods.
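
For readers less familiar with mediation analysis, the usual product-of-coefficients decomposition (a general sketch of the framework, not necessarily the exact model fitted in the Nutrients paper) separates the intervention's total effect on diet into a direct path and a stress-mediated path:

```latex
c \;=\; c' \;+\; a\,b
% c  : total effect of the intervention on high-fat/fast-food intake
% c' : direct effect (intervention -> diet), reported in the release as not significant
% a  : effect of the intervention on perceived stress
% b  : effect of perceived stress on high-fat/fast-food intake
```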

The intervention focused on showing the women examples of how they could achieve a healthier and less stressful lifestyle rather than telling them what they had to change.

"I learned a lot from those women," Chang said. "Everything needs to be practical and applicable to daily life - anytime, anywhere."

Some examples: Comparing a bag of chips to a bag of apples - the chips might be half the price, but they supply far fewer family snacks. Or using a household responsibility chart to assign tasks to young children, and encouraging moms to reward kids with a hug or individual attention when they follow the instructions. And taking deep breaths to counter the feeling of being overwhelmed.

When it came to stress management, the researchers focused on advising the women to shift their thinking, and not to blame themselves when things go wrong, rather than to take on solving the problems that caused them stress.

"We raised their awareness about stressors in their lives, and unfortunately a lot of these problems are not within their control," Chang said. "So we teach them ways to control their negative emotions - remember that this is temporary, and you can get through it. And give them confidence to look to the future."

Credit: 
Ohio State University

A touch of silver

image: Power transmitted through the conductive silver-hydrogel composite actuated the shape-memory alloy muscle of this stingray-inspired soft swimmer.

Image: 
Soft Machines Lab, College of Engineering, Carnegie Mellon University

In the field of robotics, metals offer advantages like strength, durability, and electrical conductivity. But they are heavy and rigid--properties that are undesirable in soft and flexible systems for wearable computing and human-machine interfaces.

Hydrogels, on the other hand, are lightweight, stretchable, and biocompatible, making them excellent materials for contact lenses and tissue engineering scaffolding. They are, however, poor at conducting electricity, which is needed for digital circuits and bioelectronics applications.

Researchers in Carnegie Mellon University's Soft Machines Lab have developed a unique silver-hydrogel composite that has high electrical conductivity and is capable of delivering direct current while maintaining soft compliance and deformability. The findings were published in Nature Electronics.

The team suspended micrometer-sized silver flakes in a polyacrylamide-alginate hydrogel matrix. After going through a partial dehydration process, the flakes formed percolating networks that were electrically conductive and robust to mechanical deformations. By manipulating this dehydration and hydration process, the flakes can be made to stick together or break apart, forming reversible electrical connections.

Previous attempts to combine metals and hydrogels revealed a trade-off between improved electrical conductivity and lowered compliance and deformability. Carmel Majidi and his team sought to tackle this challenge, building on their expertise in developing stretchable, conductive elastomers with liquid metal.

"With its high electrical conductivity and high compliance or 'squishiness,' this new composite can have many applications in bioelectronics and beyond," explained Carmel Majidi, professor of mechanical engineering. "Examples include a sticker for the brain that has sensors for signal processing, a wearable energy generation device to power electronics, and stretchable displays."

The silver-hydrogel composite can be printed by standard methods like stencil lithography, similar to screen printing. The researchers used this technique to develop skin-mounted electrodes for neuromuscular electrical stimulation. According to Majidi, the composite could cover a large area of the human body, "like a second layer of nervous tissue over your skin."

Future applications could include treating muscular disorders and motor disabilities, such as assisting someone with tremors from Parkinson's disease or difficulty grasping something with their fingers after a stroke.

Credit: 
College of Engineering, Carnegie Mellon University

Climate change damaging North America's largest temperate rainforest, harming salmon

New research published in BioScience found that a remote region of North America's largest temperate rainforest is experiencing changes to its ecosystem due to climate change. Brian Buma, a researcher and professor of integrative biology at the University of Colorado Denver, co-leads the research network that outlined the changes in a new paper.

North America's largest remaining temperate rainforest, located in Southeast Alaska, is one of the most pristine and intact ecosystems. The entire ecosystem stretches well over 2,000 km from north to south and stores more carbon in its forests than any other.

The region can store more than 1,000 tons per hectare of carbon in biomass and soil. Although the area is extremely remote, researchers say it is not immune from the negative impacts of climate change. Glaciers are disappearing faster than most other places on Earth and winter snows are turning into winter rains. This is leading to a change in stream temperatures, which can harm salmon, and changes in ground temperatures, causing the death of forests.

"This is an incredible landscape in a relatively compact area we have as much biomass carbon as 8% of the lower 48 states put together," said Buma. "The 200-foot trees, the deep soils--it's just layers and layers of life. And that land is so intertwined with the water that any change in one means massive change in the other, downstream and into the ocean."

Why is this important? Forests absorb more carbon than they release. Trees absorb carbon during photosynthesis, removing large amounts of carbon from the atmosphere. Since the forest is growing faster as the climate warms, a lot of that carbon "leaks" out through the creeks and rivers. This carbon powers downstream and marine ecosystems, which thrive on the flow of energy off the land.

"This region is immensely important to global carbon cycles and our national carbon strategy, but we still don't know the direction overall carbon stocks and movement will take as the world warms," said Buma. "While there is ample research identifying how important this area is, more work is needed to determine where this large reservoir will trend in the future."

Credit: 
University of Colorado Denver

Ideas for future NASA missions searching for extraterrestrial civilizations

image: Artistic recreation of a hypothetical exoplanet with artificial lights on the night side

Image: 
Rafael Luis Méndez Peña/Sciworthy.com

A researcher at the Instituto de Astrofísica de Canarias (IAC) is the lead author of a study with proposals for "technosignatures" - evidence for the use of technology or industrial activity in other parts of the Universe - for future NASA missions. The article, published in the specialized journal Acta Astronautica, contains the initial conclusions of a meeting of experts in the search for intelligent extraterrestrial life, sponsored by the space agency to gather advice about this topic.

In the article, several ideas are presented to search for technosignatures that would indicate the existence of extraterrestrial civilizations, from the most humdrum, such as the presence of industrial pollution in the atmosphere or large swarms of satellites, to hypothetical gigantic works of space engineering, such as heat shields to fend off climate change, or Dyson spheres that make optimum use of the light from the local star. Some of the proposed searches look very far out in space, across our galaxy and even beyond, while others aim at scanning our own solar system for probes that might have been sent here in the distant past. In addition, the study includes a new way of classifying technosignatures as a function of their "cosmic footprint", a measure of how conspicuous they are at large distances.

"We have no idea whether intelligence is something very common in the Universe or, on the contrary, whether it is extremely rare", explains Hector Socas-Navarro, an IAC researcher, the Director of the Museum of Science and the Cosmos, of Museums of Tenerife, and the first author of the article. "For that reason we cannot know whether these searches have any chance of success. There is no choice but to search and see what we find, because the implications would be tremendous".

"The idea of searching for technosignatures draws upon the technology we have on Earth today and possible extensions of our technology into the future", notes Jacob Haqq-Misra, a coauthor of the article and chairman of the TechnoClimes 2020 organizing committee. "This does not necessarily mean that any extraterrestrial technology must be like our own, but imagining plausible extensions of our own future is one place to begin thinking of astronomical searches we could actually do to look for possible technosignatures".

The search for technosignatures

In 1993, NASA abruptly terminated its initial SETI programme for the search for intelligent extraterrestrial life when it had hardly started. It comprised two ambitious, complementary projects, one using the giant radio telescope at Arecibo, Puerto Rico, and the other the antennas of the Deep Space Network in California. Now, nearly 30 years later, things have changed and the agency wants to restart its search effort.

In the past decade, great advances in astronomical instrumentation have led to a revolution in the discovery and study of exoplanets. New telescopes and future space missions will for the first time allow the search for so-called biomarkers, evidence for life on other planets. Many experts consider it plausible that extraterrestrial life will be discovered in the coming years, even though it is most likely to be life in a very simple form.

Given present and future technological advances, there will be new opportunities to search for technosignatures. That is why NASA has decided to get involved again in the search for extraterrestrial intelligence, taking advantage of the possibilities of current and proposed future space observatories.

These subjects, among others, were on the agenda of the TechnoClimes 2020 meeting, held under the auspices of NASA at the Blue Marble Space Institute of Science (Seattle, USA). Bringing together scientists from all over the world, its aim was to propose new developments that pave the way for future advances.

Due to the COVID-19 pandemic, the meeting was ultimately held via videoconference, in which 53 researchers from various disciplines and 13 countries discussed a range of aspects of the search for other intelligent species.

Credit: 
Instituto de Astrofísica de Canarias (IAC)

Unfavorable weather conditions were the main cause of the fog-haze events over the Beijing-Tianjin-Hebei region during the COVID-19 lockdown

image: Aerosol, emission and meteorology

Image: 
GAO Yi

At the end of December 2019, Coronavirus Disease 2019 (COVID-19) quickly spread throughout Hubei Province and other parts of China. During the 2020 Spring Festival, public activities were cancelled, people tried their best to stay at home, and human and industrial activities were reduced to a basic or minimum level. However, during this period, severe fog-haze events occurred over the North China Plain. What was the leading factor that caused these severe smog incidents? And what were the individual impacts of meteorological conditions and emission reductions?

To evaluate the impacts of meteorological conditions and emission reduction measures on near-surface PM2.5 (fine particulate matter) during the COVID-19 lockdown, Professor Zhang Meigen and his team from the Institute of Atmospheric Physics, Chinese Academy of Sciences, carried out three numerical experiments with different meteorological fields and emission sources using a coupled meteorology and aerosol/chemistry model (WRF-Chem). The findings have recently been published in Atmospheric and Oceanic Science Letters.

The study found that, compared with the same period in 2019, the PM2.5 concentration in the Beijing-Tianjin-Hebei region increased by 50-70 μg/m³ from 7 to 14 February 2020, during which time the daily average PM2.5 concentration in Beijing reached 175 μg/m³. Sensitivity tests showed that the main cause was that the increase in PM2.5 driven by meteorological conditions outweighed the decrease in PM2.5 caused by emission reductions.
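
The release does not spell out the exact run configuration, but a standard way to read such a three-experiment design is to difference paired simulations, roughly as follows (the labels below are illustrative, not the paper's notation):

```latex
\Delta \mathrm{PM}_{2.5}^{\mathrm{met}}  \approx \mathrm{PM}_{2.5}(\mathrm{met}_{2020},\,\mathrm{emis}_{\mathrm{baseline}}) - \mathrm{PM}_{2.5}(\mathrm{met}_{2019},\,\mathrm{emis}_{\mathrm{baseline}})
\Delta \mathrm{PM}_{2.5}^{\mathrm{emis}} \approx \mathrm{PM}_{2.5}(\mathrm{met}_{2020},\,\mathrm{emis}_{\mathrm{reduced}}) - \mathrm{PM}_{2.5}(\mathrm{met}_{2020},\,\mathrm{emis}_{\mathrm{baseline}})
```

The reported outcome corresponds to the meteorology-driven increase exceeding the magnitude of the emission-driven decrease over the Beijing-Tianjin-Hebei region.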

"Higher temperatures and relative humidity usually hasten the formation of secondary aerosols by accelerating chemical reactions", explains Prof. Zhang. "Meanwhile, the lower wind speed in the Beijing-Tianjin-Hebei region inhibits the diffusion of air pollutants and the lower planetary boundary layer height enhances atmospheric stability. These unfavorable meteorological conditions led to these haze events in the Beijing-Tianjin-Hebei region."

Therefore, it is necessary to consider meteorological conditions when assessing the effectiveness of emission control policies on changes in air pollutants. Doing so is likely to be very helpful for the formulation of future air pollution reduction policies.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Adaptive microelectronics reshape independently and detect environment for first time

image: Thanks to sensors and artificial muscles on the microscale, future microelectronics will be able to take on complex shapes and create bioneural interfaces with sensitive biological tissue without causing damage.

Image: 
IFW Dresden/Chemnitz University of Technology

Flexible and adaptive microelectronics are considered an innovation driver for new and more effective biomedical applications. These include, for example, the treatment of damaged nerve bundles and chronic pain, or the control of artificial limbs. For this to work, close contact between the electronics and neural tissue is essential for effective electrical and mechanical coupling. In addition, potential applications arise from the production of tiny, flexible surgical tools.

An international team led by Prof. Dr. Oliver G. Schmidt, head of the Institute for Integrative Nanosciences at the Leibniz Institute for Solid State and Materials Research (IFW) Dresden, holder of the Professorship of Materials for Nanoelectronics at Chemnitz University of Technology and initiator of the Center for Materials, Architectures and Integration of Nanomembranes (MAIN), together with Boris Rivkin, a PhD student in Prof. Schmidt's group, has now demonstrated for the first time that such adaptive microelectronics can position themselves in a controlled manner, manipulate biological tissue, and respond to their environment by analyzing sensor signals. The results, with Rivkin as first author, have appeared in the journal "Advanced Intelligent Systems".

Different properties for dynamic processes combined for the first time in adaptive microelectronics

Until now, it has not been possible for microelectronic structures to both sense and adapt to their environment. Although there are structures with strain sensors that monitor their own shape, microelectronics with magnetic sensors that orient themselves in space, and devices whose motion can be controlled by electroactive polymer structures, a combination of these properties for application in a dynamically changing organism at the micrometer scale, i.e. well below a millimeter, had not been reported so far.

Adaptive and intelligent microelectronics

At the heart of these applications is a polymer film, just 0.5 mm wide and 0.35 mm long, which acts as a carrier for the microelectronic components. By comparison, a 1-cent piece has a diameter of around 16 mm. In their publication, the team from Chemnitz University of Technology and the Leibniz IFW in Dresden now presents adaptive and intelligent microelectronics that use microscopic artificial muscles to reshape and adapt to dynamic environments thanks to the feedback of appropriate sensors.

The sensor signals are fed through electrical connections to a microcontroller, where they are evaluated and used to generate control signals for the artificial muscles. This allows these miniature tools to adapt to complex and unpredictable anatomical shapes. Nerve bundles, for example, always differ in size; adaptive microelectronics can gently enclose them to establish a suitable bioneural interface.
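
A minimal sketch of the sense-evaluate-actuate loop described above (illustrative only; the function, numbers, and gains are placeholders of ours, not the team's firmware):

```python
# Minimal sketch: a proportional feedback loop in which a simulated strain-sensor
# reading is compared with a target curvature, and the error sets the sub-1-volt
# bias driving the polypyrrole (PPy) artificial muscle.
def simulate_adaptive_film(target_strain: float, steps: int = 50) -> float:
    strain = 0.0          # current bending strain reported by the (simulated) sensor
    gain = 0.5            # proportional controller gain (placeholder value)
    responsiveness = 0.2  # how strongly the actuator responds to the applied bias
    for _ in range(steps):
        error = target_strain - strain            # sensor feedback evaluated by the controller
        bias = max(-1.0, min(1.0, gain * error))  # keep the actuation bias below ~1 V
        strain += responsiveness * bias           # actuator reshapes the polymer film
    return strain

print(simulate_adaptive_film(0.8))  # the film settles toward the target curvature
```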

Essential for this is the integration of shape or position sensors in combination with microactuators. Adaptive microelectronics are therefore manufactured in a so-called "monolithic wafer-scale process." "Wafers" are flat substrates made of silicon or glass on which the circuits are manufactured. Monolithic production allows many components to be fabricated simultaneously, in parallel, on one substrate. This enables fast and, at the same time, more cost-effective production.

Artificial muscles generate movement - use in organic environment possible

The movement and reshaping of adaptive microelectronics is achieved by means of artificial muscles, the so-called "actuators". These generate movement by ejecting or absorbing ions and can thus reshape the polymer film.

This process is based on the use of the polymer polypyrrole (PPy). The advantage of this method is that manipulation of the shape can be carried out in a targeted manner and with already very low electrical bias (less than one volt). The fact that artificial muscles are also safe for use in organic environments has already been demonstrated by other groups in the past. This involved testing the performance of the micromachines in various environments relevant to medical applications, including cerebrospinal fluid, blood, plasma, and urine.

Going for even more complex microelectronic robots in the future

The team from Dresden and Chemnitz expects that adaptive and intelligent microelectronics will be developed into complex robotic microsystems in the medium term. Boris Rivkin says: "The crucial next step is the transition from the previously flat architecture to three-dimensional micro-robots. Previous work has demonstrated how flat polymer films can reshape into three-dimensional structures through self-organized folding or rolling. We will add adaptive electronics to such materials to develop systems such as robotic micro-catheters, tiny robotic arms, and malleable neural implants that act semi-autonomously following a digital instruction."

Dr. Daniil Karnaushenko, group leader in Prof. Oliver Schmidt's team, adds, "Such complex microrobots will require a large number of individual actuators and sensors. Accommodating and operating electronic components effectively at such a density is a challenge, because more electrical connections are needed than there is space available. This will be solved by complex electronic circuits that will be integrated into adaptive microelectronics in the future to pass the appropriate instructions through to the right components."

This work also contributes to the emerging field of robot-assisted surgery, which could enable less invasive yet more precise procedures. Smart surgical tools that generate reliable feedback about their shape and position could become indispensable in treating delicate tissue.

Credit: 
Chemnitz University of Technology

Learning to help the adaptive immune system

image: Researchers at The University of Tokyo use the mathematics of adaptive learning and artificial intelligence to describe how T helper cells adjust the response of the vertebrate immune system, which may lead to new vaccines and treatments for infections

Image: 
Institute of Industrial Science, the University of Tokyo

Tokyo, Japan - Scientists from the Institute of Industrial Science at The University of Tokyo demonstrated how the adaptive immune system uses a method similar to reinforcement learning to control the immune reaction to repeat infections. This work may lead to significant improvements in vaccine development and interventions to boost the immune system.

In the human body, the adaptive immune system fights germs by remembering previous infections so it can respond quickly if the same pathogens return. This complex process depends on the cooperation of many cell types. Among these are T helpers, which assist by coordinating the response of other parts of the immune system--called effector cells--such as killer T cells and B cells. When an invading pathogen is detected, antigen-presenting cells bring an identifying piece of the germ to a T cell. Certain T cells become activated and multiply many times in a process known as clonal selection. These clones then marshal a particular set of effector cells to battle the germs. Although the immune system has been extensively studied for decades, the "algorithm" used by T cells to optimize the response to threats is largely unknown.

Now, scientists at The University of Tokyo have used an artificial intelligence framework to show that the abundances of T helper cells act like the "hidden layer" between inputs and outputs in an artificial neural network commonly used in adaptive learning. In this case, the antigens presented are the inputs, and the responding effector immune cells are the output.

"Just as a neural network can be trained in machine learning, we believe the immune network can reflect associations between antigen patterns and the effective responses to pathogens," first author Takuya Kato says.

The main difference between the adaptive immune system and machine learning on a computer is that only the abundance of each type of T helper cell can be varied, as opposed to the connection weights between nodes in each layer. The team used computer simulations to predict the distribution of T cell abundances after adaptive learning. These values agreed with experimental data based on the genetic sequencing of actual T helper cells.
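
A toy numerical sketch of the analogy, under our illustrative assumption that the antigen-to-clone and clone-to-effector "wiring" is fixed and only the clone abundances are tuned (this is not the authors' model, just a way to see the idea):

```python
import numpy as np

rng = np.random.default_rng(0)

n_antigens, n_clones, n_effectors = 4, 6, 3
# Fixed "wiring": which antigens each T helper clone recognizes, and which
# effector cells it stimulates. Only the clone abundances are adjusted.
recognition = rng.random((n_clones, n_antigens))
stimulation = rng.random((n_effectors, n_clones))
abundance = np.ones(n_clones)  # the trainable "hidden layer"

def immune_response(antigen: np.ndarray) -> np.ndarray:
    helper_activity = abundance * (recognition @ antigen)
    return stimulation @ helper_activity

# Clonal-selection-like update: clones whose activity pushes the output toward
# the desired effector response expand, the rest shrink (a crude gradient-style
# rule for illustration, not the algorithm in the paper).
antigen = rng.random(n_antigens)
desired = np.array([1.0, 0.0, 0.5])
for _ in range(200):
    error = desired - immune_response(antigen)
    gradient = (stimulation.T @ error) * (recognition @ antigen)
    abundance = np.clip(abundance + 0.01 * gradient, 0.0, None)  # abundances stay non-negative

print(np.round(immune_response(antigen), 2), desired)  # response moves toward the desired pattern
```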

"Our theoretical framework may completely change our understanding of adaptive immunity as a real learning system," says co-author Tetsuya Kobayashi. "This research can shed light on other complex adaptive systems, as well as ways to optimize vaccines to evoke a stronger immune response."

Credit: 
Institute of Industrial Science, The University of Tokyo

Sonic Dirac points and the transition towards Weyl points

image: A 3D Dirac sonic crystal and its transition to a Weyl sonic crystal. (a) Photograph of the 3D Dirac sonic crystal. (b, c) Geometry of the unit cells for (b) the 3D Dirac sonic crystal and (c) the Weyl sonic crystal with additional chiral hopping tubes; the insets show top-view images of the two crystals, with one unit cell outlined by a red hexagon in each. (d) The Brillouin zone (BZ) of the 3D Dirac sonic crystal. (e) The band structure of the 3D Dirac sonic crystal, where a Dirac point (DP) is located. (f) Tight-binding model of the 3D Dirac sonic crystal, including nearest-neighbour hopping and next-nearest-neighbour hopping. (g) The BZ of the Weyl sonic crystal, where each 3D Dirac point splits into a pair of Weyl points, WP1 and WP2, with charges of +1 and -1, respectively. (h) The band structure of the Weyl sonic crystal; states with positive and negative orbital angular momentum (OAM) are shown as red and blue lines, respectively. (i) Additional chiral interlayer coupling in the Weyl sonic crystal; the arrows indicate the direction of positive phase hopping.

Image: 
by Boyang Xie, Hui Liu, Hua Cheng, Zhengyou Liu, Jianguo Tian, and Shuqi Chen

Recently, three-dimensional (3D) Dirac points and 3D Dirac semimetals have attracted tremendous attention in the field of topological physics. A 3D Dirac point is a fourfold band crossing in 3D momentum space, which can be viewed as the degeneracy of two opposite Weyl points. However, 3D Dirac points are described by the Z2 topological invariant rather than the Chern number, so their topological properties are not identical to those of Weyl points. Moreover, the transition from Dirac points to Weyl points had not yet been studied experimentally in either photonic or acoustic systems. A theoretical and experimental breakthrough on 3D Dirac points, and a study of their transition, is therefore of great significance for further research and applications.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Shuqi Chen from The Key Laboratory of Weak Light Nonlinear Photonics, School of Physics, Nankai University, China, and co-workers have theoretically and experimentally realized a pair of class I acoustic 3D Dirac points in a hexagonal sonic crystal and demonstrated how the exotic features of the surface states and interface states evolve in the transition towards Weyl points. The transition from two Dirac points to two pairs of Weyl points is realized by introducing chiral hopping into the Dirac sonic crystal. Correspondingly, the surface-state dispersion evolves from connecting Dirac points to connecting Weyl points. Pseudospin-polarized helical states, which link the two Dirac points in momentum space, are created through a particular interface design using sublattice pseudospin inversion.

The Dirac and Weyl sonic crystals were fabricated by 3D printing using a layer-stacking strategy. Both the bulk and surface band structures were obtained, revealing the topological features of the surface and interface states. The experimental results are consistent with the simulations. The scientists summarize the principal results:

"We study the 3D Dirac sonic crystal for three purposes: (1) to realize a 3D Dirac point with band inversion in acoustics; (2) to realize the pseudospin-polarized interface states in acoustic semimetals; and (3) to experimentally study how the surface states act in the transition from Dirac points to Weyl points."

"The helical states from 3D Dirac sonic crystal can be inherited by the Weyl sonic crystal, while more exotic interface states can arise with the chirality inversion." they added.

"The presented pseudospin-polarized interface states and the chiral interface states correspond to different regions in momentum space and different frequencies, which may further inspire the design of topological devices using both kinds of interface states. We hope our work will inspire the design of weak topological insulators, the realization of acoustic hinge states, and the design of other 3D topological devices." the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

A safer MRI contrast agent for high-resolution 3D microvascular imaging

image: (a) High-resolution 3D MRI brain vascular map using SAIO. Various blood vessels in the rat brain can be visualized. In addition to clinically important brain vessels, including the anterior, middle, and posterior cerebral arteries, fine vessel structures as thin as ~100 microns in diameter can be clearly seen. (b) 3D MRI brain vascular map using Dotarem (a gadolinium-based contrast agent); resolution is low and microvessels are not visible. (c) For 3D reconstruction, multiple slices of MRI images of the rat heart were obtained after injection of SAIO. Rat coronary arteries (180 microns), which are 10 times thinner than human coronary arteries, were successfully visualized. Coronary arteries supply blood to the heart muscle, and their blockage or narrowing causes myocardial infarction or angina, but visualizing these arteries had been impossible due to the limited resolution of current MRI contrast agents.

Image: 
Institute for Basic Science

Heart attack and stroke are the first and second leading causes of death in developed countries, respectively. As these diseases often result in sudden death with few prognostic symptoms, early diagnosis is very important. For this purpose, imaging techniques such as magnetic resonance imaging (MRI) are widely used to identify the narrowing or blockage of blood vessels.

In MRI, contrast agents improve the visibility of structures such as smaller blood vessels within the body. Just as satellites or global positioning systems (GPS) give traffic congestion information, MRI contrast agents can give accurate information about vascular conditions such as blockage and stenosis. The most commonly used products are gadolinium-based contrast agents. Free gadolinium ions are highly toxic, so they must be administered in chelated forms. Even then, there is some risk of nephrogenic systemic fibrosis for patients with poor kidney function. There have been attempts to use paramagnetic iron-based nanoparticles as contrast agents, but their renal clearance profile must be improved to prevent undesirable accumulation in the liver and other organs.

A collaborative research team led by Professor CHEON Jinwoo, director of the Center for Nanomedicine (CNM) at the Institute for Basic Science (IBS) at Yonsei University, Seoul, South Korea, and Professor CHOI Byoung Wook from Yonsei University College of Medicine developed a high-performance MRI contrast agent for 3D vascular mapping. The researchers' nanoparticle-based contrast agent, called SAIO (Supramolecular Amorphous-like Iron Oxide), is 5 nanometers in size, about 1,500 times smaller than the microvascular diameter, which allows it to circulate freely through the blood vessels of the body.

SAIO is a special nanoparticle that consists of a polysaccharide core made primarily of dextran cross-linked with other molecules. This core is coated with an iron oxide surface to give it paramagnetic properties at room temperature. The hybrid nature of the SAIOs gives them both excellent biocompatibility and imaging performance. The researchers compared the contrast performance, retention, and renal clearance profile of SAIO against Dotarem (a gadolinium-based agent) and iron oxide nanoparticles.

SAIO is one of the highest-resolution imaging agents, producing images about 10 times more precise than those generated using current contrast agents. In animal experiments, it achieved a 3D brain vascular map that clearly identifies brain microvessels as thin as a hair (100 microns). In addition to excellent resolution, the enhancement lasts substantially longer (more than 10 minutes) than with Dotarem.

Beyond high resolution, excretion of the contrast agent through urine is especially important to avoid its accumulation within the body, which can cause various side effects. SAIO has an excellent renal clearance profile, without accumulation in the liver or spleen. It was also found to be stable, without aggregation or iron leaching, for up to a year.

Director Cheon said, "SAIO is a next-generation contrast agent that satisfies both high resolution and safety at the same time," and Professor Choi added, "SAIO is expected to play a vital role in increasing the accuracy of the diagnosis of cerebro-cardiovascular diseases such as stroke, myocardial infarction, angina, and dementia."

Credit: 
Institute for Basic Science

The secret of catalysts that increase fuel cell efficiency

image: Schematic diagram of the process of material phase transition, ex-solution particle formation, and the change in catalytic activity depending on the reduction environment.

Image: 
POSTECH

Fuel cells, which are attracting attention as an eco-friendly energy source, produce electricity and heat simultaneously through the reverse reaction of water electrolysis. The catalyst that enhances the reaction efficiency is therefore directly tied to the performance of the fuel cell. To this end, a POSTECH-UNIST joint research team has taken a step closer to developing high-performance catalysts by uncovering the ex-solution and phase transition phenomena at the atomic level for the first time.

A joint research team of Professor Jeong Woo Han and Ph.D. candidate Kyeounghak Kim of POSTECH's Department of Chemical Engineering and Professor Guntae Kim of UNIST has uncovered the mechanism by which PBMO - a catalyst used in fuel cells - transforms from a perovskite structure to a layered structure as nanoparticles ex-solve to the surface, confirming its potential as an electrode and a chemical catalyst. The findings were recently published as an outside back cover paper in Energy & Environmental Science, an international journal in the field of energy.

Catalysts are substances that enhance chemical reactions. PBMO (Pr0.5Ba0.5MnO3-δ), one of the catalysts for fuel cells, is known to operate stably even when hydrocarbons, rather than hydrogen, are used directly as the fuel. In particular, it exhibits high ionic conductivity as it changes to a layered structure in a reducing environment, in which it loses oxygen. At the same time, an ex-solution phenomenon occurs in which elements inside the metal oxide segregate to the surface.

This phenomenon occurs spontaneously in a reducing environment without any additional treatment. As the elements inside the material rise to the surface, the stability and performance of the fuel cell improve immensely. However, it has been difficult to design such materials because the process through which these high-performance catalysts form was unknown.

Focusing on these features, the research team confirmed that the process proceeds through a progression of phase transition, particle ex-solution, and catalyst formation. This was proved using first-principles calculations based on quantum mechanics and in-situ X-ray diffraction (XRD) experiments, which allow real-time observation of crystal-structure changes in materials. The researchers also confirmed that the oxidation catalyst developed this way displays up to four times better performance than conventional catalysts, verifying that the study is applicable to various chemical catalysts.

"We were able to accurately understand the materials in atomic units that were difficult to confirm in previous experiments, and successfully demonstrated it thus overcoming the limitations of existing research by accurately understanding materials in atomic units, which were difficult to confirm in existing experiments, and successfully demonstrating them," explained Professor Jeong Woo Han who led the study. "Since these support materials and nanocatalysts can be used for exhaust gas reduction, sensors, fuel cells, chemical catalysts, etc., active research in numerous fields is anticipated in the future."

Credit: 
Pohang University of Science & Technology (POSTECH)

Variant B.1.1.7 of COVID-19 associated with a significantly higher mortality rate

The highly infectious variant of COVID-19 discovered in Kent, which swept across the UK last year before spreading worldwide, is between 30 and 100 per cent more deadly than previous strains, new analysis has shown.

A pivotal study, by epidemiologists from the Universities of Exeter and Bristol, has shown that the SARS-CoV-2 variant, B.1.1.7, is associated with a significantly higher mortality rate amongst adults diagnosed in the community compared to previously circulating strains.

The study compared death rates among people infected with the new variant and those infected with other strains.

It showed that the new variant led to 227 deaths in a sample of 54,906 patients - compared to 141 amongst the same number of closely matched patients who had the previous strains.

With the new variant already detected in more than 50 countries worldwide, the analysis provides crucial information to governments and health officials to help prevent its spread.

The study is published in the British Medical Journal on Wednesday, 10 March 2021.

Robert Challen, lead author of the study from the University of Exeter said: "In the community, death from COVID-19 is still a rare event, but the B.1.1.7 variant raises the risk. Coupled with its ability to spread rapidly this makes B.1.1.7 a threat that should be taken seriously."

The Kent variant, first detected in the UK in September 2020, has been identified as being significantly quicker and easier to spread, and was behind the introduction of new lockdown rules across the UK from January.

The study shows that the higher transmissibility of the Kent strain meant that more people who would have previously been considered low risk were hospitalised with the newer variant.

Having analysed data from 54,906 matched pairs of patients, spanning all age groups and demographics and differing only in the strain detected, the team found that there were 227 deaths attributed to the new strain, compared to 141 attributable to earlier strains.
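
Taken at face value, those matched counts imply a central estimate of the increase in mortality of about

```latex
\frac{227}{141} \;\approx\; 1.61
```

i.e. roughly 60 per cent higher, sitting within the reported 30 to 100 per cent range, which reflects the statistical uncertainty around this point estimate.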

Leon Danon, senior author of the study from the University of Bristol said: "We focussed our analysis on cases that occurred between November 2020 and January 2021, when both the old variants and the new variant were present in the UK. This meant we were able to maximise the number of "matches" and reduce the impact of other biases. Subsequent analyses have confirmed our results.

"SARS-CoV-2 appears able to mutate quickly, and there is a real concern that other variants will arise with resistance to rapidly rolled out vaccines. Monitoring for new variants as they arise, measuring their characteristics and acting appropriately needs to be a key part of the public health response in the future."

Ellen Brooks-Pollock from the University of Bristol expanded: "It was fortunate the mutation happened in a part of the genome covered by routine testing. Future mutations could arise and spread unchecked".

Credit: 
University of Exeter

New study links protein causing Alzheimer's disease with common sight loss

Newly published research has revealed a close link between proteins associated with Alzheimer's disease and age-related sight loss. The findings could open the way to new treatments for patients with deteriorating vision, and the scientists believe the approach could also reduce the need for animals in future research into blinding conditions.

Amyloid beta (AB) proteins are the primary driver of Alzheimer's disease but also begin to collect in the retina as people get older. Donor eyes from patients who suffered from age-related macular degeneration (AMD), the most common cause of blindness amongst adults in the UK, have been shown to contain high levels of AB in their retinas.

This new study, published in the journal Cells, builds on previous research showing that AB collects around a cell layer called the retinal pigment epithelium (RPE), and set out to establish what damage these toxic proteins cause to RPE cells.

The research team exposed RPE cells to AB, both in the eyes of normal mice and in culture. The mouse model enabled the team to look at the effect the protein has in living eye tissue, using the non-invasive imaging techniques employed in ophthalmology clinics. The mouse eyes developed retinal pathology strikingly similar to AMD in humans.

Dr Arjuna Ratnayaka, a Lecturer in Vision Sciences at the University of Southampton, who led the study, said, "This was an important study which also showed that mouse numbers used for experiments of this kind can be significantly reduced in the future. We were able to develop a robust model to study AMD-like retinal pathology driven by AB without using transgenic animals, which are often used by researchers in the field. In transgenic or genetically engineered mice it can take up to a year, and typically longer, before AB causes pathology in the retina, which we can achieve within two weeks. This reduces the need to develop more transgenic models and improves animal welfare."

The investigators also used cell models, which further reduced the use of mice in these experiments, to show that the toxic AB proteins entered RPE cells and rapidly collected in lysosomes, the cells' waste disposal system. Although the cells performed their usual function of increasing enzymes within lysosomes to break down this unwanted cargo, the study found that around 85% of the AB still remained within the lysosomes, meaning that over time the toxic molecules would continue to accumulate inside RPE cells.

Furthermore, the researchers discovered that once lysosomes had been invaded by AB, around 20 percent fewer lysosomes were available to break down photoreceptor outer segments, a role they routinely perform as part of the daily visual cycle.

Dr Ratnayaka added, "This is a further indication of how cells in the eye can deteriorate over time because of these toxic molecules collecting inside RPE cells. This could be a new pathway that no-one has explored before. Our discoveries have also strengthened the link between diseases of the eye and the brain. The eye is part of the brain, and we have shown how AB, which is known to drive major neurological conditions such as Alzheimer's disease, can also cause significant damage to cells in the retina."

The researchers hope that one of the next steps could be for anti-amyloid beta drugs, previously trialled in Alzheimer's patients, to be re-purposed and trialled as a possible treatment for age-related macular degeneration. As the regulators in the USA and the European Union have already given approval for many of these drugs, this is an area that could be explored relatively quickly.

The study may also help wider efforts to largely by-pass the use of animal experimentation where possible, so some aspects of testing new clinical treatments can transition directly from cell models to patients.

This research was funded by the National Centre for the Replacement Refinement & Reduction of animals in research (NC3Rs). Dr Katie Bates, Head of Research Funding at the NC3Rs said:

"This is an impactful study that demonstrates the scientific, practical and 3Rs benefits to studying AMD-like retinal pathology in vitro."

Credit: 
University of Southampton

Face masks are a ticking plastic bomb

image: Disposed face masks collected in the city of Odense, Denmark.

Image: 
Elvis Genbo Xu/SDU

Recent studies estimate that we use an astounding 129 billion face masks globally every month - that is 3 million a minute. Most of them are disposable face masks made from plastic microfibers.
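
The per-minute figure is simply the monthly estimate spread over a 30-day month:

```latex
\frac{129 \times 10^{9}\ \text{masks}}{30 \times 24 \times 60\ \text{minutes}} \;\approx\; 3 \times 10^{6}\ \text{masks per minute}
```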

- With increasing reports on inappropriate disposal of masks, it is urgent to recognize this potential environmental threat and prevent it from becoming the next plastic problem, researchers warn in a comment in the scientific journal Frontiers of Environmental Science & Engineering.

The researchers are Environmental Toxicologist Elvis Genbo Xu from University of Southern Denmark and Professor of Civil and Environmental Engineering Zhiyong Jason Ren from Princeton University.

No guidelines for mask recycling:

Disposable masks are plastic products that cannot readily biodegrade but may fragment into smaller plastic particles, namely micro- and nanoplastics, which spread widely in ecosystems.

The enormous production of disposable masks is on a similar scale to that of plastic bottles, which is estimated at 43 billion per month.

However, unlike plastic bottles (of which approximately 25 percent are recycled), there is no official guidance on mask recycling, making masks more likely to be disposed of as solid waste, the researchers write.

Greater concern than plastic bags:

If not disposed of for recycling, like other plastic wastes, disposable masks can end up in the environment, freshwater systems, and oceans, where weathering can generate a large number of micro-sized particles (smaller than 5 mm) during a relatively short period (weeks) and further fragment into nanoplastics (smaller than 1 micrometer).

- A newer and bigger concern is that the masks are directly made from microsized plastic fibers (thickness of ~1 to 10 micrometers). When breaking down in the environment, the mask may release more micro-sized plastics, easier and faster than bulk plastics like plastic bags, the researchers write, continuing:

- Such impacts can be worsened by a new-generation mask, nanomasks, which directly use nano-sized plastic fibers (with a diameter smaller than 1 micrometer) and add a new source of nanoplastic pollution.

The researchers stress that they do not know how masks contribute to the large number of plastic particles detected in the environment - simply because no data on mask degradation in nature exist.

- But we know that, like other plastic debris, disposable masks may also accumulate and release harmful chemical and biological substances, such as bisphenol A, heavy metals, as well as pathogenic micro-organisms. These may pose indirect adverse impacts on plants, animals and humans, says Elvis Genbo Xu.

What can we do?

Elvis Genbo Xu and Zhiyong Jason Ren have the following suggestions for dealing with the problem:

Set up mask-only trash cans for collection and disposal;

consider standardization, guidelines, and strict implementation of waste management for mask waste;

replace disposable masks with reusable face masks like cotton masks;

consider the development of biodegradable disposable masks.

Credit: 
University of Southern Denmark

Catching energy-exploration caused earthquakes before they happen

video: Sandia National Laboratories geoscientist Hongkyu Yoon and his team 3D-print rocks with reproducible faults and then squeeze them until they crack. Listening to the sound of the rocks breaking provides the team with the data they need to "train" a deep-learning algorithm to identify signals of seismic events faster and more accurately than conventional earthquake monitoring systems.

Image: 
Video by Rebecca Gustaf/Sandia National Laboratories. All video footage was taken prior to the COVID-19 pandemic.

ALBUQUERQUE, N.M. -- Geoscientists at Sandia National Laboratories used 3D-printed rocks and an advanced, large-scale computer model of past earthquakes to understand and prevent earthquakes triggered by energy exploration.

Injecting water underground after unconventional oil and gas extraction, commonly known as fracking, geothermal energy stimulation and carbon dioxide sequestration all can trigger earthquakes. Of course, energy companies do their due diligence to check for faults -- breaks in the earth's upper crust that are prone to earthquakes -- but sometimes earthquakes, even swarms of earthquakes, strike unexpectedly.

Sandia geoscientists studied how pressure and stress from injecting water can transfer through pores in rocks down to fault lines, including previously hidden ones. They also crushed rocks with specially engineered weak points to hear the sound of different types of fault failures, which will aid in early detection of an induced earthquake.

3D-printing variability provides fundamental structural information

To study different types of fault failures, and their warning signs, Sandia geoscientist Hongkyu Yoon needed a bunch of rocks that would fracture the same way each time he applied pressure -- pressure not unlike the pressure caused by injecting water underground.

Natural rocks collected from the same location can have vastly different mineral orientation and layering, causing different weak points and fracture types.

Several years ago, Yoon started using additive manufacturing, commonly known as 3D printing, to make rocks from a gypsum-based mineral under controlled conditions, believing that these rocks would be more uniform. To print the rocks, Yoon and his team sprayed gypsum in thin layers, forming 1-by-3-by-0.5 inch rectangular blocks and cylinders.

However, as he studied the 3D-printed rocks, Yoon realized that the printing process also generated minute structural differences that affected how the rocks fractured. This piqued his interest, leading him to study how the mineral texture in 3D-printed rocks influences how they fracture.

"It turns out we can use that variability of mechanical and seismic responses of a 3D-printed fracture to our advantage to help us understand the fundamental processes of fracturing and its impact on fluid flow in rocks," Yoon said. This fluid flow and pore pressure can trigger earthquakes.

For these experiments, Yoon and collaborators at Purdue University, a university with which Sandia has a strong partnership, made a mineral ink using calcium sulfate powder and water. The researchers, including Purdue professors Antonio Bobet and Laura Pyrak-Nolte, printed a layer of hydrated calcium sulfate, about half as thick as a sheet of paper, and then applied a water-based binder to glue the next layer to the first. The binder recrystallized some of the calcium sulfate into gypsum, the same mineral used in construction drywall.

The researchers printed the same rectangular and cylindrical gypsum-based rocks. Some rocks had the gypsum mineral layers running horizontally, while others had vertical mineral layers. The researchers also varied the direction in which they sprayed the binder, to create more variation in mineral layering.

The research team squeezed the samples until they broke. The team examined the fracture surfaces using lasers and an X-ray microscope. They noticed the fracture path depended on the direction of the mineral layers. Yoon and colleagues described this fundamental study in a paper published in the journal Scientific Reports.

Sound signals and machine learning to classify seismic events

Working with his collaborators at Purdue University, Yoon also monitored acoustic waves coming from the printed samples as they fractured. These sound waves are signs of rapid microcracks. The team then combined the sound data with machine-learning techniques, a type of advanced data analysis that can identify patterns in seemingly unrelated data, to detect signals of minute seismic events.

First, Yoon and his colleagues used a machine-learning technique known as a random forest algorithm to cluster the microseismic events into groups that were caused by the same types of microstructures and identify about 25 important features in the microcrack sound data. They ranked these features by significance.
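
A rough sketch of the feature-ranking step (illustrative only: the data, labels, and the supervised stand-in below are placeholders of ours, not Sandia's pipeline, which the release describes as grouping events before ranking roughly 25 waveform features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder acoustic-emission features (e.g. peak amplitude, dominant frequency,
# rise time, duration, ...) for microcrack events tagged by the event group that
# produced them. Real data would come from the fracture experiments.
X = rng.random((500, 25))            # 500 recorded microcrack events, 25 waveform features
y = rng.integers(0, 3, size=500)     # 3 event groups (placeholder labels)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the features by importance, mirroring the ranking step described above.
ranking = np.argsort(forest.feature_importances_)[::-1]
for idx in ranking[:5]:
    print(f"feature {idx}: importance {forest.feature_importances_[idx]:.3f}")
```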

Using the significant features as a guide, they created a multilayered "deep" learning algorithm -- like the algorithms that allow digital assistants to function -- and applied it to archived data collected from real-world events. The deep-learning algorithm was able to identify signals of seismic events faster and more accurately than conventional monitoring systems.

Yoon said that within five years they hope to apply many different machine-learning algorithms, like these and others with embedded geoscience principles, to detect induced earthquakes related to fossil fuel activities in oil or gas fields. The algorithms can also be applied to detect hidden faults that might become unstable due to carbon sequestration or geothermal energy stimulation, he said.

"One of the nice things about machine learning is the scalability," Yoon said. "We always try to apply certain concepts that were developed under laboratory conditions to large-scale problems -- that's why we do laboratory work. Once we proved those machine-learning concepts developed at the laboratory scale on archived data, it's very easy to scale it up to large-scale problems, compared to traditional methods."

Stress transfers through rock to deep faults

A hidden fault was the cause of a surprise earthquake at a geothermal stimulation site in Pohang, South Korea. In 2017, two months after the final geothermal stimulation experiment ended, a magnitude 5.5 earthquake shook the area, the second strongest quake in South Korea's recent history.

After the earthquake, geoscientists discovered a fault hidden deep between two injection wells. To understand how stresses from water injection traveled to the fault and caused the quake, Kyung Won Chang, a geoscientist at Sandia, realized he needed to consider more than the stress of water pressing on the rocks. In addition to that deformation stress, he also needed to account for how that stress transferred to the rock as the water flowed through pores in the rock itself in his complex large-scale computational model.
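
The release does not give the model's governing equations, but induced-seismicity analyses of this kind commonly track how injection changes a Coulomb failure stress on the fault, combining the shear and normal stress changes with the pore-pressure change, roughly:

```latex
\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu\,\left(\Delta P - \Delta\sigma_{n}\right)
```

Here Δτ is the change in shear stress resolved on the fault, Δσn the change in normal (clamping) stress, ΔP the pore-pressure change, and μ the friction coefficient; slip is promoted as ΔCFS rises.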

Chang and his colleagues described the stress transfer in a paper published in the journal Scientific Reports.

However, understanding deformation stress and transfer of stress through rock pores is not enough to understand and predict some earthquakes induced by energy-exploration activities. The architecture of different faults also needs to be considered.

Using his model, Chang analyzed a cube 6 miles long, 6 miles wide and 6 miles deep where a swarm of more than 500 earthquakes took place in Azle, Texas, from November 2013 to May 2014. The earthquakes occurred along two intersecting faults, one less than 2 miles beneath the surface and another longer and deeper. While the shallow fault was closer to the sites of wastewater injection, the first earthquakes occurred along the longer, deeper fault.

In his model, Chang found that the water injections increased the pressure on the shallow fault. At the same time, injection-induced stress transferred through the rock down to the deep fault. Because the deep fault was under more stress initially, the earthquake swarm began there. He and Yoon shared the advanced computational model and their description of the Azle earthquakes in a paper recently published in the Journal of Geophysical Research: Solid Earth.

"In general, we need multiphysics models that couple different forms of stress beyond just pore pressure and the deformation of rocks, to understand induced earthquakes and correlate them with energy activities, such as hydraulic stimulation and wastewater injection," Chang said.

Chang said he and Yoon are working together to apply and scale up machine-learning algorithms to detect previously hidden faults and identify signatures of geologic stress that could predict the magnitude of a triggered earthquake.

In the future, Chang hopes to use those stress signatures to create a map of potential hazards for induced earthquakes around the United States.

Credit: 
DOE/Sandia National Laboratories

How a ladybug warps space-time

image: The gold sphere used in the experiment, shown for size comparison with a 1-cent coin. According to Einstein's general theory of relativity, every mass bends space-time.

Image: 
© Tobias Westphal / Arkitek Scientific

Gravity is the weakest of all known forces in nature - and yet it is most strongly present in our everyday lives. Every ball we throw, every coin we drop - all objects are attracted by the Earth's gravity. In a vacuum, all objects near the Earth's surface fall with the same acceleration: their velocity increases by about 9.8 m/s every second. The strength of gravity is determined by the mass of the Earth and the distance from the center. On the Moon, which is about 80 times lighter and almost 4 times smaller than the Earth, all objects fall 6 times slower. And on a planet of the size of a ladybug? Objects would fall 30 billion times slower there than on Earth. Gravitational forces of this magnitude normally occur only in the most distant regions of galaxies to trap remote stars. A team of quantum physicists led by Markus Aspelmeyer and Tobias Westphal of the University of Vienna and the Austrian Academy of Sciences has now demonstrated these forces in the laboratory for the first time. To do so, the researchers drew on a famous experiment conducted by Henry Cavendish at the end of the 18th century.
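
The scaling quoted above follows from Newton's law of gravitation: the acceleration at the surface of a body of mass M and radius R is

```latex
g = \frac{GM}{R^{2}}, \qquad
\frac{g_{\mathrm{Moon}}}{g_{\mathrm{Earth}}} = \frac{M_{\mathrm{Moon}}}{M_{\mathrm{Earth}}}\left(\frac{R_{\mathrm{Earth}}}{R_{\mathrm{Moon}}}\right)^{2} \approx \frac{1}{81}\times(3.7)^{2} \approx \frac{1}{6}
```

which matches the "about 80 times lighter, almost 4 times smaller, falls 6 times slower" comparison in the text.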

During the time of Isaac Newton, it was believed that gravity was reserved for astronomical objects such as planets. It was not until the work of Cavendish (and Nevil Maskelyne before him) that it was possible to show that objects on Earth also generate their own gravity. Using an elegant pendulum device, Cavendish succeeded in 1797 in measuring the gravitational force generated by a lead ball 30 cm in diameter and weighing 160 kg. A so-called torsion pendulum - two masses at the ends of a rod suspended from a thin wire and free to rotate - is measurably deflected by the gravitational force of the lead mass. Over the following centuries, these experiments were further perfected to measure gravitational forces with increasing accuracy.

The Vienna team has picked up this idea and built a miniature version of the Cavendish experiment. A 2 mm gold sphere weighing 90 mg serves as the gravitational mass. The torsion pendulum consists of a glass rod 4 cm long and half a millimeter thick, suspended from a glass fiber a few thousandths of a millimeter in diameter. Gold spheres of similar size are attached to each end of the rod. "We move the gold sphere back and forth, creating a gravitational field that changes over time," explains Jeremias Pfaff, one of the researchers involved in the experiment. "This causes the torsion pendulum to oscillate at that particular excitation frequency." The movement, which is only a few millionths of a millimeter, can then be read out with the help of a laser and allows conclusions to be drawn about the force. The difficulty is keeping other influences on the motion as small as possible. "The largest non-gravitational effect in our experiment comes from seismic vibrations generated by pedestrians and tram traffic around our lab in Vienna," says co-author Hans Hepach: "We therefore obtained the best measurement data at night and during the Christmas holidays, when there was little traffic." Other effects such as electrostatic forces could be reduced to levels well below the gravitational force by a conductive shield between the gold masses.

This made it possible to determine the gravitational field of an object that has roughly the mass of a ladybug for the first time. As a next step, it is planned to investigate the gravity of masses thousands of times lighter.

The possibility of measuring gravitational fields of small masses and at small distances opens up new perspectives for research in gravitational physics; traces of dark matter or dark energy could be found in the behavior of gravity, which could be responsible for the formation of our present universe. Aspelmeyer's researchers are particularly interested in the interface with quantum physics: can the mass be made small enough for quantum effects to play a role? Only time will tell. For now, the fascination with Einstein's theory of gravity still prevails. "According to Einstein, the gravitational force is a consequence of the fact that masses bend spacetime in which other masses move," says first author Tobias Westphal. "So what we are actually measuring here is how a ladybug warps space-time."

Credit: 
University of Vienna