
Study reveals diverse magnetic fields in solar-type star-forming cores

image: Core-scale magnetic fields (red segments) inferred from high-resolution, sensitive dust emission polarization observations with the JCMT. The solar-type star-forming cores that fragmented out of the B213 filament are shown.

Image: 
Eswaraiah Chakali, et al. 2021

Magnetic fields are ubiquitous throughout our Milky Way Galaxy and play a crucial role in the dynamics of the interstellar medium. However, several questions remain open: how do solar-type stars form out of magnetized molecular clouds, does the role of magnetic fields change with the scale and density of molecular clouds, and what factors can alter the morphology of magnetic fields in low-mass dense cores?

A new study led by Dr. Eswaraiah Chakali from Prof. LI Di's research group at the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) has partially answered these questions. The study reveals diverse magnetic field morphologies in solar-type star-forming cores in the Taurus B213 region.

This study was published in The Astrophysical Journal Letters on May 10.

The researchers used high-resolution and sensitive 850-micron dust emission polarization data acquired by the James Clerk Maxwell Telescope (JCMT) using the SCUBA-2 camera along with the POL-2 polarimeter.

The observations were conducted as a part of a large international program called B-fields In STar-forming Region Observations (BISTRO).

"Although formed out of the same filamentary cloud, Taurus/B213, among the three dense cores having more polarization measurements, only one remembers the relatively uniform large-scale magnetic field threading the parental cloud," said Dr. Eswaraiah Chakali, lead author of the study.

This is in contrast to expectations based on the theory that magnetic fields regulate star formation. If a large-scale magnetic field dominates throughout cloud accumulation, core collapse and star formation, the mean position angle of the magnetic field should be similar across various spatial scales.
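As an aside on method: position angles are axial quantities, defined only modulo 180 degrees, so comparing mean field orientations across scales calls for circular statistics. Below is a minimal sketch of such a comparison (our illustration only; the function name and sample values are hypothetical, and the paper's actual statistical analysis is more involved):

```python
import numpy as np

def mean_position_angle(pa_deg):
    """Circular mean of polarization position angles (axial data,
    defined modulo 180 deg): double the angles, average the unit
    vectors, then halve the resulting direction."""
    doubled = np.radians(2.0 * np.asarray(pa_deg, dtype=float))
    mean_dir = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
    return (np.degrees(mean_dir) / 2.0) % 180.0

# e.g. compare a core-scale mean PA with the cloud-scale PA;
# angles straddling 0/180 deg are averaged correctly:
print(mean_position_angle([170, 5, 178, 12]))  # -> ~1.3 deg
```

A naive arithmetic mean of the same four angles would give about 91 degrees, which is why the doubling trick matters when testing whether cores "remember" the cloud-scale field direction.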

Further analysis of the gas velocity gradient revealed that the kinematics due to gas accretion flows onto the parental filament could have altered the magnetic field configuration.

"Even in the presence of substantial magnetic flux, local physical conditions can significantly affect magnetic field morphology and their role in star formation," said Prof. LI Di, co-corresponding author of the study.

"Our current observations represent one of the deepest sub-millimeter polarimetry images ever taken using a single dish telescope toward a Galactic region," said Prof. QIU Keping of Nanjing University, co-PI of the BISTRO project and a coauthor of the study.

Prof. LI Di also highlighted that "more comprehensive analyses, in combination with Planck data and stellar polarimetry, may give more insights into the evolution of magnetic fields in this stereotypical low-mass star-forming region."

Credit: 
Chinese Academy of Sciences Headquarters

No space wasted: Embedding capacitors into interposers to increase miniaturization

image: The new interposer design with an embedded capacitor provides a notable reduction in area requirements and interconnect length, leading to lower wiring resistance and parasitic capacitance. MPU: Microprocessing unit; DRAM: Dynamic random-access memory.

Image: 
The 2021 IEEE Electronic Components and Technology Conference

Scientists at Tokyo Institute of Technology have developed a 3D functional interposer--the interface between a chip and the package substrate--containing an embedded capacitor. This compact design saves considerable package area and greatly shortens the wiring between the chip's terminals and the capacitor, reducing noise and power consumption. Their approach paves the way to new semiconductor package structures with greater miniaturization.

Electronics started out bulky but have only grown smaller and more compact over time. Today, even smartphones outperform the bulky computers of the 1980s by orders of magnitude. Unfortunately, this trend of accelerating performance and shrinking scale is bound to slow down considerably as the materials and designs we use approach their physical limits. To overcome such problems, it is critical to think outside the box and come up with designs that address technological bottlenecks.

Over the last decade, progress on an essential passive component in electronics, the capacitor, has stagnated in some regards. Although we can fabricate much smaller capacitors than ever before, their capacitance per unit area has not improved as much. We need ways to make capacitors occupy less space while preserving their performance.

But what if we could integrate capacitors inside another element commonly used in modern circuits: the interposer? At Tokyo Institute of Technology, Japan, a team of scientists led by Professor Takayuki Ohba has committed to developing technologies that sustain the scaling of semiconductor circuits. In their latest study, to be presented at the 2021 IEEE Electronic Components and Technology Conference, they demonstrated that silicon interposers--the planar interface that holds and vertically connects an integrated chip with a circuit package or another chip--can themselves be made into functional capacitors, saving considerable space and bringing many benefits.

In modern "2.5D" packages, chips such as DRAMs and microprocessors sit atop interposers with through-silicon vias, vertical conducting tunnels that bridge the connections in the chips with solder bumps on the package substrate. Capacitors are placed on the package substrate close to the components they serve, and a connection between their terminals and those of the chip has to be made, spanning 5-30 mm (Figure 1a). This layout not only increases the necessary package substrate area, but also causes problems such as high wiring resistance and noise due to the long interconnections.

In stark contrast with this design, the team at Tokyo Tech cut out the middleman and made the interposer itself the silicon capacitor (Figure 1b). They achieved this through a novel fabrication process in which the capacitive elements are embedded inside a 300 mm silicon piece using permanent adhesive and mold resin. The interconnects between the chip and the capacitor are made directly with through-silicon vias, without the need for solder bumps. "Our bumpless 3D functional interposer enables a notable reduction in package area of about 50% and an interconnect length a hundred times shorter," remarks Ohba.
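As a rough sanity check (ours, not from the paper), wiring resistance scales linearly with interconnect length:

$$R = \rho \, \frac{L}{A},$$

where $\rho$ is the resistivity and $A$ the cross-sectional area. Shortening the interconnect a hundredfold should therefore reduce its resistance by roughly the same factor, consistent with the measurements described below.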

The researchers also managed to cleverly avoid the two most common problems of bumpless chip-on-wafer designs, namely warping of the wafer due to the resin and misplacement errors due to void pockets in the adhesive. Through testing and theoretical calculations, they determined that their functional interposer allows for a wiring resistance about a hundred times lower than conventional designs, as well as a lower parasitic capacitance. These features could enable the use of lower supply voltages, leading to lower power consumption. "The chip-on-wafer integration technology we are developing will open up new routes in the evolution of semiconductor package structures," concludes Ohba excitedly. Overall, this study is a perfect example of the creative leaps that are needed if the accelerating march of technology is to be maintained.

Credit: 
Tokyo Institute of Technology

ALS development could be triggered by loss of network connections in the spinal cord

image: The spinal cord of a mouse with ALS. The green cells are inhibitory interneurons.

Image: 
Ilary Allodi, University of Copenhagen

ALS is a very severe neurodegenerative disease in which the nerve cells in the spinal cord that control muscles and movement slowly die. There is no effective treatment, and average life expectancy after an ALS diagnosis is short. Because of this, new knowledge about the disease is urgently needed.

Now, researchers from the University of Copenhagen have gained new insights about ALS, by investigating the early development of the disease in a mouse model.

"We have found that networks of nerve cells in the spinal cord called inhibitory interneurons lose connection to motor neurons, the nerve cells that directly control muscle contraction. We do not yet know if these changes cause the disease. But the loss of the inhibitory signal could explain why the motor neurons end up dying in ALS", says first and co-corresponding author to the new study Ilary Allodi, Assistant Professor at the Department of Neuroscience.

A lot of ALS research has focused on the motor neurons themselves, but the research group at the University of Copenhagen took a different approach.

"It is only natural that motor neurons have received major attention. They control our muscles, which is the challenge for ALS patients. Here, we wanted to investigate the circuit of interneurons in the spinal cord because they determine the activity of motor neurons. Since we found that there is a loss of connections between inhibitory interneurons and motor neurons that happens before the motor neuron death, we think that this loss could be a possible explanation for why the motor neurons ends up dying in ALS patients", says Ole Kiehn, senior, co-corresponding author and Professor at the Department of Neuroscience.

Fast-twitch first

In ALS patients, the degeneration typically starts with what is called the fast-twitch motor neurons and then goes on to other motor neurons. This means that certain muscles and bodily functions are affected before others. Normally, patients lose coordination and speed in movement before more basic functions such as breathing. This is mirrored in the new findings, according to the researchers.

"In our mouse model, we show that the loss of connection happens to fast motor neurons first and then slow motor neurons later on involve a particular type of inhibitory neurons, the so called V1 interneurons", says Roser Montañana-Rosell, who is PhD student and shared first author on the study.

"The V1 interneuron connectivity loss is paralleled by the development of a specific locomotor deficit in the pre-symptomatic phase with lower speed and changes in limb coordination in the ALS mice that is dependent on V1 interneuron connections to motor neuron", says Ole Kiehn.

Expanding the window of opportunity

The researchers underline that the mechanisms should be investigated in human patients as well. However, they have no reason to believe that the same or similar biological mechanisms are not at play in humans.

Given the new understanding of the disease, Ilary Allodi hopes further research into the signaling process could reveal how to repair the nerve cell connection loss in ALS.

"We definitely hope that our findings can contribute with a new way of thinking about ALS development. With a distinct focus on interneurons, we might be able, in future experiments, to increase the signaling processes from the interneurons to the motor neurons and prevent or delay the motor neuron degeneration from an early stage," ends Ilary Allodi.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

Is the U.S. Understating Climate Emissions from Meat and Dairy Production?

Methane emissions from North American livestock may be routinely undercounted, a new analysis by researchers at New York University and Johns Hopkins University finds. The work also notes that in developing countries, where animal agriculture is becoming increasingly industrialized, methane emissions could rise more than expected.

These assessments are based on a review, appearing in the journal Environmental Research Letters, of eight existing studies.

Methane is a greenhouse gas even more potent than CO2. Its atmospheric abundance and lifetime are smaller than those of CO2, but its concentration is still increasing. The United Nations has recently emphasized that reducing methane emissions is a highly effective way of rapidly curbing global warming.

The U.S. Environmental Protection Agency (EPA) reports these emissions in a national greenhouse gas inventory every year using complex models. But, the researchers write, existing methods that the EPA and other international agencies use to estimate methane emissions from animals are not corroborated by measuring concentrations of the gas in the air.

This omission is significant.

Some previous studies have monitored methane directly in the air using tall towers, airplanes, and satellites, collected above and downwind of animal production facilities. The recent Environmental Research Letters analysis compiled and reviewed several of these atmospheric studies over North America through the last decade. These studies consistently found more methane than the EPA and other agencies expected coming from livestock, in amounts ranging from 39 percent to 90 percent higher than previously estimated.

"Back in 2013, we found that atmospheric methane emissions were higher from livestock and oil and gas producing regions than the EPA was reporting," says Scot Miller, an assistant professor at Johns Hopkins University and coauthor of the Environmental Research Letters paper. "Since then, the models and atmospheric measurements don't appear much closer to coming into agreement. It's increasingly likely that methane emissions from farmed animals could be higher in North America than is often being reported."

Methane comes from cows' and sheep's digestion, as well as from stockpiles of manure from all farmed animals. In the U.S. and Canada, animal production is nearly entirely divorced from other farming practices like crop production. Pigs and chickens are raised in crowded sheds and their manure is stored in large stockpiles. Dairy cows are crowded into milking parlors and produce more manure than some small cities.

These industrialized changes to rearing animals allow producers to use less feed, such as hay, corn, and soybeans, translating to fewer resources needed on farms. The scientific community long assumed that this translates into lower greenhouse gas emissions, too.

"North American meat and dairy producers often tout improvements in their efficiency, claiming that concentrated feeds and confinement have reduced greenhouse gas emissions greatly over the past few decades," observes Matthew Hayek, an assistant professor in NYU's Environmental Studies Department and a co-author of the paper. "Our findings throw those claims into doubt. Individual cows may be belching and emitting less, but that doesn't necessarily translate to entire herds and warehouses of confined animals, and their stockpiles of manure, emitting less."

These assessments have international importance as well, the authors note. Since re-entering the Paris Agreement in 2021, the U.S. is preparing to reduce emissions from all greenhouse gases, including those from animal agriculture.

"This research indicates a need to reexamine or improve reporting methods for methane, which are critical to tracking progress over time," Hayek says.

Other countries may have cause for concern in the future, too. For instance, throughout Asia, meat and dairy consumption is on the rise, and production is becoming increasingly industrialized. The United Nations Food and Agriculture Organization previously predicted that East and Southeast Asia's animal emissions would peak around 2030, on the assumption that U.S.-style technological efficiency would reduce emissions thereafter.

The findings reported in Environmental Research Letters, however, indicate that emissions could actually continue to rise through the year 2050.

"This would further undermine international goals to limit global warming, surpassing 1.5° or 2° Celsius even more quickly than expected," Miller says.

The authors highlight the role of international agencies, development banks, and corporations in hastening the transition toward industrial animal agriculture production.

"This evidence suggests that the banks and government agencies who are funding intensive animal facilities' expansion might be accepting more climate risk than they realize," says Hayek. "Policymakers should consider methane emissions along with a gamut of other major environmental issues stemming from concentrated meat and dairy production, including water pollution and infectious animal-borne disease breakouts, to inform policies that guide food systems toward a better direction."

Credit: 
New York University

It's never too early to begin healthy eating habits

June 1, 2021 -- Researchers at Columbia University Mailman School of Public Health and Universidade Federal de Ciências da Saúde de Porto Alegre, Brazil, found that when health workers were trained to promote healthy infant feeding practices to pregnant women, the women's children consumed fewer fats and carbohydrates at 3 years of age and had lower measures of body fat at the age of 6. The study is the first to show that the roots of obesity start in the first year of life, after mothers stop breastfeeding. The findings are published online in the Journal of Human Nutrition and Dietetics.

"The first year after birth is a critical window for the establishment of habits that will influence health patterns throughout one's lifetime, said Caroline N. Sangalli, in the Graduate Program in Health Sciences, Universidade Federal de Ciências da Saúde de Porto Alegre, Brazil, and first author. "The message worldwide is that to avoid obesity later in life you cannot start too early to help mothers feed their children well. And this study is proof of principle that it is possible to change a mother's behavior."

"Most surprising was that the mothers in our randomized trial offered ultra-processed foods, that are high in sugar and fat, as early as 6 months of age," said Ma?rcia Vitolo, Graduate Program in Pediatrics: Child and Adolescent Health Care, Universidade Federal de Ciências da Saúde de Porto Alegre, Brazil, and co-senior author. "This behavior can be explained by cultural influences and strong marketing of processed baby foods which continues globally".

The researchers conducted the randomized trial in Porto Alegre, Brazil, in 31 centers that provide prenatal, infant, and other primary care services to low-income families. The intervention was based on births from May 2008 to February 2009 and consisted of a training program to increase the knowledge of primary healthcare workers centered on the 'Ten Steps for Healthy Feeding for Brazilian Children from Birth to Two Years of Age', the Brazilian dietary guideline.

All families were informed about complementary foods that should not be offered to children under 2 years of age (i.e., cookies, snacks, soft drinks and sweets) through posters in waiting rooms. Trained interviewers measured children's growth and other outcomes at ages 6 months, 12 months, 3 years and 6 years at subsequent home visits. Details about food types, amounts and preparation methods were also recorded.

Energy intake at all ages was lower in the intervention group than in the control group, with a statistically significant difference at age 3 years. Children from the intervention group at 3 years of age also had lower consumption of carbohydrates and total fat than the control group, and at 6 years of age they had accumulated less body fat, as measured by a smaller waist circumference and thinner skinfolds. "We found that the energy intake in both study groups was above the requirement across all age waves; however, the excess energy intake was smaller in the intervention group," observed Sangalli, who analyzed the study results with Dr. L.H. Lumey at Columbia Mailman School of Public Health with a grant from the Brazilian government. "Although the disparity was slight at the onset, in the long term the reduced intake of 92 kcal per day adds up to about 33,000 kcal per year, and changes of this magnitude could explain changes in weight gain during childhood."
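The yearly figure follows directly from the daily one (our check of the quoted arithmetic, not a calculation from the paper):

$$92 \; \frac{\text{kcal}}{\text{day}} \times 365 \; \frac{\text{days}}{\text{year}} \approx 33{,}600 \; \frac{\text{kcal}}{\text{year}},$$

in line with the roughly 33,000 kcal per year cited above.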

The findings were particularly striking with regard to calories from cookies and chocolate powder, important sources of carbohydrates and fats. During the health workers' training, sugar, sweets, soft drinks, salty snacks, cookies and other ultra-processed foods were emphasized as foods for mothers to avoid giving their babies until 2 years of age.

The intervention group at 6 years of age had lower body fat on several measures, but this difference was not reflected in BMI scores, a less sensitive measure of adiposity. "However, with the prevalence of overweight in the intervention group 7 percent lower than in the control group at 6 years, this does suggest a valuable public health impact - especially since estimates indicate that a 1 percent reduction in obesity prevalence among children up to age 6 would save $1.7 billion in medical costs," said Vitolo.

"Many individuals including Alice Waters, Jamie Oliver, and Michelle Obama have devoted efforts to improve school lunches and eating habits of school age children to aid in the fight against obesity," said Dr. Lumey, professor of Epidemiology and a co-senior author. "All these efforts are to be applauded and encouraged. What this study suggests is that we might have to think even earlier. Feeding practices early in life can already have a significant impact on the body size of pre-school children."

Credit: 
Columbia University's Mailman School of Public Health

Precise data for improved coastline protection

Researchers working under the leadership of the Technical University of Munich (TUM) have conducted the first precise and comprehensive measurements of sea level rise in the Baltic Sea and the North Sea. A new method now makes it possible to determine sea level changes with millimeter accuracy even in coastal areas and under sea ice cover. This is of vital importance for planning protective measures.

For the billions of people who live in coastal areas, rising sea levels driven by climate change can pose an existential threat. "To protect people and infrastructure - for example by building flood protection structures, securing ports or making dikes higher - we need reliable forecasts on sea level trends," explains Prof. Florian Seitz, the Director of the German Geodetic Research Institute at TUM. "However, this requires precise data with high spatial resolution. And until now, the required wide-area coverage was not available."

Especially near coastlines - where so many cities, ports, industrial facilities and residential areas are located - the quality of data collected by the radar satellites that have orbited the Earth for decades was compromised by low signal-to-noise ratios. The reason: mountains, bays and offshore islands scatter the signals and distort the reflected echoes. Another problem is sea ice, which covers parts of the oceans in winter and is impenetrable to radar.

In the Baltic Sea Level project (Baltic SEAL), a team of researchers at TUM worked with international partners to develop algorithms to process the measurement data from radar satellites to permit precise and high-resolution measurements of sea level changes even in coastal areas and beneath sea ice.

Penetrating ice and islands with radar

The researchers chose the Baltic Sea as the model region: "Data from this region are especially suitable for developing new methods because multiple factors make analysis difficult: The complex shape of the coastline, sea ice and wind. At the same time, there are plenty of local sea level measurements to corroborate the results," says project leader Dr. Marcello Passaro. "An analytical method that works in the Baltic Sea can be easily adapted to other regions."

To handle hundreds of millions of radar measurements taken between 1995 and 2019, the team developed a multi-stage process: In the first step, they calibrated the measurements from the various satellite missions so that they could be combined. With specially developed algorithms, they were then able to detect signals from the ice-covered sea water in the radar reflections produced along cracks and fissures. This made it possible to determine sea levels for the winter months. With new computational methods they also achieved better resolution of radar echoes close to land.

As a result, it is now possible to measure sea levels in coastal areas and compare the results with local tidal records. The processed data were then fitted to a fine grid with a resolution of 6 to 7 km using an algorithm developed by the team. The result: A highly precise data set covering the entire region.
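The project's algorithms are not spelled out in this article, but the final gridding-and-trend step can be illustrated with a minimal sketch (the function name, the simple lat/lon binning, and the least-squares trend fit are our assumptions, not Baltic SEAL code):

```python
import numpy as np

def grid_sea_level_trends(lat, lon, t_years, ssh_m, cell_km=6.5):
    """Bin along-track sea-surface heights onto a regular lat/lon grid
    and fit a linear trend (mm/year) in each grid cell."""
    lat, lon = np.asarray(lat), np.asarray(lon)
    t_years, ssh_m = np.asarray(t_years), np.asarray(ssh_m)

    deg_lat = cell_km / 111.0                           # ~111 km per degree latitude
    deg_lon = deg_lat / np.cos(np.radians(lat.mean()))  # keep cells roughly square
    ilat = np.floor((lat - lat.min()) / deg_lat).astype(int)
    ilon = np.floor((lon - lon.min()) / deg_lon).astype(int)

    trends = {}
    for cell in set(zip(ilat, ilon)):
        in_cell = (ilat == cell[0]) & (ilon == cell[1])
        if in_cell.sum() < 10:                          # skip poorly sampled cells
            continue
        # least-squares slope of height vs. time, metres/year -> mm/year
        slope = np.polyfit(t_years[in_cell], ssh_m[in_cell], 1)[0]
        trends[cell] = slope * 1000.0
    return trends
```

Fitting a trend per cell is what turns two decades of heterogeneous altimetry into maps of millimeter-per-year sea level change like those described below.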

The largest rises in sea levels are occurring in the Bay of Bothnia

The analysis of these data for the Baltic Sea shows the regional effects of the rise in sea levels over the past quarter century: The sea level has risen at an annual rate of 2 to 3 millimeters in the south, on the German and Danish coasts, as compared to 6 millimeters in the north-east, in the Bay of Bothnia. The cause of this large rise: Strong south-westerly winds that drive the waters to the north and eastward. This above-average increase in sea level does not pose a threat to coastal dwellers, however, because the land has been rising since the end of the last Ice Age - currently at an annual rate of up to 1 cm.

"Through the newly developed processes for analyzing and combining radar data, we are now in a position to arrive at precise and reliable conclusions on sea level changes in recent decades for other coastal regions as well," adds Dr. Denise Dettmering. The researcher has also created a comprehensive data set for the North Sea region: The sea level there is rising by 2.6 millimeters per year, and by 3.2 millimeters in the German Bight. Local trends can be determined using the data set and the user manual - both of which are freely accessible online. "With the data, researchers can verify their climate models, for example, and public authorities can plan suitable protective measures," says Dr. Seitz.

Credit: 
Technical University of Munich (TUM)

A new direction of topological research is ready for takeoff

image: The image shows a "topolectric circuit" used to realize the topological states studied here.

Image: 
Lukas Ziegler

In a joint effort, ct.qmat scientists from Dresden, Rostock, and Würzburg have realized non-Hermitian topological states of matter in topolectric circuits. The latter term blends topological and electric, giving a name to the realization of synthetic topological matter in electric circuit networks. The main appeal of topological matter is its ability to host particularly stable and robust features that are immune to local perturbations, which might be a pivotal ingredient for future quantum technologies. The current ct.qmat results promise a knowledge transfer from electric circuits to alternative optical platforms, and have just been published in Physical Review Letters.

Topological defect tuning in non-Hermitian systems

At the center of the reported work is the circuit realization of parity-time (PT) symmetry, which has previously been studied intensively in optics. The ct.qmat team employed PT symmetry to make an open circuit system with balanced gain and loss still share a large number of features with an isolated system. This insight is the key to designing topological defect states in a setting where dissipation and amplification compensate each other, and it is accomplished through non-Hermitian PT topolectric circuits.
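The release contains no formulas, but the idea can be illustrated with the standard toy model behind such experiments: a Su-Schrieffer-Heeger (SSH) chain with balanced gain and loss on its two sublattices. The following is our sketch under that assumption, not the group's circuit model, and the parameter values are arbitrary:

```python
import numpy as np

def pt_ssh_spectrum(n_cells=30, t1=0.5, t2=1.0, gamma=0.2):
    """Spectrum of a PT-symmetric SSH chain: alternating hoppings t1
    (intra-cell) and t2 (inter-cell), plus balanced gain (+i*gamma)
    and loss (-i*gamma) on the two sublattices."""
    n = 2 * n_cells
    H = np.zeros((n, n), dtype=complex)
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2
    for i in range(n):
        H[i, i] = 1j * gamma if i % 2 == 0 else -1j * gamma
    return np.linalg.eigvals(H)

# t2 > t1 puts the open chain in the topological phase, producing
# mid-gap edge modes. Each edge mode lives on one sublattice, so its
# eigenvalue acquires an imaginary part close to +/- gamma, while the
# bulk bands stay (nearly) real below the PT-breaking threshold.
ev = pt_ssh_spectrum()
print("largest |Im E|:", np.abs(ev.imag).max())  # ~gamma, from the edge modes
```

This interplay, a mostly real spectrum despite gain and loss, with topological defect states singled out by their imaginary eigenvalues, is the kind of behavior the topolectric circuits make directly measurable.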

Potential paradigm change in synthetic topological matter

"This research project has enabled us to create a joint team effort between all locations of the Cluster of Excellence ct.qmat towards topological matter. Topolectric circuits create an experimental and theoretical inspiration for new avenues of topological matter, and might have a particular bearing on future applications in photonics. The flexibility, cost-efficiency, and versatility of topolectric circuits is unprecedented, and might constitute a paradigm change in the field of synthetic topological matter", summarizes the Würzburg scientist and study director Ronny Thomale.

Next stop: applications

Having built a one-dimensional version of a PT-symmetric topolectric circuit with a linear dimension of 30 unit cells, the research team's next step towards technology is to take on PT-symmetric circuits in two dimensions, comprising about 1,000 coupled circuit unit cells. Eventually, the insight gained through topolectric circuits may establish a milestone on the way to light-controlled computers, which would be much faster and more energy-efficient than today's electron-controlled models.

People involved

Besides the cluster members based at Julius-Maximilians-Universität Würzburg (JMU) and the Leibniz Institute for Solid State and Materials Research Dresden (IFW), scientists from the group of Professor Alexander Szameit at the University of Rostock are also involved in the publication. The Cluster of Excellence ct.qmat cooperates with Szameit's group in the field of topological photonics.

Credit: 
University of Würzburg

Turbulence in interstellar gas clouds reveals multi-fractal structures

In interstellar dust clouds, turbulence must first dissipate before a star can form through gravity. A German-French research team has now discovered that the kinetic energy of the turbulence comes to rest in a region that is very small on cosmic scales, ranging from one to several light-years in extent. The group also arrived at new results on the mathematical method: previously, the turbulent structure of the interstellar medium was described as self-similar - or fractal. The researchers found that it is not enough to describe the structure mathematically as a single fractal, a self-similar structure as known from the Mandelbrot set. Instead, they combined several different fractals into so-called multifractals. The new methods can thus be used to resolve and represent structural changes in astronomical images in detail. Applications in other scientific fields, such as atmospheric research, are also possible.
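The article does not reproduce the mathematics, but the basic idea can be sketched: estimate the generalized (Rényi) dimensions D(q) of an intensity map by box counting. A monofractal has D(q) independent of q, whereas a multifractal does not. The following is a minimal illustration under our own simplifying assumptions, not the GENESIS pipeline:

```python
import numpy as np

def generalized_dimensions(image, qs=(0, 1, 2, 3), sizes=(2, 4, 8, 16)):
    """Estimate generalized (Renyi) dimensions D(q) of a 2D map by box
    counting: partition into boxes of side s, form the measure
    p_i = box mass / total mass, and fit the scaling against log s."""
    img = np.asarray(image, dtype=float)
    img = img / img.sum()
    D = {}
    for q in qs:
        logs, vals = [], []
        for s in sizes:
            # sum the measure inside each s-by-s box (crop to a multiple of s)
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            boxes = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0]
            if q == 1:
                val = np.sum(p * np.log(p))          # q -> 1 (information) limit
            else:
                val = np.log(np.sum(p ** q)) / (q - 1)
            logs.append(np.log(s))
            vals.append(val)
        D[q] = np.polyfit(logs, vals, 1)[0]          # slope = D(q)
    return D
```

A spread between, say, D(0), D(1), and D(2) quantifies exactly the kind of intermittency-driven structural change that a single fractal dimension would miss.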

The German-French programme GENESIS (Generation of Structures in the Interstellar Medium) is a cooperation between the University of Cologne's Institute for Astrophysics, LAB at the University of Bordeaux and Geostat/INRIA Institute Bordeaux. In a highlight publication of the journal Astronomy & Astrophysics, the research team presents the new mathematical methods to characterize turbulence using the example of the Musca molecular cloud in the constellation of Musca.

Stars form in huge interstellar clouds composed mainly of molecular hydrogen - the energy reservoir of all stars. This material has a low density, only a few thousand to several tens of thousands of particles per cubic centimetre, but a very complex structure with condensations in the form of 'clumps' and 'filaments', and eventually 'cores' from which stars form by gravitational collapse of the matter.

The spatial structure of the gas in and around clouds is determined by many physical processes, one of the most important of which is interstellar turbulence. This arises when energy is transferred from large scales, such as galactic density waves or supernova explosions, to smaller scales. Turbulence is known from flows in which a liquid or gas is 'stirred', but can also form vortices and exhibit brief periods of chaotic behaviour, called intermittency. However, for a star to form, the gas must come to rest, i.e., the kinetic energy must dissipate. After that, gravity can exert enough force to pull the hydrogen clouds together and form a star. Thus, it is important to understand and mathematically describe the energy cascade and the associated structural change.

Credit: 
University of Cologne

Story tips: Un-Earthly ice, buildings in the loop, batteries unbound and 3D printing for geothermal

image: ORNL and NASA's Jet Propulsion Laboratory scientists studied the formation of amorphous ice like the exotic ice found in interstellar space and on Jupiter's moon, Europa.

Image: 
NASA/JPL-Caltech

Neutrons - Space ice, un-Earthly cold

Researchers from NASA's Jet Propulsion Laboratory and Oak Ridge National Laboratory successfully created amorphous ice, similar to ice in interstellar space and on icy worlds in our solar system. They documented that its disordered atomic behavior is unlike any ice on Earth.

The findings could help interpret data from future NASA missions such as Europa Clipper, which will assess the habitability of Jupiter's moon, Europa.

Using the Spallation Neutron Source's SNAP instrument, the scientists replicated the cold vacuum of space and added heavy water, a few molecules at a time, to a plate cooled to 25 Kelvin to produce amorphous ice. They then used neutron scattering to observe the ice's structural changes at varying temperatures before it transitioned to crystalline ice.

"Amorphous water ice is ubiquitous in the universe, yet isn't well understood. Our data could help understand exotic ice forms in our solar system and beyond," said ORNL's Chris Tulk.

Media contact: Paul Boisvert, 502.229.4466, boisvertpl@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2021-06/EuropaClipper_Poster_08_2020_002_2_.jpg

Caption: ORNL and NASA's Jet Propulsion Laboratory scientists studied the formation of amorphous ice like the exotic ice found in interstellar space and on Jupiter's moon, Europa. Credit: NASA/JPL-Caltech

Buildings - In the loop

Researchers at Oak Ridge National Laboratory have developed a novel envelope system that diverts heat or coolness away from a building and stores it for future use.

Traditional building envelopes, such as roofs and walls, use insulation to reduce heat flow. ORNL's thermally anisotropic building envelope, or TABE, adds thin conductive layers between the insulation. The conductive layers connect to a thermal loop that redirects the heat or coolness to an energy storage system.

Stored energy is then used to heat or cool the indoor space. Sensors and controls determine when to transfer energy between the envelope and the loop to maximize energy savings or peak load reductions.

"Our simulations predicted more than 50% energy savings in a residential building," ORNL's Som Shrestha said. "Results from a one-year field demonstration also showed promising results for TABE when used in walls and roofs."

Media contact: Jennifer Burke, 865.414.6835, burkejj@ornl.gov

Image/Video: https://www.ornl.gov/sites/default/files/2021-06/wall_drop_high_res_longer.gif

Caption: ORNL researchers developed an innovative insulation system that uses sensors and controls to exchange heat or coolness between a building and its thermal energy storage system, which maximizes energy savings. Credit: Andrew Sproles and Michelle Lehman/ORNL, U.S. Dept. of Energy

Recycling - Batteries unbound

Scientists at Oak Ridge National Laboratory have developed a solvent that enables a more environmentally friendly process to recover valuable materials from used lithium-ion batteries, supporting a stable domestic supply chain for new batteries and keeping old ones out of landfills.

Spent batteries are typically broken down using smelting, an expensive, energy-intensive process that releases toxic gas. The ORNL-developed alternative is a wet chemical process using triethyl phosphate to dissolve the binder material that adheres cathodes to metal foil current collectors in Li-ion batteries. The result is efficient recovery of cobalt-based cathodes, graphite and other valuable materials like copper foils that can be repurposed in new batteries.

"With this solvent, we're able to create a process that reduces toxic exposure for workers and recovers valuable, undamaged, active NMC [nickel-manganese-cobalt] cathodes, clean metal foils and other materials that can be easily reused in new batteries," said ORNL's Ilias Belharouak.

Media contact: Stephanie Seay, 865.576.9894, seaysg@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2021-06/metal_03_0.jpg

Caption: ORNL's green solvent enables environmentally friendly recycling of valuable Li-ion battery materials. Credit: Andy Sproles/ORNL, U.S. Dept. of Energy

Geothermal - Design and 3D print

Additive manufacturing can make the design and production of specialized tools for geothermal energy cheaper and more efficient, according to a study by Oak Ridge National Laboratory.

Geothermal is a renewable energy resource that requires specialized tools for drilling in harsh subsurface environments. The tools are typically produced in low quantities at high cost using conventional fabrication.

By using 3D-printing techniques, geothermal companies can take advantage of computer-aided technologies to design tools with enhanced performance characteristics. Those custom parts can then be printed using ORNL's high-strength alloys at a lower cost, especially when printing multiple parts in a single build. The lab's techno-economic analysis found ample opportunity to lower the cost of geothermal projects while improving system performance using additive manufacturing.

"The study points to the significant benefits of additive manufacturing and provides a roadmap for future work, including the development of new AM feedstocks based on advanced, high-temperature alloys," said ORNL's Yarom Polsky.

Media contact: Stephanie Seay, 865.576.9894, seaysg@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2021-06/geothermal-3dprinting.jpg

Caption: By using computer-aided design and additive manufacturing, developers can improve the performance characteristics of geothermal tools, such as this optimized rotor design, and reduce production cost. Credit: ORNL, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

Manipulating quinary charge states in solitary defects of a 2D intermetallic semiconductor

image: (a) Typical STM image of Sn2Bi with two defects: A1 and A2. (b) Differential conductance maps of A1 and A2 taken at -0.75 V. (c) Top view (top) and side view (bottom) of the ball-and-stick model of the Bi vacancy. (d-e) Color-coded dI/dV line mapping over the two identical defects A1 and A2 in (a). Vertical dotted lines indicate the center positions of A1 and A2. The dash-dotted lines represent the theoretical fitting of the position-dependent tip-induced band bending (TIBB) at each charging peak. The TIBB energies are also marked. (f) TIBB-derived charging energies of A1 and A2 for the four charging peaks as a function of charge number N (1-4).

Image: 
©Science China Press

A single atomic defect is the smallest structural unit of a solid material. Devices built on single defects reach the miniaturization limit of semiconductor devices. In the past decades, the creation and manipulation of single defects in semiconductors opened a new research field, and could be used to physically realize the "qubits" of solid-state quantum computation through spin or electron charge. Most interest has focused on spin-based quantum computing. However, spin manipulation requires optical and magnetic fields. In contrast, multiple charge states can be written and read using only an electric field, which allows a more compact device size and compatibility with modern electronic technology. Successfully charging a single atomic defect requires overcoming the Coulomb repulsion energy. Successive multiple charging increases the charging energy quadratically, so it easily exceeds the band gap, leading to charging failure. Previous works observed only one or two charging processes.
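The quadratic growth follows from the textbook Coulomb-blockade estimate (our illustration; the paper's modelling is more detailed). For a defect with effective capacitance $C$, holding $N$ extra electrons costs

$$E(N) = \frac{N^2 e^2}{2C}, \qquad \Delta E_N = E(N) - E(N-1) = \frac{(2N-1)\,e^2}{2C},$$

so each successive charging step is more expensive than the last and soon exceeds the band gap, unless the defect states are delocalized enough to make the effective $C$ large.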

To manipulate multiple (especially more than two) charge states in solitary defects, a wide band gap combined with delocalized defect states, corresponding to a small Coulomb repulsion energy, is required. Recently, Jian Gou, Xuguang Wang, Peng Cheng and Kehui Wu from the Institute of Physics, Chinese Academy of Sciences, Bingyu Xia, Wenhui Duan and Yong Xu from Tsinghua University, and Andrew Wee from the National University of Singapore proposed that a 2D intermetallic semiconductor with delocalized in-gap defect states offers a promising solution to this tradeoff between band gap and electron delocalization, and made exciting progress by realizing quinary charge states in a single atomic defect.

From the perspective of materials design, materials in the two-dimensional (2D) limit can have various chemical stoichiometries and atomic arrangements, offering numerous opportunities to construct an intermetallic semiconductor from two elements with very similar electronegativities. An example is the 2D Sn2Bi layer grown successfully on the Si(111) surface by the same group [PRL 121, 126801 (2018)]. The Sn and Bi atoms in this 2D material form an alternating honeycomb structure in which the valence electrons of both elements are saturated. Consequently, 2D Sn2Bi has a band gap of ~0.8 eV. Meanwhile, because of the very similar electronegativities of Sn and Bi, the intrinsic Bi vacancy possesses strongly delocalized in-gap defect states. Such delocalization significantly lowers the Coulomb energy when multiple electrons charge into a single Bi vacancy.

Experimentally, the metallic tip of a scanning tunneling microscope (STM) was used to detect the in-gap defect states and inject electrons into them. The electric field applied by the STM tip acts as a top gate that induces surface band bending, shifting the in-gap defect states with respect to the Fermi level. A defect state, once aligned with the Fermi level, becomes charged. Varying the lateral distance between the tip and the defect regulates the Fermi level of Sn2Bi at the defect position. Thus, as shown in the image, when the tip is moved across the Bi vacancy, four charging peaks can be observed, shifting to higher energy as the tip moves away from the defect and representing transitions among five charge states, from zero to four. By fitting the contours, the four charging energies were found to be about 100 meV each, much smaller than the band gap. Moreover, the growth of the charging energy with increasing charge number is not quadratic but sublinear. The ultralow defect charging energy and the sublinear dependence are distinct from the typical quadratic charging behavior of defects.

In theoretical calculations, the Bi vacancy has four in-gap defect states with a delocalized distribution, which results in a low defect charge concentration and weak Coulomb repulsion. Simulations of the charging energies of the four defect states of the Bi vacancy give similarly low values and the same sublinear growth behavior, supporting the discovery and manipulation of quinary charge states in a single atomic defect.

The successful quinary charging of a single atomic defect with an STM tip is the smallest demonstration yet of multiple charge states. Single electrons can be charged into the Bi vacancy one by one, controllably and continuously, providing a highly compact unit for applications in solotronics. Meanwhile, the study of single defects in Sn2Bi offers a paradigm for finding other intermetallic semiconductors with even better multiple charge states at low energy in a single atomic defect.

Credit: 
Science China Press

Best of both worlds: High entropy meets low dimensions, opens up infinite possibilities

image: -

Image: 
Tokyo Tech

The discovery of graphene, a 2D layered form of carbon, caused a paradigm shift in science and technology like no other. As this wonder material drew attention from materials scientists around the world, it spurred research on structurally similar materials, such as "van der Waals materials", which comprise strongly bonded 2D atomic layers held together by weak interlayer interactions called "van der Waals forces". These materials quickly caught on because they are highly amenable to structural modifications, such as stacking, twisting, and insertion of foreign molecules between layers, which give them interesting physical properties with several practical applications.

At about the same time, another remarkable class of materials emerged: "high-entropy alloys" (HEAs). HEAs are formed by mixing five or more metals in specific concentrations, such that a virtually infinite number of combinations is possible simply by tuning their spin (intrinsic angular momentum), charge, and composition. Notable properties of HEAs include their high toughness and corrosion resistance. Thus, just like van der Waals materials, HEAs have several unique applications.

Now, a team of scientists from Japan and China has attempted to merge these two types of materials to form something that inherits the desirable properties of both. Prof. Hideo Hosono of Tokyo Institute of Technology (Tokyo Tech), Japan, a pioneer of 2D electride materials who led the study, outlines the motivation: "The marriage of these two materials would bring us more degrees of freedom and expand the territory of both, opening up newer application possibilities."

In their study, published in the Journal of the American Chemical Society, the team first synthesized polycrystalline and single crystal samples of the new materials, which they called "high-entropy van der Waals", or HEX, materials. They then characterized the structures and chemical states of these new materials using X-ray diffraction and X-ray photoelectron spectroscopy, respectively. Among the physical properties they measured were resistivity, magnetic ordering, and heat capacity. They also measured the materials' corrosion resistance in acid, base, and organic solutions.

The HEX materials came from three categories of van der Waals (vdW) materials, namely metal dichalcogenides (of formula ME2, where M is a metal and E is sulphur, selenium, or tellurium), halides, and phosphorus trisulfides (PS3), each of which was mixed with a unique combination of transition metals, e.g., iron, nickel, cobalt, and manganese.

The team found that by introducing multiple components, they could induce several remarkable physical properties such as superconductivity (dichalcogenide HEX), magnetic ordering (PS3 HEX), metal-insulator transition (dichalcogenide HEX), and strong corrosion resistance (dichalcogenide HEX).

With these encouraging findings, the team contemplates practical applications of HEX materials. "The high corrosion resistance could be a promising route for the design of heterogeneous catalysts. The concept of high entropy could also be introduced to other low-dimensional materials, and considering their infinite possibilities, we think these materials deserve the focus of the research community," says an excited Prof. Hosono.

An infinitude of possibilities is hard to ignore!

Credit: 
Tokyo Institute of Technology

How the major Swedish forest fire of 2014 affected the ecosystem

image: Hälleskogsbrännan, Västmanland, Sweden, three months after the fire in 2014. Almost all organic soil in the area was lost, which released large amounts of carbon and nitrogen.

Image: 
Joachim Strengbom

Swedish researchers from institutions including Uppsala University have spent four years gathering data from the areas affected by the major forest fire of 2014. In their study of how the ecosystem as a whole has been altered, they could see that water quality in watercourses quickly returned to normal, while forested areas continued to lose carbon for many years after the fire.

The consequences of major forest fires remain poorly studied in Northern Europe. To improve this situation, researchers from Uppsala University, the Swedish University of Agricultural Sciences (SLU) and the Swedish Meteorological and Hydrological Institute (SMHI) decided to investigate just how much carbon and nutrients are released into the atmosphere and watercourses during fires and how quickly the ecosystem returns to its previous state. The results of this research are now being presented in the scientific journal Biogeosciences.

The 2014 fire in the Swedish province Västmanland was particularly ferocious, burning both woodland and wetland. Only in a few areas did the trees survive.

"It is not however the trees that release carbon during fires in coniferous forests. Only some of the needles and small branches up in the trees burn, while around 90% of losses come from organic soil, the so-called humus layer. Ditched peatlands that dry a great deal of organic material in the soil are therefore large point sources for emissions from the landscape. This makes it important to measure how deep the burning goes in the ground in order to estimate carbon emissions after a forest fire. We had the opportunity to do just that over a wide area in Västmanland," says Uppsala University researcher Gustaf Granath, lead author of the study.

The loss of the humus layer releases large amounts of carbon and nitrogen from woodland and risks other nutrients leaching out after the fire. It is therefore important that vegetation can quickly re-establish itself in the interests of retaining nutrients and restoring soil carbon.

The results from Västmanland demonstrate that during the fire between 145 and 160 tonnes of carbon dioxide was lost to the atmosphere per hectare. For the whole burned area this is equivalent to 10% of the carbon dioxide emitted annually by Sweden's domestic transport sector. Due to the lack of vegetation after the fire, the soil continued to lose carbon over the next few years, with a net uptake of carbon first noted during a summer month three years after the fire. Researchers were concerned that a great deal of carbon would be lost to watercourses after the fire but were unable to observe any such additional export of carbon into streams when comparing conditions before and after the fire.

Quantities of nutrients such as nitrogen and phosphorus did however increase in streams and lakes after the fire, reaching a peak within one to two months of the fire before declining over time. For many of these substances, in the region of five times as much was transported away during the first year after the fire compared to before; however, most values had returned to normal one to two years after the fire.

"This rapid leaching of nutrients after the fire is due to the lack of vegetation that could absorb the substances, as well as the large release of the substances during the fire as organic soil burned. Without living vegetation and organic soil, water flows in streams increased by 50%" explains Stephan Köhler, professor of environmental geochemistry at SLU, who initiated the measurement of water quality immediately after the fire.

Other studies have shown how vegetation in burned forest areas re-establishes itself and how carbon and nutrient stocks are rebuilt. How quickly this happens, and what parameters affect the process, will influence whether Sweden's forests could become long-term sources of CO2 to the atmosphere; this is something the researchers intend to continue studying in the area.

"While we now know more about how much and where carbon and nutrients disappear in fires, what happens next is equally interesting. There is a great deal of carbon bound in dead trees that will soon begin to decompose, while at the same time the soil and vegetation will store new carbon and build up stocks of nitrogen. It's important to follow this if we are to understand how our forests are affected when they burn," says Gustaf Granath.

Credit: 
Uppsala University

More salmonella infections in Europe: Hygiene rules help prepare poultry safely

In recent months, more than three hundred linked cases of salmonellosis have occurred in various European countries and in Canada. In the UK, the cases could be partly traced back to frozen breaded poultry meat. The cause was contamination with the bacterium Salmonella Enteritidis, which causes gastrointestinal inflammation. Salmonella is not killed by deep freezing and can remain infectious at temperatures below zero degrees Celsius.

The Robert Koch Institute (RKI) and the BfR are monitoring the situation together with the Federal Office of Consumer Protection and Food Safety (BVL). In Germany, the number of reported cases has currently risen to more than 20 across six federal states. In 2020, there were a total of about 10,000 reported cases of salmonellosis in Germany, most of which were caused by the consumption of contaminated food.

In principle, foodborne infections can be avoided by paying particular attention to hygiene when preparing raw poultry. Due to the measures taken to contain the COVID-19 pandemic, people are currently cooking at home more often and, in the course of this, convenience products such as frozen goods are also being used more frequently. It is sometimes not obvious at first glance whether such products contain pre-cooked or raw meat. Sufficient heating should always be ensured during preparation, especially of products containing raw poultry meat. In addition, bacterial contamination of other dishes via the raw meat and breading is possible. "Children and elderly people especially are at a higher risk of becoming ill from salmonella," says BfR President Prof. Dr. Dr. Andreas Hensel.

Link to the FAQs:
https://www.bfr.bund.de/en/selected_faqs_on_poultry_meat-54623.html

Investigations by the official food monitoring authorities show that raw poultry and poultry meat products - including frozen products - can be contaminated with pathogens. In 2018, Salmonella was found in 5.6% of chicken meat samples examined and Campylobacter bacteria in every second sample. For this reason, the BfR encourages adherence to its recommendations on the handling and preparation of poultry and poultry products.

It is true that germs such as salmonella and campylobacter are killed during the preparation of poultry meat if the correspondingly high temperatures are reached during cooking. But by transferring these germs to hands, household utensils and kitchen surfaces, other food can become contaminated with these pathogens. If this contaminated food is not reheated before consumption, one can fall ill. Since salmonella can multiply in food at temperatures above 7 °C, there is a particular risk when eating food that is kept unrefrigerated for a long time, such as salads and desserts.

Therefore, the following general hygiene rules should be strictly followed when preparing raw poultry:

- Store and prepare raw poultry products and other foods separately, especially when the latter are not reheated

- Store fresh poultry at a maximum of +4 °C and process and consume it by the use-by date.

- Defrost frozen poultry without packaging in the refrigerator (cover and place in a bowl to collect the defrost water).

- Dispose of packaging materials carefully and discard defrost water immediately.

- Do not wash poultry, as the splashing water can spread germs; it is better to process it directly or dab it with a paper towel, which should be disposed of directly.

- Utensils and surfaces that have come into contact with raw poultry products or defrost water must be cleaned thoroughly with warm water and washing-up liquid before further use.

- Clean hands thoroughly with warm water and soap between each preparation step.

About the BfR

The German Federal Institute for Risk Assessment (BfR) is a scientifically independent institution within the portfolio of the Federal Ministry of Food and Agriculture (BMEL) in Germany. The BfR advises the Federal Government and the States ('Laender') on questions of food, chemical and product safety. The BfR conducts its own research on topics that are closely linked to its assessment tasks.

This text version is a translation of the original German text which is the only legally binding version.

Credit: 
BfR Federal Institute for Risk Assessment

Harmonious electronic structure leads to enhanced quantum materials

image: Schematics of a single set and of multiple sets of band interactions, where E is the band energy and EF the Fermi energy; a change in chirality or magnetization would cause a change in the anomalous Hall conductivity. Also shown: a comparison of off-stoichiometric CrPt3 with elemental metals and magnetic Weyl semimetals.

Image: 
MPI CPfS

The electronic structure of a metallic material determines the behavior of its electron transport. Magnetic Weyl semimetals have a unique topological electronic structure - the electron's motion is dynamically linked to its spin. These Weyl semimetals have become among the most exciting quantum materials, allowing for dissipationless transport, low-power operation, and exotic topological fields that can accelerate the motion of the electrons in new directions. The compounds Co3Sn2S2 and Co2MnGa [1-4], recently discovered by the Felser group, have shown some of the most prominent such effects due to a set of two topological bands.

Researchers at the Max Planck Institute for Chemical Physics of Solids in Dresden, the University of South Florida in the USA, and co-workers have discovered a new mechanism in magnetic compounds that couples multiple topological bands. This coupling can significantly enhance quantum phenomena. One such effect is the anomalous Hall effect, which arises when spontaneously broken time-reversal symmetry causes a transverse acceleration of electron currents. The effects observed and predicted in single crystals of Co3Sn2S2 and Co2MnGa are sizably enhanced compared to those in conventional magnets.
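For context, the intrinsic contribution to the anomalous Hall conductivity is conventionally written as a Brillouin-zone integral of the Berry curvature of the occupied bands. The expression below is a standard result from the literature (shown in one common sign convention), not a formula specific to this study:

\sigma_{xy}^{\mathrm{int}} = -\frac{e^{2}}{\hbar} \sum_{n} \int_{\mathrm{BZ}} \frac{d^{3}k}{(2\pi)^{3}} \, f_{n}(\mathbf{k}) \, \Omega_{n}^{z}(\mathbf{k})

Here f_n(k) is the occupation of band n and Omega_n^z(k) is the z-component of its Berry curvature. Reversing the magnetization, or the chirality of the bands, flips the sign of the Berry curvature and hence of the Hall conductivity; bands of the same chirality contribute with the same sign, which is why coupling several same-chirality bands can add up to an enhanced response.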

In the current publication, we explored the compounds XPt3, where we predicted an anomalous Hall effect nearly twice as large as in the previously studied compounds. The large effect is due to sets of entangled topological bands with the same chirality, which synergistically accelerate charged particles. Interestingly, the chirality of the bands couples to the magnetization direction and determines the direction of the acceleration of the charged particles. This chirality can be altered by chemical substitution: our theoretical results show the maximum effect for CrPt3, whereas MnPt3 shows a significantly reduced effect due to a change in the order of the chiral bands.

Advanced thin films of CrPt3 were grown at the Max Planck Institute. In various films we found a pristine anomalous Hall effect, robust against disorder and temperature variation. This result strongly indicates that the topological character dominates even at finite temperatures. The measured effect is nearly twice as large as any intrinsic effect previously measured in thin films. The advantage of thin films is their ease of integration into quantum devices that exploit an interplay of other degrees of freedom, such as charge, spin, and heat. With such a strong response, XPt3 films could be utilized for Hall sensors, charge-to-spin conversion in electronic devices, and charge-to-heat conversion in thermoelectric devices.

Credit: 
Max Planck Institute for Chemical Physics of Solids

Self-aware materials build the foundation for living structures

image: An illustration of the novel self-aware metamaterial system as used in a coronary artery stent. The design can sense restenosis when used in a stent, and the same design can be used at a large scale in bridge beams to self-monitor for defects on the structure.

Image: 
iSMaRT Lab

From the biggest bridges to the smallest medical implants, sensors are everywhere, and for good reason: The ability to sense and monitor changes before they become problems can be both cost-saving and life-saving.

To better address these potential threats, the Intelligent Structural Monitoring and Response Testing (iSMaRT) Lab at the University of Pittsburgh Swanson School of Engineering has designed a new class of materials that serve as both sensing mediums and nanogenerators, and are poised to revolutionize multifunctional material technology big and small.

The research, recently published in Nano Energy, describes a new metamaterial system that acts as its own sensor, recording and relaying important information about the pressure and stresses on its structure. The so-called "self-aware metamaterial" generates its own power and can be used for a wide array of sensing and monitoring applications.

The most innovative facet of the work is its scalability: the same design works at both nanoscale and megascale simply by tailoring the design geometry.

"There is no doubt that the next generation materials need to be multifunctional, adaptive and tunable." said Amir Alavi, assistant professor of civil and environmental engineering and bioengineering, who leads the iSMaRT Lab. "You can't achieve these features with natural materials alone--you need hybrid or composite material systems in which each constituent layer offers its own functionality. The self-aware metamaterial systems that we've invented can offer these characteristics by fusing advanced metamaterial and energy harvesting technologies at multiscale, whether it's a medical stent, shock absorber or an airplane wing."

While nearly all of the existing self-sensing materials are composites that rely on different forms of carbon fibers as sensing modules, this new concept offers a completely different, yet efficient, approach to creating sensor and nanogenerator material systems. The proposed concept relies on performance-tailored design and assembly of material microstructures.

The material is designed such that, under pressure, contact electrification occurs between its conductive and dielectric layers, creating an electric charge that relays information about the condition of the material. In addition, it naturally inherits the outstanding mechanical properties of metamaterials, such as negative compressibility and ultra-high resistance to deformation. The power generated by the built-in triboelectric nanogenerator mechanism eliminates the need for a separate power source: such material systems can harvest hundreds of watts of power at large scales.
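As a rough illustration of how pressure becomes a readable signal, the sketch below models the open-circuit voltage of a generic contact-separation triboelectric generator, V_oc(t) = sigma * x(t) / eps0, where sigma is the triboelectric surface charge density and x(t) is the gap between the charged layers. This is a textbook model of contact electrification, not the authors' design; the charge density, gap amplitude and frequency below are illustrative assumptions.

# Minimal sketch of a contact-separation triboelectric generator.
# Model: V_oc(t) = sigma * x(t) / eps0 (standard open-circuit expression).
# All numerical values are illustrative assumptions, not from the paper.
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m
SIGMA = 10e-6      # assumed triboelectric surface charge density, C/m^2

def open_circuit_voltage(gap_m):
    """Open-circuit voltage for a given layer separation (metres)."""
    return SIGMA * gap_m / EPS0

# Simulate a cyclic press/release: the gap oscillates between 0 and 0.5 mm
# at 5 Hz over one second of loading.
t = np.linspace(0.0, 1.0, 500)
gap = 0.25e-3 * (1.0 - np.cos(2 * np.pi * 5 * t))
v_oc = open_circuit_voltage(gap)

print(f"peak open-circuit voltage: {v_oc.max():.1f} V")

In this simple model the voltage directly traces the mechanical loading cycle, which is the sense in which such a material "records" the pressure acting on it without any external power source.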

A "Game Changer," from the Human Heart to Space Habitats

"We believe this invention is a game changer in metamaterial science where multifunctionality is now gaining a lot of traction," said Kaveh Barri, lead author and doctoral student in Alavi's lab. "While a substantial portion of the current efforts in this area has been merely going into exploring new mechanical properties, we are going a step further by introducing revolutionary self-charging and self-sensing mechanisms into the fabric of material systems."

"Our most exciting contribution is that we are engineering new aspects of intelligence into the texture of metamaterials. We can literally transform any material system into sensing mediums and nanogenerators under this concept," added Gloria Zhang, co-lead author and doctoral student in Alavi's lab.

The researchers have created multiple prototype designs for a variety of civil, aerospace and biomedical engineering applications. At a small scale, a heart stent based on this design can monitor blood flow and detect signs of restenosis, the re-narrowing of an artery. At a much larger scale, the same design was used to create a mechanically tunable beam suitable for a bridge, capable of self-monitoring for defects on its structure.

These materials have enormous potential beyond Earth as well. Self-aware materials use neither carbon fibers nor coils; they are light, low in density, low in cost and highly scalable, and they can be fabricated from a broad range of organic and inorganic materials. Those qualities make them ideal for use in future space exploration.

"To fully understand the huge potential of this technology, imagine how we can even adapt this concept to build structurally-sound self-powering space habitats using only indigenous materials on Mars and beyond. We are actually looking into this right now," said Alavi. "You can create nano-, micro-, macro- and mega-scale material systems under this concept. That is why I am confident that this invention can build the foundations for a new generation of engineering living structures that respond to the external stimuli, self-monitor their condition, and power themselves."

Credit: 
University of Pittsburgh