Tech

Following nature's cue, researchers build successful, sustainable industrial networks

image: Research shows that design guidelines based on the connection characteristics of food webs can create successful industry networks.

Image: 
Texas A&M University College of Engineering

By translating the pattern of interconnections in nature's food webs to industrial networks, researchers at Texas A&M University have delineated guidelines for setting up successful industrial communities. The researchers said this guidance can facilitate economic growth, lower emissions and reduce waste while simultaneously ensuring that partnering industries can recover from unexpected disturbances.

"Industries can often partner up to exchange byproducts and over time these industries might form bigger communities. While these networks sound quite beneficial to all industry partners within the community, they are not always successful," said Dr. Astrid Layton, assistant professor in the J. Mike Walker' 66 Department of Mechanical Engineering. "We tried to solve this problem by providing design guidelines inspired by nature's food webs so that the overall system will be both eco-friendly and save money for everyone."

The researchers published their study in the journal Resources, Conservation & Recycling.

An essential part of running any industry is identifying resources, such as raw materials, that are economically viable. Rather than having each industry work out these details independently, an emerging trend is to form eco-industrial parks, or networks of partnering industries. Businesses belonging to these parks work symbiotically: much like in nature, the industries mutually benefit from each other. For example, one industry's waste is another one's raw material -- often saving both partners money.

When successful, industrial symbiosis can help to reduce raw material use, costs and emissions while generating sizeable financial returns. However, there have also been instances where eco-industrial parks have not worked.

"When eco-industrial parks started to show success, people took note and began trying to form their own community of companies that exchanged byproducts, but these 'from-the-ground-up' designs can be hit or miss," said Layton. "The underlying reasons could be many, perhaps economic or if, for instance, one company goes bust, the whole system breaks down because they're all connected."

To combat this problem, the researchers sought to provide guidelines on how best to design these industry communities to leverage the benefits of industrial symbiosis while avoiding the downfalls.

For their analysis, Layton and her team turned to food webs, which are resilient to disturbances and produce minimal waste. These biological networks are made up of multiple food chains linking predators and prey. The organization of the interconnected chains in food webs has been extensively studied over the years using quantitative measures. Of the many metrics, the researchers were especially interested in one called nestedness.

This metric, which ranges from 0 to 1, reflects where connections occur within a network. Values closer to one indicate a hierarchy in the connections: one actor is connected to all other actors in the network, another actor is connected to a subset of those, and so on. For example, a highly nested structure would be one where certain species of bees pollinate a wide variety of plants while other "specialist" bees pollinate only a small number of plants within that much wider set. Poorly nested structures have values closer to zero, and every actor in the network may be connected to every other.
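
Nestedness can be quantified in several ways; the article does not reproduce the study's exact formula, but one widely used index for binary interaction matrices is NODF (nestedness based on overlap and decreasing fill). The sketch below is an illustrative stand-in, not the authors' code: it scores a small exchange network on a 0-100 scale (divide by 100 to match the 0-1 range described above).

```python
import numpy as np
from itertools import combinations

def nodf(matrix):
    """NODF nestedness (Almeida-Neto et al. 2008) of a binary matrix,
    e.g. rows = byproduct producers, columns = byproduct users.
    Returns 0 (no nesting) to 100 (perfectly nested)."""
    m = np.asarray(matrix, dtype=bool)

    def paired_scores(mat):
        fills = mat.sum(axis=1)
        scores = []
        for i, j in combinations(range(mat.shape[0]), 2):
            lo, hi = (i, j) if fills[i] < fills[j] else (j, i)
            # Pairs with equal fill (or an empty row) contribute zero.
            if fills[lo] == 0 or fills[lo] == fills[hi]:
                scores.append(0.0)
            else:
                overlap = np.logical_and(mat[hi], mat[lo]).sum()
                scores.append(100.0 * overlap / fills[lo])
        return scores

    # Average the paired-overlap scores over all row pairs and column pairs.
    return float(np.mean(paired_scores(m) + paired_scores(m.T)))

# A perfectly nested 4x4 exchange network: each actor's partners are a
# subset of the partners of the better-connected actor above it.
nested = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 1, 0, 0],
          [1, 0, 0, 0]]
print(nodf(nested))   # 100.0
```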

But unlike food webs, many industrial networks have been shown to have low nestedness. So, the researchers tested whether increasing the nestedness of industrial networks could promote financial benefits and the ability of industries to recover from disturbances.

For their study, Layton and her team considered nine industries, including a fertilizer plant, a pharmaceutical plant and a wastewater treatment facility, that could participate in five types of water-based exchanges. Next, they created 4,000 different network designs, broken up into 200 designs at each of 20 different nestedness values.

They found that when the network design had high nestedness, freshwater usage was lower and the network survived unforeseen disturbances, which ultimately translated to more savings and resource conservation. They also found that in more specific scenarios, such as when the industries were spread out geographically and the resources were very expensive, high nestedness in industry networks was again more advantageous.

The researchers noted that they analyzed only water exchanges in the current study and their future work will address other types of resource exchanges and environmental impacts. However, they said the benefit of high nestedness in industrial networks was generalizable to other exchanges as well.

"Water is the worst-case scenario compared to other exchange products in terms of infrastructure costs," said Layton. "Our results have identified situations when high nestedness is an advantage, which can then guide the design of the network. This work will support success both from an economic perspective and resilience perspective."

Credit: 
Texas A&M University

Higher education does not influence how the brain ages

image: Education was not related to rate of change in cortical volume.

Image: 
Fredrik Magnussen, Center for Lifespan Changes in Brain and Cognition, University of Oslo, Norway

All brains shrink with age, and the dominant view has been that more education slows the rate of shrinking.

However, the evidence has been inconclusive because studies have not been able to track the rate of change over time. Until now.

Brains shrink at the same rate

Through the pooling of several European brain data sets from the Lifebrain consortium, the current study has been able to track brain changes in individuals over many years.

The researchers found that whereas highly educated people have slightly larger brain volumes than less educated people, their brains shrink at the same rate throughout life.

“This finding suggests that higher education does not influence brain aging,” says Lars Nyberg from Umeå University in Sweden, first author of the study and a member of the Lifebrain consortium.

Measured brain shrinkage over time

The researchers measured brain aging by measuring the volume of the cortical mantle and hippocampus regions of the brain in MRI scans from more than 2,000 participants in the Lifebrain consortium and UK Biobank. These areas of the brain are prone to shrinkage over time as a natural part of aging. Participants’ brains were scanned up to three times over an 11-year period, in what is known as a “longitudinal” study.

“This is what makes this study unique,” says Nyberg. “The study is a large-scale longitudinal test, with replication across two independent samples, and is one of the largest of its kind.”

The researchers compared the rate of shrinkage of these areas in people who had attained higher education before the age of 30 and those who had not. The participants ranged from 29 to 91 years of age.
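
One common way to frame such a longitudinal comparison is a mixed-effects model in which an education-by-time interaction tests whether the shrinkage rate differs between groups. The sketch below is only an illustration of that idea, not the Lifebrain analysis itself; the file and column names are hypothetical.

```python
# A minimal sketch (not the Lifebrain analysis) of a longitudinal "rate of
# change" comparison: a linear mixed model in which the education-by-time
# interaction tests whether brain volume shrinks at a different rate in more
# highly educated participants. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal_mri.csv")
# expected columns: subject_id, years_since_baseline, cortical_volume,
#                   higher_education (0/1), baseline_age

model = smf.mixedlm(
    "cortical_volume ~ years_since_baseline * higher_education + baseline_age",
    data=df,
    groups=df["subject_id"],               # random intercept per participant
    re_formula="~years_since_baseline",    # random slope: individual shrinkage rates
)
result = model.fit()
print(result.summary())

# A sizeable 'higher_education' main effect with a near-zero, non-significant
# 'years_since_baseline:higher_education' interaction corresponds to the
# finding reported here: education relates to volume, not to its rate of change.
```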

Higher education is modestly related to bigger brains

Whereas the rate of brain change was similar in participants with and without higher education, the researchers found that those with higher education had slightly larger cortical volume in a few regions, but even in these regions the rate of change was unrelated to education.

“The study does not say that education is not important,” stresses Anders Fjell from the University of Oslo, also one of the main authors of the paper.

“Education is associated with advantages in life, but we cannot from this study say whether education caused these advantages. If people with high education have larger brains to begin with, this may delay the onset of dementia or other conditions associated with lower cognitive functioning,” says Fjell.

“The bottom line is that all people’s brains shrink eventually, but the rate of this shrinkage does not seem to be affected by how many years you spent in school,” concludes Fjell.

Credit: 
Norwegian Institute of Public Health

Shopping online or locally - an individual choice

image: Vacancies in inner cities: Online business has an effect on local shops.

Image: 
Thomas Wieland, KIT/IfGG

The obstacles associated with shopping, such as shipping costs or the time needed to get to the shop, are crucial to the individual choice of where to shop. When deciding between online and local shopping, personal opinions about purchasing security, environmental protection and working conditions also play a role. These are the findings of a study using microeconometric models at Karlsruhe Institute of Technology (KIT). Some of the results of the representative study, funded by the German Research Foundation, are reported in Papers in Applied Geography and Raumforschung und Raumordnung.

For the evaluations reported, data were collected in 2019, that is, before local shopping was restricted due to the pandemic. "During the lockdowns, local retail shops selling products that are not needed daily are closed. Moreover, voluntary changes in the behavior of the population can be observed. Purchasing power is now shifting towards online business, of course," says Dr. Thomas Wieland, head of the project "Zur Raumwirksamkeit des Onlinehandels" (on the regional effects of online trade) that started in 2018 at KIT's Institute of Geography and Geoecology (IfGG). In its second phase, which started in April 2021, the project will focus on whether the temporary lockdowns caused this shift to become lasting. The project will end in 2022.

Various Analog and Digital Shopping Channels

Digitalization of trade increasingly influences inner cities. Consumers can choose among a number of analog and digital shopping channels, ranging from online shops to local stores to cross-channel retailing that combines both. "In principle, most customers use both shopping channels, but they have personal preferences, with their place of residence and subjective attitudes playing a role," says Dr. Thomas Wieland. "Younger people tend to shop online more frequently than older persons," the scientist adds. He underlines that the preference for a certain shopping channel also depends on subjective opinions, such as doubts about data security during online shopping or about whether the desired product will actually be delivered. According to Wieland, "a major criterion for many persons" is their personal opinion on whether online trade negatively affects the environment or involves poor working conditions at suppliers.

Obstacles in the Shopping Process Are Crucial

"The purchasing decision is influenced crucially by transaction costs, that is the different obstacles that have to be overcome in the shopping process," the economic geographer says. Whether the next electronics market can be reached in 5 or 25 minutes, whether shipping costs 3 or 6 euros, and whether it is raining and the person does not like to get wet when riding to the next shop - all this influences the choice between online and offline shopping, although not all these parameters can be studied. "In the food sector, good accessibility is a decisive parameter," Wieland says.

For the study, which covers the electronics and food sectors, Wieland and his team interviewed 1,400 consumers in the largely urban Middle Upper Rhine region around Karlsruhe and in the more rural southern part of Lower Saxony around Göttingen. The answers were evaluated with the help of microeconometric models. "The models operate on the level of the individual consumers and the individual online or offline shops," the expert explains. "So far, studies combining the various potential approaches to explaining this behavior have been lacking."
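
The article does not spell out the model equations, but the basic idea behind such discrete-choice models can be sketched as follows: each shopping alternative is assigned a utility that penalizes transaction costs such as travel time and shipping fees, and choice probabilities follow a multinomial logit. The alternatives and coefficients below are invented purely for illustration, not taken from the KIT study.

```python
# A minimal illustrative sketch (not the KIT model itself) of a
# microeconometric choice model: utilities penalize transaction costs,
# and choice probabilities follow a multinomial logit.
import numpy as np

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(U_i) / sum_j exp(U_j)."""
    expu = np.exp(utilities - np.max(utilities))   # subtract max for numerical stability
    return expu / expu.sum()

# Hypothetical alternatives for buying a smartphone:
# (travel time in minutes, shipping cost in euros, attitude bonus).
alternatives = {
    "local electronics store": (25.0, 0.0, 0.5),   # bonus: values purchase security
    "online shop":             ( 0.0, 4.9, 0.0),
    "click and collect":       (25.0, 0.0, 0.2),
}

beta_time, beta_cost = -0.05, -0.30   # transaction-cost penalties (illustrative)
utilities = np.array([beta_time * t + beta_cost * c + bonus
                      for t, c, bonus in alternatives.values()])

for name, p in zip(alternatives, choice_probabilities(utilities)):
    print(f"{name:>25s}: {p:.2f}")
```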

Integrated Online Shops Strengthen Retail Sector

Two thirds of the respondents stated that they informed themselves about products and compared prices on the internet, irrespective of whether the purchase then took place online or offline. Model analyses revealed that suppliers operating an integrated online shop have far more clients. "Cross-channel integration is a good way for an owner-operated local retail business to improve its own market position," Wieland says. Information on availability is an important parameter: "Persons who see online that the desired smartphone is available in the city center may go there directly to buy it." However, some medium-sized companies cannot be found online at all. "Cooperatives active in the electronics sector or city management services may provide support and make businesses visible online," the scientist says. It is important to consider online trade in regional development and urban planning as well as in businesses' site and expansion planning, he emphasizes.

Click and Collect: Less Used Prior to the Lockdowns

The study also reveals that residents of big cities are more inclined to shop online. Click and collect services were found to have a very small effect. However, the data were collected in 2019, prior to the local shopping restrictions caused by the pandemic. In its second phase, which started in April 2021, the project now covers another investigation area in Saxony-Anhalt and additionally determines whether the temporary lockdowns have led to a lasting shift of business towards online shopping. The project is funded with about EUR 200,000 for personnel and materials by the German Research Foundation (DFG). (afr)

Credit: 
Karlsruher Institut für Technologie (KIT)

Ozone pollution in Germany falls thanks to lower nitrogen oxide emissions

Summer is the ozone season: The harmful gas forms at ground level on hot, sunny days. In recent years, however, the rise in ozone levels over the summer months has not been as pronounced in Germany as it was previously. According to a new study, this is primarily due to a reduction in nitrogen oxide emissions. This trend can be observed across Germany's southwestern regions in particular, while Berlin lags behind.

Nitrogen oxides (NOx) are among the precursors of ground-level ozone, which can irritate the eyes, nose and throat and aggravate respiratory conditions. The emissions are primarily produced during combustion processes in engines and industrial facilities. "Traffic is the most significant source of nitrogen oxide emissions in urban centers. In recent years, emissions have fallen significantly, partly due to improved vehicle exhaust values", explains lead author Noelia Otero (IASS Potsdam/FU Berlin). Together with her colleagues, Otero wanted to learn more about the effect of falling NOx emissions on the formation of ground-level ozone.

The researchers used long-term measurements of hourly ozone concentrations, in conjunction with measurements of nitrogen oxide concentrations gathered at stations across Germany, to determine the relationship between temperature and ozone over the periods 1999 to 2008 and 2009 to 2018. The researchers discovered that warm temperatures caused ozone concentrations to rise more in the first period than in the second. This demonstrates that the reduction in emissions has a beneficial effect on ozone formation.
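
The press release does not detail the statistical method, but the comparison it describes can be illustrated with a simple ozone-temperature regression per period; the file and column names below are hypothetical, and this is only a sketch of the idea, not the study's analysis.

```python
# A rough sketch (not the study's exact method): estimate how strongly summer
# ozone responds to temperature in each period and check whether that
# sensitivity has weakened. File and column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("station_hourly.csv", parse_dates=["time"])
# expected columns: time, ozone_ugm3, temperature_c

def ozone_temperature_slope(data):
    """Least-squares slope of ozone vs. temperature, in ug/m3 per degree C."""
    slope, _intercept = np.polyfit(data["temperature_c"], data["ozone_ugm3"], 1)
    return slope

summer = df[df["time"].dt.month.isin([6, 7, 8])]           # the ozone season
early = summer[summer["time"].dt.year.between(1999, 2008)]
late = summer[summer["time"].dt.year.between(2009, 2018)]

print("1999-2008 slope:", round(ozone_temperature_slope(early), 2))
print("2009-2018 slope:", round(ozone_temperature_slope(late), 2))
# A smaller slope in the later period mirrors the finding that warm days now
# push ozone up less than they used to, consistent with lower NOx emissions.
```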

As an example, the researchers compared data from measuring stations located at a town square in Wörth am Rhein (Rhineland-Palatinate) and on Nansenstraße in Berlin-Neukölln. In Wörth, nitrogen oxide concentrations declined by 35 % between the first and second periods, while in Berlin they sank by just 7.5 %. In Wörth, the rise in ozone concentrations in response to rising temperatures was weaker than in the first period; this effect could not be observed in Berlin, however.

According to the researchers, these changes in ozone concentrations are likely to be driven not only by NOx emissions, but also by another ozone precursor: volatile organic compounds (VOCs), which derive from a range of sources, including traffic, industry, solvents and even vegetation. "In the absence of long-term data on volatile organic compounds, further analysis with short-term measurements of a range of VOCs would be necessary to quantify their contribution to the observed changes," says co-author Tim Butler (IASS Potsdam/FU Berlin). The researchers also note the need for further reductions in NOx emissions in Berlin to reduce ozone pollution in summer.

Credit: 
Research Institute for Sustainability (RIFS) – Helmholtz Centre Potsdam

New research uncovers continental crust emerged 500 million years earlier than thought

image: An artist's conception of the early Earth, showing a surface bombarded by large impacts that result in the extrusion of magma onto the surface. At the same time, distal portions of the planet's surface may have retained liquid water.

Image: 
Simone Marchi/SwRI

MUNICH -- The first emergence and persistence of continental crust on Earth during the Archaean (4 billion to 2.5 billion years ago) has important implications for plate tectonics, ocean chemistry, and biological evolution, and it happened about half a billion years earlier than previously thought, according to new research being presented at the EGU General Assembly 2021.

Once land becomes established through dynamic processes like plate tectonics, it begins to weather and add crucial minerals and nutrients to the ocean. A record of these nutrients is preserved in the ancient rock record. Previous research used strontium isotopes in marine carbonates, but such carbonates are usually scarce or altered in rocks older than 3 billion years.

Now, researchers are presenting a new approach to trace the first emergence of land using a different mineral: barite.

Barite forms from a combination of sulfate coming from ocean water mixing with barium from hydrothermal vents. Barite holds a robust record of ocean chemistry within its structure, useful for reconstructing ancient environments. "The composition of the piece of barite we pick up in the field now, which has been on Earth for three and a half billion years, is exactly the same as it was when it actually precipitated," says Desiree Roerdink, a geochemist at University of Bergen, Norway, and team leader of the new research. "So in essence, it is really a great recorder to look at processes on the early Earth."

Roerdink and her team tested six different deposits on three different continents, ranging from about 3.2 billion to 3.5 billion years old. They calculated the ratio of strontium isotopes in the barite and, from there, inferred when weathered continental rock made its way to the ocean and became incorporated into the barite. Based on the data captured in the barite, they found that weathering started about 3.7 billion years ago -- about 500 million years earlier than previously thought.

"That is a huge time period," Roerdink says. "It essentially has implications for the way that we think about how life evolved." She added that scientists usually think about life starting in deep sea, hydrothermal settings, but the biosphere is complex. "We don't really know if it is possible that life could have developed at the same time on land," she noted, adding "but then that land has to be there."

Lastly, the emergence of land says something about plate tectonics and the early emergence of a geodynamic Earth. "To get land, you need processes operating to form that continental crust, and form a crust that is chemically different from the oceanic crust," Roerdink says.

Credit: 
European Geosciences Union

Scientists have cultured the first stable coral cell lines

image: A colony of Acropora tenuis grown in a natural sea environment and transferred to an aquarium to induce spawning.

Image: 
Reproduced from

Researchers have successfully grown cells from the stony coral, Acropora tenuis, in petri dishes

The cell lines were created by separating out cells from coral larvae, which then developed into eight distinct cell types

Seven out of eight cell types were stable and could grow indefinitely, remaining viable even after freezing

Some of the cell types represented endoderm-like cells, and could therefore shed light on how coral interacts with photosynthesizing algae and how bleaching occurs

The cell lines could be used in many avenues of coral cell research, including coral development, coral farming and the impact of climate change and pollution

Researchers in Japan have established sustainable cell lines in a coral, according to a study published today in Marine Biotechnology.

Seven out of eight cell cultures, seeded from the stony coral, Acropora tenuis, have continuously proliferated for over 10 months, the scientists reported.

"Establishing stable cells lines for marine organisms, especially coral, has proven very difficult in the past," said Professor Satoh, senior author of the study and head of the Marine Genomics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST). "This success could prove to be a pivotal moment for gaining a deeper understanding of the biology of these vitally important animals."

Acropora tenuis belongs to the Acroporidae family, the most common type of coral found within tropical and subtropical reefs. These stony corals are fast growers and therefore play a crucial role in the structural formation of coral reefs.

However, Acroporidae corals are particularly susceptible to changes in ocean conditions, often undergoing bleaching events when temperatures soar or when oceans acidify. Establishing knowledge about the basic biology of these corals through cell lines could one day help protect them against climate change, explained Professor Satoh.

Creating the cultures

In the study, Professor Satoh worked closely with Professor Kaz Kawamura from Kochi University - an expert in developing and maintaining cell cultures of marine organisms.

Since adult coral host a wide variety of microscopic marine organisms, the group chose to try creating the cell lines from coral larvae to reduce the chances of cross-contamination. Another benefit of using larval cells was that they divide more easily than adult cells, potentially making them easier to culture.

The researchers used coral specimens in the lab to isolate both eggs and sperm and fertilize the eggs. Once the coral larvae developed, they separated the larvae into individual cells and grew them in petri dishes.

Initially, the culture attempts ended in failure. "Small bubble bodies appeared and then occupied most of the petri dish," said Professor Kaz Kawamura. "We later found that these were the fragments of dying stony coral cells."

In the second year, the group discovered that by adding a protease called plasmin to the cell culture medium, right at the beginning of the culture, they could stop the stony coral cells from dying and keep them growing.

Two to three weeks later, the larval cells developed into eight different cell types, which varied in color, form and gene activity. Seven out of the eight continued to divide indefinitely to form new coral cells.

Exploring the symbiosis integral to coral survival

One of the most exciting advancements of this study was that some of the cell lines were similar in form and gene activity to endodermal cells. The endoderm is the inner layer of cells formed about a day after the coral eggs are fertilized.

Importantly, it is the cells in the endoderm that incorporate the symbiotic algae, which photosynthesize and provide nutrients to sustain the coral.

"At this point in time, the most urgent need in coral biology is to understand the interaction between the coral animal and its photosynthetic symbiont at the cellular level, and how this relationship collapses under stress, leading to coral bleaching and death," said Professor David Miller, a leading coral biologist from James Cook University, Australia, who was not involved in the study.

He continued: "Subject to confirmation that these cells in culture represent coral endoderm, detailed molecular analyses of the coral/photosymbiont interaction would then be possible - and from this, real advances in understanding and perhaps preventing coral bleaching could be expected to flow."

For Professor Satoh, his interest is in how the photosymbiotic algae cells, which are almost as big as the larval cells, initially enter the coral.

"The algae are incorporated into the coral cells around a week after the larvae first develop," said Prof. Satoh. "But no one has yet observed this endosymbiotic event on a single-cell level before."

A new era for coral cell research

The scientists also found that the coral cell lines were still viable after being frozen with liquid nitrogen and then thawed. "This is crucial for being able to successfully supply the coral cell lines to research laboratories across the globe," said Professor Satoh.

The implications for future research using these cell lines are far-reaching, ranging from research on how single coral cells respond to pollution or higher temperatures, to studying how corals produce the calcium carbonate that builds their skeleton.

Research could also provide further insight into how corals develop, which could improve our ability to farm coral.

In future research, the team hopes to establish cell lines that are clonal, meaning every cell in the culture is genetically identical.

"This will give us a much clearer idea of exactly which coral cell types we are growing, for example gut-like cells or nerve-like cells, by looking at which genes are switched on and off in the cells," said Professor Satoh.

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Researchers complete high-precision time-frequency dissemination

Prof. PAN Jianwei and his colleagues from the University of Science and Technology of China of the Chinese Academy of Sciences carried out a high-precision time-frequency dissemination experiment over a high-loss free-space link between remote locations, simulating the channel loss, atmospheric noise and transmission delay of a high-precision time-frequency link between the ground and a satellite in high Earth orbit.

The link experiment shows that the instability of time-frequency transfer via a satellite in medium-to-high Earth orbit could reach the 10⁻¹⁸ level at 10,000 s, enough to exploit the performance of optical atomic clocks and to enable intercontinental comparisons of ground clocks. The study was published in the journal Optica.

High-precision time-frequency dissemination and comparison techniques are used in all kinds of large-scale precision measurement systems. The international metrology system is currently being redefined in terms of quantum standards, and the frequency standard lies at the core of precision measurement and international metrology: all base physical quantities except the amount of substance (the mole) are directly or indirectly traced to frequency. At the same time, optical frequency standards are developing rapidly, with accuracies two orders of magnitude better than that of the cesium frequency standard on which the current definition of the second is based.

The most important part of the technical roadmap for redefining the second is intercontinental comparison of optical frequency standards at the 10⁻¹⁸ level. Achieving ultra-long-distance, high-precision time-frequency comparison or dissemination remains an unsolved problem, and satellite-ground links are recognized as the most feasible solution.

In this study, the researchers used a dual-comb linear optical sampling method for time measurement. Compared to continuous-wave or single-photon link methods, this approach offers both high time resolution and a large non-ambiguity range.

The researchers first comprehensively analyzed parameters such as satellite-ground link loss, the Doppler effect, link time asymmetry and atmospheric noise, and found that high-orbit links enable more stable time-frequency comparison or dissemination thanks to their long pass duration, large common-view range and weaker relativistic effects.

Then they performed a ground-based experiment to simulate a high-orbit satellite-ground time-frequency transmission link, reproducing the link loss, atmospheric noise and delay effects.

Through low-noise optical comb amplification, a low-loss, high-stability dual-comb interference optical path, and high-precision, high-sensitivity linear sampling, the researchers built a 16-kilometer horizontal atmospheric free-space, high-precision dual-comb time-frequency transmission link in Shanghai. The frequency transmission link achieved an instability of 4 × 10⁻¹⁸ at 3,000 s with an average loss of 72 dB and a 1 s link delay.
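
Instability figures such as 4 × 10⁻¹⁸ at 3,000 s are conventionally quoted as an Allan deviation at a given averaging time. The sketch below applies the standard overlapping Allan deviation estimator to a record of time-offset measurements; it is the textbook definition, not the authors' analysis code, and the synthetic data are only for illustration.

```python
# Overlapping Allan deviation of fractional frequency, computed from a record
# of time-offset (phase) measurements. Textbook estimator; synthetic data.
import numpy as np

def overlapping_allan_deviation(time_offsets_s, sample_period_s, tau_s):
    """Overlapping Allan deviation at averaging time tau, from phase data x_i (seconds)."""
    x = np.asarray(time_offsets_s, dtype=float)
    m = int(round(tau_s / sample_period_s))        # samples per averaging interval
    if m < 1 or len(x) < 2 * m + 1:
        raise ValueError("record too short for this averaging time")
    # sigma_y^2(tau) = < (x_{i+2m} - 2 x_{i+m} + x_i)^2 > / (2 tau^2)
    second_differences = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(second_differences ** 2) / (2.0 * tau_s ** 2))

# Synthetic example: white phase noise with 50 fs RMS, sampled once per second.
rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 50e-15, size=20_000)
for tau in (1, 10, 100, 1000):
    sigma = overlapping_allan_deviation(offsets, 1.0, tau)
    print(f"tau = {tau:>5d} s   sigma_y = {sigma:.1e}")
```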

Based on these results, they expect that the instability of time-frequency transfer via a high-orbit satellite-ground link could reach the 10⁻¹⁸ level at 10,000 s.

Credit: 
University of Science and Technology of China

Synthetic gelatin-like material mimics lobster underbelly's stretch and strength

image: A steel particle is shown piercing through the nanofibrous hydrogel and exiting at a reduced velocity. The difference in velocity before and after gave the researchers a direct measurement of the material's impact resistance, or the amount of energy it can absorb.

Image: 
Courtesy of Jiahua Ni, Shaoting Lin, Xuanhe Zhao, et al

A lobster's underbelly is lined with a thin, translucent membrane that is both stretchy and surprisingly tough. This marine under-armor, as MIT engineers reported in 2019, is made from the toughest known hydrogel in nature, which also happens to be highly flexible. This combination of strength and stretch helps shield a lobster as it scrabbles across the seafloor, while also allowing it to flex back and forth to swim.

Now a separate MIT team has fabricated a hydrogel-based material that mimics the structure of the lobster's underbelly. The researchers ran the material through a battery of stretch and impact tests, and showed that, similar to the lobster underbelly, the synthetic material is remarkably "fatigue-resistant," able to withstand repeated stretches and strains without tearing.

If the fabrication process could be significantly scaled up, materials made from nanofibrous hydrogels could be used to make stretchy and strong replacement tissues such as artificial tendons and ligaments.

The team's results are published in the journal Matter. The paper's MIT co-authors include postdocs Jiahua Ni and Shaoting Lin; graduate students Xinyue Liu and Yuchen Sun; professor of aeronautics and astronautics Raul Radovitzky; professor of chemistry Keith Nelson; mechanical engineering professor Xuanhe Zhao; and former research scientist David Veysset PhD '16, now at Stanford University; along with Zhao Qin, assistant professor at Syracuse University, and Alex Hsieh of the Army Research Laboratory.

Nature's twist

In 2019, Lin and other members of Zhao's group developed a new kind of fatigue-resistant material made from hydrogel -- a gelatin-like class of materials made primarily of water and cross-linked polymers. They fabricated the material from ultrathin fibers of hydrogel, which aligned like many strands of gathered straw when the material was repeatedly stretched. This workout also happened to increase the hydrogel's fatigue resistance.

"At that moment, we had a feeling nanofibers in hydrogels were important, and hoped to manipulate the fibril structures so that we could optimize fatigue resistance," says Lin.

In their new study, the researchers combined a number of techniques to create stronger hydrogel nanofibers. The process starts with electrospinning, a fiber production technique that uses electric charges to draw ultrathin threads out of polymer solutions. The team used high-voltage charges to spin nanofibers from a polymer solution into a flat film of nanofibers, each fiber measuring about 800 nanometers in diameter -- a fraction of the diameter of a human hair.

They placed the film in a high-humidity chamber to weld the individual fibers into a sturdy, interconnected network, and then set the film in an incubator to crystallize the individual nanofibers at high temperatures, further strengthening the material.

They tested the film's fatigue-resistance by placing it in a machine that stretched it repeatedly over tens of thousands of cycles. They also made notches in some films and observed how the cracks propagated as the films were stretched repeatedly. From these tests, they calculated that the nanofibrous films were 50 times more fatigue-resistant than the conventional nanofibrous hydrogels.

Around this time, they read with interest a study by Ming Guo, associate professor of mechanical engineering at MIT, who characterized the mechanical properties of a lobster's underbelly. This protective membrane is made from thin sheets of chitin, a natural, fibrous material that is similar in makeup to the group's hydrogel nanofibers.

Guo found that a cross-section of the lobster membrane revealed sheets of chitin stacked at 36-degree angles, similar to twisted plywood, or a spiral staircase. This rotating, layered configuration, known as a bouligand structure, enhanced the membrane's properties of stretch and strength.

"We learned that this bouligand structure in the lobster underbelly has high mechanical performance, which motivated us to see if we could reproduce such structures in synthetic materials," Lin says.

Angled architecture

Ni, Lin, and members of Zhao's group teamed up with Nelson's lab and Radovitzky's group in MIT's Institute for Soldier Nanotechnologies, and Qin's lab at Syracuse University, to see if they could reproduce the lobster's bouligand membrane structure using their synthetic, fatigue-resistant films.

"We prepared aligned nanofibers by electrospinning to mimic the chinic fibers existed in the lobster underbelly," Ni says.

After electrospinning nanofibrous films, the researchers stacked five films at successive 36-degree angles to form a single bouligand structure, which they then welded and crystallized to fortify the material. The final product measured 9 square centimeters in area and about 30 to 40 microns in thickness -- about the size of a small piece of Scotch tape.

Stretch tests showed that the lobster-inspired material performed similarly to its natural counterpart, able to stretch repeatedly while resisting tears and cracks -- a fatigue-resistance Lin attributes to the structure's angled architecture.

"Intuitively, once a crack in the material propagates through one layer, it's impeded by adjacent layers, where fibers are aligned at different angles," Lin explains.

The team also subjected the material to microballistic impact tests with an experiment designed by Nelson's group. They imaged the material as they shot it with microparticles at high velocity, and measured the particles' speed before and after tearing through the material. The difference in velocity gave them a direct measurement of the material's impact resistance, or the amount of energy it can absorb, which turned out to be a surprisingly tough 40 kilojoules per kilogram, measured in the hydrated state.

"That means that a 5-millimeter steel ball launched at 200 meters per second would be arrested by 13 millimeters of the material," Veysset says. "It is not as resistant as Kevlar, which would require 1 millimeter, but the material beats Kevlar in many other categories."

It's no surprise that the new material isn't as tough as commercial antiballistic materials. It is, however, significantly sturdier than most other nanofibrous hydrogels such as gelatin and synthetic polymers like PVA. The material is also much stretchier than Kevlar. This combination of stretch and strength suggests that, if their fabrication can be sped up, and more films stacked in bouligand structures, nanofibrous hydrogels may serve as flexible and tough artificial tissues.

"For a hydrogel material to be a load-bearing artificial tissue, both strength and deformability are required," Lin says. "Our material design could achieve these two properties."

Credit: 
Massachusetts Institute of Technology

Study reports novel role of enzyme in plant immunity and defense gene expression

A recently published article in the journal Molecular Plant-Microbe Interactions provides new evidence that pathogens hijack the plant immune system to cause disease and offers insights into a newly discovered mechanism.

A large variety of pathogens infect plants and cause different diseases, which can lead to reduced crop yields. During infection, pathogens secrete effector proteins into the plant cell. Some of these proteins target plant proteasomal degradation machinery, which is responsible for recycling proteins to regulate cell processes. Some E1, E2 and E3-ligase proteins have been identified as playing a role in plant susceptibility or resistance to pathogen invasion. SALT- AND DROUGHT-INDUCED RING FINGER1 (SDIR1) is an E3-ligase that degrades regulators of the plant hormone abscisic acid (ABA) in response to drought stress.

Ramu Vemanna from the Regional Center for Biotechnology and colleagues at the Noble Research Institute reported a new way SDIR1 impacts plant immunity during pathogen-induced stress. They found that silencing SDIR1 reduced the growth of host-specialized and nonhost Pseudomonas syringae strains in the model plant Nicotiana benthamiana and disease symptom development in the model plant Arabidopsis thaliana. Overexpressing SDIR1 in A. thaliana allowed even the nonhost P. syringae strain to multiply and cause disease symptoms.

In contrast to these results from challenging plants with biotrophic bacterial pathogens, SDIR1 overexpression lines are resistant to the necrotrophic bacterial pathogen Erwinia carotovora. The SDIR1 overexpression plants showed higher levels of ABA and jasmonic acid (JA), a plant hormone involved in defense against necrotrophic pathogens. In response to host-specialized P. syringae strain DC3000, SDIR1 overexpression led to less expression of genes that repress JA-mediated defense (signaling genes JAZ7 and JAZ8). The interaction of SDIR1 with the JA pathway indicates that it is a susceptibility gene for biotrophic pathogens like P. syringae yet is involved in defense against necrotrophic pathogens like E. carotovora.

"These findings open up new research avenues to discover the SDIR1-associated mechanisms that can harness the crop improvement by altering different plant traits," Ramu said. "The SDIR1 is also a potential target for genome editing in order to enhance crop protection. If the structure of SDIR1 is solved, more opportunities evolve to design CRISPR targets and drug-like molecules to protect crops from pathogens and abiotic stresses."

Credit: 
American Phytopathological Society

New alloy can directly reduce the weight of heat removal systems by a third

The new alloys created by NUST MISIS scientists in cooperation with LG Electronics will help reduce the weight of radiators and heat removal systems in electric vehicles and consumer electronics by one third. The research results are published in the Journal of Magnesium and Alloys.

According to experts, the problem of efficient heat removal is becoming more and more acute as electronics develop: as equipment becomes more powerful, heat generation also grows. Reducing the operating temperature directly prolongs a device's life cycle. This is especially important for household appliances, electric vehicles and LED panels.

Scientists from NUST MISIS, in collaboration with LG Electronics, have created new high-heat-conductivity magnesium alloys that differ from their counterparts in increased reliability and low cost, and in addition, they can significantly reduce the weight of devices.

"Traditionally, aluminum is used for heat removal, but it turns out to be too massive for modern gadgets. Reducing the weight of devices can significantly reduce energy consumption during operation, as well as reduce greenhouse emissions during transportation, which is becoming increasingly important today. The use of our alloys will reduce the weight of heat-removing elements by a third without losing effciency," said Vyacheslav Bazhenov, associate professor at the Department of Foundry Technology & Artistic Processing of Materials at NUST MISIS.

One of the problems in the operation of magnesium alloys, as noted by scientists, is their ability to catch fire in the air. Due to the addition of calcium and yttrium, scientists managed to significantly increase the ignition temperature, so that new materials can be used in various gadgets without restrictions.

"We wanted to create alloys with a low cost, so we were almost not using expensive elements, which are usually alloyed with magnesium -- neodymium, lanthanum, thorium etc. As a result, we had alloys of two compositions: the cheapest -- alloyed with silicon, zinc and calcium (Mg-Si-Zn-Ca) (https://www.sciencedirect.com/science/article/pii/S2213956720300049) with high thermal conductivity and medium strength, and somewhat more expensive -- alloyed with zinc, yttrium and zirconium (Mg -- Zn -- Y -- Zr) (https://www.sciencedirect.com/science/article/pii/S2213956721000128) with high strength and slightly lower thermal conductivity," said Vyacheslav Bazhenov.

Based on the results of the work, LG Electronics registered patents for a high-heat-conducting magnesium alloy (Mg-Si-Zn-Ca) developed at NUST MISIS and a radiator made of it in the USA, the European Union, Korea and China.

Currently, the research team is working on new compositions of magnesium-based alloys, which can provide high strength and corrosion resistance along with low cost and high thermal conductivity.

Credit: 
National University of Science and Technology MISIS

DeepShake uses machine learning to rapidly estimate earthquake shaking intensity

A deep spatiotemporal neural network trained on more than 36,000 earthquakes offers a new way of quickly predicting ground shaking intensity once an earthquake is underway, researchers report at the Seismological Society of America (SSA)'s 2021 Annual Meeting.

DeepShake analyzes seismic signals in real time and issues advance warning of strong shaking based on the characteristics of the earliest detected waves from an earthquake.

DeepShake was developed by Daniel J. Wu, Avoy Datta, Weiqiang Zhu and William Ellsworth at Stanford University.

The earthquake data used to train the DeepShake network came from seismic recordings of the 2019 Ridgecrest, California sequence. When its developers tested DeepShake's potential using the actual shaking of the 5 July magnitude 7.1 Ridgecrest earthquake, the neural network sent simulated alerts between 7 and 13 seconds prior to the arrival of high intensity ground shaking to locations in the Ridgecrest area.

The authors stressed the novelty of using deep learning for rapid early warning and forecasting directly from seismic records alone. "DeepShake is able to pick up signals in seismic waveforms across dimensions of space and time," explained Datta.

DeepShake demonstrates the potential of machine learning models to improve the speed and accuracy of earthquake alert systems, he added.

"DeepShake aims to improve on earthquake early warnings by making its shaking estimates directly from ground motion observations, cutting out some of the intermediate steps used by more traditional warning systems," said Wu.

Many early warning systems first determine earthquake location and magnitude, and then calculate ground motion for a location based on ground motion prediction equations, Wu explained.

"Each of these steps can introduce error that can degrade the ground shaking forecast," he added.

To address this, the DeepShake team turned to a neural network approach. The series of algorithms that make up a neural network are trained without the researcher identifying which signals are "important" for the network to use in its predictions. The network learns which features optimally forecast the strength of future shaking directly from the data.

"We've noticed from building other neural networks for use in seismology that they can learn all sorts of interesting things, and so they might not need the epicenter and magnitude of the earthquake to make a good forecast," said Wu. "DeepShake is trained on a preselected network of seismic stations, so that the local characteristics of those stations become part of the training data."

"When training a machine learning model end to end, we really think that these models are able to leverage this additional information to improve accuracy," he said.

Wu, Datta and their colleagues see DeepShake as complementary to California's operational ShakeAlert, adding to the toolbox of earthquake early warning systems. "We're really excited about expanding DeepShake beyond Ridgecrest, and fortifying our work for the real world, including fail-cases such as downed stations and high network latency," added Datta.

Credit: 
Seismological Society of America

Researchers show enhanced electrode-water interactions in metal-free aqueous batteries

Batteries are a part of everyday modern life, powering everything from laptops, phones and robot vacuums to hearing aids, pacemakers and even electric cars. But these batteries potentially pose safety and environmental risks.

In a study recently published in Cell Reports Physical Science, researchers at Texas A&M University investigated the components of a different kind of battery -- a metal-free, water-based battery -- which would reduce the flammable nature of standard batteries and decrease the number of metal elements used in their production.

Most batteries are lithium-ion and contain lithium and cobalt, which are globally strategic elements, meaning they are found in only a few countries yet are essential to the global economy and to United States battery manufacturing.

"This work enables the future design of metal-free aqueous batteries," said Dr. Jodie Lutkenhaus, professor and Axalta Coating Systems Chair in the Artie McFerrin Department of Chemical Engineering at Texas A&M. "By going metal-free, we can address the pressing global demand for strategic metals used in batteries, and by going aqueous, we replace the flammable volatile electrolyte with water."

Using a very sensitive measurement technique called electrochemical quartz crystal microbalance with dissipation monitoring, researchers were able to determine how electrons, ions and water transfer in the electrode as it is charged and discharged.
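
The article does not give the analysis details, but the core of any quartz crystal microbalance measurement is converting a resonance-frequency shift into a mass change. The textbook Sauerbrey relation below illustrates this for a thin, rigid film; the dissipation signal in QCM-D indicates when that assumption breaks down for a soft, hydrated electrode. This is an illustrative sketch, not the authors' code, and the example numbers are hypothetical.

```python
# Textbook Sauerbrey relation: frequency shift -> areal mass change for a
# thin, rigid film on a quartz crystal microbalance. Example values are
# hypothetical, not taken from the study.

C_SAUERBREY = 17.7   # ng / (cm^2 * Hz), mass sensitivity of a 5 MHz AT-cut crystal

def sauerbrey_mass_change(delta_f_hz, overtone=3):
    """Areal mass change in ng/cm^2 from the frequency shift at a given overtone."""
    return -C_SAUERBREY * delta_f_hz / overtone

# Hypothetical example: the 3rd-overtone frequency drops by 30 Hz during
# charging, consistent with ions and water entering the polymer electrode.
print(sauerbrey_mass_change(-30.0))   # +177 ng/cm^2 taken up by the film
```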

"With this information, we showed that enhanced electrode-water interactions lead to improved energy storage performance," she said.

The energy storage capacity was lower than that of traditional Li-ion batteries, but this paves the way for a more sustainable and less volatile battery in the future.

The research is in its initial stages, and there is opportunity for various applications in the real world. One particularly promising possibility is implantable batteries for medical devices.

Lutkenhaus' interest began when she learned about the strain on strategic elements such as lithium and cobalt due to increased battery manufacturing.

"By using completely different materials, such as we do with polymers here, we remove metals from the picture completely," she said. "My favorite aspect of this work is our ability to deeply characterize the molecular transport processes associated with this redox polymer. Only in the last few years have we been able to resolve such effects on this time and mass scale."

For the future, Lutkenhaus said they will need to identify more polymers that are compatible with the design.

"One we have that, we can produce a high-performance, full-cell for practical use," she said.

Credit: 
Texas A&M University

How oxygen radicals protect against cancer

FRANKFURT. Originally, oxygen radicals - reactive oxygen species, or ROS for short - were considered to be exclusively harmful in the body. They are produced, for example, by smoking or UV radiation. Because of their high reactivity, they can damage many important molecules in cells, including the hereditary molecule DNA. As a result, there is a risk of inflammatory reactions and the degeneration of affected cells into cancer cells.

Because of their damaging effect, however, ROS are also deliberately produced by the body, for example by immune or lung epithelial cells, which destroy invading bacteria and viruses with ROS. This requires relatively high ROS concentrations. In low concentrations, on the other hand, ROS play an important role as signalling molecules. For these tasks, ROS are specifically produced by a whole group of enzymes. One representative of this group of enzymes is Nox4, which continuously produces small amounts of H2O2. Nox4 is found in almost all body cells, where its product H2O2 maintains a large number of specialised signaling functions, contributing, for example, to the inhibition of inflammatory reactions.

Researchers at Goethe University Frankfurt, led by Professor Katrin Schröder, have now discovered that by producing H2O2, Nox4 can even prevent the development of cancer. They examined mice that were unable to produce Nox4 due to a genetic modification. When these mice were exposed to a carcinogenic environmental toxin (carcinogen), the probability that they would develop a tumour doubled. Since the mice suffered from very different types of tumours such as skin sarcomas and colon carcinomas, the researchers suspected that Nox4 has a fundamental influence on cellular health.

Molecular investigations showed that the H2O2 formed by Nox4 keeps a cascade going that prevents certain important signalling proteins (phosphatases) from entering the cell nucleus. If Nox4 and consequently H2O2 are absent, those signalling proteins migrate into the cell nucleus and as a consequence, severe DNA damage is hardly recognised.

Severe DNA damage - e.g. double strand breaks - occurs somewhere in the body every day. Cells react very sensitively to such DNA damage, setting a whole repertoire of repair enzymes in motion. If this does not help, the cell activates its cell death programme - a precautionary measure of the body against cancer. When such damage goes unrecognised, as occurs in the absence of Nox4, it spurs cancer formation.

Prof. Katrin Schröder explains the research results: "If Nox4 is missing and there is therefore no H2O2, the cells no longer recognise DNA damage. Mutations accumulate and damaged cells continue to multiply. If an environmental toxin is added that massively damages the DNA, the damage is no longer recognised and repaired. The affected cells are not eliminated either, but multiply, sometimes very quickly and uncontrollably, which eventually leads to the development of tumours. A small amount of H2O2 thus maintains an internal balance in the cell that protects the cells from degeneration."

Credit: 
Goethe University Frankfurt

From toxic ions to single-atom copper

image: Ezekiel Cullen Professor of Engineering at the University of Houston Cullen College of Engineering

Image: 
University of Houston

Copper remains one of the most ubiquitous metals in everyday life. As a conductor of heat and electricity, it is used in wires, roofing and plumbing, as well as a catalyst in petrochemical plants, in solar and electrical conductors, and in a wide range of energy-related applications. Consequently, any method to harvest more of the valuable commodity is a useful endeavor.

Debora Rodrigues, Ezekiel Cullen Professor of Engineering at the University of Houston Cullen College of Engineering, in collaboration with Francisco C. Robles Hernandez, professor at the UH College of Technology, and Ellen Aquino Perpetuo, professor at the University of Sao Paulo, Brazil, offers conclusive research on how bacteria found in copper mines convert toxic copper ions to stable single-atom copper.

In their co-authored paper, "Copper Mining Bacteria: Converting toxic copper ions into a stable single atom copper," the researchers demonstrate how a copper-resistant bacterium from a copper mine in Brazil converts copper sulfate ions into zero-valent metallic copper.

"The idea of having bacteria in mines is not new, but the unanswered question was: what are they doing in the mines?" Robles said. "By putting the bacteria inside an electronic microscope, we were able to figure out the physics and analyze it. We found out the bacteria were isolating single atom copper. In terms of chemistry, this is extremely difficult to derive. Typically, harsh chemicals are used in order to produce single atoms of any element. This bacterium is creating it naturally that is very impressive."

As useful as copper is, mining the metal often involves toxic exposures and makes it challenging to extract substantial volumes for commercial use. Approximately one billion tons of copper are estimated in global reserves, according to the Copper Development Association Inc., with roughly 12.5 million metric tons per year mined. This aggregates to roughly 65 years of remaining reserves. Part of the supply challenge is that copper occurs in high concentrations in only a limited portion of the earth's crust; another is the exposure to sulfur dioxide and nitrogen dioxide created when smelting and processing the metal to concentrate it into useful quantities.

"The novelty of this discovery is that microbes in the environment can easily transform copper sulfate into zero valent single atom copper. This is a breakthrough because the current synthetic process of single atom zerovalent copper is typically not clean, it is labor intensive and expensive," Rodrigues said.

"The microbes utilize a unique biological pathway with an array of proteins that can extract copper and convert it into single-atom zero-valent copper. The aim of the microbes is to create a less toxic environment for themselves by converting the ionic copper into single-atom copper, but at the same time they make something that is beneficial for us too."

With a focus on electron microscopy, Robles examined samples from Rodrigues' findings in Brazilian copper mines and determined the single-atom nature of the copper. Rodrigues and Aquino's groups further identified the bacterial process for converting copper sulfate to elemental copper - a rare find.

The research results demonstrate that this new conversion process is a safer and more efficient alternative for producing single atoms of metallic copper than current methods (i.e., chemical vapor deposition, sputtering and femtosecond laser ablation).

"We have only worked with one bacterium, but that may not be the only one out there that performs a similar function," Rodrigues concluded. "The next step for this particular research is harvesting the copper from these cells and using it for practical applications."

Credit: 
University of Houston

3D motion tracking system could streamline vision for autonomous tech

A new real-time, 3D motion tracking system developed at the University of Michigan combines transparent light detectors with advanced neural network methods to create a system that could one day replace LiDAR and cameras in autonomous technologies.

While the technology is still in its infancy, future applications include automated manufacturing, biomedical imaging and autonomous driving. A paper on the system is published in Nature Communications.

The imaging system exploits the advantages of transparent, nanoscale, highly sensitive graphene photodetectors developed by Zhaohui Zhong, U-M associate professor of electrical and computer engineering, and his group. They're believed to be the first of their kind.

"The in-depth combination of graphene nanodevices and machine learning algorithms can lead to fascinating opportunities in both science and technology," said Dehui Zhang, a doctoral student in electrical and computer engineering. "Our system combines computational power efficiency, fast tracking speed, compact hardware and a lower cost compared with several other solutions."

The graphene photodetectors in this work have been tweaked to absorb only about 10% of the light they're exposed to, making them nearly transparent. Because graphene is so sensitive to light, this is sufficient to generate images that can be reconstructed through computational imaging. The photodetectors are stacked behind each other, resulting in a compact system, and each layer focuses on a different focal plane, which enables 3D imaging.
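
A quick bit of bookkeeping clarifies why roughly 10% absorption per layer makes stacking workable. The sketch below simply propagates light through a hypothetical five-layer stack and ignores reflections and other losses; everything beyond the ~10% figure quoted above is an assumption.

```python
# Simple light-budget bookkeeping for the stacked transparent detectors:
# if each layer absorbs ~10% of the light reaching it, how much signal does
# each successive focal-plane layer still see? (Reflections ignored;
# the five-layer stack is hypothetical.)
absorption = 0.10
remaining = 1.0
for layer in range(1, 6):
    absorbed = remaining * absorption
    print(f"layer {layer}: receives {remaining:.2f}, absorbs {absorbed:.3f} of the original light")
    remaining *= 1.0 - absorption
print(f"light transmitted past the stack: {remaining:.2f}")   # ~0.59
```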

But 3D imaging is just the beginning. The team also tackled real-time motion tracking, which is critical to a wide array of autonomous robotic applications. To do this, they needed a way to determine the position and orientation of an object being tracked. Typical approaches involve LiDAR systems and light-field cameras, both of which suffer from significant limitations, the researchers say. Others use metamaterials or multiple cameras. Hardware alone was not enough to produce the desired results.

They also needed deep learning algorithms. Helping to bridge those two worlds was Zhen Xu, a doctoral student in electrical and computer engineering. He built the optical setup and worked with the team to enable a neural network to decipher the positional information.

The neural network is trained to search for specific objects in the entire scene, and then focus only on the object of interest--for example, a pedestrian in traffic, or an object moving into your lane on a highway. The technology works particularly well for stable systems, such as automated manufacturing, or projecting human body structures in 3D for the medical community.

"It takes time to train your neural network," said project leader Ted Norris, professor of electrical and computer engineering. "But once it's done, it's done. So when a camera sees a certain scene, it can give an answer in milliseconds."

Doctoral student Zhengyu Huang led the algorithm design for the neural network. The type of algorithms the team developed are unlike traditional signal processing algorithms used for long-standing imaging technologies such as X-ray and MRI. And that's exciting to team co-leader Jeffrey Fessler, professor of electrical and computer engineering, who specializes in medical imaging.

"In my 30 years at Michigan, this is the first project I've been involved in where the technology is in its infancy," Fessler said. "We're a long way from something you're going to buy at Best Buy, but that's OK. That's part of what makes this exciting."

The team demonstrated success tracking a beam of light, as well as an actual ladybug with a stack of two 4x4 (16 pixel) graphene photodetector arrays. They also proved that their technique is scalable. They believe it would take as few as 4,000 pixels for some practical applications, and 400x600 pixel arrays for many more.

While the technology could be used with other materials, additional advantages to graphene are that it doesn't require artificial illumination and it's environmentally friendly. It will be a challenge to build the manufacturing infrastructure necessary for mass production, but it may be worth it, the researchers say.

"Graphene is now what silicon was in 1960," Norris said. "As we continue to develop this technology, it could motivate the kind of investment that would be needed for commercialization."

Credit: 
University of Michigan