Tech

New method brings physics to deep learning to better simulate turbulence

image: Computer simulation visualization showing the complex structure of flow turbulence

Image: 
Jonathan Freund, The Grainger College of Engineering

Deep learning, a form of machine learning, uses data to model problem scenarios and offer solutions. However, some of the physics in these problems is unknown or cannot be represented in detail mathematically on a computer. Researchers at the University of Illinois Urbana-Champaign have developed a new method that brings physics into the machine learning process to make better predictions.

The researchers used turbulence to test their method.

"We don't know how to mathematically write down all of turbulence in a useful way. There are unknowns that cannot be represented on the computer, so we used a machine learning model to figure out the unknowns. We trained it on both what it sees and the physical governing equations at the same time as a part of the learning process. That's what makes it magic and it works," said Willett Professor and Head of the Department of Aerospace Engineering Jonathan Freund.

Freund said the need for this method was pervasive.

"It's an old problem. People have been struggling to simulate turbulence and to model the unrepresented parts of it for a long time," Freund said.

Then he and his colleague Justin Sirignano had an epiphany.

"We learned that if you try to do the machine learning without considering the known governing equations of the physics, it didn't work. We combined them and it worked."

When designing an air or spacecraft, Freund said this method will help engineers predict whether or not a design involving turbulent flow will work for their goals. They'll be able to make a change, run it again to get a prediction of heat transfer or lift, and predict if their design is better or worse.

"Anyone who wants to do simulations of physical phenomena might use this new method. They would take our approach and load data into their own software. It's a method that would admit other unknown physics. And the observed results of that unknown physics could be loaded in for training," Freund said.

The work was done using the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA) at UIUC, which made the simulations faster and therefore more cost efficient.

The next step is to apply the method to more realistic turbulent flows.

"The turbulent flow we used to demonstrate the method is a very simple configuration," Freund said. "Real flows are more complex. I'd also like to use the method for turbulence with flames in it--a whole additional type of physics. It's something we plan to continue to develop in the new Center for Exascale-enabled Scramjet Design, housed in NCSA."

Freund said this work is at the research level but can potentially affect industry in the future.

"Universities were very active in the first turbulence simulations, then industry picked them up. The first university-based large-eddy simulations looked incredibly expensive in the 80s and 90s. But now companies do large-eddy simulations. We expect this prediction capability will follow a similar path. I can see a day in the future with better techniques and faster computers that companies will begin using it."

Credit: 
University of Illinois Grainger College of Engineering

Former piece of Pacific Ocean floor imaged deep beneath China

image: Fenglin Niu is a professor of Earth, environmental and planetary sciences at Rice University.

Image: 
Courtesy of Rice University

HOUSTON - (Nov. 16, 2020) - In a study that gives new meaning to the term "rock bottom," seismic researchers have discovered the underside of a rocky slab of Earth's surface layer, or lithosphere, that has been pulled more than 400 miles beneath northeastern China by the process of tectonic subduction.

The study, published by a team of Chinese and U.S. researchers in Nature Geoscience, offers new evidence about what happens to water-rich oceanic tectonic plates as they are drawn through Earth's mantle beneath continents.

Rice University seismologist Fenglin Niu, a co-corresponding author, said the study provides the first high-resolution seismic images of the top and bottom boundaries of a rocky, or lithospheric, tectonic plate within a key region known as the mantle transition zone, which starts about 254 miles (410 kilometers) below Earth's surface and extends to about 410 miles (660 kilometers).

"A lot of studies suggest that the slab actually deforms a lot in the mantle transition zone, that it becomes soft, so it's easily deformed," Niu said. How much the slab deforms or retains its shape is important for explaining whether and how it mixes with the mantle and what kind of cooling effect it has.

Earth's mantle convects like heat in an oven. Heat from Earth's core rises through the mantle at the center of oceans, where tectonic plates form. From there, heat flows through the mantle, cooling as it moves toward continents, where it drops back toward the core to collect more heat, rise and complete the convective circle.

Previous studies have probed the boundaries of subducting slabs in the mantle, but few have looked deeper than 125 miles (200 kilometers) and none with the resolution of the current study, which used more than 67,000 measurements collected from 313 regional seismic stations in northeastern China. That work, which was done in collaboration with the China Earthquake Administration, was led by co-corresponding author Qi-Fu Chen from the Chinese Academy of Sciences.

The research probes fundamental questions about the processes that shaped Earth's surface over billions of years. Mantle convection drives the movements of Earth's tectonic plates, rigid interlocked pieces of Earth's surface that are in constant motion as they float atop the asthenosphere, the topmost mantle layer and the most fluid part of the inner planet.

Where tectonic plates meet, they jostle and grind together, releasing seismic energy. In extreme cases, this can cause destructive earthquakes and tsunamis, but most seismic motion is too faint for humans to feel without instruments. Using seismometers, scientists can measure the magnitude and location of seismic disturbances. And because seismic waves speed up in some kinds of rock and slow in others, scientists can use them to create images of Earth's interior, in much the same way a doctor might use ultrasound to image what's inside a patient.
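As a cartoon of the travel-time principle described above, a wave that crosses a colder, seismically faster slab arrives slightly early relative to a reference Earth model. The velocities below are assumed round numbers for illustration, not values from the study.

```python
# Assumed round-number P-wave speeds, purely for illustration
path_km = 400.0
v_reference = 9.0    # background speed at transition-zone depths (km/s)
v_slab = 9.5         # slightly faster speed inside the cold subducted slab (km/s)

anomaly_s = path_km / v_slab - path_km / v_reference
print(f"arrival-time anomaly: {anomaly_s:.2f} s (negative means early)")
```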

Niu, a professor of Earth, environmental and planetary sciences at Rice, has been at the forefront of seismic imaging for more than two decades. When he did his Ph.D. training in Japan more than 20 years ago, researchers were using dense networks of seismic stations to gather some of the first detailed images of the submerged slab boundaries of the Pacific plate, the same plate that was imaged in the study published this week.

"Japan is located about where the Pacific plate reaches around 100-kilometer depths," Niu said. "There is a lot of water in this slab, and it produces a lot of partial melt. That produces arc volcanoes that helped create Japan. But, we are still debating whether this water is totally released in that depth. There is increasing evidence that a portion of the water stays inside the plate to go much, much deeper."

Northeastern China offers one of the best vantage points to investigate whether this is true. The region is about 1,000 kilometers from the Japan trench where the Pacific plate begins its plunge back into the planet's interior. In 2009, with funding from the National Science Foundation and others, Niu and scientists from the University of Texas at Austin, the China Earthquake Administration, the Earthquake Research Institute of Tokyo University and the Research Center for Prediction of Earthquakes and Volcanic Eruptions at Japan's Tohoku University began installing broadband seismometers in the region.

"We put 140 stations there, and of course the more stations the better for resolution," Niu said. "The Chinese Academy of Sciences put additional stations so they can get a finer, more detailed image."

In the new study, data from the stations revealed both the upper and lower boundaries of the Pacific plate, dipping down at a 25-degree angle within the mantle transition zone. The placement within this zone is important for the study of mantle convection because the transition zone lies below the asthenosphere, at depths where increased pressure causes specific mantle minerals to undergo dramatic phase changes. These phases of the minerals behave very differently in seismic profiles, just as liquid water and solid ice behave very differently even though they are made of identical molecules. Because phase changes in the mantle transition zone happen at specific pressures and temperatures, geoscientists can use them like a thermometer to measure the temperature in the mantle.

Niu said the fact that both the top and bottom of the slab are visible is evidence that the slab hasn't completely mixed with the surrounding mantle. He said heat signatures of partially melted portions of the mantle beneath the slab also provide indirect evidence that the slab transported some of its water into the transition zone.

"The problem is explaining how these hot materials can be dropped into the deeper part of the mantle," Niu said. "It's still a question. Because they are hot, they are buoyant."

That buoyancy should act like a life preserver, pushing upward on the underside of the sinking slab. Niu said the answer to this question could be that holes have appeared in the deforming slab, allowing the hot melt to rise while the slab sinks.

"If you have a hole, the melt will come out," he said. "That's why we think the slab can go deeper."

Holes could also explain the appearance of volcanoes like Changbaishan on the border between China and North Korea.

"It's 1,000 kilometers away from the plate boundary," Niu said. "We don't really understand the mechanism of this kind of volcano. But melt rising from holes in the slab could be a possible explanation."

Credit: 
Rice University

Mobility behavior may be the key to predicting, promoting individual well-being

image: Mobility behavior may be the key to predicting, promoting individual well-being

Image: 
Photo by Matheus Bertelli from Pexels.com

DSI postdoctoral fellow Sandrine Müller uses smartphone sensor data to study human behavior.

A research team led by Sandrine Müller, a Data Science Institute postdoctoral research fellow, and Heinrich Peters, a Columbia Business School (CBS) doctoral candidate, has linked mobility behavior to well-being by exploring associations between different kinds of mobility behaviors (e.g., time spent in transit, number of locations visited, and total distance covered) and several indicators of well-being (e.g., depression, loneliness, and stress).

Müller, Peters, and their co-authors, including Sandra Matz, David W. Zalaznick Associate Professor of Business at CBS; Wang Weichen, a Two Sigma quantitative researcher; and Gabriella Harari, an assistant professor of communication at Stanford University, published their findings in a special issue of the European Journal of Personality on behavioral personality science in the age of big data.

To examine the links between mobility behaviors and well-being, Müller, Peters, et al., examined questionnaire and GPS data from 2,319 psychology students from a large university in the United States. At the beginning of the study, the researchers collected students' reports of their general levels of loneliness and depression. Additionally, students used their smartphones to answer questions about their anxiety, affect, stress, and energy four times a day over the course of the next two weeks.

One unique aspect of the study is that Global Positioning System (GPS) data were also collected during this time. The GPS data were transformed into several measures of mobility behaviors, which were condensed into three broad types of mobility patterns: distance (behaviors related to the distance a person travelled), entropy (the distribution of time a person spent in different places), and routine (the regularity of a person's mobility patterns).
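As a rough illustration of how such features can be derived from raw GPS traces, the sketch below computes simplified stand-ins for the three feature families. The study's exact measures and preprocessing differ; these formulas are assumptions made for the example.

```python
import numpy as np

def mobility_features(lat, lon, place_ids):
    """Simplified versions of the three feature families: distance, entropy, routine."""
    # Distance: total great-circle distance travelled (haversine formula, km)
    R = 6371.0
    phi, lam = np.radians(lat), np.radians(lon)
    dphi, dlam = np.diff(phi), np.diff(lam)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi[:-1]) * np.cos(phi[1:]) * np.sin(dlam / 2) ** 2
    distance_km = float(np.sum(2 * R * np.arcsin(np.sqrt(a))))

    # Entropy: how evenly time is distributed across the places a person visited
    _, counts = np.unique(place_ids, return_counts=True)
    p = counts / counts.sum()
    entropy = float(-np.sum(p * np.log(p)))

    # Routine: share of observations spent in the single most-visited place
    routine = float(p.max())
    return {"distance_km": distance_km, "entropy": entropy, "routine": routine}

# Example with three GPS fixes across two (hypothetical) labeled places
print(mobility_features([40.0, 40.1, 40.1], [-74.0, -74.0, -74.1], ["home", "campus", "campus"]))
```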

"After linking these mobility patterns to participants' well-being scores, we found that mobility was related to well-being on the daily level, as well as on the level of an aggregate across the study period," Müller said. "This demonstrates that mobility behavior is not only important for understanding how people feel on a particular day, but may also predict how they feel across a longer time."

Distance and entropy measures specifically related to time spent in social places were associated with more positive well-being, while routine behaviors were associated with depression and loneliness. Taken together, these findings show that individuals' mobility behavior may indeed be useful in predicting their well-being.

"While it was not something our study was aiming to do, I think it definitely gives ideas for future studies on interventions and real-world applications," Müller said. "There's potential for learning individual patterns and showing that on the days where people go to certain places, they feel better. By giving them suggestions to try certain things, we can try to make them feel better."

Credit: 
Data Science Institute at Columbia

Spiny dogfish eat Atlantic cod: DNA may provide some answers

Conventional observations show that spiny dogfish in the western North Atlantic rarely eat Atlantic cod. However, some believe the rebuilding dogfish populations are limiting depleted cod numbers by competition or predation. To find out what is going on, NOAA Fisheries scientists looked to genetic testing to confirm cod presence in dogfish stomachs.

To get the samples they needed, scientists at the Northeast Fisheries Science Center asked local fishermen for help. Commercial fishing boats from New Bedford, Gloucester, Plymouth and Newburyport in Massachusetts stepped up. All participate in the Study Fleet, a program in the center's Cooperative Research Branch. Spiny dogfish were collected on 15 fishing trips during normal trawling operations between May 2014 and May 2015 in the Gulf of Maine and on Georges Bank.

"This was an excellent example of how cooperating fishing partners supplied fish for a pilot study of interest, and have helped advance this field of study," said Richard McBride, chief of the center's Population Biology Branch and a co-author of the study. "We were able to demonstrate that identifying cod in predator stomachs with environmental DNA works. It let us show fishermen that these innovative laboratory techniques can work on samples collected in the open ocean."

Study findings, published in Ecology and Evolution, reveal rates of interactions between cod and spiny dogfish are higher than previously thought.

Dogfish primarily eat other fish, but also jellyfish, squid and bivalves in some locations. Cod as dogfish prey is rare. Only 14 cod have been visually observed in the stomachs of 72,241 dogfish collected by the science center's bottom trawl surveys from 1977 to 2017. This suggests low predation rates on cod. However, small cod are much more likely to be well-digested when the samples are taken. If dogfish have eaten these smaller cod, it is difficult to identify the species by observation alone. Molecular-level studies, using DNA, offered some answers.

In the recently published study, researchers examined the stomach contents of 295 dogfish samples collected throughout the year. Using the conventional visual method, they observed 51 different prey types and nearly 1600 individual prey items. NOAA Fisheries scientists paired these visual observations with a laboratory technique (real-time polymerase chain reaction, or PCR) to detect small amounts of cod DNA. Using this technique, researchers examined 291 of the 295 available samples and detected cod DNA in 31 of them.
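Put side by side, the figures quoted above imply very different apparent detection rates for the two methods. This is only a rough comparison of the reported counts, not a formal population estimate, since the two sample sets were collected differently.

```python
visual_rate = 14 / 72_241   # cod seen visually in survey stomachs, 1977-2017
dna_rate = 31 / 291         # stomachs testing positive for cod DNA in this study
print(f"visual: {visual_rate:.4%} of stomachs, DNA-based: {dna_rate:.1%} of stomachs")
```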

Fishermen have also reported seeing dogfish eating cod during fishing operations. Sometimes this is dogfish actively foraging on live prey. Other times it is due to dogfish depredation - dogfish eating the fish in the net before it can be brought aboard the fishing vessel. In this study, 50 percent of the sampling trips where cod was eaten indicate scavenging by spiny dogfish.

Members of the study fleet who helped collect the samples recognize the value of their participation in the study.

"It's always good to have more information on the species that live in our waters. I'm happy to contribute to work that furthers our understanding of these populations, especially in regard to cod," said Captain Jim Ford of the F/V Lisa Ann III from Newburyport. "I know there are some different opinions on what role dogfish play in the ecosystem, so the more data we can provide to inform that, the better."

While the findings suggest higher interaction rates between dogfish and cod than previously observed, further study is needed to determine just how much cod dogfish eat. Studies are ongoing to better integrate factors such as predator-prey relationships into stock assessment models used to estimate both current and future fish population numbers.

Researchers say the next step is to use a statistically robust sampling design to examine a population-level assessment of the effects of dogfish predation on cod population size. Estimates of spiny dogfish digestion rates, and ways to consider dogfish scavenging during fishing operations, are also needed.

"The Northeast Fisheries Science Center has the laboratory facilities to detect cod DNA in predator stomachs, and a bottom trawl survey that is designed to measure population level effects among groundfish," said McBride. "We just need to put these two pieces together to estimate the effect of spiny dogfish predation on Atlantic cod. Easier said than done, but all the pieces are there."

Credit: 
NOAA Northeast Fisheries Science Center

Taking charge to find the right balance for advanced optoelectronic devices

image: Professor Jong-Soo Lee (right) with Min-Hye Jeong (left), a student from the integrated Master & Doctorate program, next to their observation devices for the experiment

Image: 
DGIST

2D materials, consisting of a single layer of atoms, are revolutionizing the field of electronics and optoelectronics. They possess unique optical properties that their bulk counterparts do not, spurring the creation of powerful energy devices (for example, optic fibers or solar cells). Interestingly, different 2D materials can be stacked together in a "heterojunction" structure, to generate light-induced electric current (or "photocurrent"). To do this in an optimal manner, it is important to find the right "balance" of the charged particles (called "electrons" and "holes") and the energy produced by them.

While chemically treating the surface of the materials ("chemical doping") can help to some extent, this technique is not very efficient in 2D materials. Another solution is to control the charge properties by tuning the voltage in a precise manner, a technique called "electrostatic doping." This technique, however, needs to be explored further.

A team of researchers from Daegu Gyeongbuk Institute of Science and Technology, Korea, led by Professor Jong-Soo Lee, set out to do this, in a study published in Advanced Science. For this, they built a multifunctional device, called a "phototransistor," composed of 2D heterojunctions. The main strategy in their design was the selective application of electrostatic doping to a specific layer.

Prof Lee further explains the design of their model, "We fabricated a multifunctional 2D heterojunction phototransistor with a lateral p-WSe2/n-WS2/n-MoS2 structure to identify how photocurrents and noise were created in heterojunctions. By controlling the electrostatic conditions in one of the layers (n-WS2), we were able to control the charge that was carried to the other two layers."

The fact that the researchers could control the charge balance enabled them to observe the origin of the photocurrent as well as of the unwanted "noise" current, using a photocurrent mapping system. They could also study the charges in relation to the conditions that they set. But the most interesting part was that when the concentration of charge was optimal, the heterojunction structure showed faster and higher photoresponsivity as well as higher photodetectivity.

These findings shed light on the importance of charge balance in heterojunctions, potentially paving the way for advanced optoelectronic devices. Prof Lee concludes, "Our study reveals that even if the charge densities of the active materials of the layered structures are not perfectly matched, it is still possible to create an optoelectronic device having excellent characteristics by tuning the charge balance through the gate voltage."

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

New phase of modeling the viscous coupling effects of multiphase fluid flow

image: Predicting the multiphase permeability in pore throat by using artificial neural network

Image: 
I2CNER, Kyushu University

Fukuoka, Japan - Many applications, including carbon dioxide storage and oil recovery, involve the simultaneous flow of two or more phases of matter (solid, liquid, gas, etc.) through porous materials. Pore-scale modeling of such multiphase flow has struggled to capture important phenomena referred to as viscous coupling effects. But now, a research team has developed a method that overcomes this limitation with potential applications to improve fuel technologies and carbon capture systems.

In a study published this month in Advances in Water Resources, researchers led by the International Institute for Carbon-Neutral Energy Research (WPI-I2CNER) at Kyushu University present a way to incorporate viscous coupling effects into pore-scale modeling of multiphase flow.

A common technique for studying such multiphase flow is pore network modeling (PNM), whereby simplified transport equations are solved for idealized pore geometries. PNM can be used to quickly estimate transport properties, but it neglects viscous coupling effects. An alternative approach is the lattice Boltzmann method (LBM), whereby equations governing fluid flow are solved for realistic pore geometries. Although the LBM can capture viscous coupling effects, it is extremely computationally inefficient.

The team behind this latest research had the idea to combine these two techniques. "We devised an improved model for PNM that uses data collected from LBM simulations," explains co-author of the study Takeshi Tsuji. "In the simulations, we examined multiphase flow at the pore scale for a wide range of geometric parameters and viscosity ratios."

The researchers found that for some configurations, viscous coupling effects significantly influence multiphase flow in the pore throat. They used the simulation results to derive a modification factor, expressed as a function of viscosity ratios, that can be easily incorporated into PNM to account for viscous coupling effects. The team also developed a machine learning-based method to estimate the permeability associated with multiphase flow.

"We trained an artificial neural network using a database built from the results of simulations. These simulations considered different combinations of geometric parameters, viscosity ratios, and so on," says lead author Fei Jiang. "We found that the trained neural network can predict the multiphase permeability with extremely high accuracy."

This new data-driven approach not only improves PNM by including detailed pore-scale information, but it maintains good computational efficiency. Given that multiphase flow through porous materials is central to many natural and industrial processes, studies such as this one could have far-reaching implications.

Credit: 
Kyushu University, I2CNER

Cancer metastasis: From problem to opportunity

When a patient with cancer is told the devastating news that their disease has spread, or metastasized, to a new part of their body, it has most often moved to their lungs. The branching blood vessels that allow oxygen to diffuse from the lungs' air sacs into red blood cells are so tiny that a rogue cancer cell circulating in the bloodstream can easily get stuck there and take up residence, eventually growing into a secondary tumor. Once established, metastatic tumors unleash a campaign of chemical cues that thwart the body's defenses, hampering efforts to induce an immune response. There are no treatments approved for lung metastasis, which is the leading cause of death from metastatic disease.

That grim prognosis may soon be less grim thanks to a new technique developed by researchers at Harvard's Wyss Institute for Biologically Inspired Engineering and John A. Paulson School of Engineering and Applied Sciences (SEAS). Rather than viewing lung metastasis as unfortunate fallout from a primary tumor elsewhere, the team focused on treating the metastasis itself by delivering immune-cell-attracting chemicals into lung cancers via red blood cells. Not only did this approach halt lung tumor growth in mice with metastatic breast cancer, it also acted as a vaccine and protected the animals against future cancer recurrences. The research is reported in Nature Biomedical Engineering.

"Our approach is the exact opposite of conventional cancer treatments that focus on getting the immune system to recognize and attack the primary tumor, because those tumors are often large and difficult for immune cells to penetrate," said co-first author Zongmin Zhao, Ph.D., a Postdoctoral Fellow at the Wyss Institute and SEAS. "We recognized that the high density of blood vessels in the lungs provides much better access to tumors there, offering a unique opportunity to induce an immune response by targeting the metastasis."

An EASI solution to a hard problem

Delivering therapies to their intended target while sparing the rest of the body is one of the grand challenges of medicine. The liver and spleen are incredibly efficient at filtering out any foreign substances from the blood, meaning that drugs often need to be given at a high dose that can cause harmful off-target side effects. Overcoming this barrier to effective treatment is a major focus of Wyss Core Faculty member Samir Mitragotri's work, and his lab recently discovered that attaching drug-filled nanoparticles to red blood cells allows them to escape detection and stay in the body long enough to deliver their payloads while minimizing toxicity.

Zhao and his co-authors decided to use that technique to see if they could deliver immune-system-stimulating chemicals to metastatic lung tumors rather than chemotherapy, which can damage lung tissue. They chose a chemokine, a small protein that attracts white blood cells, called CXCL10 as their payload.

"Lung metastases deplete certain kinds of chemokines from their local environment, which means the signal that should attract beneficial white blood cells to fight the tumor is gone. We hypothesized that providing that chemokine signal at the tumor site could help restore the body's normal immune response and enable it to attack the tumors," said co-first author Anvay Ukidve, Ph.D., a former Graduate Research Fellow at the Wyss Institute and SEAS who is now a scientist at a pharmaceutical company.

The team first optimized their nanoparticles to ensure that they would detach from their red blood cell hosts only when the blood cells made their tight squeeze through the lungs' tiny capillaries. They also decorated the nanoparticles' surfaces with an antibody that attaches to a protein commonly found on lung blood vessel cells called ICAM-1 to help increase the nanoparticles' retention in the lungs. These nanoparticles were then filled with the chemokine CXCL10, creating a package the researchers named ImmunoBait. ImmunoBait particles were then attached to mouse red blood cells to create a therapeutic delivery system named erythrocyte-anchored systemic immunotherapy (EASI), and injected into the bloodstreams of mice with breast cancer that had metastasized to their lungs.

ImmunoBait particles stayed in the animals' lungs for up to six hours after EASI injection, and most of them were distributed in and around the metastases. Treatment with EASI led to strong expression of CXCL10 for up to 72 hours, suggesting that delivering the chemokine stimulated the body to start producing it on its own, despite the immunosuppressive tumor microenvironment. To find out exactly what effect the delivered CXCL10 had on the mice's immune systems, the team analyzed the different types of cells present in the lungs before and after EASI injection. They observed increases in the infiltration of T helper type 1 (Th1) CD4 cells, which release pro-inflammatory chemicals that help keep tumors under control, as well as effector CD8 cells and natural killer (NK) cells, which drive the direct killing of cancer cells.

Inject locally, protect globally

Armed with evidence that their system could attract immune cells to lung metastases, the team then tested its ability to slow or halt the progression of the disease in mice. They first removed the animals' breast cancer tumors (to mimic the surgery that patients often undergo to treat their primary tumors), then injected them with either CXCL10 alone, ImmunoBait nanoparticles alone, or EASI.

EASI inhibited the progression of lung metastasis with four-fold and six-fold greater efficacy than free CXCL10 and ImmunoBait, respectively. All of the EASI-treated mice had fewer than 20 metastatic nodules after 37 days, and 25% of them had only one nodule. In contrast, mice that received the other therapies had anywhere from two to 100 nodules. The mice that received EASI also had nearly three-fold better survival: while animals in all the other treatment groups succumbed to their disease in less than 20 days, about 25% of the EASI-treated mice survived for 40 days. They were also free of any signs of off-target toxicity or other negative effects from the treatment.

Because EASI effectively activated the immune system against lung metastases, the researchers wondered if that activation could provide lasting protection against future recurrences of the same cancer. They analyzed the blood of mice that had received EASI and observed an increased number of memory CD8 cells, which persist long-term after an immune threat and can sound the alarm if that threat resurfaces. To test whether those memory cells provided sufficient protection, the team re-inoculated mice with the same tumor cells two days after their last treatment. Mice that had been treated with EASI had significantly lower tumor growth and tumor weight than mice that were injected with saline or left untreated, demonstrating that local treatment of lung metastases produced systemic immunity against tumor development.

"These findings highlight the ability of our EASI system to convert the biological adversity of metastasis into a unique therapeutic opportunity against metastatic cancers," said senior author Mitragotri, Ph.D., who is also a Hiller Professor of Bioengineering and the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS. His team is continuing to optimize EASI by experimenting with delivering different types of chemokines via ImmunoBait nanoparticles, and are exploring the combination of EASI with currently approved cancer therapies to identify potential synergies.

"This unique bioinspired approach to cancer therapy is a wonderful example of the out-of-the-box thinking that we encourage and support at the Wyss Institute - by leveraging the body's own red blood cells to deliver drugs to capillary blood vessels of the lung where many metastases form, Samir's team has developed an entirely new type of immunotherapy and opened the door to potentially lifesaving therapies," said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children's Hospital, as well as Professor of Bioengineering at SEAS.

Credit: 
Wyss Institute for Biologically Inspired Engineering at Harvard

Overly reactivated star-shaped cells explain the unpredictability of Alzheimer's disease

video: IBS-KIST researchers have demonstrated that the severity of ‘reactive astrocytes’ is a key indicator for the onset of Alzheimer’s disease.

Image: 
IBS

Though Alzheimer's disease (AD) is a common and fatal neurodegenerative brain disorder, most AD treatments have made little headway in unraveling the mystery of its cause. Many AD drugs have targeted the elimination of beta-amyloid (Aβ) or amyloid plaques, which block cell-to-cell signaling at synapses. But some AD patients continue to show neurodegeneration and cognitive decline even after the removal of the amyloid plaques. Conversely, many people show no signs of neurodegeneration or cognitive impairment even with very high levels of Aβ. It has also never been precisely clear why the star-shaped non-neuronal cells called astrocytes change in shape and function from the early onset of AD and remain in this reactive state throughout AD progression.

Researchers at the Center for Cognition and Sociality, within the Institute for Basic Science (IBS), and the Korea Institute of Science and Technology (KIST) have demonstrated that the severity of 'reactive astrocytes' is a key indicator for the onset of AD, with profound implications for the current theory of the AD mechanism. In their toxin-receptor-based animal model, the research team fine-tuned astrocytic reactivity in vivo. They found that mild reactive astrocytes can naturally reverse their reactivity, whereas severe reactive astrocytes cause irreversible neurodegeneration, brain atrophy and cognitive deficits, all within 30 days (Figure 2). Notably, this severe-reactive-astrocyte-induced neurodegeneration was successfully replicated in virus-injected APP/PS1 mice, which have been widely known to lack neurodegeneration. These results indicate that severe reactive astrocytes are sufficient for neurodegeneration.

"This finding suggests experiences such as traumatic brain injury, viral infection, and post-traumatic stress disorder might be needed to transform a healthy brain to be vulnerable to Alzheimer's disease via excessive oxidative stress," says Director C. Justin LEE (at IBS), the study's corresponding author. "The excessive oxidative stress disables the body's ability to counteract the harmful effects of overproduced oxygen-containing molecules, subsequently transforming mild reactive astrocytes into neurotoxic severe reactive astrocytes," explains Dr. Lee. The team revealed that toxin-responsive astrocytes activate a cellular restoration mechanism (or autophagy-mediated degradation pathway) and increase hydrogen peroxide (H2O2) by triggering monoamine oxidase B (MAO-B). MAO-B plays an important role in the reduction of dopamine that hinders the signal transmission to produce smooth, purposeful movement.

This mechanism results in morphological hypertrophy of astrocytic processes followed by a cascade of neurodegenerative events: activation of the nitric oxide-synthesizing enzyme iNOS, nitrosative stress, microglial activation and tauopathy. The research team verified that all of these events of the AD pathology were halted by a recently developed reversible MAO-B inhibitor, KDS2010, or by a potent H2O2 scavenger, AAD-2004. This reinforces that severe reactive astrocytes are the cause of neurodegeneration, not the result of it as previously assumed, notes Director Lee. Finally, these molecular features of the severe reactive astrocytes are commonly shared across various animal models of AD and in the brains of human AD patients.

This study offers plausible explanations for why AD has been so unpredictable: neurodegeneration cannot be reversed once severe reactive astrocytes have formed, while mild reactive astrocytes can recover unless they are pushed further by other pathological burdens. "Notably, this study suggests that an important step to establishing a new treatment strategy for Alzheimer's disease should be by targeting reactive astrocytes that appear to be overly activated in the early stages," says Dr. RYU Hoon (at KIST), another corresponding author of the study. This should be accompanied by the development of diagnostic tools for reactive astrocytes and early Alzheimer's disease, adds Dr. Ryu.

Dr. CHUN Heejung (at IBS), the first author of the study says, "The reactive astrocytes are a general phenomenon occurring in various brain diseases such as Parkinson's disease and brain tumors, as well as Alzheimer's disease. Building upon this study, we have plans to expand our mechanistic insights of the reactivity-dependent neuronal death into other brain diseases for which treatment has not yet been developed."

Credit: 
Institute for Basic Science

Half of researchers worried about long-term impact of COVID-19 on funding -- global study

The impact of the coronavirus pandemic has created concerns amongst the scientific research community that funding to their area will be impacted in the long term, a global survey shows. Half (47%) of those surveyed believe less funding will be available in their area in the future because of COVID-19, signaling a potentially lasting impact on the scientific research landscape. Just one in 10 (9%) said they expected an increase.

More than 22,000 researchers responded to the question in the survey report, released by open access publisher Frontiers, which surveyed members of its research community from 152 countries between May and June.

As well as the long-term impact, findings also reveal that COVID-19 has created a sense of uncertainty around funding in the immediate term. When asked 'how has funding in your research area been affected since the pandemic?' results revealed:

One in four (25%) said funding had already been redirected from their research area;

Just 6% indicated an increase in their research area;

One in three (33%) said there had been no changes.

Those in environmental science and geology reported the highest level of long-term concern, with 54% and 53% respectively saying funding will be redirected from their area or that less will be available in the future.

Kamila Markram, CEO of Frontiers, said: "The impact of COVID-19 is manifesting itself across the funding landscape. While it is critical that collectively, we do everything we can right now to combat the virus, we must also recognize that diverting or the 'covidization' of funding away from other fields is not a sustainable solution. The environment, for example, is an area we simply cannot afford to neglect. Doing so will have potentially irreversible consequences. We have to adopt a more holistic, interdisciplinary approach to problem solving."

Commenting in the report, Prof. James Wilsdon, professor of research policy at the University of Sheffield, and director of the Research on Research Institute said: "It's useful to see how researchers are perceiving and experiencing the effects of COVID-19 on funding priorities. But we're still at an early stage in understanding such effects, which are likely to come in waves.

"We've already seen the first wave - a vital injection of investment to virology, epidemiology, vaccines, and therapeutics. A second wave - of support for research on the wider effects - is now getting under way. But the likely force of the third wave - longer-term shifts in the priorities of funders - is far less certain. This will be determined as much by the wider economic outlook as by changes to the balance of disciplinary and thematic priorities. We may see the focus extending into broader investment in resilience across a range of economic, social, health and environmental systems and vulnerabilities.

"If this crisis teaches us anything, it should be the importance of investing in wider preparedness and resilience. We need to avoid a lurch into the 'Covid-isation' of research systems, if it comes at the expense of other areas which may be the source of the next crisis, or the one after that."

Credit: 
Frontiers

New tool predicts geological movement and the flow of groundwater in old coalfields

image: Figure 1 shows land surface uplift measured using satellite data which has been utilised to calculate changes in groundwater levels.

Image: 
University of Nottingham

A remote monitoring tool to help authorities manage public safety and environmental issues in recently abandoned coal mines has been developed by the University of Nottingham.

The tool uses satellite radar imagery to capture millimetre-scale measurements of changes in terrain height. Such measurements can be used to monitor and forecast groundwater levels and changes in geological conditions deep below the earth's surface in former mining areas.

The project was tested at a regional scale in the UK, which has a long history of coal mining, but it has global implications given the worldwide decline in the demand for coal in favour of more sustainable energy sources.

The method was implemented over the Nottinghamshire coalfields, which were abandoned as recently as 2015, when the last deep mine, Thoresby Colliery, shut its doors for good.

When deep mines are closed, the groundwater that was previously pumped to the surface to make mining safe is allowed to rise again until it is restored to its natural level in a process called rebound.

The rebound of groundwater through former mine workings needs careful monitoring. The water often contains contaminants, so it can pollute waterways and drinking water supplies; it can also lead to localised flooding, renew mining subsidence and land uplift, and reactivate geological faults if it rises too fast. Such issues can cause costly and hazardous problems that need to be addressed prior to the land being repurposed.

The Coal Authority therefore needs detailed information on the rebound rate across the vast mine systems it manages so it knows exactly where to relax or increase pumping to control groundwater levels.

Measuring the rate and location of mine water rebound is therefore vital to effectively manage the environmental and safety risks in former coalfields, but difficult to achieve. Groundwater can flow in unanticipated directions via cavities within and between neighbouring collieries and discharge at the surface in areas not thought to be at risk.

In the past, predicting where mine water would flow relied heavily on mine plans (inaccurate or incomplete documents that are sometimes more than a century old) and on borehole data. Costing approximately £20,000 to £350,000 each, boreholes are expensive to drill and are often sparsely situated across vast coalfields, leaving measurement gaps.

More recently, uplift, subsidence and other geological motion have been monitored by applying Interferometric Synthetic Aperture Radar (InSAR) to images acquired from radar satellites. However, this interferometry technique has historically worked only in urban areas (as opposed to rural ones), where the radar can pick up stable objects, such as buildings or rail tracks, on the ground to reflect back regularly to the satellite.

This study uses an advanced InSAR technique, called Intermittent Small Baseline Subset (ISBAS), developed by the University of Nottingham and its spin-out company Terra Motion Ltd. InSAR uses stacks of satellite images of the same location taken every few days or weeks which makes it possible to pick up even the slightest topographical changes over time. Uniquely, ISBAS InSAR can compute land deformation measurements over both urban and rural terrain. This is beneficial when mapping former mining areas, which are often located in rural areas. Over the Nottinghamshire coalfields, for example, the land cover is predominantly rural, with nearly 80 per cent comprising agricultural land, pastures and semi-natural areas.

Such a density of measurements meant the study lead, University of Nottingham PhD student David Gee, could develop a cost-effective and simple method to model groundwater rebound from the surface movement changes.

The study found a definitive link between ground motion measurements and rising mine water levels. Often land subsidence or uplift occurs as a result of changes in groundwater, where the strata act a little like a sponge, expanding when filling with fluid and contracting when drained.

With near-complete spatial coverage of the InSAR data, he could fill in the measurement gaps between boreholes to map the change in mine water levels across the whole coalfield. The model takes into account both geology and depth of groundwater to determine the true rate of rebound and help identify where problems associated with rebound may occur.

The findings have been published in a paper 'Modelling groundwater rebound in recently abandoned coalfields using DInSAR' in the journal Remote Sensing of Environment.

David Gee, who is based in the Nottingham Geospatial Institute at the University, said, "There are several coalfields currently undergoing mine water rebound in the UK, where surface uplift has been measured using InSAR. In the Nottinghamshire coalfields, the quantitative comparison between the deformation measured by the model and InSAR confirms that the heave is caused by the recovery of mine water."

First, a forward model was generated to estimate surface uplift in response to measured changes in groundwater levels from monitoring boreholes. David calibrated and validated the model using ISBAS InSAR on ENVISAT and Sentinel-1 radar data. He then inverted the InSAR measurements to provide an estimate of the change in groundwater levels. Subsequently, the inverted rates were used to estimate the time it will take for groundwater to rebound and identify areas of the coalfield most at risk of surface discharges.
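In highly simplified form, the forward and inverse steps look like the sketch below: treat surface uplift as proportional to the change in groundwater head through an elastic storage coefficient. The real model accounts for geology, depth and the mine workings themselves; the coefficient here is an assumed illustrative value.

```python
# Assumed, illustrative elastic storage coefficient: uplift (m) per metre of head rise
SKELETAL_STORAGE = 2.0e-4

def forward_uplift(head_change_m):
    """Forward model: predicted surface uplift (m) for a given rise in groundwater head (m)."""
    return SKELETAL_STORAGE * head_change_m

def invert_head_change(uplift_m):
    """Inversion: estimated head rise (m) from an InSAR-measured uplift (m)."""
    return uplift_m / SKELETAL_STORAGE

# Example: 6 mm of measured uplift would imply roughly a 30 m rise in mine water level
print(invert_head_change(0.006))
```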

"InSAR measurements, when combined with modelling, can assist with the characterisation of the hydrogeological processes occurring at former mining sites. The technique has the potential to make a significant contribution to the progressive abandonment strategy of recently closed coalfields," David said.

The InSAR findings offer a supplementary source of data on groundwater changes that augment the borehole measurements. It means monitoring can be done remotely so is less labour-intensive for national bodies such as the Environment Agency (which manages hazards such as flooding, pollution and contaminated land) and the Coal Authority (which has a mandate to manage the legacy of underground coal mining in terms of public safety and subsidence).

The model has already flagged that some parts of the coalfields are not behaving as previously predicted, which could influence existing remediation plans.

David explains, "The deepest part of the North Nottinghamshire coalfield, for example, is not rebounding as expected which suggests that the mine plans here might not be completely accurate. The stability is confirmed by the InSAR and the model - future monitoring of this area will help to identify if or when rebound does eventually occur.

"Next steps for the project are to integrate our results into an existing screening tool developed by the Environment Agency and Coal Authority to help local planning authorities, developers and consultants design sustainable drainage systems in coalfield areas. The initial results, generated at a regional scale, have the potential to be scaled to all coalfields in the UK, with the aid of national InSAR maps," adds David.

Luke Bateson, Senior Remote Sensing Geologist from the British Geological Survey, said, "InSAR data offers a fantastic opportunity to reveal how the ground is moving, however we need studies such as David's in order to understand what these ground motions relate to and what they mean. David's study, not only provides this understanding but also provides a tool which can convert InSAR ground motions into information on mine water levels that can be used to make informed decisions."

Dr Andrew Sowter, Chief Technical Officer at Terra Motion Ltd, explains, "Studies like this demonstrate the value to us, as a small commercial company, in investing in collaborative work with the University. We now have a remarkable, validated, result that is based upon our ISBAS InSAR method and demonstrably supported by a range of important stakeholders. This will enable us to further penetrate the market in a huge range of critical applications hitherto labelled as difficult for more conventional InSAR techniques, particularly those markets relating to underground fluid extraction and injection in more temperate, vegetated zones."

Credit: 
University of Nottingham

New Fred Hutch-led trial shows no benefits of dairy foods for blood sugar regulation

image: Results from a new trial published by a team led by researchers at Fred Hutchinson Cancer Research Center suggest lower dairy intake may be beneficial for people with metabolic syndrome.

Image: 
Darryl Leja, National Human Genome Research Institute, National Institutes of Health

SEATTLE -- Nov. 16, 2020 -- Results from a new trial published by a team led by researchers at Fred Hutchinson Cancer Research Center suggest lower dairy intake may be beneficial for people with metabolic syndrome. In a new study published in the American Journal of Clinical Nutrition, Dr. Mario Kratz, an associate professor in Fred Hutch's Public Health Sciences Division, led a team that looked at dairy's impact on regulating blood sugar levels in people with metabolic syndrome.

This project was motivated by previous observational studies suggesting people who ate the most yogurt or full-fat dairy foods tended to have a lower risk of Type 2 diabetes. Findings from the new Fred Hutch-led trial showed that the body's ability to regulate blood sugar levels was not directly affected by whether participants consumed dairy foods. However, consumption of either low-fat or full-fat milk, yogurt, and cheese reduced insulin sensitivity.

Metabolic syndrome is a group of risk factors that raises risk of heart disease, diabetes, stroke and other health problems. Insulin sensitivity refers to how the body's cells respond to insulin. High insulin sensitivity allows the cells of the body to use blood sugar more effectively, reducing blood sugar.

The study involved 72 volunteer men and women who had the metabolic syndrome. Using a parallel-design, randomized, controlled trial, the research team randomized the volunteers into three groups over 12 weeks:

A limited dairy diet, consisting of no dairy foods other than -- at most -- three servings of skim milk per week

A low-fat dairy diet, consisting of more than three servings of skim milk, nonfat yogurt and low-fat cheese per day

A full-fat dairy diet, consisting of more than three servings of whole milk, full-fat yogurt and full-fat cheese per day

After the 12 weeks, Kratz and his research team measured a variety of biomarkers, including blood sugar throughout a glucose tolerance test, systemic inflammation and liver-fat content. They found that blood sugar regulation was not directly affected by whether participants consumed dairy foods. However, participants on the full-fat dairy diet gained a modest amount of weight, and participants on both low-fat and full-fat dairy diets saw a decrease in insulin sensitivity. A reduction in insulin sensitivity could lead to an increased risk of Type 2 diabetes in the long term; however, because blood sugar levels were not affected by dairy foods, the long-term impact of reduced insulin sensitivity in people eating a diet rich in dairy on Type 2 diabetes risk is unclear.

"Unlike previous observational studies which suggested a beneficial relationship between fermented dairy foods such as yogurt, as well as high-fat dairy foods, and better metabolic health, our rigorous randomized, controlled trial could not confirm that eating more dairy foods lowers people's blood sugar levels," said Kratz. "While more work needs to be done examining the impact of diets rich in dairy in healthy populations, the finding of reduced insulin sensitivity that resulted from higher dairy intake may be concerning for people with metabolic syndrome and similar conditions such as prediabetes or Type 2 diabetes."

Kratz also feels strongly that a single study should always be interpreted with caution. First, dairy-rich diets had not reduced insulin sensitivity in all previous trials. Second, it is important to consider that even though the dairy-rich diets reduced insulin sensitivity, this did not lead to higher blood sugar levels in these participants. Because blood sugar is the clinical endpoint the research team cared more about (and it was the primary endpoint in this trial), these results' interpretation is not straightforward. And, lastly, when evaluating the overall health effects of a food group such as dairy, the food's impact on the regulation of blood sugar levels is just one of many considerations.

Credit: 
Fred Hutchinson Cancer Center

SwRI scientists expand space instrument's capabilities

image: This image shows the Chemistry Organic and Dating Experiment (CODEX), an instrument for in-situ dating of samples, capable of accuracy of ±20 million years. It was created by Southwest Research Institute scientists and is intended for future missions to Mars.

Image: 
SwRI

SAN ANTONIO -- Nov. 16, 2020 -- A new study by Southwest Research Institute scientists describes how they have expanded the capabilities of the prototype spaceflight instrument Chemistry Organic and Dating Experiment (CODEX), designed for field-based dating of extraterrestrial materials. CODEX now uses two different dating approaches based on rubidium-strontium and lead-lead geochronology methods. The instrument uses laser ablation resonance ionization mass spectrometry (LARIMS) to obtain dates using these methods.

"The central aim of CODEX is to better understand some of the outstanding questions of solar system chronology, such as the duration of heavy meteoroid bombardment or how long Mars was potentially habitable," said SwRI Staff Scientist F. Scott Anderson, who is leading development of the instrument.

"In a way, we've given CODEX binocular vision in dating," said Jonathan Levine, associate professor of physics at Colgate University and Anderson's collaborator on CODEX. "When you can look at something from two different perspectives, you get a deeper view of the object you are examining, whether you are using your eyes or any other tool. In dating planetary specimens, or any rock really, the same holds true."

Earlier versions of CODEX exploited the natural radioactive decay of rubidium into strontium as a measure of how much time had elapsed since the sample, usually an Earth rock, formed. CODEX continues to use that measurement method but is now also capable of measuring lead isotopes that are produced by the natural decay of uranium in a sample. By comparing two isotopes of lead, an independent estimate of the sample's age can be obtained.
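For reference, radiometric ages ultimately rest on the standard radioactive-decay relation. The snippet below shows the basic arithmetic for the rubidium-strontium system; in practice CODEX builds isochrons from many laser-ablation spot measurements rather than a single ratio, so this is a simplified illustration.

```python
import numpy as np

LAMBDA_RB87 = 1.42e-11   # decay constant of 87Rb per year (half-life ~48.8 billion years)

def decay_age(daughter_to_parent_ratio, decay_constant):
    """Age in years from the decay relation D = P * (exp(lambda * t) - 1)."""
    return np.log(1.0 + daughter_to_parent_ratio) / decay_constant

# Example: a radiogenic 87Sr/87Rb ratio of 0.065 corresponds to roughly 4.4 billion years
print(decay_age(0.065, LAMBDA_RB87) / 1e9, "Gyr")
```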

"Sometimes the two dating systems indicate the same age for a sample, and the agreement gives us confidence that we understand the history of the specimen," Anderson said. "But sometimes the ages disagree, and we learn that the rock's history was more nuanced or more complex than we thought."

Anderson and Levine used CODEX's two dating methods to measure the ages of six samples: one from Earth, two from Mars, and three from the Moon.

"This suite of rocks showed us the kinds of challenges we are likely to encounter when CODEX eventually gets to fly to either Mars or the Moon, and also shows us where CODEX is most likely to work successfully," Levine said. "Among three meteorites from the Moon which we studied, we reproduced the known ages in two cases, and found evidence in the third case for a much older age than has been reported before for this meteorite."

The ages of inner solar system objects are commonly estimated by counting impact craters, with the assumption that objects with more craters have existed for longer periods of time. These estimates are also partially calibrated by the ages of Moon rocks obtained by astronauts in the 1960s. However, in areas not explored by astronauts, the age estimates could be wrong by 100 million to billions of years. Thus, dating more samples is critical to our understanding of the age of the solar system.

"Dating is a challenging process. Traditional techniques are not easily adapted to spaceflight, requiring a sizable laboratory, considerable staff and several months to determine a date," Anderson said. "CODEX can date samples from these surfaces with an accuracy of ±20-80 million years, more than sufficient to reduce the existing uncertainties of 100-1000 million years, and considerably more accurate than other methods, which have a precision of about ±350 million years."

There are potentially hundreds of sites on the Moon and Mars that scientists are interested in dating, but sample return missions are expensive and time-consuming. For this reason, CODEX is designed to be compact enough to be incorporated into a spacecraft and could conduct on-site dating of samples.

"This experiment raises the prospect of equipping a future lander mission to the Moon or Mars with a single dating instrument capable of exploiting two complementary isotopic systems," Anderson said. "This combination would permit consistency checks and afford us a more nuanced understanding of planetary history."

Credit: 
Southwest Research Institute

No losses: Scientists stuff graphene with light

image: Header Image

Image: 
Daria Sokol/MIPT Press Office

Physicists from MIPT and Vladimir State University, Russia, have achieved nearly 90% efficiency in converting light energy into surface waves on graphene, relying on a laser-like energy conversion scheme and collective resonances. The paper came out in Laser & Photonics Reviews.

Manipulating light at the nanoscale is crucial for creating ultracompact devices for optical energy conversion and storage. To localize light on such a small scale, researchers convert optical radiation into so-called surface plasmon-polaritons (SPPs): oscillations propagating along the interface between two materials with drastically different refractive indices -- specifically, a metal and a dielectric or air. The degree of surface wave localization depends on the materials chosen and is strongest when light is confined to a material only one atomic layer thick, because such 2D materials have high refractive indices.
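
For a sense of how strongly light can be squeezed on a one-atom-thick conductor, the sketch below estimates the graphene plasmon wavelength from the standard quasi-static, lossless Drude dispersion, q ≈ πħ²ε₀(ε₁ + ε₂)ω²/(e²E_F). The Fermi level and surrounding permittivities are assumed values for illustration, not parameters reported in the paper.

```python
import math

# Quasi-static, lossless Drude estimate of the graphene plasmon wavelength.
# Assumed parameters (not from the study): Fermi level 0.4 eV, graphene between
# air (eps1 = 1) and a thin dielectric buffer (eps2 = 4).
HBAR = 1.054571817e-34       # J*s
E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
C_LIGHT = 2.99792458e8       # m/s

def graphene_plasmon_wavelength(lambda0_m, e_fermi_ev, eps1=1.0, eps2=4.0):
    """Plasmon wavelength on graphene for free-space wavelength lambda0_m."""
    omega = 2 * math.pi * C_LIGHT / lambda0_m
    e_fermi = e_fermi_ev * E_CHARGE
    q = math.pi * HBAR**2 * EPS0 * (eps1 + eps2) * omega**2 / (E_CHARGE**2 * e_fermi)
    return 2 * math.pi / q

lam0 = 3.5e-6   # free-space wavelength at which the SPPs are generated (see below)
lam_p = graphene_plasmon_wavelength(lam0, e_fermi_ev=0.4)
print(f"plasmon wavelength ~ {lam_p * 1e9:.0f} nm, "
      f"confinement ~ {lam0 / lam_p:.0f}x shorter than in free space")
```

With these assumed numbers the surface wave is compressed by roughly two orders of magnitude relative to its free-space wavelength, which is what makes one-atom-thick materials attractive for localization.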

The existing schemes for converting light to SPPs on 2D surfaces have an efficiency of no more than 10%. It is possible to improve that figure by using intermediary signal converters -- nano-objects of various chemical compositions and geometries.

The intermediary converters used in the recent study in Laser & Photonics Reviews are semiconductor quantum dots 5 to 100 nanometers in size, with a composition similar to that of the bulk semiconductor they are made from. Unlike the bulk material, however, a quantum dot's optical properties vary considerably with its size, so by changing its dimensions researchers can tune it to the optical wavelength of interest. If an assembly of variously sized quantum dots is illuminated with natural light, each dot will respond to a particular wavelength.
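
The size dependence mentioned above can be illustrated with the simple effective-mass (Brus) model, in which the effective optical gap of a dot of radius R grows roughly as 1/R². The sketch below is a back-of-the-envelope estimate using approximate literature values for bulk InSb; it is not the model used in the study, and the parabolic-band approximation is quite crude for a narrow-gap material like InSb.

```python
import math

# Rough Brus-model estimate of how a quantum dot's optical gap shifts with size.
# Bulk parameters are approximate literature values for InSb, used here only
# to illustrate the trend; they are assumptions, not figures from the paper.
HBAR = 1.054571817e-34       # J*s
M0 = 9.1093837015e-31        # free electron mass, kg
E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
HC_EV_NM = 1239.84           # photon energy (eV) times wavelength (nm)

E_GAP_EV = 0.17              # bulk band gap, eV (approx., room temperature)
M_E, M_H = 0.014 * M0, 0.43 * M0   # electron / heavy-hole effective masses (approx.)
EPS_R = 16.8                 # relative permittivity (approx.)

def dot_gap_ev(radius_m):
    """Effective optical gap (eV) of a spherical dot in the Brus model."""
    confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)) * (1 / M_E + 1 / M_H)
    coulomb = 1.786 * E_CHARGE**2 / (4 * math.pi * EPS_R * EPS0 * radius_m)
    return E_GAP_EV + (confinement - coulomb) / E_CHARGE

for r_nm in (5, 10, 20):
    gap = dot_gap_ev(r_nm * 1e-9)
    print(f"R = {r_nm:2d} nm -> gap ~ {gap:.2f} eV, "
          f"wavelength ~ {HC_EV_NM / gap / 1000:.2f} um")
```

Shrinking the dot from tens of nanometers to a few nanometers sweeps the resonance across the near- and mid-infrared, which is why dot size is the main tuning knob.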

Quantum dots come in various shapes -- cylinders, pyramids, spheres, etc. -- and chemical compositions. In its study, the team of Russian researchers used ellipsoid-shaped quantum dots 40 nanometers in diameter. The dots served as scatterers positioned above the surface of graphene, which was illuminated with infrared light at a wavelength of 1.55 micrometers. A dielectric buffer several nanometers thick separated the graphene sheet from the quantum dots (fig. 1).

The idea of using a quantum dot as a scatterer is not new. Some previous graphene studies used a similar arrangement, with the dots positioned above the 2D sheet and interacting with both the incident light and the surface electromagnetic waves at a single, shared wavelength, which was made possible by choosing exactly the right quantum dot size. While such a system is fairly easy to tune to resonance, it is susceptible to luminescence quenching -- the conversion of incident light energy into heat -- as well as to reverse light scattering. As a result, the efficiency of SPP generation did not exceed 10%.

"We investigated a scheme where a quantum dot positioned above graphene interacts both with incident light and with the surface electromagnetic wave, but the frequencies of these two interactions are different. The dot interacts with light at a wavelength of 1.55 micrometers and with the surface plasmon-polariton at 3.5 micrometers. This is enabled by a hybrid interaction scheme," comments study co-author Alexei Prokhorov, a senior researcher at the MIPT Center for Photonics and 2D Materials, and an associate professor at Vladimir State University.

The essence of the hybrid interaction scheme is that rather than using just two energy levels -- an upper and a lower one -- the setup also includes an intermediate level; that is, the team used an energy-level structure akin to that of a laser. The intermediate level enables strong coupling between the quantum dot and the surface electromagnetic wave. The quantum dot is excited at the wavelength of the laser illuminating it, whereas surface waves are generated at the wavelength determined by the resonance between the SPP and the quantum dot.

"We have worked with a range of materials for manufacturing quantum dots, as well as with various types of graphene," Prokhorov explained. "Apart from pure graphene, there is also what's called doped graphene, which incorporates elements from the neighboring groups in the periodic table. Depending on the kind of doping, the chemical potential of graphene varies. We optimized the parameters of the quantum dot -- its chemistry, geometry -- as well as the type of graphene, so as to maximize the efficiency of light energy conversion into surface plasmon-polaritons. Eventually we settled on doped graphene and indium antimonide as the quantum dot material."

Despite the highly efficient energy input into graphene via the quantum dot intermediary, the intensity of the resulting waves is extremely low, so large numbers of dots have to be used in a specific arrangement above the graphene layer. The researchers had to find precisely the right geometry -- the spacing between dots at which their near fields add in phase and amplify the signal. In their study, the team reports identifying such a geometry and obtaining a signal in graphene that is orders of magnitude stronger than for randomly arranged quantum dots. For these calculations, the physicists employed software modules they developed themselves.
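
The benefit of the right spacing can be seen with a toy phased-array picture: surface waves launched by dots separated by a whole number of plasmon wavelengths add in phase, while waves from randomly placed dots largely cancel. The one-dimensional sketch below illustrates that scaling only; it is not the electromagnetic model the authors actually used, and the plasmon wavelength is an assumed value.

```python
import cmath
import random

# Toy 1D comparison of coherent vs. incoherent addition of SPP fields launched
# by N dots: periodic spacing of one plasmon wavelength vs. random placement.
LAMBDA_SPP = 2.3e-8              # assumed plasmon wavelength, meters
Q = 2 * cmath.pi / LAMBDA_SPP    # SPP wavenumber
N_DOTS = 50

def array_intensity(positions):
    """Relative intensity of the summed unit-amplitude SPP fields."""
    field = sum(cmath.exp(1j * Q * x) for x in positions)
    return abs(field) ** 2

periodic = [n * LAMBDA_SPP for n in range(N_DOTS)]                    # phased
random.seed(0)
scattered = [random.uniform(0, N_DOTS * LAMBDA_SPP) for _ in range(N_DOTS)]

print("periodic placement:", round(array_intensity(periodic)))   # ~N^2 = 2500
print("random placement:  ", round(array_intensity(scattered)))  # ~N on average
```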

The calculated conversion efficiency of the newly proposed scheme is as high as 90%-95%. Even accounting for all the potential negative factors that might affect this figure of merit, it will remain above 50% -- several times higher than any other competing system.

"A large part of such research focuses on creating ultracompact devices that would be capable of converting light energy into surface plasmon-polaritons with a high efficiency and on a very small scale in space, thereby recording light energy into some structure," said the director of the MIPT Center for Photonics and 2D Materials, Valentyn Volkov, who co-authored the study. "Moreover, you can accumulate polaritons, potentially designing an ultrathin battery composed of several atomic layers. It is possible to use the effect in light energy converters similar to solar cells, but with a several times higher efficiency. Another promising application has to do with nano- and bio-object detection."

Credit: 
Moscow Institute of Physics and Technology

Novel glass materials made from organic and inorganic components

image: Dr Courtney Calahoo from the University of Jena presents organic glass (left) and inorganic glass (right) - two starting materials for the new composite glass.

Image: 
Jens Meyer/University of Jena

Cambridge/Jena (16.11.2020) Linkages between organic and inorganic materials are a common phenomenon in nature, e.g., in the construction of bones and skeletal structures. They often enable combinations of properties that could not be achieved with just one type of material. In technological material development, however, these so-called hybrid materials still represent a major challenge today.

A new class of hybrid glass materials

Researchers from the Universities of Jena (Germany) and Cambridge (UK) have now succeeded in creating a new class of hybrid glass materials that combine organic and inorganic components. To do this, the scientists use special material combinations in which chemical bonds can be generated between organometallic and inorganic glasses. These include materials built from organometallic networks - so-called metal-organic frameworks (MOFs) - which have attracted rapidly growing research interest, primarily because their framework structures can be designed in a targeted way, from the length scale of individual molecules up to a few nanometers. This provides control over porosity - the size and permeability of the pores as well as the chemical properties of the pore surfaces - that can be adapted to a large number of applications, such as separation membranes, storage media for gases and liquids, catalyst supports, or new types of components for electrical energy storage devices.

"The chemical design of MOF materials follows a modular principle, according to which inorganic nodes are connected to one another via organic molecules to form a three-dimensional network. This results in an almost infinite variety of possible structures. A few of these structures can be converted into a glassy state by heat treatment. While crystalline MOF materials are typically synthesized in powder form, the liquid and glass states open up a wide range of processing options and potential shapes", explains Louis Longley from the University of Cambridge, UK.

Best of both worlds combined

"The combination of such MOF-derived glasses with classic inorganic glass materials could make it possible to combine the best of both worlds," says Courtney Calahoo, a senior scientist at the Chair of Glass Chemistry at Friedrich Schiller University Jena, Germany. For example, composite glasses of this kind could lead to significantly improved mechanical properties by combining the impact and fracture toughness of plastics with the high hardness and rigidity of inorganic glasses. The decisive factor in ensuring that the materials involved are not simply mixed with one another is the creation of a contact area within which chemical bonds can form between the organometallic network and conventional glass. "Only in this way can really new properties be obtained, for example in electrical conductivity or mechanical resistance," explains Lothar Wondraczek, Professor of Glass Chemistry in Jena.

Credit: 
Friedrich-Schiller-Universitaet Jena

Scientific journal launches new series on the biology of invasive plants

image: A ground woodpecker (Geocolaptes olivaceus) sitting on firethorn (Pyracantha angustifolia) near the town of Clarens, Free State Province, South Africa.

Image: 
Photo: Grant Martin

WESTMINSTER, Colorado - November 16, 2020 - Today the journal Invasive Plant Science and Management (IPSM) announced the launch of a new series focused on the biology and ecology of invasive plants.

"Our goal is to assemble information that can serve as a trusted point of reference for key stakeholders," says Darren Kriticos of Australia's Commonwealth Scientific and Industrial Research Organization, associate editor of IPSM and coeditor of the new series. "Each article will focus on an emerging threat and provide practical recommendations for how to intervene."

Maps will be included to highlight the potential for the featured weed to spread to new countries or new continents. The information is intended to serve as an alert for biosecurity managers, who can determine whether they need to take preventive measures.

The inaugural article in the new series focuses on the weedy shrub Pyracantha angustifolia. Historically grown as a garden ornamental and hedge plant, pyracantha has become a global invader that is now prohibited in many countries.

Birds and small animals attracted to pyracantha's bright red berries have helped to spread the plant well outside its cultivated range. It can form dense, impenetrable thickets with sharp thorns that reduce the value of grazing lands and can harbor insects and diseases that damage crops.

Pyracantha is tolerant of cold, frost, strong winds and seasonally dry conditions. That means many habitats around the globe are at risk of invasion, including areas of Argentina, the United States and Central Europe. Pyracantha thrives in semi-arid regions where it can be difficult to manage cost-effectively using conventional methods. Researchers say biological controls may provide a safe, sustainable solution, especially in sensitive conservation settings and in "low input" production systems.

"In this age of globalization, there are numerous invasive plants like pyracantha that are spreading throughout the world," says David Clements, coeditor of the series and a professor at Trinity Western University. "Our new series is designed to look not only at the biology of invasive weeds, but also at the pathways they are most likely to use to invade new jurisdictions."

Scientists interested in contributing articles to the new series are encouraged to send inquiries to wssa@cambridge.org.

Credit: 
Cambridge University Press