Tech

International study shows alternative seafood networks provided resiliency during pandemic

Local alternative seafood networks (ASNs) in the United States and Canada, often considered niche segments, experienced unprecedented growth in the early months of the COVID-19 pandemic while the broader seafood system faltered, highlighting the need for greater functional diversity in supply chains, according to a new international study led by the University of Maine.

The spike in demand reflected a temporary relocalization phenomenon that can occur during periods of systemic shock -- an inverse yet complementary relationship between global and local seafood systems that contributes to the resilience of regional food systems, according to the research team, which published its findings in Frontiers in Sustainable Food Systems.

The globalization of seafood has made food systems more vulnerable to systemic shocks, which can impact those dependent on seafood for sustenance and employment, according to the research team, led by Joshua Stoll, assistant professor of marine policy at the University of Maine.

Policy changes and greater investments in data collection and infrastructure are needed to support ASN development, increase functional diversity in supply chains, and bolster the resilience and sustainability of regional food systems and the global seafood trade, according to the researchers.

"This research shows that alternative seafood networks help to make seafood supply chains more diverse. In doing so, it brings attention to the critical role that local seafood systems play in supporting resilient fisheries in times of crisis," says Stoll, who collaborated with researchers from the University of Guelph, Haverford College, University of Vermont, North American Marine Alliance and 11 community-supported fisheries from the U.S. and Canada.

Seafood is a highly perishable commodity that demands efficient distribution. Alternative seafood networks distribute seafood through local and direct marketing, conducted by the very people who caught it, as opposed to the long and complex supply chains of their global counterparts.

According to the study, this physical and social "connectedness" may help to insulate local and regional seafood systems from the deadlock caused by systemic global shocks that disrupt the broader seafood trade.

"ASNs emphasize shorter supply chains and direct-to-consumer models," says Philip Loring, associate professor and Arrell Chair in Food, Policy and Society at the University of Guelph. "They're not locked into a single system. They have access to diverse fisheries, they know how to get straight to the consumer. All these things came together and made a unique ability to pivot quickly."

Credit: 
University of Maine

Thermal power nanogenerator created without solid moving parts

image: Schematic of a thermoacoustically driven liquid-metal-based triboelectric nanogenerator.

Image: 
Shunmin Zhu, Guoyao Yu, Wei Tang, Jianying Hu, and Ercang Luo

WASHINGTON, March 31, 2021 -- As environmental and energy crises become increasingly common around the world, a thermal energy harvester capable of converting abundant thermal energy -- such as solar radiation, waste heat, combustion of biomass, or geothermal energy -- into mechanical energy appears to be a promising energy strategy to mitigate many of these crises.

The majority of thermal power generation technologies involve solid moving parts, which can reduce their reliability and lead to frequent maintenance. This inspired researchers in China to develop a thermal power nanogenerator without solid moving parts.

In Applied Physics Letters, from AIP Publishing, the researchers propose a thermal power nanogenerator, called a thermoacoustically driven liquid-metal-based triboelectric nanogenerator, or TA-LM-TENG, which converts thermal energy into electrical energy.

"This generator includes two parts: a thermoacoustic engine and a liquid-metal-based triboelectric nanogenerator (LM-TENG)," said Guoyao Yu, a professor at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences.

First, the thermoacoustic engine converts thermal energy into acoustic energy via oscillatory thermal expansion and contraction of a gas. Next, the LM-TENG converts acoustic energy into electrical energy via the coupling effect of contact electrification and electrostatic induction.

When heating the heat exchanger of the thermoacoustic engine, "the gas in the engine starts a spontaneous oscillation," Yu said. "The oscillatory motion of the gas pushes a liquid metal column flowing back and forth within a U-shaped tube. This makes the liquid metal periodically immerse and separate with a polyimide film, generating an alternate voltage at the electrodes. This extracts electrical power from the TA-LM-TENG."

A TA-LM-TENG's most desirable feature is its lack of any solid moving parts that can break, which makes the nanogenerator more reliable and helps it achieve a long lifespan.

"This generator also promises a theoretically high heat-to-electric conversion efficiency," said Yu. "And we designed and constructed a conceptual prototype to validate the feasibility of our concept. In preliminary experiments, the highest open-circuit voltage amplitude of 15 volts was achieved, which implies that our concept has been well demonstrated."

As long as the proposed thermal power nanogenerator can be reduced in size, it "shows potential for applications, such as waste heat recovery, power supply within (microelectromechanical systems), solar power, and space power systems," said Yu.

Credit: 
American Institute of Physics

Temperature sensor could help safeguard mRNA vaccines

image: When the temperature of a glass vial containing simulated vaccine rises above -60 C for longer than 2 minutes, a blue dye in an adjacent tube diffuses into a white absorbent, leaving an irreversible color trace.

Image: 
Adapted from ACS Omega 2021, DOI: 10.1021/acsomega.1c00404

Scientists have developed vaccines for COVID-19 with record speed. The first two vaccines widely distributed in the U.S. are mRNA-based and require ultracold storage (-70 C for one and -20 C for the other). Now, researchers reporting in ACS Omega have developed a tamper-proof temperature indicator that can alert health care workers when a vial of vaccine reaches an unsafe temperature for a certain period, which could help ensure distribution of effective mRNA vaccines.

The two COVID mRNA vaccines contain instructions for building harmless pieces of the SARS-CoV-2 spike protein. Once the vaccine is injected into the body, human cells use the mRNA instructions to make the spike protein, which they temporarily display on their surface, triggering an immune response. But mRNA is highly unstable, requiring ultracold storage and transport conditions for the vaccines to remain effective. Sung Yeon Hwang, Dongyeop Oh, Jeyoung Park and colleagues wanted to develop a time-temperature indicator (TTI) to identify mRNA vaccines that are exposed to undesirable temperatures during storage or transport, so that they could be discarded.

To make their TTI, the researchers added a mixture of ethylene glycol (antifreeze), water and blue dye to a small tube and froze it in liquid nitrogen. Then, they added a white cellulose absorbent to the top of the frozen coolant, turned the tube upside down, and adhered it to a larger glass vial containing simulated vaccine at -70 C. At temperatures above -60 C, the antifreeze mixture melted, and the dye diffused into the white absorbent, turning it light blue. The color change happened about 2 minutes after the simulated vaccine was exposed to a higher temperature. Importantly, exposures of less than 2 minutes -- which are unlikely to impair vaccine efficacy -- did not turn the TTI blue. The color change persisted if the tube was refrozen at -70 C, making the system tamper-proof. By changing the coolants or their mixing ratio, or by using different absorbents, the TTI could be tailored to monitor the ideal storage conditions of different mRNA vaccines, the researchers say.
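For readers who want the indicator's logic spelled out, below is a minimal sketch assuming a hypothetical digital analogue of the device: it latches irreversibly once the vial spends more than roughly two minutes above -60 C, mirroring the chemical behavior described above. The sampling interval, the reset behavior for sub-threshold excursions, and all function names are illustrative assumptions, not part of the published work.

```python
# Minimal sketch (hypothetical): the real indicator is purely chemical, but its
# logic can be written as an irreversible latch that trips once the vial spends
# more than ~2 minutes above the -60 C melting threshold reported in the study.
THRESHOLD_C = -60.0   # coolant begins to melt above this temperature
TRIP_MINUTES = 2.0    # dye reaches the absorbent after ~2 minutes of excursion

def indicator_tripped(temperature_log, step_minutes=0.5):
    """Return True if the indicator would show an irreversible color change.

    temperature_log: vial temperatures in deg C, sampled every step_minutes.
    Resetting the timer after a sub-threshold excursion is an assumption made
    here for simplicity; the paper only states that sub-2-minute exposures do
    not trigger the color change.
    """
    minutes_above = 0.0
    for temp in temperature_log:
        if temp > THRESHOLD_C:
            minutes_above += step_minutes
            if minutes_above >= TRIP_MINUTES:
                return True   # irreversible: refreezing cannot clear it
        else:
            minutes_above = 0.0
    return False

# 1.5 minutes above -60 C: no color change; 3 minutes: indicator trips.
print(indicator_tripped([-70, -55, -55, -55, -70]))                 # False
print(indicator_tripped([-70, -55, -55, -55, -55, -55, -55, -70]))  # True
```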

Credit: 
American Chemical Society

Study details how Middle East dust intensifies summer monsoons on Indian subcontinent

image: Dust-monsoon interactions in South and West Asia.

Image: 
Qinjian Jin

LAWRENCE -- New research from the University of Kansas published in Earth-Science Reviews offers insight into one of the world's most powerful monsoon systems: the Indian summer monsoon. The study details how the monsoon, of vital social and economic importance to the people of the region, is supercharged by atmospheric dust particles swept up by winds from deserts in the Middle East.

"We know that dust coming from the desert, when lifted by strong winds into the atmosphere, can absorb solar radiation," said lead author Qinjian Jin, lecturer and academic program associate with KU's Department of Geography & Atmospheric Science. "The dust, after absorbing solar radiation, becomes very hot. These dust particles suspended in the atmosphere can heat the atmosphere enough that the air pressure will change -- and it can result in changes in the circulation patterns, like the winds."

This phenomenon, dubbed an "elevated heat pump," drives moisture onto the Indian subcontinent from the sea.

"The Indian summer monsoon is characterized by strong winds in the summer," Jin said. "So once the winds change, the moisture transport from ocean to land will change, and consequently they will increase the precipitation. The precipitation is very important for people living in South Asia, especially India, and important for agriculture and drinkable water."

While the dust from the Middle East boosts the power of the monsoons on the Indian subcontinent, there is also a reverse effect that results in a positive feedback loop where the monsoons can increase the winds in the Middle East to produce yet more dust aerosols.

"The monsoon can influence dust emission," Jin said. "When we have a stronger monsoon, we have heating in the upper atmosphere. The convection associated with the monsoon can go up to a very high elevation, as much as 10 kilometers. When this pattern of air over the monsoon is heated, you produce something like a wave. Across the area, you'll have high pressure, then low pressure, then high pressure. Those waves can transport air to the Middle East. The air comes upward over the Indian subcontinent, then goes to the Middle East and goes downward -- and when the downward air strikes the surface, it can pick up a lot of dust aerosols."

Jin's co-authors on the paper are Bing Pu, assistant professor of geography & atmospheric science at KU; Jiangfeng Wei of the Nanjing University of Information Science and Technology in Nanjing, Jiangsu, China; William K.M. Lau of the University of Maryland and Chien Wang of Laboratoire d'Aerologie, CNRS/UPS, Toulouse, France.

Their work encompassed a review of literature on the relationship between Middle East dust and the Indian summer monsoons, along with original research using supercomputers to better understand the phenomenon.

While it has been known that an "elevated heat pump" exists due to dust coming from the Arabian Peninsula, Jin and his colleagues argue for another source in South Asia fueling the effect of aerosolized dust upon the Indian summer monsoons: the Iranian Plateau.

"The Iranian Plateau is located between the Middle East and the Tibetan Plateau, and the Iranian Plateau is also very high," Jin said. "The land at high elevation, when solar radiation reaches the surface, will become very hot. Over this hot Iranian Plateau, we can expect changes to the monsoon circulation and at the same time the hot air over the Iranian Plateau can also strengthen the circulation over the deserts of the Arabian Peninsula. So, the Iranian Plateau can increase dust emission from the Middle East, as well as monsoon circulation and monsoon precipitation. The Iranian Plateau is another driver that can explain the relationship between Middle East dust and Indian summer monsoon."

Jin said the paper explores three other mechanisms influencing the Indian summer monsoon. One is the snow-darkening effect, where black carbon and dust reduce snow's reflectivity, warming the land and the troposphere above. Another is the solar-dimming effect, where aerosols in the atmosphere cause the land surface to cool. Lastly, the research considers how aerosolized dust can serve as ice-cloud nuclei, which can "alter the microphysical properties of ice clouds and consequently the Indian summer monsoon rainfall."

The KU researcher said understanding these mechanisms and the climatic effects of dust will prove to be of increasing importance in the face of global climate change.

"This is especially true for Asia -- there are climate projections showing that in some areas of Asia, the land will become drier," Jin said. "So, we expect more dust emissions and dust will play a more important role for some time to come in Asia. We have a lot of anthropogenic emission, or air pollution, in eastern China, East Asia and India. But as people try to improve our air quality, the ratio between natural dust to anthropogenic aerosol will increase -- that means dust will play a more important role in the future."

Credit: 
University of Kansas

Revealing meat and fish fraud with a handheld 'MasSpec Pen' in seconds

image: The MasSpec Pen can authenticate the type and purity of meat samples in as little as 15 seconds.

Image: 
Adapted from Journal of Agricultural and Food Chemistry 2021, DOI: 10.1021/acs.jafc.0c07830

Meat and fish fraud are global problems, costing consumers billions of dollars every year. On top of that, mislabeled products can cause problems for people with allergies or with religious or cultural dietary restrictions. Current methods to detect this fraud, while accurate, are slower than inspectors would like. Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have optimized their handheld MasSpec Pen to identify common types of meat and fish within 15 seconds.

News stories of food fraud, such as beef being replaced with horse meat, and cheaper fish being branded as premium fillets, have led people to question if what is on the label is actually in the package. To combat food adulteration, the U.S. Department of Agriculture conducts regular, random inspections of these products. Although current molecular techniques, such as the polymerase chain reaction (PCR), are highly accurate, these analyses can take hours to days, and are often performed at off-site labs. Previous studies have devised more direct and on-site food analysis methods with mass spectrometry, using the amounts of molecular components to verify meat sources, but they also destroyed samples during the process or required sample preparation steps. More recently, Livia Eberlin and colleagues developed the MasSpec Pen -- a handheld device that gently extracts compounds from a material's surface within seconds and then analyzes them on a mass spectrometer. So, the team wanted to see whether this device could rapidly and effectively detect meat and fish fraud in pure filets and ground products.

The researchers used the MasSpec Pen to examine the molecular composition of grain-fed and grass-fed beef, chicken, pork, lamb, venison and five common fish species collected from grocery stores. Once the device's tip was pressed against a sample, a 20-μL droplet of solvent was released, extracting sufficient amounts of molecules within three seconds for accurate analysis by mass spectrometry. The whole process took 15 seconds, required no preprocessing, and the liquid extraction did not harm the samples' surfaces. Then the team developed authentication models using the unique patterns of the molecules identified, including carnosine, anserine, succinic acid, xanthine and taurine, to distinguish pure meat types from each other, beef based on feeding habit and among the five fish species. Finally, the researchers applied their models to the analysis of test sets of meats and fish. For these samples, all models had a 100% accuracy identifying the protein source, which is as good as the current method of PCR and approximately 720 times faster. The researchers say they plan to expand the method to other meat products and integrate the MasSpec Pen into a portable mass spectrometer for on-site meat authentication.
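As a rough illustration of what an authentication model of this kind looks like, the sketch below trains a simple classifier on simulated intensities of the marker molecules named in the study. The feature values, the two-class setup, and the use of scikit-learn's logistic regression are all assumptions made for illustration; the published models were built from real MasSpec Pen spectra, and the authors' statistical workflow may differ.

```python
# Hypothetical sketch of a meat-authentication model built on the relative
# intensities of marker metabolites named in the study (carnosine, anserine,
# succinic acid, xanthine, taurine). The training data below are simulated;
# the published models were trained on real MasSpec Pen mass spectra.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

features = ["carnosine", "anserine", "succinic_acid", "xanthine", "taurine"]
rng = np.random.default_rng(0)

# Rows are samples, columns are normalized ion intensities for the markers above.
X_beef = rng.normal([0.9, 0.2, 0.4, 0.3, 0.5], 0.05, size=(30, len(features)))
X_chicken = rng.normal([0.4, 0.8, 0.3, 0.5, 0.2], 0.05, size=(30, len(features)))
X = np.vstack([X_beef, X_chicken])
y = np.array(["beef"] * 30 + ["chicken"] * 30)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```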

Credit: 
American Chemical Society

Estimating lifetime microplastic exposure

Every day, people are exposed to microplastics from food, water, beverages and air. But it's unclear just how many of these particles accumulate in the human body, and whether they pose health risks. Now, researchers reporting in ACS' Environmental Science & Technology have developed a lifetime microplastic exposure model that accounts for variable levels from different sources and in different populations. The new model indicates a lower average mass of microplastic accumulation than previous estimates.

Microplastics, which are tiny pieces of plastic ranging in size from 1 μm to 5 mm (about the width of a pencil eraser), are ingested from a variety of sources, such as bottled water, salt and seafood. Their fate and transport in the human body are largely unknown, although the particles have been detected in human stool. In addition to possibly causing tissue damage and inflammation, microplastics could be a source of carcinogens and other harmful compounds that leach from plastic into the body. Previous studies have tried to estimate human exposure to the particles and their leached chemicals, but they have limitations, including discrepancies in the databases used, a failure to consider the entire microplastic size range and the use of average exposure rates that do not reflect global intakes. Nur Hazimah Mohamed Nor, Albert Koelmans and colleagues wanted to develop a comprehensive model to estimate the lifetime exposure of adults and children to microplastics and their associated chemicals.

To make their model, the researchers identified 134 studies that reported microplastic concentrations in fish, mollusks, crustaceans, tap or bottled water, beer, milk, salt and air. They performed corrections to the data so that they could be accurately compared among the different studies. Then, the team used data on food consumption in different countries for various age groups to estimate ranges of microplastic ingestion. This information, combined with rates of microplastic absorption from the gastrointestinal tract and excretion by the liver, was used to estimate microplastic distribution in the gut and tissues. The model predicted that, by the age of 18, children could accumulate an average of 8,300 particles (6.4 ng) of microplastics in their tissues, whereas by the age of 70, adults could accrue an average of 50,100 microplastic particles (40.7 ng). The estimated amounts of four chemicals leaching from the plastics were small compared with a person's total intake of these compounds, the researchers concluded. These data suggest that prior studies might have overestimated microplastic exposure and possible health risks, but it will be important to assess the contributions of other food types to ingestion and accumulation, the researchers say.
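To make the structure of such an exposure model concrete, here is a minimal single-compartment sketch: daily ingestion, a small absorbed fraction, and first-order clearance. Every parameter value is a placeholder chosen for illustration, not one of the authors' fitted rates; the published model resolves intake by food type, country and age group before estimating tissue burdens.

```python
# Minimal single-compartment sketch of lifetime microplastic accumulation:
# daily ingestion, a small absorbed fraction, and first-order clearance.
# All parameter values are illustrative placeholders, not the authors' rates.
daily_intake_particles = 300      # hypothetical ingestion rate (particles/day)
absorbed_fraction = 0.001         # hypothetical fraction crossing the gut wall
clearance_per_day = 0.0005        # hypothetical first-order loss (e.g., biliary excretion)

tissue_burden = 0.0
for day in range(1, 70 * 365 + 1):
    tissue_burden += daily_intake_particles * absorbed_fraction  # uptake
    tissue_burden *= (1.0 - clearance_per_day)                   # clearance
    if day == 18 * 365:
        print(f"illustrative burden at age 18: {tissue_burden:.0f} particles")
print(f"illustrative burden at age 70: {tissue_burden:.0f} particles")
```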

Credit: 
American Chemical Society

Century-old problem solved with first-ever 3D atomic imaging of an amorphous solid

image: At left, an experimental 3D atomic model of a metallic glass nanoparticle, 8 nanometers in diameter. Right, the 3D atomic packing of a representative ordered supercluster in the metallic glass structure, with differently colored balls representing different types of atoms.

Image: 
Yao Yang and Jianwei "John" Miao/UCLA

Glass, rubber and plastics all belong to a class of matter called amorphous solids. And in spite of how common they are in our everyday lives, amorphous solids have long posed a challenge to scientists.

Since the 1910s, scientists have been able to map in 3D the atomic structures of crystals, the other major class of solids, which has led to myriad advances in physics, chemistry, biology, materials science, geology, nanoscience, drug discovery and more. But because amorphous solids aren't assembled in rigid, repetitive atomic structures like crystals are, they have defied researchers' ability to determine their atomic structure with the same level of precision.

Until now, that is.

A UCLA-led study in the journal Nature reports on the first-ever determination of the 3D atomic structure of an amorphous solid -- in this case, a material called metallic glass.

"We know so much about crystals, yet most of the matter on Earth is non-crystalline and we know so little about their atomic structure," said the study's senior author, Jianwei "John" Miao, a UCLA professor of physics and astronomy and member of the California NanoSystems Institute at UCLA.

Observing the 3D atomic arrangement of an amorphous solid has been Miao's dream since he was a graduate student. That dream has now been realized, after 22 years of relentless pursuit.

"This study just opened a new door," he said.

Metallic glasses tend to be both stronger and more shapeable than standard crystalline metals, and they are used today in products ranging from electrical transformers to high-end golf clubs and the housings for Apple laptops and other electronic devices. Understanding the atomic structure of metallic glasses could help engineers design even better versions of these materials, for an even wider array of applications.

The researchers used a technique called atomic electron tomography, a type of 3D imaging pioneered by Miao and collaborators. The approach involves beaming electrons through a sample and collecting an image on the other side. The sample is rotated so that measurements can be taken from multiple angles, yielding data that is stitched together to produce a 3D image.
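The sketch below illustrates the same tomographic principle in two dimensions, using a standard phantom and filtered back-projection from scikit-image: projections taken at many angles are combined into a reconstruction. It is a generic textbook example, not the atomic-resolution electron tomography pipeline used in the study.

```python
# Generic 2D illustration of the tomographic principle described above:
# projections acquired at many tilt angles are combined into a reconstruction.
# Uses a standard phantom and filtered back-projection from scikit-image; this
# is not the atomic-resolution reconstruction algorithm used in the study.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                        # stand-in for the object
theta = np.linspace(0.0, 180.0, 55, endpoint=False)  # 55 tilt angles, echoing the study
sinogram = radon(image, theta=theta)                 # simulated projections ("tilt series")
reconstruction = iradon(sinogram, theta=theta)       # stitch projections back into an image

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"{sinogram.shape[1]} projections, reconstruction RMS error: {rms_error:.3f}")
```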

"We combined state-of-the-art electron microscopy with powerful algorithms and analysis techniques to study structures down to the level of single atoms," said co-author Peter Ercius, a staff scientist at Lawrence Berkeley National Laboratory's Molecular Foundry, where the experiment was conducted. "Direct knowledge of amorphous structures at this level is a game changer for the physical sciences."

The researchers examined a sample of metallic glass about 8 nanometers in diameter, made of eight different metals. (A nanometer is one-billionth of a meter.) Using 55 atomic electron tomography images, Miao and colleagues created a 3D map of the approximately 18,000 atoms that made up the nanoparticle.

Because amorphous solids have been so difficult to characterize, the researchers expected the atoms to be arranged chaotically. And although about 85% of the atoms were in a disordered arrangement, the researchers were able to identify pockets where a fraction of atoms coalesced into ordered superclusters. The finding demonstrated that even within an amorphous solid, the arrangement of atoms is not completely random.

Miao acknowledged one limitation of the research, borne of the limits of electron microscopy itself. Some of the metal atoms were so similar in size that electron imaging couldn't distinguish between them. For the purposes of the study, the researchers grouped the metals into three categories, uniting neighbors from the periodic table of elements: cobalt and nickel in the first category; ruthenium, rhodium, palladium and silver in the second; and iridium and platinum in the third.

The research was supported primarily by the STROBE National Science Foundation Science and Technology Center, of which Miao is deputy director, and in part by the U.S. Department of Energy.

"This groundbreaking result exemplifies the power of a transdisciplinary team," said Charles Ying, the National Science Foundation program officer who oversees funding for the STROBE center. "It demonstrates the need for long-term support of a center to address this type of complex research project."

The study's co-first authors are graduate student Yao Yang, former assistant project scientist Jihan Zhou, former postdoctoral researcher Fan Zhu, and postdoctoral researcher Yakun Yuan, all current or former members of Miao's research group at UCLA. Other UCLA co-authors are graduate students Dillan Chang and Arjun Rana; former postdoctoral scholars Dennis Kim and Xuezeng Tian; assistant adjunct professor of mathematics Minh Pham; and mathematics professor Stanley Osher.

Other co-authors are Yonggang Yao and Liangbing Hu of University of Maryland, College Park; and Andreas Schmid and Peter Ercius of Lawrence Berkeley National Laboratory.

"This work is a great illustration of how to address longstanding grand challenges by bringing together scientists with many different backgrounds in physics, mathematics, materials and imaging science, with strong partnerships between universities and national laboratories," said Margaret Murnane, director of the STROBE center. "This is a spectacular team."

Credit: 
University of California - Los Angeles

Vitamin A for nerve cells

image: Using electron microscopy images, the researchers visualize the dendritic spines (yellow) with their spine apparatus (red) and the synapse terminal buttons (blue).

Image: 
Andreas Vlachos

Neuroscientists agree that a person's brain is constantly changing, rewiring itself and adapting to environmental stimuli. This is how humans learn new things and create memories. This adaptability and malleability is called plasticity. "Physicians have long suspected that remodeling processes also take place in humans at the contact points between nerve cells, i.e. directly at the synapses. Until now, however, such a coordinated adaptation of structure and function could only be demonstrated in animal experiments," says Prof. Dr. Andreas Vlachos from the Institute of Anatomy and Cell Biology at the University of Freiburg. But now Vlachos, together with Prof. Dr. Jürgen Beck, head of the Department of Neurosurgery at the University Medical Center Freiburg, has provided experimental evidence for synaptic plasticity in humans. In addition to Vlachos and Beck, the research team consists of Dr. Maximilian Lenz, Pia Kruse and Amelie Eichler from the University of Freiburg, Dr. Jakob Strähle from the University Medical Center Freiburg and colleagues from Goethe University Frankfurt. The results were presented in the scientific journal eLife.

In the experiments, the team investigated whether so-called dendritic spines change when exposed to a vitamin A derivative called retinoic acid. Dendritic spines are the parts of the synapse that receive, process and transmit signals during communication between neurons. As such, they play a crucial role in brain plasticity and are constantly adapting to everyday experience. For example, learning can change the number and shape of dendritic spines. However, a transformation in the number or shape of the spines is also found in diseases such as depression or dementia.

The research shows that retinoic acid not only increases the size of dendritic spines, but also strengthens their ability to transmit signals between neurons. "We have concluded from our results that retinoic acids are important messengers for synaptic plasticity in the human brain. Thus, this finding contributes to the identification of key mechanisms of synaptic plasticity in the human brain and could support the development of new therapeutic strategies for brain diseases, such as depression," says Vlachos.

To demonstrate experimentally that synaptic plasticity also exists in humans, the researchers used tiny samples of human cerebral cortex that had to be removed during neurosurgical procedures for therapeutic reasons. The removed brain tissue was then treated with retinoic acid before the functional and structural properties of neurons were analyzed using electrophysiological and microscopic techniques.

Credit: 
University of Freiburg

Quantum material's subtle spin behavior proves theoretical predictions

image: Spin chains in a quantum system undergo a collective twisting motion as the result of quasiparticles clustering together. Demonstrating this KPZ dynamics concept are pairs of neighboring spins, shown in red, pointing upward in contrast to their peers, in blue, which alternate directions.

Image: 
Michelle Lehman/ORNL, U.S. Dept. of Energy

Using complementary computing calculations and neutron scattering techniques, researchers from the Department of Energy's Oak Ridge and Lawrence Berkeley national laboratories and the University of California, Berkeley, discovered the existence of an elusive type of spin dynamics in a quantum mechanical system.

The team successfully simulated and measured how magnetic particles called spins can exhibit a type of motion known as Kardar-Parisi-Zhang, or KPZ, in solid materials at various temperatures. Until now, scientists had not found evidence of this particular phenomenon outside of soft matter and other classical materials.

These findings, which were published in Nature Physics, show that the KPZ scenario accurately describes the changes in time of spin chains -- linear channels of spins that interact with one another but largely ignore the surrounding environment -- in certain quantum materials, confirming a previously unproven hypothesis.

"Seeing this kind of behavior was surprising, because this is one of the oldest problems in the quantum physics community, and spin chains are one of the key foundations of quantum mechanics," said Alan Tennant, who leads a project on quantum magnets at the Quantum Science Center, or QSC, headquartered at ORNL.

Observing this unconventional behavior provided the team with insights into the nuances of fluid properties and other underlying features of quantum systems that could eventually be harnessed for various applications. A better understanding of this phenomenon could inform the improvement of heat transport capabilities using spin chains or facilitate future efforts in the field of spintronics, which saves energy and reduces noise that can disrupt quantum processes by manipulating a material's spin instead of its charge.

Typically, spins proceed from place to place through either ballistic transport, in which they travel freely through space, or diffusive transport, in which they bounce randomly off impurities in the material - or each other - and slowly spread out.

But fluid spins are unpredictable, sometimes displaying unusual hydrodynamical properties, such as KPZ dynamics, an intermediate category between the two standard forms of spin transport. In this case, special quasiparticles roam randomly throughout a material and affect every other particle they touch.

"The idea of KPZ is that, if you look at how the interface between two materials evolves over time, you see a certain kind of scaling akin to a growing pile of sand or snow, like a form of real-world Tetris where shapes build on each other unevenly instead of filling in the gaps," said Joel Moore, a professor at UC Berkeley, senior faculty scientist at LBNL and chief scientist of the QSC.

Another everyday example of KPZ dynamics in action is the mark left on a table, coaster or other household surface by a hot cup of coffee. The shape of the coffee particles affects how they diffuse. Round particles pile up at the edge as the water evaporates, forming a ring-shaped stain. However, oval particles exhibit KPZ dynamics and prevent this movement by jamming together like Tetris blocks, resulting in a filled in circle.

KPZ behavior can be categorized as a universality class, meaning that it describes the commonalities between these seemingly unrelated systems based on the mathematical similarities of their structures in accordance with the KPZ equation, regardless of the microscopic details that make them unique.
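For reference, the universality class takes its name from the Kardar-Parisi-Zhang equation for a growing interface h(x, t); the standard form is shown below (this is textbook material, not an equation quoted from the paper).

```latex
% Standard form of the Kardar-Parisi-Zhang equation for a growing interface h(x,t):
% the nu term smooths the interface, the nonlinear lambda term makes growth depend
% on the local slope, and eta(x,t) is uncorrelated noise. Systems in the KPZ
% universality class share its scaling exponents regardless of microscopic details.
\begin{equation}
  \frac{\partial h(x,t)}{\partial t}
    = \nu \,\nabla^{2} h
    + \frac{\lambda}{2}\,\bigl(\nabla h\bigr)^{2}
    + \eta(x,t)
\end{equation}
```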

To prepare for their experiment, the researchers first completed simulations with resources from ORNL's Compute and Data Environment for Science, as well as LBNL's Lawrencium computational cluster and the National Energy Research Scientific Computing Center, a DOE Office of Science user facility located at LBNL. Using the Heisenberg model of isotropic spins, they simulated the KPZ dynamics demonstrated by a single 1D spin chain within potassium copper fluoride.

"This material has been studied for almost 50 years because of its 1D behavior, and we chose to focus on it because previous theoretical simulations showed that this setting was likely to yield KPZ hydrodynamics," said Allen Scheie, a postdoctoral research associate at ORNL.

The team then used the SEQUOIA spectrometer at the Spallation Neutron Source, a DOE Office of Science user facility located at ORNL, to examine a previously unexplored region within a physical crystal sample and to measure the collective KPZ activity of real, physical spin chains. Neutrons are an exceptional experimental tool for understanding complex magnetic behavior due to their neutral charge and magnetic moment and their ability to penetrate materials deeply in a nondestructive fashion.

Both methods revealed evidence of KPZ behavior at room temperature, a surprising accomplishment considering that quantum systems usually must be cooled to almost absolute zero to exhibit quantum mechanical effects. The researchers anticipate that these results would remain unchanged, regardless of variations in temperature.

"We're seeing pretty subtle quantum effects surviving to high temperatures, and that's an ideal scenario because it demonstrates that understanding and controlling magnetic networks can help us harness the power of quantum mechanical properties," Tennant said.

This project began during the development of the QSC, one of five recently launched Quantum Information Science Research Centers competitively awarded to multi-institutional teams by DOE. The researchers had realized their combined interests and expertise perfectly positioned them to tackle this notoriously difficult research challenge.

Through the QSC and other avenues, they plan to complete related experiments to cultivate a better understanding of 1D spin chains under the influence of a magnetic field, as well as similar projects focused on 2D systems.

"We showed spin moving in a special quantum mechanical way, even at high temperatures, and that opens up possibilities for many new research directions," Moore said.

Credit: 
DOE/Oak Ridge National Laboratory

Researchers: Plants play leading role in cycling toxic mercury through the environment

LOWELL, Mass. - Researchers studying mercury gas in the atmosphere with the aim of reducing the pollutant worldwide have determined a vast amount of the toxic element is absorbed by plants, leading it to deposit into soils.

Hundreds of tons of mercury each year are emitted into the atmosphere as a gas by burning coal, mining and other industrial and natural processes. These emissions are absorbed by plants in a process similar to how they take up carbon dioxide. When the plants shed leaves or die, the mercury is transferred to soils where large amounts also make their way into watersheds, threatening wildlife and people who eat contaminated fish.

Exposure to high levels of mercury over long periods can lead to neurological and cardiovascular problems in humans, according to UMass Lowell's Daniel Obrist, professor and chair of the Department of Environmental, Earth and Atmospheric Sciences, who is leading the research group.

Obrist is an expert on the cycling of mercury in the environment. In his latest project, he and UMass Lowell Research Associate Jun Zhou collected more than 200 published studies with data on mercury levels in vegetation from more than 400 locations around the world. In evaluating these data, they determined that about 88 percent of the mercury found in plants originates from the leaves absorbing gaseous mercury from the atmosphere. Globally, vegetation can take up more than 1,300 tons of mercury each year, accounting for 60 to 90 percent of the mercury deposited over land, according to Zhou.

The team's findings were published this month in the academic journal Nature Reviews Earth & Environment. The study represents the largest comprehensive review of the uptake of mercury in vegetation and its impact on mercury cycling around the world, according to the researchers.

"When I walk outside here in New England, I am always amazed at the greenness of our forest, grasslands and salt marshes. One goal of my research is to determine how strongly vegetation controls the cycling of elements - some of which can be toxic pollutants - so we can better mitigate damaging effects," Obrist said.

The work moves scientists toward a greater understanding of how mercury cycling works, according to Zhou.

"Researchers have worked on the role that vegetation plays on cycling of mercury for over 30 years now, but the full extent of these impacts are still not yet fully realized. It was timely to write this comprehensive review and communicate to colleagues and the public about the current state of knowledge in this area," Zhou said.

Other contributors to the study include scientists from the Environment and Climate Change Canada's Air Quality Research Division in Quebec, and the University of Basel in Switzerland. Support for the research was provided by the U.S. National Science Foundation and Swiss National Science Foundation.

In a separate but related project led by Obrist, researchers continue to measure how vegetation affects mercury cycling in New England forests, focusing on those in Maine and Massachusetts. Obrist's team is using a variety of instruments and sensors to measure the forests' uptake of mercury in the atmosphere at various heights from above the tree canopy down to near the forest floor, allowing for daily tracking of how mercury deposition may be different in each forest and may change with the seasons.

Credit: 
University of Massachusetts Lowell

Getting to the core of HIV replication

image: The HIV-1 virus has evolved a way to import into its core the nucleotides it needs to fuel DNA synthesis, according to research led by Juan R. Perilla at the University of Delaware. Using the TACC Stampede2 and PSC Bridges supercomputers, Perilla's team has shown for the first time that a virus performs an activity such as recruiting small molecules from a cellular environment into its core to conduct a process beneficial for its life cycle.

Image: 
Xu, et al.

Viruses lurk in the grey area between the living and the nonliving, according to scientists. Like living things, they replicate but they don't do it on their own. The HIV-1 virus, like all viruses, needs to hijack a host cell through infection in order to make copies of itself.

Supercomputer simulations supported by the National Science Foundation-funded Extreme Science and Engineering Discovery Environment (XSEDE) have helped uncover the mechanism for how the HIV-1 virus imports into its core the nucleotides it needs to fuel DNA synthesis, a key step in its replication. It's the first example found where a virus performs an activity such as recruiting small molecules from a cellular environment into its core to conduct a process beneficial for its life cycle.

The computational biophysics research, published December 2020 in PLOS Biology, challenges the prevailing view of the viral capsid, long considered to be just a static envelope housing the genetic material of the HIV-1 virus.

"To my knowledge, it's the first piece of work that comprehensively shows an active role of the capsids in regulating a very specific lifecycle of the virus, not only computationally, but also in vitro assays and ultimately in the cells," said study co-author Juan R. Perilla, a biophysical chemist at the University of Delaware.

The research team collaborated with several research groups, including experimental groups at the University of Pittsburgh School of Medicine and the Harvard Medical School. These groups validated the predictions from molecular dynamics (MD) simulations by using atomic force microscopy and transmission electron microscopy.

"For our part, we used MD simulations," said lead author Chaoyi Xu, a graduate student in the Perilla Lab. "We studied how the HIV capsid allows permeability to small molecules, including nucleotides, IP6, and others." IP6 is a metabolite that helps stabilize the HIV-1 capsid.

It's rare for a computational paper to be in a biology journal, explained Perilla. "The reason this is possible is that we are discovering new biology," he said. The biology relates to the ability of the virus to import small molecules that it needs for certain metabolic pathways. "In the context of HIV, it's the fuel for the reverse transcription that occurs inside of the capsid."

The enzyme reverse transcriptase generates complementary DNA, one half of the DNA that pairs up in the cell to complete the full invading viral DNA. The viral DNA enters the host cell nucleus, integrates into the host cell DNA, and uses the cell's machinery to crank out new viral DNA.

"In these series of experiments and computational predictions, what we have shown is that the capsid itself plays an active role in the infective cycle," Perilla said. "It regulates the reverse transcription -- how the viral DNA synthesizes inside of the capsid." He explained that these processes are the result of millions of years of co-evolution between the virus and the target cell.

"Without supercomputers, the computational part of the study would have been impossible," added Xu. The challenge was that the biological problem of nucleotide translocation would require a longer timescale than would be possible to sample using atomistic molecular dynamics simulations.

Instead, the researchers used a technique called umbrella sampling coupled with Hamiltonian replica exchange. "The advantage of using this technique is that we can separate the whole translocation process into small windows," Xu said. In each small window, they ran individual small MD simulations in parallel on supercomputers.

"By using the resources provided from XSEDE, we were able to run and not only test the translocation processes, but also the effects of small molecules binding on the translocation process by comparing the free energy differences calculated from our results."

XSEDE awarded Perilla and his lab access to two supercomputing systems used in the HIV capsid research: Stampede2 at the Texas Advanced Computing Center (TACC); and Bridges at the Pittsburgh Supercomputing Center (PSC).

"TACC and PSC have been extremely generous to us and very supportive," Perilla said.

"When I transferred from Stampede1 to Stampede2, the hardware was a big improvement. At the time, we were fascinated with the Intel Xeon Skylake nodes. They were fantastic," Perilla said.

"On Bridges, we took advantage of the high memory nodes. They have these massive memory machines with 3 and 12 terabytes of inline memory. They're really good for analysis. Bridges provides a very unique service to the community," he continued.

On related work, the Perilla Lab has also employed through XSEDE the PSC Bridges-AI system, and they have been part of the early user science program for PSC's Bridges-2 platform.

"We've enjoyed this early science period on Bridges-2," Perilla said. "The experts at PSC want us to hammer the machine as much as we can, and we're happy to do that. We have a lot of work that needs to be done."

Perilla related that the XSEDE Campus Champion program has also helped in his mission for training the next generation of computational scientists. The program enlists 600+ faculty and staff at more than 300 universities to help students, faculty, and postdocs take full advantage of XSEDE's cyberinfrastructure resources.

"We received an immense amount of help from our XSEDE Campus Champion, Anita Schwartz." Perilla said. "She helped us with everything that is related to XSEDE. We also took advantage of the training programs. The younger members of our lab took advantage of the training opportunities offered by XSEDE."

Xu recalled finding them helpful for learning how to get started on XSEDE supercomputers, and also for learning the Simple Linux Utility for Resource Management (SLURM), the job scheduling and resource management system used on the supercomputers.

"By taking these courses, I familiarized myself with using these supercomputers, and also to use them to solve our research questions," Xu said.

What's more, the University of Delaware launched in December 2020 the Darwin supercomputer, a new XSEDE-allocated resource.

"The students in the group have had the opportunity to train on these fantastic machines provided by XSEDE, they're now at the point that they're expanding that knowledge to other researchers on campus and explaining the details of how to make the best use of the resource," Perilla said. "And now that we have an XSEDE resource here on campus, it's helping us create a local community that is as passionate about high performance computing as we are,"

Perilla sees this latest work on the HIV-1 capsid as providing a new target for therapeutic development. Because there is no cure for HIV and the virus keeps developing drug resistance, there is a constant need to optimize anti-retroviral drugs.

Said Perilla: "We're very enthusiastic about supercomputers and what they can do, the scientific questions they allow us to pose. We want to reproduce biology. That's the ultimate goal of what we do and what supercomputers enable us to do."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

First images of freshwater plumes at sea

image: Eric Attias and team deploy CSEM system offshore Hawai'i island.

Image: 
University of Hawai'i

The first imaging of substantial freshwater plumes west of Hawai'i Island may help water planners to optimize sustainable yields and aquifer storage calculations. University of Hawai'i at Mānoa researchers demonstrated a new method to detect freshwater plumes between the seafloor and ocean surface in a study recently published in Geophysical Research Letters.

The research, supported by the Hawai'i EPSCoR 'Ike Wai project, is the first to demonstrate that surface-towed marine controlled-source electromagnetic (CSEM) imaging can be used to map oceanic freshwater plumes in high-resolution. It is an extension of the groundbreaking discovery of freshwater beneath the seafloor in 2020. Both are important findings in a world facing climate change, where freshwater is vital for preserving public health, agricultural yields, economic strategies, and ecosystem functions.

Profound implications

While the CSEM method has been used to detect the presence of resistive targets such as oil, gas and freshwater beneath the seafloor, this study is the first time CSEM was applied to image freshwater in the ocean water column, according to 'Ike Wai research affiliate faculty Eric Attias, who led the study.

"This study has profound implications for oceanography, hydrogeology and ocean processes that affect biogeochemical cycles in coastal waters worldwide," said Attias. "Using CSEM, we now can estimate the volumes of freshwater emanating to the water column. This is indicative of the renewability of Hawai'i's submarine freshwater system."

Submarine groundwater discharge (SGD), the leaking of groundwater from a coastal aquifer into the ocean, is a key process, providing a water source for people, and supporting sea life such as fish and algae. According to UH Mānoa Department of Earth Sciences Associate Professor and study co-author Henrietta Dulai, the location of offshore springs is extremely hard to predict because of the unknown underlying geology and groundwater conduits.

"The flux of such high volumes of nutrient-rich, low salinity groundwater to the ocean has great significance for chemical budgets and providing nutrients for offshore food webs," said Dulai. "It is great to have a method that can pinpoint discharge locations and plumes as it opens up new opportunities to sample and identify the age of the water, its origin, chemical composition, and its significance for marine ecosystems in this otherwise oligotrophic (relatively low in plant nutrients and containing abundant oxygen in the deeper parts) ocean."

Four Olympic swimming pools

This study included electromagnetic-data-driven 2D CSEM inversion, resistivity-to-salinity calculation, and freshwater plume volumetric estimation. Through the use of CSEM, the research team was able to image surface freshwater bodies and multiple large-scale freshwater plumes that contained up to 87% freshwater offshore Hawai'i Island. The results imply that at the study site substantial volumes of freshwater are present in the area between the seafloor and the ocean's surface. A conservative estimate for one of the plumes suggests 10,720 cubic meters, or approximately the volume of four Olympic-sized swimming pools.
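As a quick check of that comparison, assuming the commonly cited 50 m by 25 m Olympic pool at the 2 m minimum depth (about 2,500 cubic meters each):

```latex
% Volume check assuming a 50 m x 25 m pool at the 2 m minimum depth (~2,500 m^3 each):
\begin{equation}
  4 \times \bigl(50\,\mathrm{m} \times 25\,\mathrm{m} \times 2\,\mathrm{m}\bigr)
  = 10{,}000\,\mathrm{m^{3}} \approx 10{,}720\,\mathrm{m^{3}}
\end{equation}
```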

The methodology used in this study can be applied to coastal areas worldwide, thus improving future hydrogeological models by incorporating offshore SGD and optimizing sustainable yields and storage calculations. Attias plans to extend the novel use of CSEM to further prove its application in imaging freshwater at other volcanic islands around the globe.

Attias will present his work at the International Tropical Island Water Conference taking place April 12-15, 2021. Hosted by the UH Water Resources Research Center and Hawai'i EPSCoR, this conference brings together water scientists, water managers and community members from around the world to share cutting-edge research and learn from each other's experiences managing and understanding water resources across a broad range of tropical island settings.

Credit: 
University of Hawaii at Manoa

Low-cost solar-powered water filter removes lead, other contaminants

image: In a study conducted at Princeton University, researchers placed the gel in lake water where it absorbed pure water, leaving contaminants behind. The researchers then placed the gel in the sun, where solar energy heated up the gel, causing the discharge of the pure water into the container.

Image: 
Xiaohui Xu, Princeton University

A new invention that uses sunlight to drive water purification could help solve the problem of providing clean water off the grid.

The device resembles a large sponge that soaks up water but leaves contaminants - like lead, oil and pathogens - behind. To collect the purified water from the sponge, one simply places it in sunlight. The researchers described the device in a paper published this week in the journal Advanced Materials.

The inspiration for the device came from the pufferfish, a species that takes in water to swell its body when threatened, and then releases water when danger passes, said the device's co-inventor Rodney Priestley, the Pomeroy and Betty Perry Smith Professor of Chemical and Biological Engineering, and Princeton's vice dean for innovation.

"To me, the most exciting thing about this work is it can operate completely off-grid, at both large and small scales," Priestley said. "It could also work in the developed world at sites where low-cost, non-powered water purification is needed."

Xiaohui Xu, a Princeton Presidential Postdoctoral Research Fellow in the Department of Chemical and Biological Engineering and co-inventor, helped develop the gel material at the heart of the device.

"Sunlight is free," Xu said, "and the materials to make this device are low-cost and non-toxic, so this is a cost-effective and environmentally friendly way to generate pure water."

The authors noted that the technology delivers the highest passive solar water-purification rate of any competing method.

One way to use the gel would be to place it in a water source in the evening and the next day place it in the sunlight to generate the day's drinking water, Xu said.

The gel can purify water contaminated with petroleum and other oils, heavy metals such as lead, small molecules, and pathogens such as yeast. The team showed that the gel maintains its ability to filter water for at least ten cycles of soaking and discharge with no detectable reduction in performance. The results suggest that the gel can be used repeatedly.

To demonstrate the device in real-world conditions, Xu took the device to Lake Carnegie on the Princeton University campus.

Xu placed the gel into the cool water (25 degrees Celsius, or 77 degrees Fahrenheit) of the lake, which contains microorganisms that make it unsafe to drink, and let it soak up the lake water for an hour.

At the end of the hour, Xu lifted the gel out of the water and set it on top of a container. As the sun warmed the gel, pure water trickled into the container over the next hour.

The device filters water much more quickly than existing methods of passive solar-powered water purification, the researchers said. Most other solar-powered approaches use sunlight to evaporate water, which takes much longer than absorption and release by the new gel.

Other water filtration methods require electricity or another source of power to pump water through a membrane. Passive filtration via gravity, as with typical household countertop filters, requires regular replacement of filters.

At the heart of the new device is a gel that changes depending on temperature. At room temperature, the gel can act as a sponge, soaking up water. When heated to 33 degrees Celsius (91 degrees Fahrenheit), the gel does the opposite - it pushes the water out of its pores.

The gel consists of a honeycomb-like structure that is highly porous. Closer inspection reveals that the honeycomb consists of long chains of repeating molecules, known as poly(N-isopropylacrylamide), that are cross-linked to form a mesh. Within the mesh, some regions contain molecules that like to have water nearby, or are hydrophilic, while other regions are hydrophobic or water-repelling.

At room temperature, the chains are long and flexible, and water can easily flow via capillary action into the material to reach the water-loving regions. But when the sun warms the material, the hydrophobic chains clump together and force the water out of the gel.

This gel sits inside two other layers that stop contaminants from reaching the inner gel. The middle layer is a dark-colored material called polydopamine (PDA) that transforms sunlight into heat and also keeps out heavy metals and organic molecules. With PDA in place, the sun's light can heat up the inner material even if the actual outdoor temperature is not very warm.

The final external layer is a filtering layer of alginate, which blocks pathogens and other materials from entering the gel.

Xu said that one of the challenges to making the device was to formulate the inner gel to have the correct properties for water absorption. Initially the gel was brittle, so she altered the composition until it was flexible. Xu synthesized the materials and conducted studies to assess the device's ability to purify water, aided by coauthors Sehmus Ozden and Navid Bizmark, postdoctoral research associates in the Princeton Institute for the Science and Technology of Materials.

Sujit Datta, assistant professor of chemical and biological engineering, and Craig Arnold, the Susan Dod Brown Professor of Mechanical and Aerospace Engineering and director of the Princeton Institute for the Science and Technology of Materials, collaborated on the development of the technology.

The team is exploring ways to make the technology widely available with the help of Princeton Innovation, which supports University researchers in the translation of discoveries into technologies and services for the benefit of society.

Credit: 
Princeton University

A Skoltech method helps model the behavior of 2D materials under pressure

image: Modeling the behavior of 2D materials under pressure

Image: 
ACS Nano (2021). DOI: 10.1021/acsnano.0c10609

Scientists from the Skoltech Center for Energy Science and Technology (CEST) have developed a method for modeling the behavior of 2D materials under pressure. The research will help create pressure sensors based on silicene or other 2D materials. The paper was published in the ACS Nano journal.

Silicene, which is regarded as the silicon analog of graphene, is a two-dimensional allotrope of silicon. In its normal state, bulk silicon is a semiconductor with a diamond-type crystal structure. As it thins down to one or several layers, its properties change dramatically. However, it has not yet been possible to study the change in the electronic properties of 2D materials at high pressure.

Scientists from Russia, Italy, the United States, and Belgium have developed a theoretical research method relying on quantum chemistry to study the electronic properties of 2D materials under pressure, using silicene as an example. In contrast to carbon, which is stable in both 3D and 2D states, silicene is metastable and readily interacts with its environment.

"Silicon is a semiconductor in its bulk state and a metal in the 2D state. The properties of monolayer and multilayered silicene are extensively studied theoretically. Silicene is corrugated rather than flat due to the interactions between the neighboring silicon atoms. An increase in pressure should flatten silicene and change its properties, but this effect cannot yet been investigated experimentally," explains Skoltech research scientist Christian Tantardini.

In most cases, experimental tools used to apply pressure along the axis normal to the material's plane also produce compression in the in-plane directions of the 2D material. The resulting measurements are therefore unlikely to be accurate, so for now modeling appears to be the only plausible approach.

"In our case, a new theoretical approach was the only solution. As pressure is applied only along one direction, we simulate the compression of our material and try to figure out what is the reason for the changes in the electronic structure, arrangement of silicon atoms and their hybridization under different pressures, and why the layers flatten," Skoltech Senior Research Scientist Alexander Kvashnin comments.

Accurate prediction of the behavior of silicene or other 2D materials under pressure would make silicene a promising candidate for pressure sensors. Placed inside a sensor, silicene could help determine pressure from the material's response to compression. Sensors of this kind could be used, for instance, in drilling rigs, where precise pressure control is needed to increase the drilling force without damaging the equipment.

"We used silicene in our modeling study to test the method which could also work for other 2D materials, including more stable ones that are already manufactured and used extensively, at zero pressure," says Xavier Gonze, a visiting professor at Skoltech and a professor at the Université Catholique de Louvain (UCLouvain) in Belgium.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Scientists find genetic link to clogged arteries

image: A new study from Washington University School of Medicine in St. Louis has identified a gene -- called SVEP1 -- that makes a protein that influences the risk of coronary artery disease independent of cholesterol. SVEP1 induces proliferation of vascular smooth muscle cells in the development of atherosclerosis. Shown is a stained section of atherosclerotic plaque from a mouse aorta, the largest artery in the body. Vascular smooth muscle cells are red; proliferating cells are cyan; nuclei of any cell are blue.

Image: 
In-Hyuk Jung, PhD, Stitziel Lab

High cholesterol is the most commonly understood cause of atherosclerosis, a hardening of the arteries that raises the risk of heart attack and stroke. But now, scientists at Washington University School of Medicine in St. Louis have identified a gene that likely plays a causal role in coronary artery disease independent of cholesterol levels. The gene also likely has roles in related cardiovascular diseases, including high blood pressure and diabetes.

The study appears March 24 in the journal Science Translational Medicine.

Studying mice and genetic data from people, the researchers found that the gene -- called SVEP1 -- makes a protein that drives the development of plaque in the arteries. Mice missing one copy of SVEP1 had less plaque in their arteries than mice with both copies. The researchers also selectively reduced the protein in the arterial walls of mice, which further reduced the risk of atherosclerosis.

Evaluating human genetic data, the researchers found that genetic variation influencing the levels of this protein in the body correlated with the risk of developing plaque in the arteries. Genetically determined high levels of the protein meant higher risk of plaque development and vice versa. Similarly, they found higher levels of the protein correlated with higher risk of diabetes and higher blood pressure readings.
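
The release does not describe the statistical method behind this analysis, but the general idea of relating "genetically determined" protein levels to an outcome can be sketched as a toy two-stage regression on simulated data. Every number below is invented for illustration; the variant, effect sizes, and continuous risk score are assumptions, not data from the study.

    # Toy sketch of using a genetic variant as a proxy for protein levels (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    genotype = rng.integers(0, 3, n)                        # 0/1/2 copies of a variant (simulated)
    protein = 1.0 + 0.5 * genotype + rng.normal(0, 1, n)    # variant raises protein level
    risk = 0.2 * protein + rng.normal(0, 1, n)              # protein raises a continuous risk score

    # Stage 1: the "genetically determined" protein level is the part predicted by genotype alone.
    slope1, intercept1 = np.polyfit(genotype, protein, 1)
    predicted_protein = intercept1 + slope1 * genotype

    # Stage 2: relate that genetically determined level to the outcome.
    slope2, _ = np.polyfit(predicted_protein, risk, 1)
    print(f"estimated effect of genetically determined protein level on risk: {slope2:.2f}")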

"Cardiovascular disease remains the most common cause of death worldwide," said cardiologist Nathan O. Stitziel, MD, PhD, an associate professor of medicine and of genetics. "A major goal of treatment for cardiovascular disease has appropriately been focused on lowering cholesterol levels. But there must be causes of cardiovascular disease that are not related to cholesterol -- or lipids -- in the blood. We can decrease cholesterol to very low levels, and some people still harbor residual risk of future coronary artery disease events. We're trying to understand what else is going on, so we can improve that as well."

This is not the first nonlipid gene implicated in cardiovascular disease. But the exciting aspect of this discovery is that it lends itself better to developing future therapies, according to the investigators.

The researchers -- including co-first authors In-Hyuk Jung, PhD, a staff scientist, and Jared S. Elenbaas, a doctoral student in Stitziel's lab -- further showed that this protein is a complex structural molecule and is manufactured by vascular smooth muscle cells, which are cells in the walls of blood vessels that contract and relax the vasculature. The protein was shown to drive inflammation in the plaques in the artery walls and to make the plaques less stable. Unstable plaque is particularly dangerous because it can break loose, leading to the formation of a blood clot, which can cause heart attack or stroke.

"In animal models, we found that the protein induced atherosclerosis and promoted unstable plaque," Jung said. "We also saw that it increased the number of inflammatory immune cells in the plaque and decreased collagen, which serves a stabilizing function in plaques."

According to Stitziel, other genes previously identified as raising the risk of cardiovascular disease independent of cholesterol appear to have widespread roles in the body and are therefore more likely to have far-reaching undesirable side effects if blocked in an effort to prevent cardiovascular disease. Although SVEP1 is required for early development of the embryo, eliminating the protein in adult mice did not appear to be detrimental, according to the researchers.

"The human genetic data showed a naturally occurring wide range of this protein in the general population, suggesting that we might be able to alter its levels in a safe way and potentially decrease coronary artery disease," Elenbaas said.

Ongoing work in Stitziel's group is focused on seeking ways to block the protein or reduce its levels in an effort to identify new compounds or possible treatments for coronary artery disease and, perhaps, high blood pressure and diabetes. The researchers have worked with Washington University's Office of Technology Management (OTM) to file a patent for therapies that target the SVEP1 protein.

Credit: 
Washington University School of Medicine