Tech

CRI scientists discover metabolic feature that allows melanoma cells to spread

image: Researchers involved in the study include (from left) Drs. Brandon Faubert, Alpaslan Tasdogan, Sean Morrison, and Ralph DeBerardinis.

Image: 
UTSW

DALLAS - Dec. 18, 2019 - Researchers at Children's Medical Center Research Institute at UT Southwestern (CRI) have uncovered why certain melanoma cells are more likely to spread through the body. The discovery opens up a potential new avenue of treatment and could be used to help reduce the proportion of patients who progress from stage 3 melanoma to more-deadly stage 4 cancer.

"In prior studies we found there are intrinsic differences among melanomas in their ability to metastasize or spread. Some are efficient metastasizers that readily form distant tumors whether you take them out surgically or not, while others are inefficient metastasizers that spread more slowly and that can be cured through surgery," said Dr. Sean Morrison, Director of CRI and a Howard Hughes Medical Institute (HHMI) Investigator. "Since metastasis is a major determinant of clinical outcomes, we have focused for several years on understanding why some melanoma cells are better at it than others."

Scientists have long known that most cancer cells die when they attempt to metastasize from a primary tumor to other parts of the body. Those that are able to survive during metastasis must undergo poorly understood metabolic changes.

A previous study conducted by the Morrison lab found one factor that limits the ability of melanoma cells to spread to other parts of the body is the high level of oxidative stress cancer cells experience during metastasis when they enter the bloodstream. Recently, another study at CRI in Dr. Ralph DeBerardinis' lab found that more aggressive lung cancer cells consume higher levels of lactate. Based on these findings, scientists in the two labs hypothesized that some melanoma cells might be better at metastasizing if they were better at consuming lactate.

To test this hypothesis, researchers used techniques developed by the Morrison laboratory for studying the metastasis of human melanoma cells in specialized mice and techniques developed by the DeBerardinis lab to label and track the use of nutrients in tumors. The researchers discovered that efficient metastasizers take up more lactate than inefficient metastasizers because they have higher levels of a lactate transporter called monocarboxylate transporter 1 (MCT1) on their cell surface.

"Efficient metastasizers are able to take up more lactate, which allows them to increase their production of antioxidants that help them to survive in the blood," said Dr. Alpaslan Tasdogan, lead author of the study and a postdoctoral researcher in the Morrison lab. "The findings in our paper, along with those made previously by the DeBerardinis lab, strongly suggest that increased lactate uptake by cancer cells promotes disease progression. This correlates with clinical data showing that patients with higher levels of MCT1 in their cancers have worse outcomes."

In the study, published in Nature, treating mice bearing melanomas with an MCT1 inhibitor led to fewer melanoma cells in the blood and fewer metastatic tumors. These data raise the possibility that MCT1 inhibitors, if given to patients before their cancer spreads, could reduce the proportion of patients who develop distant metastases, which are associated with systemic disease and much less likely to be curable.

"This paper makes a compelling case for analyzing metabolism in tumors," said Dr. DeBerardinis, Professor at CRI and an HHMI Investigator. "It's a great example of how assessing tumor metabolism can identify differences that correlate with cancer aggressiveness. Then you can identify an activity related to metastasis, inhibit it with a drug, and reduce metastasis in the mouse. That's remarkable."

Credit: 
UT Southwestern Medical Center

Texas A&M study reveals domestic horse breed has third-lowest genetic diversity

A new study by Dr. Gus Cothran, professor emeritus at the Texas A&M School of Veterinary Medicine & Biomedical Sciences (CVM), has found that the Cleveland Bay (CB) horse breed has the third-lowest genetic variation level of domestic horses, ranking above only the notoriously inbred Friesian and Clydesdale breeds. This lack of genetic diversity puts the breed at risk for a variety of health conditions.

Genetic variation refers to the differences in DNA among individuals. Populations with high genetic diversity have a wider range of traits and tend to be more stable, in part because disease traits are more diluted. In populations with low genetic variation, many individuals share the same traits and are more vulnerable to disease.

The CB is the United Kingdom's oldest established horse breed and the only native warm-blood horse in the region. Used for recreational riding, driving, and equestrian competition, the CB is considered a critically endangered breed by the Livestock Conservancy.

Because maintaining genetic diversity within the breed is important to securing the horses' future, Cothran and his team worked to gain comprehensive genetic information about the breed to develop more effective conservation and breeding strategies.

In this study, published in Diversity, researchers genotyped hair samples from 90 CB horses and analyzed the data for certain genetic markers. These samples were then compared with each other, as well as with samples from other horse breeds, to establish the genetic diversity within the CB and its relationship to other breeds.

Both the heterozygosity and mean allele number for the breed were below average, indicating lower than average genetic diversity within the breed. This low genetic diversity should be seen as a red flag for possible health conditions.
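For readers unfamiliar with these metrics, the short Python sketch below shows how observed heterozygosity and mean allele number can be computed from genotype calls. The marker names and genotypes here are hypothetical, invented for illustration; they are not data from the study.

```python
from collections import defaultdict

# Hypothetical genotypes: one (marker -> (allele1, allele2)) dict per horse.
# Values are illustrative only, not data from the Cleveland Bay study.
horses = [
    {"HTG4": ("K", "M"), "AHT4": ("J", "J"), "VHL20": ("I", "L")},
    {"HTG4": ("K", "K"), "AHT4": ("J", "N"), "VHL20": ("I", "I")},
    {"HTG4": ("M", "O"), "AHT4": ("J", "J"), "VHL20": ("L", "L")},
]

alleles_per_marker = defaultdict(set)
het_calls, total_calls = 0, 0

for horse in horses:
    for marker, (a1, a2) in horse.items():
        alleles_per_marker[marker].update((a1, a2))
        het_calls += a1 != a2          # heterozygous if the two alleles differ
        total_calls += 1

observed_het = het_calls / total_calls
mean_alleles = sum(map(len, alleles_per_marker.values())) / len(alleles_per_marker)

print(f"Observed heterozygosity: {observed_het:.2f}")
print(f"Mean number of alleles per marker: {mean_alleles:.2f}")
```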

"Low diversity is a marker for inbreeding, which can cause low fertility or any number of hereditary diseases or deformities," Cothran said. "With overall population numbers for the breed being so small, such problems could rapidly lead to the extinction of the breed."

The Cleveland Bay Horse Society of North America estimates that only around 900 CB purebreds exist globally. Such low population numbers mean the breed is considered to be critically rare.

This study also evaluated the diversity between the CB and other breeds using a majority-rule consensus tree, a type of analysis that shows an estimate of how different clades, or groups of organisms sharing a common ancestor, might fit together on their ancestral tree.

Cothran and his team's analysis found that the CB did not show a strong relationship with any other breeds, including breeds within the same clade. Though this could be a result of the low genetic diversity within the breed, the data suggest that the CB is genetically distinct from other breeds. These findings underscore the importance of CB horses as a genetic resource.

"The CB is an unusual horse in that it is a fairly large sized horse but it is built like a riding horse rather than a draft horse," Cothran said, noting the uniqueness of the breed. "It frequently is bred to other breeds such as the Thoroughbred to create eventing or jumping horses, although this is a potential threat to maintaining diversity in the CB."

Cothran hopes his research will help to inform conservation efforts supporting the longevity of the CB breed, as well as inform breeders on how they can more responsibly further their horses' genetic lines.

"If any evidence of inbreeding is observed, breeders should report it to scientists for further analysis," Cothran said. "Efforts should be made to keep the numbers of CB horses as high as possible and to monitor breeding practices to minimize inbreeding and loss of variability."

"Domestic animals, including horses, are also at risk of declining populations, just like endangered species, but research can help determine which populations (breeds) are at risk and provide possible directions to help reduce risks or consequences," he said.

Though CB horses are currently at risk, Cothran remains optimistic that careful monitoring and management of the breed can preserve them as a cultural and genetic resource for years to come.

Credit: 
Texas A&M University

When cells cycle fast, cancer gets a jumpstart

The progression of cancer has been studied extensively, and the key steps in this journey have been well mapped, at least in some solid tumors: Lesions to genes that confer risk of cancer accumulate and alter normal cell behaviors, giving rise, scientists believe, to early stage cancer cells that eventually swamp normal cells and become deadly.

But Yale researchers have now identified another bit of cellular chicanery that jumpstarts cancer. In at least one form of blood cancer, they report Dec. 18 in the journal Nature Communications, cells with cancer-causing gene lesions can remain normal and healthy -- until cell division, or cycling, speeds up.

"Many people with cancer-causing genes remain healthy for many, many years,'' said senior author Shangqin Guo, assistant professor of cell biology and a researcher at the Yale Stem Cell Center. "So, in these cases, you have to wonder whether the dogma 'mutations cause cancer' is the complete truth."

Guo and first author Xinyue Chen, a graduate student in Guo's lab, wanted to focus on the role of cell cycling speed, which varies in normal tissue, in the formation of cancer. They studied acute myeloid leukemia (AML), an aggressive type of blood cancer that carries the lowest number of mutations among human cancers. They introduced a known leukemia-causing mutation, MLL-AF9, into the genomes of mice and tracked individual blood cells for signs of cancer. While the great majority of cells remained normal, the few cells that divided most quickly almost always became malignant.

"When a normal cell that divides quickly meets MLL-AF9, this combination creates a monster cell that is stuck in a state of perpetual fast division," Guo said. "Their more slowly cycling counterparts remain normal and do not display the malignant cancer traits, even in the presence of cancer-causing mutations."

Understanding how a normal cell crosses over to the dark side has important implications.

For example, the researchers said, certain infections can trigger a proliferation of rapidly-dividing cells to fight the pathogens, perhaps opening the door to cancer.

Also, stem cells generally experience functional decline during aging, which requires healthier stem cells to divide more quickly to repair damaged tissue. The increased proliferation may help explain why people become more susceptible to cancer as they age, the scientists said.

Credit: 
Yale University

Online hate speech could be contained like a computer virus, say Cambridge researchers

image: This is an example of a possible approach for a quarantine screen, complete with Hate O'Meter.

Image: 
Stefanie Ullman

The spread of hate speech via social media could be tackled using the same "quarantine" approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and just blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example.

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended "psychological harm" is inflicted, with armies of moderators required to judge every case.

This is the new front line of an ancient debate: freedom of speech versus poisonous language.

Now, an engineer and a linguist have published a proposal in the journal Ethics and Information Technology that harnesses cyber security techniques to give control to those targeted, without resorting to censorship.

Cambridge language and machine learning experts are using databases of threats and violent insults to build algorithms that can provide a score for the likelihood of an online message containing forms of hate speech.

As these algorithms get refined, potential hate speech could be identified and "quarantined". Users would receive a warning alert with a "Hate O'Meter" - the hate speech severity score - the sender's name, and an option to view the content or delete unseen.
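For illustration, the quarantine logic might look something like the following Python sketch. The scoring function is a keyword-counting stand-in for the project's machine-learned classifiers, and the threshold plays the role of the user-set sensitivity dial; the message contents and flagged words are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

def hate_score(text: str) -> float:
    """Stand-in for a trained classifier: returns a probability-like score
    in [0, 1]. The real project uses machine-learned models trained on
    databases of threats and insults, not a keyword list like this one."""
    flagged = {"threat_word", "slur_word"}   # purely illustrative lexicon
    words = text.lower().split()
    return min(1.0, 5 * sum(w in flagged for w in words) / max(len(words), 1))

def deliver(msg: Message, sensitivity: float = 0.5) -> None:
    score = hate_score(msg.text)
    if score >= sensitivity:
        # Quarantine: show only metadata plus the "Hate O'Meter" score,
        # leaving the recipient to view the content or delete it unseen.
        print(f"QUARANTINED | from {msg.sender} | Hate O'Meter: {score:.0%} "
              "| [view] [delete unseen]")
    else:
        print(f"{msg.sender}: {msg.text}")

deliver(Message("alice", "see you at the seminar"))
deliver(Message("troll42", "threat_word threat_word slur_word"), sensitivity=0.3)
```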

This approach is akin to spam and malware filters, and researchers from the 'Giving Voice to Digital Democracies' project believe it could dramatically reduce the amount of hate speech people are forced to experience. They are aiming to have a prototype ready in early 2020.

"Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining," said co-author and linguist Dr Stefanie Ullman. "In fact, a lot of hate speech is actually generated by software such as Twitter bots."

"Companies like Facebook, Twitter and Google generally respond reactively to hate speech," said co-author and engineer Dr Marcus Tomalin. "This may be okay for those who don't encounter it often. For others it's too little, too late."

"Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation," he said.

Former US Secretary of State Hillary Clinton recently told a UK audience that hate speech posed a "threat to democracies", in the wake of many women MPs citing online abuse as part of the reason they will no longer stand for election.

Meanwhile, in a Georgetown University address, Facebook CEO Mark Zuckerberg spoke of "broad disagreements over what qualifies as hate" and argued that "we should err on the side of greater expression".

The researchers say their proposal is not a magic bullet, but it does sit between the "extreme libertarian and authoritarian approaches" of either entirely permitting or prohibiting certain language online.

Importantly, the user becomes the arbiter. "Many people don't like the idea of an unelected corporation or micromanaging government deciding what we can and can't say to each other," said Tomalin.

"Our system will flag when you should be careful, but it's always your call. It doesn't stop people posting or viewing what they like, but it gives much needed control to those being inundated with hate."

In the paper, the researchers refer to detection algorithms achieving 60% accuracy - not much better than chance. Tomalin's machine learning lab has now got this up to 80%, and he anticipates continued improvement of the mathematical modeling.

Meanwhile, Ullman gathers more "training data": verified hate speech from which the algorithms can learn. This helps refine the "confidence scores" that determine a quarantine and subsequent Hate O'Meter read-out, which could be set like a sensitivity dial depending on user preference.

A basic example might involve a word like 'bitch': a misogynistic slur, but also a legitimate term in contexts such as dog breeding. It's the algorithmic analysis of where such a word sits syntactically - the types of surrounding words and semantic relations between them - that informs the hate speech score.

"Identifying individual keywords isn't enough, we are looking at entire sentence structures and far beyond. Sociolinguistic information in user profiles and posting histories can all help improve the classification process," said Ullman.

Added Tomalin: "Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses."

However, the researchers, who work in Cambridge's Centre for Research into Arts, Humanities and Social Sciences (CRASSH), say that - as with computer viruses - there will always be an arms race between hate speech and systems for limiting it.

The project has also begun to look at "counter-speech": the ways people respond to hate speech. The researchers intend to feed into debates around how virtual assistants such as 'Siri' should respond to threats and intimidation.

Credit: 
University of Cambridge

Scientists find way to supercharge protein production

image: Tubes of green fluorescent protein glow more brightly when they contain more of the protein. Researchers at Washington University School of Medicine have found a way to increase protein production up to a thousandfold, a discovery that could aid production of proteins used in the medical, food, agriculture, chemical and other industries.

Image: 
Sergej Djuranovic

Medicines such as insulin for diabetes and clotting factors for hemophilia are hard to synthesize in the lab. Such drugs are based on therapeutic proteins, so scientists have engineered bacteria into tiny protein-making factories. But even with the help of bacteria or other cells, the process of producing proteins for medical or commercial applications is laborious and costly.

Now, researchers at Washington University School of Medicine in St. Louis have discovered a way to supercharge protein production up to a thousandfold. The findings, published Dec. 18 in Nature Communications, could help increase production and drive down costs of making certain protein-based drugs, vaccines and diagnostics, as well as proteins used in the food, agriculture, biomaterials, bioenergy and chemical industries.

"The process of producing proteins for medical or commercial applications can be complex, expensive and time-consuming," said Sergej Djuranovic, PhD, an associate professor of cell biology and physiology and the study's senior author. "If you can make each bacterium produce 10 times as much protein, you only need one-tenth the volume of bacteria to get the job done, which would cut costs tremendously. This technique works with all kinds of proteins because it's a basic feature of the universal protein-synthesizing machinery."

Proteins are built from chains of amino acids hundreds of links long. Djuranovic and first author Manasvi Verma, an undergraduate researcher in Djuranovic's lab, stumbled on the importance of the first few amino acids when an experiment for a different study failed to work as expected. The researchers were looking for ways to control the amount of protein produced from a specific gene.

"We changed the sequence of the first few amino acids, and we thought it would have no effect on protein expression, but instead, it increased protein expression by 300%," Djuranovic said. "So then we started digging in to why that happened."

The researchers turned to green fluorescent protein, a tool used in biomedical research to estimate the amount of protein in a sample by measuring the amount of fluorescent light produced. Djuranovic and colleagues randomly changed the sequence of the first few amino acids in green fluorescent protein, generating 9,261 distinct versions, identical except for the very beginning.
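As a check on that count, 9,261 equals 21 cubed, consistent with 21 choices at each of three variable positions. The sketch below reproduces the enumeration under the assumption of the 20 standard amino acids plus one extra option per position; the team's actual library encoding may have differed.

```python
from itertools import product

# 20 standard amino acids (one-letter codes) plus one extra option per
# position -- an assumption made here only to reproduce the count 9,261;
# the study's variant library may have been encoded differently.
options = list("ACDEFGHIKLMNPQRSTVWY") + ["*"]

variants = list(product(options, repeat=3))   # e.g., positions 3, 4 and 5
print(len(variants))                          # 9261 = 21 ** 3
```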

The brightness of the different versions of green fluorescent protein varied a thousandfold from the dimmest to the brightest, the researchers found, indicating a thousandfold difference in the amount of protein produced. With careful analysis and further experiments, Djuranovic, Verma and their collaborators from Washington University and Stanford University identified certain combinations of amino acids at the third, fourth and fifth positions in the protein chain that gave rise to sky-high amounts of protein.

Moreover, the same amino-acid triplets not only ramped up production of green fluorescent protein, which originally comes from jellyfish, but also production of proteins from distantly related species like coral and humans.

The findings could help increase production of proteins not only for medical applications, but in food, agriculture, chemical and other industries.

"There are so many ways we could benefit from ramping up protein production," Djuranovic said. "In the biomedical space, there are many proteins used in drugs, vaccines, diagnostics and biomaterials for medical devices that might become less expensive if we could improve production. And that's not to mention proteins produced for use in the food industry - there's one called chymosin that is very important in cheese-making, for example - the chemical industry, bioenergy, scientific research and others. Optimizing protein production could have a broad range of commercial benefits."

Credit: 
Washington University School of Medicine

Interfacial chemistry improves rechargeability of Zn batteries

image: In situ formed and artificial protective interphases to tame Zn electrochemistry

Image: 
ZHAO Jingwen, ZHAO Zhiming and QIU Huayu

With strong interest in environmentally benign and efficient resource utilization, demand for green and safe battery systems is growing, and improving their rechargeability is a central goal. Because the surface chemistry of the solid-electrolyte interphase (SEI) is a critical factor governing the cycling life of rechargeable batteries, it is a key research focus.

Zn batteries (ZBs) are characterized by low cost, superior volumetric energy output and cost-effective raw materials, making them a promising candidate to meet the demand for rechargeable batteries. However, some characteristics of the Zn-electrolyte interface restrict the development of rechargeable ZBs and their application.

Prof. CUI Guanglei's group from the Qingdao Institute of Bioenergy and Bioprocess Technology of the Chinese Academy of Sciences has proposed new concepts concerning in situ formed and artificial SEIs as a means of fundamentally modulating the electrochemical characteristics of Zn.

By manipulating the decomposition of a eutectic liquid with a peculiar anion-associated cation solvation structure, the researchers observed a zinc fluoride-rich organic/inorganic SEI on a Zn anode for the first time.

A combination of experimental and modeling investigations revealed that the presence of anion-complexing Zn species with markedly lowered decomposition energies contributed to the in-situ formation of the interphase.

"The protective interphase enables reversible and dendrite-free Zn plating/stripping even at high areal capacities. This is due to the fast ion migration coupled with high mechanical strength," said Prof. CUI.

With this interfacial design, the assembled Zn batteries exhibited excellent cycling stability with negligible capacity loss at both low and high rates.

In addition, coating the Zn surface with an artificial protective polyamide layer is easy to implement. The polyamide layer has all the desirable characteristics for supporting highly reversible Zn chemistry with enhanced cycling performance of Zn anodes at neutral pH, even at a high depth of discharge.

The study offers new insights into the rational regulation of Zn anodes and provides an unprecedented avenue for tackling the dilemmas raised by the intrinsic properties of multivalent metal anodes.

Credit: 
Chinese Academy of Sciences Headquarters

If the world can capture carbon, there's capacity to store it

image: This visualization illustrates how CO2 is injected into a subsea geologic formation at the Sleipner field. Equinor began injecting CO2 into the formation in 1996. More than 20 million tonnes of CO2 have been injected into the formation since then - equivalent to the annual emissions from 10 million cars.

Image: 
Illustration: Equinor

Carbon capture and storage (CCS) will play a vital role in helping the world cut its carbon dioxide emissions, the Intergovernmental Panel on Climate Change (IPCC) says.

Yet less than two dozen CCS projects have been initiated globally, partly because of costs, but also because of uncertainty about the viability of the technology.

As policymakers wrapped up their meetings in Madrid last week to discuss the next steps to curb global warming, a new study demonstrates that there's more than enough suitable storage for captured carbon dioxide on the world's continental shelves.

The study, published in Scientific Reports, also shows that it's fully possible to develop enough CO2 injection wells over a relatively short period to meet the IPCC goals of using CCS to provide 13 per cent of worldwide emissions cuts by 2050.

"The great thing about this study is that we have inverted the decarbonization challenge by working out how many wells are needed to achieve emissions cuts under the 2-degree (Celsius) scenario," said lead author Philip Ringrose, an adjunct professor at the Norwegian University of Science and Technology (NTNU) and a geoscientist at the Equinor Research Centre in Trondheim.

"It turns out to be only a fraction of the historical petroleum industry -- or around 12,000 wells globally. Shared among 5-7 continental CCS hubs -- that is only about 2,000 wells per region. Very doable! But we need to get cracking as soon as possible."

Pressure, not volume, the deciding factor

Ringrose and his co-author, Tip Meckel from the University of Texas Bureau of Economic Geology, first looked at continental shelves worldwide to get a sense of how much capacity there would be to store carbon dioxide.

Previous studies of how much storage would be available offshore have mainly looked at estimated volumes in different rock formations on the continental shelf. The authors argue, however, that the ability of the rock formation to handle pressure is more important in figuring out where CO2 can be safely stored.

That's because injecting CO2 into a rock formation will increase the pressure in the formation. If the pressures exceed what the formation can safely handle, it could develop cracks that would require early closure of projects.

A classification system and history

Given that assumption, the researchers developed a way to classify different storage formations according to their ability to store CO2. Under this approach, Class A formations are those without significant pressure limits, and thus the easiest to use, while Class B formations are those where CO2 can be injected into the system up to a certain limit, and Class C formations are those where pressures will have to be actively managed to allow the CO2 to be injected.

"We argue that this transition from early use of CO2 injection into aquifers without significant pressure limits (Class A), through to CO2 storage in pressure-limited aquifers (Class B) and eventually to pressure management at the basin scale (Class C), represents a global technology development strategy for storage which is analogous to the historic oil and gas production strategy," the researchers wrote.

Essentially, the authors say, as experience with injecting CO2 into offshore formations grows, the ability to use the Class B and C areas will improve, much as geologists and petroleum engineers have gotten better over the decades at extracting hydrocarbons from more and more challenging offshore formations.

Can we drill fast enough?

It's one thing to have enough space to store CO2 -- you also have to inject it into the storage formations fast enough to meet the IPCC estimates of 6 to 7 gigatonnes of carbon dioxide a year by 2050.

By comparison, "Four existing large-scale projects inject 4 million tonnes CO2 per year. If all 19 large-scale CCS facilities in operation together with a further 4 under construction are considered, they will have an installed capture capacity of 36 million tonnes per year," the researchers wrote. This is clearly not enough, since a gigatonne is 1,000 million tonnes.

Nevertheless, the history of the oil and gas industry suggests that ramping up the technology and infrastructure required to reach the IPCC target by 2050 is very doable, the researchers wrote. Assuming an average injection rate per well, they calculated that more than 10,000 CO2 wells would need to be operating worldwide by 2050.
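The arithmetic behind that estimate is easy to reproduce, assuming an average per-well injection rate. The rate below is a back-calculated assumption, chosen only so that the published figures (6 to 7 gigatonnes per year; roughly 10,000 to 12,000 wells) are mutually consistent; it is not a number quoted in the study.

```python
# Target annual injection from the IPCC-derived scenario (tonnes CO2/year).
target_low, target_high = 6e9, 7e9

# Assumed average injection rate per well (tonnes CO2/year). This value is
# back-calculated so the totals match the study's ~10,000-12,000 wells;
# it is not a figure quoted in the press release.
rate_per_well = 0.6e6

print(f"Wells needed: {target_low / rate_per_well:,.0f} "
      f"to {target_high / rate_per_well:,.0f}")
# -> roughly 10,000 to 11,667 wells, in line with the study's estimate
```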

While this may seem like an enormous number, it's equivalent to what has been developed in the Gulf of Mexico over the last 70 years, or five times what has been developed by Norwegians in the North Sea.

"Using this analysis, it is clear that the required well rate for realizing global CCS in the 2020-2050 timeframe is a manageable fraction of the historical well rate deployed from historic petroleum exploitation activities," the researchers wrote.

"With this paper, we provide an actionable, detailed pathway for CCS to meet the goals," Ringrose's co-author Meckel said. "This is a really big hammer that we can deploy right now to put a dent in our emissions profile."

Credit: 
Norwegian University of Science and Technology

Improved 3D nanoprinting technique to build nanoskyscrapers

image: Near-field electrospinning (NFES) technique and charges. The IBS team achieved precise control of the layer-by-layer nanofiber deposition by just adding salt to the polymer solution. Optical images of the 3D-printed nanofibers were prepared with solutions made of: (i) only the polymer poly(ethylene oxide) (PEO), (ii) PEO and salt on a conducting platform, and (iii) PEO and salt on an insulating platform. In (i), the nanostructure is not well aligned, because the deposited fibers have a weak positive surface charge, but adding salt increases the conductivity of the starting solution and the attraction between the nanofiber jet and the deposited fibers. An insulating plate made of silica reduced the effect, confirming the hypothesis. Thanks to this technique, IBS researchers constructed nanowalls with the desired height and number of layers.

Image: 
IBS

Nanowalls, nanobridges, nano "jungle gyms": they could sound like the description of a Lilliputian village, but these are actual 3D-printed components with tremendous potential applications in nanoelectronics, smart materials and biomedical devices. Researchers at the Center for Soft and Living Matter (CSLM), within the Institute for Basic Science (IBS, South Korea), have improved the 3D nanoprinting process, enabling the construction of precise, self-stacked, tall-and-narrow nanostructures. As shown in their latest publication in Nano Letters, the team also used this technique to produce transparent nanoelectrodes with high optical transmission and controllable conductivity.

The near-field electrospinning (NFES) technique consists of a syringe filled with a polymer solution suspended above a platform, which collects the ejected nanofiber and is pre-programmed to move left-and-right and back-and-forth, depending on the shape of the desired final product. The syringe and the platform have opposite charges, so that the polymer jet emerging from the needle of the syringe is attracted to the platform, forming a continuous fiber that solidifies on the platform. Because the electrospun jets are difficult to handle, this technique had been limited to two-dimensional (2D) structures or hollow cylindrical three-dimensional (3D) structures, often with relatively large fiber diameters of a few micrometers.

By adding an appropriate concentration of sodium chloride (NaCl) to the polymer solution, IBS researchers achieved better control of the nanofiber deposition on the platform. This ensured the spontaneous alignment of the nanofiber layers, which stack on top of each other to form walls.

"Although it is highly applicable to various fields, it is difficult to build stacked nanofibers with multiple designs using the conventional electrospinning techniques," says Yoon-Kyoung Cho, the corresponding author of the study. "Our experiment showed that salt did the trick."

The benefit provided by salt is related to the charges. The difference in voltage between the syringe and the platform creates positive charges in the polymer solution and negative charges in the platform, but a residual positive charge stays in the solidified fibers on the platform. The team found that applying salt to the polymer solution enhances the charge dissipation, leading to higher electrostatic attraction between the nanofiber jet and the fibers deposited on the platform.

Based on this mechanism, the team was able to produce tall-and-narrow nanowalls, with a minimum width of around 92 nanometers and a maximum height of 6.6 micrometers, and construct a variety of 3D nanoarchitectures, such as curved nanowall arrays, nano "jungle gyms," and nanobridges, with controllable dimensions.

To demonstrate the potential application of these nanostructures, the researchers in collaboration with Hyunhyub Ko, professor at Ulsan National Institute of Science and Technology (UNIST), prepared 3D nanoelectrodes with silver-coated nanowalls embedded in transparent and flexible polydimethylsiloxane (PDMS) films. They confirmed that electrical resistance could be tuned with the number of nanofiber layers (the taller the nanowalls, the smaller the resistance), without affecting light transmission.

"Interestingly, this method can potentially avoid the trade-off between optical transmittance and sheet resistance in transparent electrodes. Arrays of 3D silver nanowires made with 20, 40, 60, 80, or 100 layers of nanofibers had variable conductivity, but stable light transmission of around 98%," concludes Yang-Seok Park, the first author of the study.

Credit: 
Institute for Basic Science

Mealworms safely consume toxic additive-containing plastic

video: Mealworms eating Styrofoam.

Image: 
Josef Schneider

Tiny mealworms may hold part of the solution to our giant plastics problem. Not only are they able to consume various forms of plastic, as previous Stanford research has shown, but they can also eat Styrofoam containing a common toxic chemical additive and still be safely used as protein-rich feedstock for other animals, according to a new Stanford study published in Environmental Science & Technology.

The study is the first to look at where chemicals in plastic end up after being broken down in a natural system - a yellow mealworm's gut, in this case. It serves as a proof of concept for deriving value from plastic waste.

"This is definitely not what we expected to see," said study lead author Anja Malawi Brandon, a PhD candidate in civil and environmental engineering at Stanford. "It's amazing that mealworms can eat a chemical additive without it building up in their body over time."

In earlier work, Stanford researchers and collaborators at other institutions revealed that mealworms, which are easy to cultivate and widely used as a food for animals ranging from chickens and snakes to fish and shrimp, can subsist on a diet of various types of plastic. They found that microorganisms in the worms' guts biodegrade the plastic in the process - a surprising and hopeful finding. However, concern remained about whether it was safe to use the plastic-eating mealworms as feed for other animals given the possibility that harmful chemicals in plastic additives might accumulate in the worms over time.

"This work provides an answer to many people who asked us whether it is safe to feed animals with mealworms that ate Styrofoam", said Wei-Min Wu, a senior research engineer in Stanford's Department of Civil and Environmental Engineering who has led or co-authored most of the Stanford studies of plastic-eating mealworms.

Styrofoam solution

Brandon, Wu and their colleagues looked at Styrofoam, or polystyrene, a common plastic typically used for packaging and insulation that is costly to recycle because of its low density and bulkiness. The material they studied contained a flame retardant called hexabromocyclododecane, or HBCD, which is commonly added to polystyrene. The additive is one of many used to improve plastics' manufacturing properties or decrease flammability. In 2015 alone, nearly 25 million metric tons of these chemicals were added to plastics, according to various studies. Some, such as HBCD, can have significant health and environmental impacts, ranging from endocrine disruption to neurotoxicity. Because of this, the European Union plans to ban HBCD, and the U.S. Environmental Protection Agency is evaluating its risk.

Mealworms in the experiment excreted about half of the polystyrene they consumed as tiny, partially degraded fragments and the other half as carbon dioxide. With it, they excreted the HBCD - about 90 percent within 24 hours of consumption and essentially all of it after 48 hours. Mealworms fed a steady diet of HBCD-laden polystyrene were as healthy as those eating a normal diet. The same was true of shrimp fed a steady diet of the HBCD-ingesting mealworms and their counterparts on a normal diet. The plastic in the mealworms' guts likely played an important role in concentrating and removing the HBCD.
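As a back-of-the-envelope check, if excretion followed simple first-order kinetics - an assumption made here only for illustration, since the study reports percentages rather than a rate law - the 90 percent clearance within 24 hours would predict roughly 99 percent clearance by 48 hours, consistent with "essentially all":

```python
import math

# Assume first-order HBCD clearance (illustrative assumption only):
# 90% excreted within 24 h leaves a fraction 0.10 remaining.
k = math.log(10) / 24           # per hour, from 0.10 = exp(-k * 24)
remaining_48h = math.exp(-k * 48)
print(f"rate constant k ~ {k:.3f} /h")
print(f"fraction remaining after 48 h: {remaining_48h:.1%}")  # ~1.0%
```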

The researchers acknowledge that mealworm-excreted HBCD still poses a hazard, and that other common plastic additives may have different fates within plastic-degrading mealworms. While hopeful for mealworm-derived solutions to the world's plastic waste crisis, they caution that lasting answers will only come in the form of biodegradable plastic replacement materials and reduced reliance on single-use products.

"This is a wake-up call," said Brandon. "It reminds us that we need to think about what we're adding to our plastics and how we deal with it."

Credit: 
Stanford University

Malaria under arrest: New drug target prevents deadly transmission

image: Mosquito that transmits malaria.

Image: 
Walter and Eliza Hall Institute of Medical Research

Australian researchers have found a new drug target for stopping the spread of malaria, after successfully blocking the world's deadliest malaria parasite - Plasmodium falciparum - from completing the 'transmission stage' of its lifecycle.

Using small molecule inhibitors developed at the Walter and Eliza Hall Institute, the researchers blocked plasmepsin V, an enzyme essential for the development of gametocytes, which are the only form of the parasite that can be transmitted from humans to mosquitoes.

The research, published today in Cell Reports, was led by Associate Professor Justin Boddey from the Walter and Eliza Hall Institute and University of Melbourne, in collaboration with Professor Vicky Avery from Griffith University in Queensland.

At a glance

A new drug target has been discovered for preventing the deadliest malaria parasite from spreading infection.

Using small molecule inhibitors developed at the Institute, the researchers blocked the export of gametocyte proteins - a process essential for malaria transmission.

Blocking the transmission stage of the malaria parasite lifecycle is vital for developing preventative therapies that stop the spread of disease.

More than half a million people die from malaria every year and Plasmodium falciparum - the most lethal of all malaria parasites - is responsible for 90 per cent of infection cases. Due to the parasite's ability to constantly mutate and develop resistance to therapies, new preventions and treatments that act across different stages of the malaria parasite lifecycle - the liver stage, blood stage and transmission stage - are now required.

Arrested development

Associate Professor Boddey said the team had gained new ground towards malaria elimination because blocking the parasite's transmission stage was important for developing preventative therapies that stop the spread of disease.

"It was exciting to find that plasmepsin V plays a role in malaria transmission, and that our inhibitors could target plasmepsin V and block transmission to the mosquito from occurring," Associate Professor Boddey said.

"We showed that an optimal concentration of the inhibitors could kill gametocytes, and that even with a lower dose, the gametocytes made it all the way through their two-week development phase but still couldn't complete the task of transmitting infection to mosquitoes. This shows plasmepsin V is a target for transmission-blocking drugs," he said.

Using the Institute's insectary facilities, the researchers were able to study how gametocytes transmit malaria from human blood to a mosquito. They demonstrated, using gametocyte-specific fluorescent 'tags', that plasmepsin V was critical for the export of gametocyte proteins - a process essential to gametocyte transmission - before proving their compounds could stop this process in its tracks.

Double whammy for disease

The results build on previous Institute studies, including a 2014 discovery that plasmepsin V is an effective drug target for killing the malaria parasite in the asexual blood stage of its lifecycle, when malaria symptoms - such as fever, chills, muscle pain and nausea - occur.

Institute chemical biologist Dr Brad Sleebs, who was involved in both the current and previous plasmepsin V studies, said the enzyme was proving to be an ideal drug target because of its importance for parasite survival at different stages of the malaria lifecycle.

"It's encouraging to observe inhibitors that target plasmepsin V are effective against both the asexual blood and sexual transmission stages of the parasite's lifecycle," he said. "Our research demonstrates that an antimalarial treatment targeting plasmepsin V has potential, not only in treatment of the disease, but also as a preventative population control measure," Dr Sleebs said.

Associate Professor Boddey said the research was an example of how basic and translational knowledge is established, with each new study building on the last.

"It's been a rewarding journey from identifying the function of plasmepsin V, to developing inhibitors that block it and kill the malaria parasite, to now validating this enzyme's dual function as an effective blood stage and transmission-blocking drug target."

Sights set on final pillar

The researchers are now turning their attention to the role of plasmepsin V in the remaining pillar of the malaria lifecycle: the liver stage.

Associate Professor Boddey said the aim was to assess plasmepsin V as a multi-stage drug target for treating, as well as preventing, the spread of malaria; and to understand the unique biology occurring during liver infection.

"We are also collaborating with Merck and the Wellcome Trust to develop drugs targeting plasmepsin V in multiple parasite species," he said.

Credit: 
Walter and Eliza Hall Institute

Turning light energy into heat to fight disease

image: Sensing the size-dependent light-to-heat conversion efficiency of nanoparticles by terahertz radiation.

Image: 
Roberto Morandotti

WASHINGTON, D.C., December 17, 2019 -- An emerging technology involving tiny particles that absorb light and turn it into localized heat sources shows great promise in several fields, including medicine. For example, photothermal therapy, a new type of cancer treatment, involves aiming infrared laser light onto nanoparticles near the treatment site.

Localized heating in these systems must be carefully controlled since living tissue is delicate. Serious burns and tissue damage can result if unwanted heating occurs in the wrong place. The ability to monitor temperature increases is crucial in developing this technology. Several approaches have been tried, but all of them have drawbacks of various kinds, including the need to insert probes or inject additional materials.

In this week's issue of APL Photonics, from AIP Publishing, scientists report the development of a new method to measure temperatures in these systems using a form of light known as terahertz radiation. The study involved suspensions of gold nanorods of various sizes in water in small cuvettes, which were illuminated by a laser focused on a small spot within the cuvette.

The tiny gold rods absorbed the laser light and converted it to heat that spread through the water by convection. "We are able to map out the temperature distribution by scanning the cuvette with terahertz radiation, producing a thermal image," co-author Junliang Dong said.

The study also looked at the way the temperature varied over time. "Using a mathematical model, we are able to calculate the efficiency by which the gold nanorod suspensions converted infrared light to heat," said co-author Holger Breitenborn.

The smallest gold particles, which had a diameter of 10 nanometers, converted laser light to heat with the highest efficiency, approximately 90%. This value is similar to previous reports for these gold particles, indicating the measurements using terahertz radiation were accurate.
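The press release does not spell out the model, but a widely used energy-balance method recovers the conversion efficiency from the steady-state temperature rise and the cooling time constant of the transient. The Python sketch below follows that general approach with invented, self-consistent numbers chosen so the result lands near 90 percent; none of these values are data from the study.

```python
# Hypothetical measurement values, for illustration only -- not data
# from the APL Photonics study.
m_water   = 1.0e-3     # kg of water in the cuvette
c_water   = 4186.0     # J/(kg K), specific heat of water
tau       = 120.0      # s, cooling time constant fitted to the transient
dT_max    = 2.6        # K, steady-state temperature rise at the laser spot
P_laser   = 0.1        # W, incident laser power
A_lambda  = 1.0        # absorbance of the suspension at the laser wavelength
Q_solvent = 0.01       # W, heating of solvent/cuvette without particles

# Energy balance: heat-transfer coefficient x area from the cooling fit,
# then efficiency = (useful heat) / (absorbed laser power).
hA  = m_water * c_water / tau
eta = (hA * dT_max - Q_solvent) / (P_laser * (1 - 10 ** (-A_lambda)))
print(f"Light-to-heat conversion efficiency: {eta:.0%}")   # ~90%
```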

Although the smaller gold rods had the highest light-to-heat conversion efficiency, the largest rods -- those with a diameter of 50 nanometers -- displayed the largest molar heating rate. This quantity has been recently introduced to help evaluate the use of nanoparticles in biomedical settings.

"By combining measurements of temperature transients in time and thermal images in space at terahertz frequencies, we have developed a noncontact and noninvasive technique for characterizing these nanoparticles," co-author Roberto Morandotti said. This work offers an appealing alternative to invasive methods and holds promise for biomedical applications.

Credit: 
American Institute of Physics

Compound in green tea plant shows potential for fighting TB, finds NTU-led research team

video: An antioxidant found in the green tea plant could become key to tackling tuberculosis one day, a team of international scientists led by NTU Singapore has found.

Image: 
NTU Singapore

An antioxidant found in the green tea plant could become key to tackling tuberculosis one day, a team of international scientists led by Nanyang Technological University, Singapore (NTU Singapore) has found.

Through laboratory investigations, the team led by NTU Professor Gerhard Grüber discovered how the prominent compound, known as epigallocatechin gallate (EGCG), can inhibit the growth of a tuberculosis-causing bacteria strain.

The EGCG does so by binding to an enzyme that provides biological energy for cellular activity. The process results in a dip in the amount of energy the bacterium has for cellular processes vital to growth and stability, such as cell wall formation.

The team, which includes NTU Associate Professor Roderick Bates, National University of Singapore (NUS) Professor Thomas Dick, and collaborators from the US and New Zealand, also identified the exact sites on the enzyme at which the EGCG needs to bind to in order to affect energy production in the bacterial cell.

The findings were published in the journal Scientific Reports in November. A patent has been filed for the identification of the EGCG as a possible form of treatment for tuberculosis.

These findings could pave the way for the creation of novel drugs to combat tuberculosis, one of the most deadly infectious diseases in the world. Southeast Asia accounts for 41 per cent of the world's tuberculosis cases, with 4 million new cases every year.

While there are already drugs that target Mycobacterium tuberculosis (M. tuberculosis) - the bacterium that causes the airborne disease - new ones are needed because the bacterium is increasingly showing resistance to many of the drugs.

Professor Gerhard Grüber from the NTU School of Biological Sciences said: "Though tuberculosis is curable, the success of current drugs on the market is increasingly being overshadowed by the bacteria's clinical resistance. Our discovery of the EGCG's ability to inhibit the growth of M. tuberculosis will allow us to look at how we can improve the potency of this compound in green tea, and other similar compounds, to develop new drugs to tackle this airborne disease."

How EGCG disrupts tuberculosis

Cells require energy for vital processes such as cell wall formation to take place. They get their energy from an energy storage molecule made by an enzyme called ATP synthase. Without energy for essential cellular activity, a cell loses its stability and eventually dies.

To determine the factors affecting energy production by ATP synthase, and thus the amount of energy a bacterial cell has for growth, the NTU-led team studied Mycobacterium smegmatis and Mycobacterium bovis, both of which belong to the same family as M. tuberculosis. These mycobacterial strains share a similar structural composition.

The team first found that alterations to the genetic code for ATP synthase resulted in an enzyme that produced fewer energy storage molecules in the bacterial cells, slower cell growth, and an altered colony shape.

With these data, the scientists screened for and found 20 compounds that could potentially bind to ATP synthase and cause the same blocking effect, then tested them for efficacy. Only EGCG, a natural antioxidant that occurs in large amounts in green tea, had the same crucial effect of reducing energy storage molecules in the bacterial cell.

The NTU-led team is now looking at optimising the activity of EGCG for increased efficiency and potency in fighting the tuberculosis bacteria. Their ultimate goal is to develop a drug cocktail that will tackle multi-drug resistant tuberculosis.

Credit: 
Nanyang Technological University

Limiting global warming would relieve populations from wet and dry extremes in China

Limiting global warming to a lower level, such as the 1.5°C Paris Agreement target, would substantially relieve populations from precipitation extremes in China, according to a study recently published in Science Bulletin.

The research, which is an extension of climate projections, sheds light on how changes in extreme precipitation would translate into social impacts. When population is taken into account, even a half-degree increment of global warming could result in a robust increase in extreme rainfall-related impacts, particularly in densely populated southeastern China.

"China has long been overwhelmed by precipitation extremes such as floods and droughts, as a result of the influences of monsoon, complex topography, and the large population. The accompanying social and economic losses are huge. In addition to traditional climate projections, decision-making also requires impact projections," said Prof. Tianjun Zhou, the corresponding author on the paper. Zhou is a senior scientist at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics at the Institute of Atmospheric Physics and CAS Center for Excellence in Tibetan Plateau Earth Sciences in the Chinese Academy of Sciences. He is also a professor at the University of Chinese Academy of Sciences.

Zhou and his team combined climate projections from CMIP5, an archive of comprehensive climate models, with socio-economic projections to investigate future climate changes and the accompanying impacts at various global warming thresholds. The projections demonstrate that heavy precipitation events would intensify with global warming across China, affecting populations nationwide. Meanwhile, dry extremes would intensify in South China and exert an adverse impact on the large population there.
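A common way to translate such gridded projections into impact estimates is to weight each grid cell's change in an extreme-precipitation index by the population living there. The minimal Python sketch below illustrates that aggregation on invented toy fields; it shows the general approach only, not the study's exact methodology.

```python
import numpy as np

# Toy gridded fields (3 x 4 cells): projected fractional change in an
# extreme-precipitation index, and gridded population counts. Values are
# invented for illustration; the study used CMIP5 output together with
# socio-economic population projections.
extreme_change = np.array([[0.05, 0.10, 0.12, 0.08],
                           [0.02, 0.15, 0.20, 0.10],
                           [0.00, 0.05, 0.18, 0.25]])
population = np.array([[1, 2, 5, 1],
                       [1, 8, 20, 3],
                       [0, 2, 30, 12]], dtype=float) * 1e6

# Population-weighted mean change: cells where many people live dominate.
weighted = (extreme_change * population).sum() / population.sum()
print(f"Population-weighted intensification: {weighted:.1%}")
```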

"To understand the future impacts, we further separated the roles of future climate change and population redistributions. We found that climate change dominates the future impacts on population, while population redistributions play a minor role," said Prof. Zhou.

"Our results would, hopefully, provide useful information for mitigation and adaptation planning. Regional information is important in this regard. The uneven population distribution, particularly the dense population in southeastern China, has made it a hotspot in face of global warming as a consequence of high risks of both floods and droughts," Zhou said. "Hence, efficient and timely adaptation activities are in urgent need for this region."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Nonlinear fureai: How connectedness can nurture complex dynamics across diverse networks

image: The idea underlying this study is that in a network arranged with a given architecture (for example, a star network) and under suitable conditions, the node(s) having the largest number of connections (top) spontaneously develop more complex activity than those having only a few or even just one connection (bottom). Here, an example involving electronic oscillators is shown.

Image: 
Ludovico Minati

Scientists at Tokyo Institute of Technology have uncovered some new aspects of how connections in networks can influence their behavior over time. Usually, network elements with many connections generate more complex activity than others, but this effect can become inverted if the connections are overly strong. In contrast, in cases such as neurons, which behave in a seemingly random way when by themselves, connectivity can result in more regular and predictable patterns.

It is common to find examples of how people with many connections--social or professional--tend to have a rather turbulent and unpredictable daily life compared to those with fewer relationships, who usually follow routines that are more regular. This difference is particularly evident when specific individuals or communities are compared, such as top managers versus operatives, or people living in a metropolis versus people living in the countryside.

This can be extended to natural and engineered networks of interacting elements--from neurons to coupled oscillators and wireless terminals--where the "nodes" (the network elements where the connections intertwine) having more connections tend to have richer dynamics (activity unfolding over time). Understanding the intricacies of networks within a system can give us a holistic view of that system, which is useful in both biology and engineering.

In a study published in the journal IEEE Access, researchers in Japan and Italy used theoretical and experimental methods to study the dynamics of networks in various natural and engineered systems. This research was the result of a collaboration between scientists from Tokyo Institute of Technology (Tokyo Tech), in part funded by the World Research Hub Initiative, and the Universities of Catania, Palermo, and Trento in Italy.

The research team began by analyzing purely mathematical scenarios. First, they simulated elementary star-shaped networks, where most nodes (called "leaves") have a single connection to a central node (called "hub"); each node consisted of a so-called Rössler system, which is an elegant set of equations able to generate quite intricate behaviors. It became evident that the hubs in these networks almost always exhibit a more complicated behavior than the leaves, because they are influenced by many different nodes at the same time. But, if the connections between nodes are too strong, their outputs become rigidly bound to each other and this relationship is lost, whereas if they are too weak, the effect vanishes.
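For concreteness, here is a minimal numerical sketch of such a star network: diffusively coupled Rössler oscillators with one hub and five leaves. The parameter values, coupling strength, and the crude variance-based readout are illustrative choices, not the paper's settings or its complexity measures.

```python
import numpy as np

# Rössler parameters (a classic chaotic choice) and an illustrative
# coupling strength -- not the paper's exact settings.
a, b, c = 0.2, 0.2, 5.7
k = 0.05
n_leaves = 5

# Star topology: node 0 is the hub, nodes 1..n_leaves are leaves.
A = np.zeros((n_leaves + 1, n_leaves + 1))
A[0, 1:] = A[1:, 0] = 1.0
degree = A.sum(axis=1)

def f(state):
    """Coupled Rössler equations with diffusive coupling on x."""
    x, y, z = state[:, 0], state[:, 1], state[:, 2]
    coupling = k * (A @ x - degree * x)
    return np.column_stack((-y - z + coupling, x + a * y, b + z * (x - c)))

def rk4_step(state, dt=0.01):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
state = rng.uniform(-1, 1, size=(n_leaves + 1, 3))
xs = []
for step in range(100_000):          # 1,000 time units at dt = 0.01
    state = rk4_step(state)
    if step >= 20_000:               # discard the initial transient
        xs.append(state[:, 0].copy())
xs = np.array(xs)

# Crude stand-in for the complexity measures used in the study:
# compare the spread of the hub's x signal with that of one leaf.
print("hub  x std:", xs[:, 0].std())
print("leaf x std:", xs[:, 1].std())
```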

Interestingly, this phenomenon was also seen in a physical network made of electronic oscillators connected to each other using resistors (Fig. 1). "It was quite surprising to notice how strong the tendency for the hub and leaf nodes to behave differently is," explains Assoc. Prof. Hiroyuki Ito, co-author and head of the laboratory where these concepts will be applied to solve sensing problems in the field of Internet of Things (IoT).

To dig deeper into this phenomenon, the researchers conducted further numerical simulations with more complicated networks containing higher numbers of nodes and more intricate connection patterns. They found that the relationship also generally applies to such systems unless the individual connections are too strong, in which case the trend can even flip and cause nodes with fewer connections to exhibit more complex activity. The reason for this inversion is not yet known, but it can be pictured as the highly connected nodes becoming "paralyzed" and the rest "taking over" (Fig. 2). "There remains much to be clarified about how the structure and dynamics of networks relate to each other, even in simple cases," says Assoc. Prof. Mattia Frasca, from the University of Catania.

The scientists then moved on to investigating one of the most complicated types of natural networks: those made of neurons. Unlike mathematical or engineered systems, isolated living neurons are quite unpredictable because they are often subjected to forms of randomness or "noise". By analyzing the activity of living neurons through simulations as well as measurements, the researchers found that a greater connectedness may help them reduce this noise and express more structured patterns, ultimately allowing them to function "usefully." "Earlier studies about brain function show similar relationships between cortical areas. We think that a better understanding of these phenomena could also help us improve brain-computer interfaces," adds Prof. Yasuharu Koike, head of the laboratory focused on topics at the interface between engineering and biology.

This study sheds light on how knowledge of the intricacies of a network system can be applied across different fields. Assoc. Prof. Ludovico Minati, lead author of the study, comments on its implications: "While caution and humbleness need to be exercised not to fall into making excessively generalist statements, studies such as this one may exemplify the potential inspirational value of multidisciplinary research, which can impact not only engineering and biology but even management concepts."

Credit: 
Tokyo Institute of Technology

Taking an X-ray of an atomic bond

image: An artistic representation of an x-ray interacting with layers of differing orbital character. Orbitals with ionic character are colored green, while orbitals with covalent character are pink.

Image: 
Image courtesy of Tiffany Bowman, Brookhaven National Lab.

Understanding the behavior of materials at their interfaces - where they connect to and interact with other materials - is central to engineering a variety of devices used to process, store and transfer information. Devices such as transistors, magnetic memory and lasers could all improve as researchers delve into the nature of these bonds, which affect the materials' properties of conductivity and magnetism.

In this effort, Steven May, PhD, and his colleagues from Drexel University's College of Engineering, along with researchers from the University of Saskatchewan and Lawrence Berkeley, Brookhaven and Argonne National Labs have recently demonstrated a new approach for examining - with atomic-layer precision - changes in the behavior of electrons at the interfaces between two materials.

In particular, the approach provides a glimpse into how the degree of covalent and ionic bonding between metal and oxygen atoms is altered in moving from one material to the next.

The demonstration of this method, which was recently published in the journal Advanced Materials, provides scientists with a powerful resource for unlocking the potential of engineering materials at the atomic level.

"These interfaces can impart new functionality into the material stacks, but directly studying how the properties of electrons at the interfaces differ from the non-interfacial electrons requires techniques that can spatially resolve properties across individual atomic layers," said May, a professor in the Department of Materials Science and Engineering at Drexel. "For example, a measurement of a material's conductivity provides information on its average ability to conduct electricity but doesn't reveal differences between how the electrons are behaving at the interfaces and away from the interfaces."

Ionic and covalent bonding are central concepts in materials science, describing how atoms are held together to form solid materials. In an ionic bond, electrons from one atom are transferred to another. The attraction between the resulting positively charged ion - the cation - and negatively charged ion - the anion - is what draws the atoms together, creating the bond. A covalent bond, by contrast, forms when two atoms share their electrons rather than fully transferring them.

Understanding electron behavior in an atomic bond is key to understanding and predicting the behavior of materials. For example, materials with ionic bonds tend to be insulators that block the flow of electricity, while materials with covalent bonds can be electrically conductive.

But many materials contain bonds that are best described as a mixture of ionic and covalent. In those materials, the degree to which a bond is ionic or covalent strongly influences the material's electronic properties.
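For a rough feel of what "a mixture of ionic and covalent" means, Pauling's classic rule of thumb estimates the ionic fraction of a bond from the electronegativity difference of the two atoms. This back-of-the-envelope calculation is our illustration, not the method used in the study.

    import math

    chi = {"Fe": 1.83, "O": 3.44, "Na": 0.93, "Cl": 3.16}  # Pauling electronegativities

    def ionic_fraction(a, b):
        # Pauling's empirical relation: f = 1 - exp(-(dchi)^2 / 4)
        d = chi[a] - chi[b]
        return 1.0 - math.exp(-(d * d) / 4.0)

    print(f"Fe-O:  {ionic_fraction('Fe', 'O'):.0%} ionic")   # ~48%: a true mixture
    print(f"Na-Cl: {ionic_fraction('Na', 'Cl'):.0%} ionic")  # ~71%: predominantly ionic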

"The details of this mixture depend on what electron orbitals the highest energy electrons - those that form the bonds - come from," May said. "The orbital character of these electrons, in turn, has profound effects on their electronic and magnetic behavior. While scientists have developed computational approaches to describe how covalent or ionic a bond is, experimentally measuring how the orbital character of electrons or the changes in covalency across interfaces remains a significant challenge in materials research."

The team's approach to making this experimental measurement involves a technique called resonant x-ray reflectivity. Experiments like this can only be conducted at large synchrotron x-ray facilities, such as those operated by the U.S. Department of Energy. These massive laboratories generate the x-ray radiation used to probe the structure of materials.

In a reflectivity experiment, researchers analyze the pattern of x-rays scattered from the material to map its relative electron density. The reflectivity data can be used to determine the concentration of electrons as a function of depth below the material's surface.

By tuning the wavelength of the x-rays to excite electronic transitions specific to individual elements in the material stack, the team was able to measure each element's electron contribution to the shared bond - thus revealing how ionic or covalent the bond is.
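The standard way to model such data is Parratt's recursion, a textbook algorithm that computes specular reflectivity from a stack of layers, each described by optical constants delta and beta; near an element's absorption edge these constants change sharply, which is what makes the resonant variant element-specific. The sketch below is a generic Python implementation with hypothetical layer values, not the authors' analysis code.

    import numpy as np

    def parratt_reflectivity(qz, delta, beta, thickness, wavelength=1.0):
        # Specular reflectivity R(qz) of a layer stack, ordered top to bottom with
        # the substrate last (its "thickness" is ignored). Lengths in Angstroms,
        # qz in 1/Angstrom.
        k0 = 2 * np.pi / wavelength
        n2m1 = (1 - delta + 1j * beta) ** 2 - 1             # n^2 - 1 for each layer
        kz_amb = qz / 2                                     # kz in the ambient (vacuum)
        kz = np.sqrt(kz_amb[None, :] ** 2 + n2m1[:, None] * k0 ** 2)
        kz = np.vstack([kz_amb[None, :], kz])               # prepend the ambient row
        r = np.zeros_like(qz, dtype=complex)                # nothing reflected below substrate
        for j in range(len(delta) - 1, -1, -1):             # recurse bottom -> top
            fresnel = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
            phase = np.exp(2.0j * kz[j + 1] * thickness[j]) # propagation through layer j+1
            r = (fresnel + r * phase) / (1 + fresnel * r * phase)
        return np.abs(r) ** 2

    qz = np.linspace(0.02, 0.5, 400)
    R = parratt_reflectivity(qz,
                             delta=np.array([8e-6, 12e-6, 10e-6]),   # hypothetical values
                             beta=np.array([1e-7, 4e-7, 2e-7]),
                             thickness=np.array([40.0, 40.0, 0.0]))  # two 40 A layers on a substrate

Fitting measured R(qz) with such a model, with delta and beta evaluated at photon energies on and off an element's absorption edge, is what allows the depth profile to be decomposed element by element.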

"This is something like how climatologists would use ice-core samples to analyze the chemical makeup of each layer as a function of depth from the surface," May said. "We can do the same thing at the atomic scale using x-ray reflectivity. But the information we're obtaining tells us about the orbital character of electrons and how this changes from one atomic layer to the next."

The materials used in the study are composed of alternating layers of two transition metal oxide compounds - strontium ferrite and calcium ferrite. These materials are of interest because they exhibit many of the exotic electronic behaviors found in quantum materials, including changing from metallic to insulating states as they cool.

At the heart of these materials' unusual properties is the iron-oxygen bond. Theory predicts that the bond in this material is much more covalent than typical iron-oxygen bonds, which tend to be quite ionic in most iron-containing compounds.

Using the x-ray reflectivity approach, the team was able to measure - for the first time - how the oxygen and iron contributions to the electronic character differ within the layers and at the interface of the two compounds.

"By individually probing the electron density of the oxygen states and the iron states, we could determine the degree of covalency between iron and oxygen across these oxide interfaces at the atomic scale," said Paul Rogge, PhD, a postdoctoral researcher at Drexel who is the first author on the paper. "We were surprised to find a dramatic change in covalency between the materials because their individual electronic structures are very similar, but by interfacing thin films of these two materials we can tweak their physical structure and thus alter their atomic bonding, which ultimately affects their electronic and magnetic properties."

Understanding how unusual material interfaces, like those of quantum materials, function could be the first step toward harnessing their properties to improve the processing power, storage and communications capabilities of electronic devices.

"Moving forward, we are excited about applying this technique to other classes of quantum materials, such as topological insulators and semimetals, to gain new insights into how interfaces alter magnetic and electronic character in those materials," May said. "Because the majority of electronic and magnetic devices rely on interfaces to operate, having a deep understanding of how electrons behave at interfaces is critical for the design of future electronic technologies."

Credit: 
Drexel University