Tech

A new 'gold standard' for safer ceramic coatings

image: Ceramic samples were coated with a cost-effective, vibrant glaze made with gold and silver salts that are less toxic than most other colorants.

Image: 
Ryan Coppage

WASHINGTON, March 23, 2020 — Making your own ceramics can be a way to express your creativity, but some techniques and materials used in the process could spell bad news for your health and the environment. If not prepared properly, some glazed ceramics can leach potentially harmful heavy metals. Scientists now report progress toward a new type of glaze that includes gold and silver nanoparticles, which are less toxic and more environmentally friendly than currently used formulations, while still providing vibrant colors.

The researchers are presenting their results through the American Chemical Society (ACS) SciMeetings online platform.

Glazes make ceramics shiny and waterproof, and they also add color, which is revealed by firing the clay object in a kiln. These materials have been known to contain potentially harmful ingredients, though many manufacturers have now removed them. “But even today, you can still find ceramic glazes on the market that contain harmful heavy metals,” says Ryan Coppage, Ph.D., the project’s principal investigator. “Achieving the brightest colors has traditionally required using higher amounts of heavy metals, such as barium and cadmium, which can leach from the surface and are toxic at such levels.”

To develop a safer glaze of the desired color, the researchers turned to tiny nanoparticles of gold and silver. Although these metals are technically deemed "heavy," they are considered benign in small quantities. In fact, they are often used in medical applications, such as in injections for rheumatoid arthritis and as ingredients in antimicrobial preparations. Gold and silver have a highly recognizable yellow or white metallic sheen at the macroscale, but when whittled down to the nanoscale (between 1 and 100 nanometers per particle) they can take on totally different hues. Their color changes depending on particle size: nanoscale gold particles can produce deep reds and blues, and tiny silver ones can appear red or even bright green.

As it turns out, gold and silver nanoparticles have been used in works of art for centuries, without artisans even knowing it. “Nanoparticles used in historic works were incidental rather than intentional,” says Nathan Dinh, who worked on the team. He and Coppage are at the University of Richmond. In medieval times, artisans would grind down gold and silver into a very fine powder or use gold or silver salts in their crafts, such as vibrantly colored stained glass windows and chalices. “In modern times, nobody has really put the gold and silver nanoparticles into glazes from scratch, so they can be fired in the same kilns artists use today,” Dinh says. “That’s what we’re trying to do with our work. Our goal is to implement historical techniques using modern technology and know-how.”

To achieve the optimal ceramic glaze, the researchers started with a simple glaze base and mixed it with different combinations and sizes of gold and silver salts and nanoparticles. From there, they applied these test glazes to clay objects and fired them in a traditional ceramic kiln used by local artists. By microscopic examination, the researchers found that the firing process changes the shape and size of the nanoparticles, which in turn influences the final color. The resulting hues depended on the source of the metals, as well as the concentrations used. And by combining both metals into the same glaze, Dinh and Coppage could produce a wide range of colors with the same equipment as hobby ceramicists.

As only a tiny fraction of precious metal is required for this new glaze, it is both cost-effective and environmentally friendly. In fact, the researchers estimate that glazing a single cup would cost only 30 to 40 cents. Because these nanoparticles are much more efficient at producing color than other metals, the researchers needed to add only a very small amount of gold or silver, approximately 0.01% by weight. In contrast, conventional glazes often contain 5-15% heavy metals by weight, with manganese glazes containing up to 50% for a metallic sheen.
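The arithmetic behind those figures can be sanity-checked in a few lines. The sketch below is illustrative only: the glaze mass per cup is an assumed value, while the 0.01% and 5-15% loadings come from the article.

```python
# Back-of-the-envelope comparison of colorant loadings. The 0.01 wt%
# nanoparticle figure and the 5-15 wt% conventional figure are from the
# article; the glaze mass per cup is an illustrative assumption.
cup_glaze_mass_g = 20.0                  # assumed glaze mass on one cup

nano_fraction = 0.0001                   # 0.01% by weight (from article)
conventional_fraction = (0.05, 0.15)     # 5-15% by weight (from article)

nano_metal_g = cup_glaze_mass_g * nano_fraction
conventional_metal_g = tuple(cup_glaze_mass_g * f for f in conventional_fraction)

print(f"nanoparticle metal per cup: {nano_metal_g:.4f} g")
print(f"conventional heavy metal per cup: "
      f"{conventional_metal_g[0]:.1f}-{conventional_metal_g[1]:.1f} g")
print(f"reduction factor: {conventional_fraction[0] / nano_fraction:.0f}x-"
      f"{conventional_fraction[1] / nano_fraction:.0f}x")
```

Even with a generous assumed glaze mass, the nanoparticle route uses hundreds of times less metal than a conventional heavy-metal glaze.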

Next, Coppage and team plan to further explore exactly how the firing process changes gold and silver nanoparticles, which will help them fine-tune the resulting colors. Then, they plan to incorporate other metal nanoparticles into glazes, potentially leading to a broader array of colors for the art community to take advantage of.

This research was generously supported by funding from The Camille & Henry Dreyfus Foundation—Henry Dreyfus Teacher-Scholar Award and the University of Richmond.

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and its people. The Society is a global leader in providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a specialist in scientific information solutions (including SciFinder® and STN®), its CAS division powers global research, discovery and innovation. ACS’ main offices are in Washington, D.C., and Columbus, Ohio.

To automatically receive press releases from the American Chemical Society, contact newsroom@acs.org.

Title

Nanoparticles as ceramic glaze colorant alternatives

Abstract

Many ceramic coatings contain high levels of heavy metal elements as colorants that are environmentally unfriendly and toxic to users. Metallic nanoparticles, including gold and silver, have been proposed as alternatives to these metal colorants due to their versatile color profiles, benign nature, and more efficient coloring mechanism via surface plasmon resonance. This research explores the effects of common ceramic sintering processes on nanoparticles within glazes in studio reductive and oxidative kilns. This work also employs direct application of gold and silver salts in glazes to probe and better understand the mechanistic processes behind nanoparticle-laced glazes that are viable in the art community. These processes avoid the need for complicated reductants, glassware, or preliminary heating elements, thus allowing for environmentally friendlier yet economically viable applications of nanoparticle colorants.

Credit: 
American Chemical Society

OncoMX knowledgebase enables research of cancer biomarkers and related evidence

WASHINGTON (March 23, 2020) -- The OncoMX knowledgebase will improve the exploration and research of cancer biomarkers in the context of related evidence, according to a recent article from the George Washington University (GW). The article is published in JCO Clinical Cancer Informatics and is part of a special series called "Informatics Tools for Cancer Research and Care."

Cancer biomarkers, a sort of biological fingerprint, are molecules in bodily fluids or tissues that can indicate processes associated with various cancers. With increased research and funding, more potentially novel cancer biomarkers are being reported. However, challenges remain when it comes to reproducibility of initial findings, clinical validation, and access to harmonized biomarker data.

OncoMX, a knowledgebase and web portal for exploring cancer biomarker data and related evidence, was developed to integrate cancer biomarker and relevant data types into a meta-portal, enabling the research of cancer biomarkers side by side with other pertinent multidimensional data types.

"Many research groups and consortia have conducted studies reporting potentially actionable and available cancer biomarker data," said Hayley Dingerdissen, a PhD student at the GW Institute for Biomedical Sciences and first author of the paper. "It seems reasonable that these data could be combined into a single meta-resource to facilitate more efficient cancer biomarker research and exploration, but the reality is there are numerous challenges to combining heterogeneously structured data in a unified way."

To address the challenges associated with data collection and access, the OncoMX team worked to integrate public cancer biomarker data from the Early Detection Research Network (EDRN) and the FDA, as well as additional related data around persistent identifiers, which are long-lasting references to a document, file, webpage, or other object. The team integrated information such as cancer mutation, cancer differential expression, cancer expression specificity, healthy gene expression from human and mouse, and biomarker data.
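The kind of harmonization described above amounts to mapping heterogeneously structured records onto a shared identifier. A minimal sketch, with entirely hypothetical field names and records (not OncoMX's actual schema):

```python
# Purely illustrative: unify two differently structured biomarker records
# on a shared gene identifier. Field names and values are hypothetical
# and do not reflect the real EDRN, FDA, or OncoMX data models.
edrn_like = [
    {"gene": "ERBB2", "phase": "Phase 2", "organ": "breast"},
]
fda_like = [
    {"gene_symbol": "ERBB2", "approved_use": "trastuzumab companion diagnostic"},
]

unified = {}
for rec in edrn_like:
    # Key every record by gene so evidence from different sources lands together
    unified.setdefault(rec["gene"], {}).update(
        {"edrn_phase": rec["phase"], "organ": rec["organ"]}
    )
for rec in fda_like:
    unified.setdefault(rec["gene_symbol"], {}).update(
        {"fda_use": rec["approved_use"]}
    )

print(unified["ERBB2"])
```

The real challenge, as the authors note, lies in doing this at scale across many sources whose identifiers, ontologies, and evidence types differ.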

The resulting data provides the foundation for integration of heterogeneous biomarker evidence, using the BioCompute Object framework, into OncoMX for improved cancer biomarker exploration.

"OncoMX is designed to combine existing biomarker-relevant data with newly generated data as it becomes available, establishing a resource customized for cancer biomarker research," said Raja Mazumder, PhD, professor of biochemistry and molecular medicine at the GW School of Medicine and Health Sciences, a member of the GW Cancer Center, and senior author on the study. "The focus on ontology-driven unification of biomarker data and cross comparison of various related experimental data, particularly inclusion of large-scale literature mining findings, NCI's EDRN and FDA biomarkers, and healthy gene expression data from Bgee are unique to OncoMX compared to other integrated cancer resources."

Moving forward, the OncoMX team is actively seeking new data types, such as imaging, glycan biomarkers, drug targets, alternative splicing, and more. They also continue to work on extending the data model to new types, integrating FDA data sets for additional cancers upon user request, and expanding cross references to key cancer resources.

Credit: 
George Washington University

Isoflavones, in tofu and plant proteins, associated with lower heart disease risk

DALLAS, March 23, 2020 -- Eating tofu and foods that contain higher amounts of isoflavones was associated with a moderately lower risk of heart disease, especially for younger women and postmenopausal women not taking hormones, according to observational research published today in Circulation, the flagship journal of the American Heart Association.

Researchers at Harvard Medical School and Brigham and Women's Hospital analyzed data from more than 200,000 people who participated in three prospective health and nutrition studies; all participants were free of cancer and heart disease when the studies began. After controlling for a number of other factors known to increase heart disease risk, investigators found:

Consuming tofu, which is high in isoflavones, more than once a week was associated with an 18% lower risk of heart disease, compared to a 12% lower risk for those who ate tofu less than once a month; and

The favorable association with eating tofu regularly was found primarily in young women before menopause or postmenopausal women who were not taking hormones.

"Despite these findings, I don't think tofu is by any means a magic bullet," said lead study author Qi Sun, M.D., Sc.D., a researcher at Harvard's T.H. Chan School of Public Health in Boston. "Overall diet quality is still critical to consider, and tofu can be a very healthy component."

Sun noted that populations that traditionally consume isoflavone-rich diets including tofu, such as in China and Japan, have lower heart disease risk compared to populations that follow a largely meat-rich, vegetable-poor diet. However, the potential benefits of tofu and isoflavones as they relate to heart disease need more research.

Tofu, which is soybean curd, and whole soybeans such as edamame are rich sources of isoflavones. Chickpeas, fava beans, pistachios, peanuts and other fruits and nuts are also high in isoflavones. Soymilk, on the other hand, tends to be highly processed and is often sweetened with sugar, Sun noted. This study found no significant association between soymilk consumption and lower heart disease risk.

"Other human trials and animal studies of isoflavones, tofu and cardiovascular risk markers have also indicated positive effects, so people with an elevated risk of developing heart disease should evaluate their diets," he said. "If their diet is packed with unhealthy foods, such as red meat, sugary beverages and refined carbohydrates, they should switch to healthier alternatives. Tofu and other isoflavone-rich, plant-based foods are excellent protein sources and alternatives to animal proteins."

In the study, researchers analyzed health data of more than 74,000 women from the Nurses' Health Study (NHS) from 1984 to 2012; approximately 94,000 women in the NHSII study between 1991 and 2013; and more than 42,000 men who participated in the Health Professionals Follow-Up Study from 1986 to 2012. All participants were free of cardiovascular disease and cancer at the beginning of each study. Dietary data was updated using patient surveys, conducted every two to four years. Data on heart disease was collected from medical records and other documents, while heart disease fatalities were identified from death certificates.

A total of 8,359 cases of heart disease were identified during 4,826,122 person-years of follow-up. Person-years, the total number of years that study participants remained free of heart disease, help to measure how fast disease occurs in a population.
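For readers unfamiliar with person-years, the implied incidence rate follows directly from the two reported numbers:

```python
# Incidence rate implied by the reported figures: 8,359 heart disease
# cases over 4,826,122 person-years of follow-up.
cases = 8359
person_years = 4_826_122

rate_per_py = cases / person_years
rate_per_100k = rate_per_py * 100_000
print(f"{rate_per_100k:.0f} cases per 100,000 person-years")  # -> 173
```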

Sun emphasized that the study should be interpreted with caution because their observations found a relationship but did not prove causality. Many other factors can influence the development of heart disease, including physical exercise, family history and a person's lifestyle habits. "For example, younger women who are more physically active and get more exercise tend to follow healthier, plant-based diets that may include more isoflavone-rich foods like tofu. Although we have controlled for these factors, caution is recommended when interpreting these results," said Sun.

In 2000, the U.S. Food and Drug Administration approved health claims that soy foods protect against cardiovascular disease. However, since then, clinical trials and epidemiological studies have been inconclusive, and the agency is reconsidering its now twenty-year-old decision. The American Heart Association's 2006 Diet and Lifestyle Recommendations and a 2006 science advisory on soy protein, isoflavones and cardiovascular health found minimal evidence that isoflavones convey any cardiovascular benefits; any protection associated with higher soy intake was likely due to higher levels of polyunsaturated fats, fiber, vitamins and minerals, and lower levels of saturated fat.

Credit: 
American Heart Association

Star formation project maps nearby interstellar clouds

image: Montage of the CO molecule radio emission-line intensities in the three regions observed by the Star Formation Project and the Nobeyama 45-m Radio Telescope.

Image: 
NAOJ

Astronomers have captured new, detailed maps of three nearby interstellar gas clouds containing regions of ongoing high-mass star formation. The results of this survey, called the Star Formation Project, will help improve our understanding of the star formation process.

We know that stars such as the Sun are born from interstellar gas clouds. These clouds are difficult to observe in visible light, but they emit strongly at radio wavelengths, which can be observed by the Nobeyama 45-m Radio Telescope in Japan. A research team led by Fumitaka Nakamura, an Associate Professor at the National Astronomical Observatory of Japan (NAOJ), used the telescope to create detailed radio maps of interstellar gas clouds, the birthplaces of stars. The team, which includes members from NAOJ, the University of Tokyo, Tokyo Gakugei University, Ibaraki University, Otsuma Women's University, Niigata University, Nagoya City University, and other universities, will use the observational data to investigate the star formation process.

The team targeted three interstellar clouds: the Orion A, Aquila Rift, and M17 regions. For the Orion A region, the group collaborated with the CARMA interferometer in the United States, combining their data to create the most detailed map ever of the region. The resultant map has a spatial resolution of about 3200 astronomical units. This means that the map can reveal details as small as 60 times the size of the Solar System.
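For a rough check of what 3200 astronomical units means on the sky, one can convert linear to angular resolution with the standard small-angle relation; the ~414 pc distance to Orion A used below is a commonly quoted value, not a figure from the article:

```python
# Linear to angular resolution: theta[arcsec] = size[AU] / distance[pc].
# The ~414 pc distance to Orion A is an assumed, commonly quoted value,
# not a figure from the article.
resolution_au = 3200.0
distance_pc = 414.0

theta_arcsec = resolution_au / distance_pc
print(f"angular resolution: {theta_arcsec:.1f} arcsec")

# The article's comparison: 3200 AU corresponds to 60 times an implied
# Solar System size scale of about 53 AU.
solar_system_au = resolution_au / 60.0
print(f"implied Solar System scale: {solar_system_au:.0f} AU")
```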

Even the world's most powerful radio telescope, the Atacama Large Millimeter/submillimeter Array (ALMA), could not obtain a similarly large-scale map of Orion A because of ALMA's limited field-of-view and observation time constraints. But ALMA can investigate more distant interstellar clouds. Therefore, the large-scale, highly detailed radio map of the Orion A gas cloud obtained by the Star Formation Project is complementary to other observational research.

Credit: 
National Institutes of Natural Sciences

Organellogenesis still a work in progress in novel dinoflagellates

Image: 
University of Tsukuba

Tsukuba, Japan - Many algae and plant species contain photosynthetic membrane-bound organelles called plastids that are actually remnants of a free-living cyanobacterium. At some point in evolutionary history, a cyanobacterium was engulfed by an ancestral alga, trapping it forever as a host-controlled endosymbiont in a process called organellogenesis. All modern algae and plants are the descendants of this ancestral alga containing the first plastid. But as if by karmic intervention, some of these algae were themselves engulfed during secondary endosymbiotic events, generating what are known as complex algae.

In most cases, endosymbionts lose large portions of their genomes as well as most other cellular components except plastids during organellogenesis. However, in rare cases, the relic endosymbiont nucleus is retained within the host cell, forming a nucleomorph. While researchers know that endosymbiont genes are integrated into the host genome, there are currently only a few model systems in which to study the process of organellogenesis, meaning that it is still somewhat of a mystery.

However, in a study published last month in PNAS, researchers led by the University of Tsukuba reported an exciting discovery that may shed light on the process of organellogenesis.

The team discovered two novel dinoflagellates, strains MGD and TGD, containing nucleomorphs that were undergoing endosymbiont-host DNA transfer. In cryptophytes and chlorarachniophytes, the only other algal groups known to contain nucleomorphs, all DNA transfer events have ceased, implying that organellogenesis at the genetic level is complete. This has made it impossible to discover the closest relatives of the endosymbiotic algae or to determine how their genomes are altered during the transition process.

"Morphologically, MGD and TGD were obviously distinct, which was supported by molecular phylogenetic analyses," says senior author Professor Yuji Inagaki. "However, both strains contained green alga-derived plastids with nucleus-like structures containing DNA."

Even though the researchers showed that the endosymbiotic algae had already been transformed into plastids, gene sequence analysis suggested that DNA transfer from the nucleomorph to the host genome was still in progress in both MGD and TGD. Given the relatively intact state of the endosymbiont genomes, the researchers successfully identified the origins of the algae to the genus level.

"Genomic analysis of these novel dinoflagellates showed that they are both nucleomorph-containing algal strains carrying plastids derived from endosymbiotic green algae, most likely of the genus Pedinomonas," explains Professor Inagaki.

"Based on the level of integration of the endosymbiont and host genomes in MGD and TGD, we concluded that the process of organellogenesis is less advanced in these strains than that in cryptophytes and chlorarachniophytes. This important distinction will allow us to use these organisms as models to better understand the process of organellogenesis."

Credit: 
University of Tsukuba

Pushing periodic disorder induced phase-matching into deep-ultraviolet spectral region

image: (a) Schematic of the additional-phase-matching condition in arbitrary nonlinear optical crystals. The white and grey regions represent the ordered crystal and the disordered amorphous material, respectively. The period length Λ equals the sum of the ordered width La and the disordered width Lb (Λ = La + Lb). Notably, La and Lb may equal the coherent length Lc or an integer multiple of Lc. deff/0 and n1/n2 represent the second-order nonlinear coefficient and refractive index of the ordered and disordered regions, respectively. (b) Schematic estimation of the SH field amplitude of the APP quartz with different shifted phases (ΔφAPP) at the same crystal length. (c) Theoretical calculation of the APP (ΔφAPP) for APP quartz samples with La = Lb = 2.1 μm, 1.4 μm, and 0.7 μm. (d) 177.3 nm SHG output power in APP quartz (purple point) with La = Lb = 2.1 μm and Δφ = 3π, and in as-grown quartz (green point).

Image: 
Mingchuan Shao, Fei Liang, Haohai Yu, Huaijin Zhang

Nonlinear optical frequency conversion is an important technique for extending the wavelength range of lasers and is widely used in modern technology. The efficiency of frequency conversion depends on the phase relationship among the interacting light waves: high conversion efficiency requires that the phase-matching condition be satisfied. However, because of the dispersion of nonlinear optical crystals, phase mismatch always occurs, so the phase-matching condition must be specially engineered. There are two widely used phase-matching techniques: birefringence phase matching (BPM) and quasi-phase matching (QPM). BPM exploits the natural birefringence of nonlinear optical crystals, while QPM mainly relies on periodic inversion of ferroelectric domains. However, most nonlinear optical crystals possess neither sufficient birefringence nor controllable ferroelectric domains. There is therefore an urgent demand for new routes to phase matching that work in arbitrary nonlinear crystals and over broad wavelength ranges.

In a new paper published in Light: Science & Applications, scientists from the State Key Laboratory of Crystal Materials and Institute of Crystal Materials, Shandong University, China, proposed a concept rooted in the basic principles of nonlinear frequency conversion: an additional periodic phase (APP) arising from disordered regions, which can block the energy flow from the nonlinear light back to the fundamental light and compensate for the mismatched phase. Under the APP concept, after the light propagates over one coherence length Lc, the accumulated phase difference Δφ_PD is compensated by an additional phase difference Δφ_APP such that Δφ_APP + Δφ_PD = 2mπ (m an integer). Based on this concept, a periodic ordered/disordered structure was written into crystalline quartz by femtosecond laser writing to achieve effective output from the ultraviolet down to the deep-ultraviolet, at a wavelength of 177.3 nm. More interestingly, APP phase matching may remove the restriction of nonlinear frequency conversion to birefringent and ferroelectric materials, and should be applicable to all non-centrosymmetric nonlinear crystals for achieving effective output at any wavelength within a material's transmission range.
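The compensation idea can be illustrated with a toy numerical model. The sketch below is not the authors' calculation; it simply integrates the second-harmonic field with and without a π phase jump applied every coherence length, showing that the jump (chosen so that Δφ_APP + Δφ_PD = 2mπ) turns an oscillating field into a growing one:

```python
import numpy as np

# Toy model of phase compensation, not the authors' calculation.
# The SH field grows as dE/dz ~ exp(i*dk*z); without compensation |E|
# oscillates with period 2*Lc, but adding a pi phase jump every
# coherence length Lc (so dphi_APP + dphi_PD = 2*m*pi) lets it grow.
dk = 2 * np.pi               # arbitrary phase mismatch (1/length units)
Lc = np.pi / dk              # coherence length: Lc = pi / dk
z = np.linspace(0, 10 * Lc, 5000)
dz = z[1] - z[0]

# No compensation: the integral of exp(i*dk*z) stays bounded and small
E_plain = np.cumsum(np.exp(1j * dk * z)) * dz

# Additional periodic phase: a pi shift accumulates every coherence length
segment = np.floor(z / Lc).astype(int)
E_app = np.cumsum(np.exp(1j * (dk * z + np.pi * segment))) * dz

print(f"|E| without compensation: {abs(E_plain[-1]):.3f}")
print(f"|E| with APP compensation: {abs(E_app[-1]):.3f}")
```

Over ten coherence lengths the compensated field grows roughly linearly while the uncompensated field averages to nearly zero, which is the essence of why the periodic ordered/disordered structure enables efficient conversion.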

"To the best of our knowledge, phase-matched deep-ultraviolet generation at 177.3 nm was achieved for the first time via a quartz crystal, with a high efficiency of 1.07‰," they added.

"This APP strategy may provide a versatile route for arbitrary nonlinear crystals over a broad wavelength range. More importantly, the ordered/disordered alignment adds a variable physical parameter to the optical system, potentially leading to a next-generation revolution in nonlinear and linear modulation and in classical and quantum photonics," the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Water-induced MAPbBr3@PbBr(OH) with enhanced luminescence and stability

image: a) Schematic diagram of the synthesis process for MAPbBr3@PbBr(OH). b) Schematic illustration of the morphology evolution of the as-prepared MAPbBr3 perovskite. c) Energy level diagram of PbBr(OH) and inner QDs.

Image: 
by Kai-Kai Liu, Qian Liu, Dong-Wen Yang, Ya-Chuan Liang, Lai-Zhi Sui, Jian-Yong Wei, Guo-Wei Xue, Wen-Bo Zhao, Xue-Ying Wu, Lin Dong, Chong-Xin Shan

In recent years, lead halide perovskites (LHPs) have emerged as promising materials for photovoltaics and light-emitting diodes (LEDs) due to their attractive optical and electrical properties, such as high photoluminescence (PL) quantum yield (QY), narrow emission spectrum, tuneable emission wavelength, high absorption coefficient, and long carrier diffusion length. Profound developments have been witnessed in the fields of solar cells, solid-state light-emitting diodes, photodetectors, and lasers. However, the poor stability of LHPs, especially in water and polar solvents, remains a crucial issue that hampers their applications.

In a new paper published in Light: Science & Applications, scientists from Zhengzhou University, China, and co-workers developed a new synthetic method in which the PL QY of perovskites is increased from 2.5% to 71.54% by the addition of a dose of water, and decreases only minimally after a year in aqueous solution. In addition, the as-synthesized MAPbBr3@PbBr(OH) maintains its luminescence in many kinds of solvents and also exhibits excellent ambient, thermal and photostability. The enhanced stability and PL QY can be attributed to the water-induced formation of MAPbBr3@PbBr(OH): PbBr(OH) passivates the defects of the MAPbBr3 QDs and confines carriers within the QDs, so that MAPbBr3@PbBr(OH) reaches high emission efficiency; additionally, PbBr(OH) prevents exposure of the QDs to air and moisture, thus increasing the stability.

"The finding that the PL QY of perovskites can be increased by the addition of water is remarkable, and the enhanced PL QY and stability can be attributed to the formation of stable, larger-bandgap PbBr(OH) on the surface of the lead halide perovskite quantum dots after the addition of water. PbBr(OH) passivates the defects of the MAPbBr3 QDs and prevents the exposure of the QDs to air and moisture, thus increasing efficiency and stability," the researchers explained.

"We note that this strategy is universal to methylamino lead halide perovskites (MAPbBr3), formamidine lead halide perovskites (FAPbBr3), inorganic lead halide perovskites (CsPbBr3), etc.," they added.

"Since the as-prepared MAPbBr3@PbBr(OH) has high fluorescence efficiency and stability, it should stimulate research interest in fields such as lasers and LEDs. This efficient approach for synthesizing ultrastable, highly efficient luminescent perovskites will push forward their practical applications," the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Sensing internal organ temperature with shining lights

image: OSL from the ZrO2 sample observed under NIR-laser irradiation, without the bone sample.

Image: 
Tohoku University

A cheap, biocompatible white powder that luminesces when heated could be used for non-invasively monitoring the temperature of specific organs within the body. Tohoku University scientists conducted preliminary tests to demonstrate the applicability of this concept and published their findings in the journal Scientific Reports.

Thermometers measure temperature at the body's surface, but clinicians need to be able to monitor and manage core body temperatures in some critically ill patients, such as following head injuries or heart attacks. Until now, this has most often been done by inserting a tiny tube into the heart and blood vessels. But scientists are looking for less invasive means of monitoring temperature from within the body.

Applied physicist Takumi Fujiwara of Tohoku University and colleagues in Japan investigated the potential of a white powder called zirconia for this purpose.

Zirconia is a synthetic powder that is easily accessible, chemically stable, and non-toxic. When heated, its crystals become excited, releasing electrons. These electrons then recombine with 'holes' in the crystal molecular structure, a process that causes the crystals to emit light, or luminesce. Because of this material's advantageous properties for use in the human body, the scientists wanted to test and see if its luminescence could be used for monitoring temperature.
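The described mechanism (trapped charges released on heating, then recombining to emit light) is commonly modeled with first-order Randall-Wilkins kinetics. The sketch below is a textbook illustration with hypothetical parameter values, not the authors' data:

```python
import numpy as np

# Illustrative first-order (Randall-Wilkins) thermoluminescence model:
# heated crystals release trapped electrons, which recombine with holes
# and emit light. All parameter values here are hypothetical.
k_B = 8.617e-5        # Boltzmann constant, eV/K
E = 1.0               # trap depth, eV (assumed)
s = 1e12              # frequency factor, 1/s (assumed)
beta = 1.0            # heating rate, K/s (assumed)

T = np.linspace(300.0, 600.0, 3000)     # temperature ramp, K
p = s * np.exp(-E / (k_B * T))          # escape probability per second

# Fraction of traps still filled at temperature T (first-order kinetics)
n = np.exp(-np.cumsum(p) * (T[1] - T[0]) / beta)
intensity = n * p                       # glow-curve intensity ~ n(T) * p(T)

T_peak = T[np.argmax(intensity)]
print(f"glow-curve peak near {T_peak:.0f} K")
```

The glow curve rises as more trapped electrons escape, then falls as the traps empty, producing the temperature-dependent luminescence the study exploits.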

The team heated zirconia under an ultraviolet lamp, and found that as zirconia's temperature rose, its luminescence intensified. The same thing happened when a near-infrared laser light was shone on the material. This demonstrated that both heat and light could be used to induce luminescence in zirconia.

The scientists next showed that zirconia luminescence was visible with the naked eye when placed behind a piece of bone and illuminated using a near-infrared laser.

Together, the demonstrations suggest zirconia could potentially be used to monitor internal body temperature: the powder would be injected and a near-infrared laser shone on a targeted location, such as the brain. The intensity and longevity of the material's luminescence would then depend on the surrounding temperature.

"While this fundamental study leaves some important issues unresolved, this work is a novel and promising application of [synthetic luminescent substances] in the medical field," the researchers conclude. Going forward, the researchers hope to find a way to shift zirconia's luminescence into the red to near-infrared region, which transmits better through human tissue and would thus allow clearer information to be obtained.

Credit: 
Tohoku University

NUS scientists invent symmetry-breaking in a nanoscale device that can mimic human brain

image: Professor Venkatesan (left) discussing the charge disproportionation mechanism with Dr Sreetosh Goswami (right).

Image: 
National University of Singapore

Over the last decade, artificial intelligence (AI) and its applications such as machine learning have gained pace to revolutionise many industries. As the world gathers more data, the computing power of hardware systems needs to grow in tandem. Unfortunately, we are facing a future where we will not be able to generate enough energy to power our computational needs.

"We hear a lot of predictions about AI ushering in the fourth industrial revolution. It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets. It is clear that we will have to rethink our approaches to computation on all levels: materials, devices and architecture. We are proud to present an update on two fronts in this work: materials and devices. Fundamentally, the devices we are demonstrating are a million times more power efficient than what exists today," shared Professor Thirumalai Venky Venkatesan, the lead Principal Investigator of this project who is from the National University of Singapore (NUS).

In a paper published in Nature Nanotechnology on 23 March 2020, the researchers from the NUS Nanoscience and Nanotechnology Initiative (NUSNNI) reported the invention of a nanoscale device based on a unique material platform that can achieve optimal digital in-memory computing while being extremely energy efficient. The invention is also highly reproducible and durable, unlike conventional organic electronic devices.

The molecular system which is key to this invention is a brainchild of Professor Sreebrata Goswami of the Indian Association for the Cultivation of Science in Kolkata, India. "We have been working on this family of molecules of redox-active ligands over the last 40 years. Based on the success of one of our molecular systems in making a memory device, reported in the journal Nature Materials in 2017, we decided to re-design our molecule with a new pincer ligand. This is a rational de novo design strategy to engineer a molecule that can act as an electron sponge," said Professor Goswami.

Dr Sreetosh Goswami, the key architect of this paper, who was a graduate student of Professor Venkatesan and is now a research fellow at NUSNNI, shared: "The main finding of this paper is charge disproportionation, or electronic symmetry breaking. Traditionally, this has been one of those phenomena in physics that hold great promise but fail to translate to the real world, as they only occur under specific conditions, such as high or low temperature, or high pressure."

"We are able to achieve this elusive charge disproportionation in our devices, and modulate it using electric fields at room temperature. Physicists have been trying to do the same for 50 years. Our ability to realise this phenomenon in nano-scale results in a multifunctional device that can operate both as a memristor or a memcapacitor or even both concomitantly," Dr Sreetosh further explained.

"The complex intermolecular and ionic interactions in these molecular systems offer this unique charge disproportionation mechanism. We are thankful to Professor Damien Thompson at the University of Limerick who modelled the interactions between the molecules and generated insights that allow us to tweak these molecular systems in many ways to further engineer new functionalities," said Prof Goswami.

"We believe we are only scratching the surface of what is possible with this class of materials," added Professor Venkatesan. "Recently, Dr Sreetosh has discovered that he can drive these devices to self-oscillate or even exhibit purely unstable, chaotic regime. This is very close to replicating how our human brain functions."

"Computer scientists now recognise that our brain is the most energy efficient, intelligent and fault-tolerant computing system in existence. Being able to emulate the brain's best properties while running millions of times faster will change the face of computing as we know it. In discussions with my longtime friend and collaborator Professor Stan Williams from Texas A&M University (who is a co-author in this paper), I realise that our organic molecular system might eventually be able to outperform all the oxide and 'ovonic' materials demonstrated to date," he concluded.

Moving forward, the NUS team is endeavouring to develop efficient circuits that mimic functions of the human brain.

Credit: 
National University of Singapore

System trains driverless cars in simulation before they hit the road

A simulation system invented at MIT to train driverless cars creates a photorealistic world with infinite steering possibilities, helping the cars learn to navigate a host of worst-case scenarios before cruising down real streets.

Control systems, or "controllers," for autonomous vehicles largely rely on real-world datasets of driving trajectories from human drivers. From these data, they learn how to emulate safe steering controls in a variety of situations. But real-world data from hazardous "edge cases," such as nearly crashing or being forced off the road or into other lanes, are -- fortunately -- rare.

Some computer programs, called "simulation engines," aim to imitate these situations by rendering detailed virtual roads to help train the controllers to recover. But the learned control from simulation has never been shown to transfer to reality on a full-scale vehicle.

The MIT researchers tackle the problem with their photorealistic simulator, called Virtual Image Synthesis and Transformation for Autonomy (VISTA). It uses only a small dataset, captured by humans driving on a road, to synthesize a practically infinite number of new viewpoints from trajectories that the vehicle could take in the real world. The controller is rewarded for the distance it travels without crashing, so it must learn by itself how to reach a destination safely. In doing so, the vehicle learns to safely navigate any situation it encounters, including regaining control after swerving between lanes or recovering from near-crashes.

In tests, a controller trained within the VISTA simulator was able to be safely deployed onto a full-scale driverless car and to navigate through previously unseen streets. When the car was positioned at off-road orientations that mimicked various near-crash situations, the controller was also able to successfully recover the car into a safe driving trajectory within a few seconds. A paper describing the system has been published in IEEE Robotics and Automation Letters and will be presented at the upcoming ICRA conference in May.

"It's tough to collect data in these edge cases that humans don't experience on the road," says first author Alexander Amini, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). "In our simulation, however, control systems can experience those situations, learn for themselves to recover from them, and remain robust when deployed onto vehicles in the real world."

The work was done in collaboration with the Toyota Research Institute. Joining Amini on the paper are Igor Gilitschenski, a postdoc in CSAIL; Jacob Phillips, Julia Moseyko, and Rohan Banerjee, all undergraduates in CSAIL and the Department of Electrical Engineering and Computer Science; Sertac Karaman, an associate professor of aeronautics and astronautics; and Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

Data-driven simulation

Historically, building simulation engines for training and testing autonomous vehicles has been largely a manual task. Companies and universities often employ teams of artists and engineers to sketch virtual environments, with accurate road markings, lanes, and even detailed leaves on trees. Some engines may also incorporate the physics of a car's interaction with its environment, based on complex mathematical models.

But since there are so many different things to consider in complex real-world environments, it's practically impossible to incorporate everything into the simulator. For that reason, there's usually a mismatch between what controllers learn in simulation and how they operate in the real world.

Instead, the MIT researchers created what they call a "data-driven" simulation engine that synthesizes, from real data, new trajectories consistent with road appearance, as well as the distance and motion of all objects in the scene.

They first collect video data from a human driving down a few roads and feed that into the engine. For each frame, the engine projects every pixel into a type of 3D point cloud. Then, they place a virtual vehicle inside that world. When the vehicle makes a steering command, the engine synthesizes a new trajectory through the point cloud, based on the steering curve and the vehicle's orientation and velocity.

Then, the engine uses that new trajectory to render a photorealistic scene. To do so, it uses a convolutional neural network -- commonly used for image-processing tasks -- to estimate a depth map, which contains information relating to the distance of objects from the controller's viewpoint. It then combines the depth map with a technique that estimates the camera's orientation within a 3D scene. That all helps pinpoint the vehicle's location and relative distance from everything within the virtual simulator.

Based on that information, it reorients the original pixels to recreate a 3D representation of the world from the vehicle's new viewpoint. It also tracks the motion of the pixels to capture the movement of the cars and people, and other moving objects, in the scene. "This is equivalent to providing the vehicle with an infinite number of possible trajectories," Rus says. "Because when we collect physical data, we get data from the specific trajectory the car will follow. But we can modify that trajectory to cover all possible ways and environments of driving. That's really powerful."
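The core geometric step described above — back-projecting pixels into a point cloud using a depth map, then re-rendering them from a shifted viewpoint — can be sketched in a few lines. This is an illustrative toy, not the actual VISTA code: the pinhole intrinsics, the uniform depth map, and the translation-only camera motion are all simplifying assumptions.

```python
import numpy as np

def pixels_to_points(depth, fx, fy, cx, cy):
    """Back-project each pixel (u, v) with depth d into camera space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (h, w, 3) point cloud

def reproject(points, translation, fx, fy, cx, cy):
    """Re-render the point cloud from a translated camera (no rotation)."""
    p = points.reshape(-1, 3) - translation  # move the virtual camera
    p = p[p[:, 2] > 0]                       # keep points in front of it
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.stack([u, v], axis=-1)         # new pixel coordinates

# Toy 4x4 frame, everything 10 units away; hypothetical intrinsics.
depth = np.full((4, 4), 10.0)
pts = pixels_to_points(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
# Slide the virtual car 1 unit to the right: pixels shift left.
uv = reproject(pts, np.array([1.0, 0.0, 0.0]), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

In the real system, the depth map comes from a convolutional network and the motion includes full 3D orientation changes, but the projection-and-reprojection geometry follows this same pattern.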

Reinforcement learning from scratch

Traditionally, researchers have been training autonomous vehicles by either following human-defined rules of driving or by trying to imitate human drivers. But the researchers make their controller learn entirely from scratch under an "end-to-end" framework, meaning it takes as input only raw sensor data -- such as visual observations of the road -- and, from that data, predicts steering commands as outputs.

"We basically say, 'Here's an environment. You can do whatever you want. Just don't crash into vehicles, and stay inside the lanes,'" Amini says.

This requires "reinforcement learning" (RL), a trial-and-error machine-learning technique that provides feedback signals whenever the car makes an error. In the researchers' simulation engine, the controller begins by knowing nothing about how to drive, what a lane marker is, or even other vehicles look like, so it starts executing random steering angles. It gets a feedback signal only when it crashes. At that point, it gets teleported to a new simulated location and has to execute a better set of steering angles to avoid crashing again. Over 10 to 15 hours of training, it uses these sparse feedback signals to learn to travel greater and greater distances without crashing.

After successfully driving 10,000 kilometers in simulation, the authors apply that learned controller onto their full-scale autonomous vehicle in the real world. The researchers say this is the first time a controller trained using end-to-end reinforcement learning in simulation has successfully been deployed onto a full-scale autonomous car. "That was surprising to us. Not only has the controller never been on a real car before, but it's also never even seen the roads before and has no prior knowledge on how humans drive," Amini says.

Forcing the controller to run through all types of driving scenarios enabled it to regain control from disorienting positions -- such as being half off the road or into another lane -- and steer back into the correct lane within several seconds. "And other state-of-the-art controllers all tragically failed at that, because they never saw any data like this in training," Amini says.

Next, the researchers hope to simulate all types of road conditions from a single driving trajectory, such as night and day, and sunny and rainy weather. They also hope to simulate more complex interactions with other vehicles on the road. "What if other cars start moving and jump in front of the vehicle?" Rus says. "Those are complex, real-world interactions we want to start testing."

Credit: 
Massachusetts Institute of Technology

Electric jolt to carbon makes better water purifier

image: Synthesis process of nanocarbon adsorbent

Image: 
Nagahiro Saito

Nagoya University scientists have developed a one-step fabrication process that improves the ability of nanocarbons to remove toxic heavy metal ions from water. The findings, published in the journal ACS Applied Nano Materials, could aid efforts to improve universal access to clean water.

Various nanocarbons are being studied and used for purifying water and wastewater by adsorbing dyes, gases, organic compounds and toxic metal ions. These nanocarbons can adsorb heavy metal ions, like lead and mercury, onto their surfaces through molecular attraction forces. But this attraction is weak, and so they aren't very efficient adsorbents on their own.

To improve adsorption, scientists are considering adding molecules to the nanocarbons, like amino groups, that form stronger chemical bonds with heavy metals. They are also trying to find ways to use all available surfaces on nanocarbons for metal ion adsorption, including the surfaces of their inner pores. This would enhance their capacity to adsorb more metal ions at a time.

Materials scientist Nagahiro Saito of Nagoya University's Institute of Innovation for Future Society and colleagues developed a new method for synthesizing an "amino-modified nanocarbon" that more efficiently adsorbs several heavy metal ions compared to conventional methods.

They mixed phenol, as a source of carbon, with a compound called APTES, as a source of amino groups. This mixture was placed in a glass chamber and exposed to a high voltage, creating a plasma in liquid. The method they used, called "solution plasma process," was maintained for 20 minutes. Black precipitates of amino-modified carbons formed and were collected, washed and dried.

A variety of tests showed that the amino groups were evenly distributed over the nanocarbon surface, including inside its slit-like pores.

"Our single-step process facilitates the bonding of amino groups on both outer and inner surfaces of the porous nanocarbon," says Saito. "This drastically increased their adsorption capacity compared to a nanocarbon on its own."

They put the amino-modified nanocarbons through ten cycles of adsorbing copper, zinc and cadmium metal ions, washing them between each cycle. Although the capacity to adsorb metal ions decreased with repetitive cycles, the reduction was small, making them relatively stable for repetitive use.

Finally, the team compared their amino-modified nanocarbons with five others synthesized by conventional methods. Their nanocarbon had the highest adsorption capacity for the metal ions tested, indicating there are more amino groups on their nanocarbon than on the others.

"Our process could help reduce the costs of water purification and bring us closer to achieving universal and equitable access to safe and affordable drinking water for all by 2030," says Saito.

Credit: 
Nagoya University

How to get conductive gels to stick when wet

Polymers that are good conductors of electricity could be useful in biomedical devices, to help with sensing or electrostimulation, for example. But there has been a sticking point preventing their widespread use: their inability to adhere to a surface such as a sensor or microchip, and stay put despite moisture from the body.

Now, researchers at MIT have come up with a way of getting conductive polymer gels to adhere to wet surfaces.

The new adhesive method is described in the journal Science Advances in a paper by MIT doctoral student Hyunwoo Yuk, former visiting scholar Akihisa Inoue, postdoc Baoyang Lu, and professor of mechanical engineering Xuanhe Zhao.

Most electrodes used for biomedical devices are made of platinum or platinum-iridium alloys, Zhao explains. These are very good electrical conductors that are durable inside the moist environment of the body, and chemically stable so they do not interact with the surrounding tissues. But their stiffness is a major drawback. Because they can't flex and stretch as the body moves, they can damage delicate tissues.

Conductive polymers such as PEDOT:PSS, by contrast, can very closely match the softness and flexibility of the vulnerable tissues in the body. The tricky part has been getting them to stay attached to the biomedical devices they are connected to. Researchers have been struggling for years to make these polymers durable in the moist and always-moving environments of the body.

"There have been thousands of papers talking about the advantages of these materials," Yuk says, but the companies that make biomedical devices "just don't use them," because they need materials that are exceedingly reliable and stable. A failure of the material could require an invasive surgical procedure to replace it, which carries additional risk for the patient.

Stiff metal electrodes "sometimes harm the tissues, but they work well in terms of reliability and stability over a period of years," which has not been the case with polymer substitutes until now, he says.

Most efforts to address this problem have involved making significant modifications to the polymer materials to improve their durability and their ability to adhere, but Yuk says that creates problems of its own: Companies have already invested heavily in equipment to manufacture these polymers, and major changes to the formulation would require significant investment in new production equipment. These changes would be for a market that is relatively small in economic terms, though large in potential impact. Other approaches that have been tried are limited to specific materials. Instead, the MIT team focused on making the fewest changes possible, to ensure compatibility with existing production methods, and making the method applicable to a wide variety of materials.

Their method involves an extremely thin adhesive layer between the conductive polymer hydrogel and the substrate material. Though only a few nanometers thick (billionths of a meter), this layer turns out to be effective at making the gels adhere to any of a wide variety of commonly used substrate materials, including glass, polyimide, indium tin oxide, and gold. The adhesive layer penetrates into the polymer itself, producing a tough, durable protective structure that keeps the material in place even when exposed for long periods to a wet environment.

The adhesive layer can be applied to the devices by a variety of standard manufacturing processes, including spin coating, spray coating, and dip coating, making it easy to integrate with existing fabrication platforms. The coating the researchers used in their tests is made of polyurethane, a hydrophilic (water-attracting) material that is readily available and inexpensive, though other similar polymers could also be used. Such materials "become very strong when they form interpenetrating networks," as they do when coated on the conducting polymer, Yuk explains. This enhanced strength should address the durability problems associated with the uncoated polymer, he says.

The result is a mechanically strong and conductive gel that bonds tightly with the surface it's attached to. "It's a very simple process," Yuk says.

The bonding proves to be highly resistant to bending, twisting, and even folding of the substrate material. The adhesive polymer has been tested in the lab under accelerated aging conditions using ultrasound, but Yuk says that for the biomedical device industry to accept such a new material will require longer, more rigorous testing to confirm the stability of these coated fibers under realistic conditions over long periods of time.

"We'd be very happy to license and put this technology out there to test it further in realistic situations," he says. The team has begun talking to manufacturers to see "how we can best help them to test this knowledge," he says.

"I think this is a great piece of work," says Zhenan Bao, a professor of chemical engineering at Stanford University, who was not associated with this research. "Wet adhesives are already a big challenge. Conductive adhesives that work well in wet conditions are even more rare. They are very much needed for nerve interfaces and recording electrical signals from the heart or brain."

Bao says this work "is a major advancement in the bioelectronics field."

Credit: 
Massachusetts Institute of Technology

Device brings silicon computing power to brain research and prosthetics

image: Abdulmalik Obaid (on left) and Nick Melosh with their microwire array. This bundle of microwires can enable researchers to watch the activity of hundreds of neurons in the brain in real time.

Image: 
Andrew Brodhead/Stanford News Service

Researchers at Stanford University have developed a new device for connecting the brain directly to silicon-based technologies. While brain-machine interface devices already exist - and are used for prosthetics, disease treatment and brain research - this latest device can record more data while being less intrusive than existing options.

"Nobody has taken these 2D silicon electronics and matched them to the three-dimensional architecture of the brain before," said Abdulmalik Obaid, a graduate student in materials science and engineering at Stanford. "We had to throw out what we already know about conventional chip fabrication and design new processes to bring silicon electronics into the third dimension. And we had to do it in a way that could scale up easily."

The device, the subject of a paper published March 20 in Science Advances, contains a bundle of microwires, with each wire less than half the width of the thinnest human hair. These thin wires can be gently inserted into the brain and connected on the outside directly to a silicon chip that records the electrical brain signals passing by each wire - like making a movie of neural electrical activity. Current versions of the device include hundreds of microwires but future versions could contain thousands.

"Electrical activity is one of the highest-resolution ways of looking at brain activity," said Nick Melosh, professor of materials science and engineering at Stanford and co-senior author of the paper. "With this microwire array, we can see what's happening on the single-neuron level."

The researchers tested their brain-machine interface on isolated retinal cells from rats and in the brains of living mice. In both cases, they successfully obtained meaningful signals across the array's hundreds of channels. Ongoing research will further determine how long the device can remain in the brain and what these signals can reveal. The team is especially interested in what the signals can tell them about learning. The researchers are also working on applications in prosthetics, particularly speech assistance.

Worth the wait

The researchers knew that, in order to achieve their aims, they had to create a brain-machine interface that was not only long-lasting, but also capable of establishing a close connection with the brain while causing minimal damage. They focused on connecting to silicon-based devices in order to take advantage of advances in those technologies.

"Silicon chips are so powerful and have an incredible ability to scale up," said Melosh. "Our array couples with that technology very simply. You can actually just take the chip, press it onto the exposed end of the bundle and get the signals."

One main challenge the researchers tackled was figuring out how to structure the array. It had to be strong and durable, even though its main components are hundreds of minuscule wires. The solution was to wrap each wire in a biologically safe polymer and then bundle them together inside a metal collar. This ensures the wires are spaced apart and properly oriented. Below the collar, the polymer is removed so that the wires can be individually directed into the brain.

Existing brain-machine interface devices are limited to about 100 wires offering 100 channels of signal, and each must be painstakingly placed in the array by hand. The researchers spent years refining their design and fabrication techniques to enable the creation of an array with thousands of channels - their efforts supported, in part, by a Wu Tsai Neurosciences Institute Big Ideas grant.

"The design of this device is completely different from any existing high-density recording devices, and the shape, size and density of the array can be simply varied during fabrication. This means that we can simultaneously record different brain regions at different depths with virtually any 3D arrangement," said Jun Ding, assistant professor of neurosurgery and neurology, and co-author of the paper. "If applied broadly, this technology will greatly excel our understanding of brain function in health and disease states."

After spending years pursuing this ambitious-yet-elegant idea, it was not until the very end of the process that they had a device that could be tested in living tissue.

"We had to take kilometers of microwires and produce large-scale arrays, then directly connect them to silicon chips," said Obaid, who is lead author of the paper. "After years of working on that design, we tested it on the retina for the first time and it worked right away. It was extremely reassuring."

Following their initial tests on the retina and in mice, the researchers are now conducting longer-term animal studies to check the durability of the array and the performance of large-scale versions. They are also exploring what kind of data their device can report. Results so far indicate they may be able to watch learning and failure as they are happening in the brain. The researchers are optimistic about being able to someday use the array to improve medical technologies for humans, such as mechanical prosthetics and devices that help restore speech and vision.

Credit: 
Stanford University

On-demand glass is right around the corner

image: SEM of Silica used to manufacture the colloidal glasses.

Image: 
@GiulioMonaco UniTrento

Glasses used for camera lenses or reading glasses are not like those used to make windshields. They have a different degree of transparency and they break in different ways (the former break into large pieces, the latter into a multitude of tiny pieces). The techniques to obtain glasses with specific properties have long been known to the industry: a slow process for optical applications, tempering for glasses designed to break safely. These procedures determine the stress within the glass, which can therefore be easily minimized or maximized. But how can the stress stored in a glass be controlled to suit our needs? If we could do that, we would be able to design new types of glass for new applications.

That is the question a research group of physicists at the University of Trento tried to answer. The researchers focused on colloidal glasses, which are made up of microscopic particles dispersed in a solution at a concentration that allows the formation of a compact solid. The physicists conducted a number of experiments at the PETRA facility in Hamburg (DESY, Deutsches Elektronen-Synchrotron), Germany, and managed to create colloidal glasses characterised by unidirectional stress: the stresses stored locally in the material during formation all point in the same direction. The results of the study were published in open access in Science Advances, the online peer-reviewed journal of the American Association for the Advancement of Science, based in Washington.

Giulio Monaco, director of the Department of Physics of the University of Trento and coordinator of the research work, explained: «Colloidal glasses are relatively stable. Think about window glass, that can last for centuries. However, locally, the atoms and particles are subject to heavy stresses, whose intensity, distribution and direction determine the mechanical properties of the material. It would be very useful if we could control those stresses».

He continued: «Measuring the intensity and direction of the stress stored in a glass is a crucial step to control these forces and therefore use them in industrial applications».

Credit: 
Università di Trento

FSU Research: Hidden source of carbon found at the Arctic coast

A previously unknown and significant source of carbon just discovered in the Arctic has scientists both marveling at a once overlooked contributor to local coastal ecosystems and concerned about what it may mean in an era of climate change.

FSU researcher Robert Spencer co-authored a study that showed evidence of undetected concentrations and flows of dissolved organic matter entering Arctic coastal waters, coming from groundwater flows on top of frozen permafrost. This water moves from land to sea unseen, but researchers now believe it carries significant concentrations of carbon and other nutrients to Arctic coastal food webs.

Spencer worked with aquatic chemists and hydrologists from The University of Texas at Austin's Marine Science Institute, the UT Austin Jackson School of Geosciences and the U.S. Fish and Wildlife Service. Their findings were published today in Nature Communications.

"I think most people are aware the Arctic is changing rapidly," said Spencer, an associate professor in the Department of Earth, Ocean, and Atmospheric Science. "What is less well-known is that we still have knowledge gaps."

Groundwater is known globally to be important for delivering carbon and other nutrients to oceans, but in the Arctic, where much water remains trapped in frozen earth, its role has been less clear. Scientists were surprised to learn that groundwater may contribute nearly as much dissolved organic matter to the Alaskan Beaufort Sea during the summer as neighboring rivers do.

The research community has generally assumed that groundwater inputs from land to sea are small in the Arctic because perennially frozen ground, or permafrost, constrains the flow of water below the tundra surface.

This study found that as shallow groundwater flows beneath the surface at sites in northern Alaska, it picks up new, young organic carbon and nitrogen as expected. However, the researchers also discovered that as groundwater flows toward the ocean, it mixes with layers of deeper soils and thawing permafrost, picking up and transporting nitrogen and organic carbon that is hundreds to thousands of years old.

This old carbon being transported by groundwater is thought to be minimally decomposed, never having seen the light of day before it meets the ocean.

"Groundwater inputs are unique because this material is a direct shot to the ocean without seeing or being photodegraded by light," said Jim McClelland, a professor at the UT Austin Marine Science Institute. "Sunlight on the water can decompose organic carbon as it travels downstream in rivers. Organic matter delivered to the coastal ocean in groundwater is not subject to this process and thus may be valuable as a food source to bacteria and higher organisms that live in Arctic coastal waters."

The researchers concluded that the supply of leachable organic carbon from groundwater amounts to as much as 70 percent of the dissolved organic matter flowing from rivers to the Alaska Beaufort Sea during the summer.

"Despite its ancient age, dissolved organic carbon in groundwater provides a new and potentially important source of fuel and energy for local coastal food webs each summer," said lead author Craig Connolly, a recent graduate of UT Austin's Marine Science Institute. "The role that groundwater inputs play in carbon and nutrient cycling in Arctic coastal ecosystems, now and in the future as climate changes and permafrost continues to thaw, is something we hope will spark research interest for years to come."

As far away as the Arctic is from most of humanity, changes there affect the rest of the globe.

"As the Arctic continues to change in coming years and more long-frozen permafrost thaws, this will naturally have major impacts on the land-to-ocean flow of water and associated carbon and nutrients," Spencer said. "Those impacts in the Arctic will be felt all across the Earth, particularly the ramifications for carbon cycling."

Credit: 
Florida State University