Toxin chimeras slip therapeutics into neurons to treat botulism in animals

image: Schematic showing how the chimera molecule (BoNT/XA) delivers nanobodies to neurons to neutralize botulism neurotoxins. This material relates to a paper that appeared in the Jan. 6, 2021, issue of Science Translational Medicine, published by AAAS. The paper, by S.-I. Miyashita of Boston Children's Hospital in Boston, MA, and colleagues, was titled, "Delivery of single-domain antibodies into neurons using a chimeric toxin-based platform is therapeutic in mouse models of botulism."

Image: 
[S.-I. Miyashita et al., Science Translational Medicine (2021)]

Taking advantage of the chemical properties of botulism toxins, two teams of researchers have fashioned non-toxic versions of these compounds that can deliver therapeutic antibodies to treat botulism, a potentially fatal disease with few approved treatments. The research, which was conducted in mice, guinea pigs, and nonhuman primates, suggests that the toxin derivatives could one day offer a platform to quickly treat established cases of botulism and target hard-to-reach molecules within neurons.

Botulism manifests due to bacterial toxins called botulinum neurotoxins (BoNTs), which are the most potent toxins known to humans. BoNTs work by entering and damaging neurons that coordinate movement, resulting in paralysis that requires intensive care and can potentially last for months. There is a dire need for therapies that can quickly reverse paralysis, but developing treatments for existing cases has been difficult because it is challenging to neutralize BoNTs with therapeutics once the toxins have entered neurons.

In the first study, Shin-Ichiro Miyashita and colleagues fused different sections of two BoNTs named BoNT/X and BoNT/A, resulting in a chimeric molecule that is both non-toxic and works as a drug delivery platform. Specifically, the researchers combined a neuron-targeting domain of BoNT/A with another domain of BoNT/X that can deliver therapeutic molecules into the interior of neurons. Miyashita et al. found that their approach rapidly delivered an antitoxin antibody into neurons and neutralized both the BoNT/A and BoNT/B neurotoxins in mice, reversing paralysis within a few hours.

Taking a similar approach, Patrick McNutt and colleagues engineered a non-toxic BoNT derivative that safely neutralized BoNT/A within neurons. Their treatment also alleviated paralysis and boosted survival in mice, guinea pigs, and nonhuman primates exposed to lethal amounts of BoNT/A. "This platform offers a transformational approach for a precision treatment that might be adapted to diverse presynaptic diseases," say McNutt et al.

Credit: 
American Association for the Advancement of Science (AAAS)

Light-based processors boost machine-learning processing

image: Schematic representation of a processor for matrix multiplications which runs on light.

Image: 
University of Oxford

The exponential growth of data traffic in our digital age poses real challenges for processing power. And with the advent of machine learning and AI in, for example, self-driving vehicles and speech recognition, the upward trend is set to continue. All this places a heavy burden on the ability of current computer processors to keep up with demand.

Now, an international team of scientists has turned to light to tackle the problem. The researchers developed a new approach and architecture that combines processing and data storage onto a single chip by using light-based, or "photonic" processors, which are shown to surpass conventional electronic chips by processing information much more rapidly and in parallel.

The scientists developed a hardware accelerator for so-called matrix-vector multiplications, the computational backbone of neural networks (the brain-inspired algorithms used in machine learning). Since different light wavelengths (colors) don't interfere with each other, the researchers could use multiple wavelengths of light for parallel calculations. To do this, they used another innovative technology developed at EPFL: a chip-based "frequency comb" as a light source.
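To make the parallelism concrete, here is a minimal software sketch of the idea (all names and dimensions are invented for illustration; the real device computes these products optically in a photonic crossbar, not in NumPy):

```python
import numpy as np

# Illustrative sketch of wavelength-parallel matrix-vector multiplication.
# A photonic crossbar encodes one weight matrix; each wavelength (color)
# carries an independent input vector, so all products happen at once.

rng = np.random.default_rng(0)

n_wavelengths = 4          # number of comb lines used in parallel (assumed)
n_inputs, n_outputs = 16, 8

W = rng.standard_normal((n_outputs, n_inputs))        # weights, e.g. a filter bank
X = rng.standard_normal((n_inputs, n_wavelengths))    # one input vector per wavelength

# Electronically this would be a loop over columns; photonically, every
# column is computed simultaneously because wavelengths don't interfere.
Y = W @ X                  # shape: (n_outputs, n_wavelengths)

assert np.allclose(Y[:, 0], W @ X[:, 0])
```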

"Our study is the first to apply frequency combs in the field of artificially neural networks," says Professor Tobias Kippenberg at EPFL, one the study's leads. Professor Kippenberg's research has pioneered the development of frequency combs. "The frequency comb provides a variety of optical wavelengths that are processed independently of one another in the same photonic chip."

"Light-based processors for speeding up tasks in the field of machine learning enable complex mathematical tasks to be processed at high speeds and throughputs," says senior co-author Wolfram Pernice at Münster University, one of the professors who led the research. "This is much faster than conventional chips which rely on electronic data transfer, such as graphic cards or specialized hardware like TPU's (Tensor Processing Unit)."

After designing and fabricating the photonic chips, the researchers tested them on a neural network that recognizes hand-written numbers. Inspired by biology, these networks are a concept in the field of machine learning and are used primarily in the processing of image or audio data. "The convolution operation between input data and one or more filters - which can identify edges in an image, for example - is well suited to our matrix architecture," says Johannes Feldmann, now based at the University of Oxford Department of Materials. Nathan Youngblood (Oxford University) adds: "Exploiting wavelength multiplexing permits higher data rates and computing densities, i.e. operations per area of processor, not previously attained."

"This work is a real showcase of European collaborative research," says David Wright at the University of Exeter, who leads the EU project FunComp, which funded the work. "Whilst every research group involved is world-leading in their own way, it was bringing all these parts together that made this work truly possible."

The study, published in Nature this week, has far-reaching potential applications: faster, more energy-efficient simultaneous processing of data in artificial intelligence; larger neural networks for more accurate forecasts and more precise data analysis; rapid evaluation of large amounts of clinical data for diagnoses; faster processing of sensor data in self-driving vehicles; and expanded cloud computing infrastructures with more storage space, computing power, and application software.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Revealing the crustal geometry of the western Qilian Mountains, NE Tibetan Plateau

image: (a) Current lithospheric structure across the western Qilian Mountains and adjacent regions by combining our result with previous geological and geophysical results. (b) The construction process of the Qilian Mountains. Abbreviations: DS, Danghe Nan Mountains; TNS, Tuolai Nan Mountains; TS, Tuolai Mountains; NQLS, north Qilian Mountains.

Image: 
Science China Press

As the largest orogenic plateau on Earth, the Qinghai-Tibet Plateau was built by a complex process of crustal deformation during the continuous collision and compression between the Indian and Eurasian continents, which began at least 50-60 million years ago. The plateau records both the collision of the two continents and the deformation processes and mechanisms operating within them, making it an ideal natural laboratory for studying continent-continent collision and its dynamics. The collision between the Eurasian and Indian continents is still ongoing, so the plateau continues to expand outward. According to the latest chronological results, the western Qilian Mountains on the plateau's northeastern margin, which form its northeastern boundary, were uplifted and became part of the present plateau during the Middle Miocene. As one of the youngest parts of the Qinghai-Tibet Plateau, the western Qilian Mountains are therefore a key area for testing the various models proposed for the plateau's formation.

Many crustal deformation mechanisms have been proposed for the northeastern margin of the Qinghai-Tibet Plateau. As research has deepened, however, more and more evidence has accumulated that the previously proposed mechanisms cannot fully explain, and differences in resolution among the various methods used to probe the Earth's crust have added to the disagreement. Earth scientists have therefore called for more precise methods to reveal the crustal structure of the region. Deep seismic reflection profiling is an internationally recognized technique for obtaining high-precision images of crustal structure, so using it to study the crustal deformation pattern of the northeastern margin of the Qinghai-Tibet Plateau, as this paper does, provides important scientific insight and a valuable reference for the study of this area.

The researchers reprocessed high-resolution deep seismic reflection data, originally collected in the 1990s, for a transect across the NE margin of the western Qilian Mountains and the Hexi Corridor. The reprocessed seismic image has a higher signal-to-noise ratio than the first published result, which imaged the southward-dipping north Qilian Mountains fault (NQSF) and another southward-dipping fault extending into the lower crust, named the north border thrust (NBT). In addition, the reprocessed image more clearly delineates the geometry of the crust beneath the junction between the western Qilian Mountains and the Hexi Corridor, yielding a better understanding of the processes responsible for the outward growth of the Tibetan Plateau.

The reprocessed seismic profile across the junction between the northern margin of the western Qilian Mountains and the Hexi Corridor reveals decoupled crustal deformation partitioned by an intra-crustal decollement layer at a depth of 14-24 km. Deformation above the decollement is characterized mainly by a series of southward-dipping thrust faults that terminate downward at the decollement layer, while crustal-scale duplexing is present in the crust beneath it. The imbricate Moho structure beneath the study region implies that the Asian lithospheric mantle is being underthrust beneath the northeastern margin of the Tibetan Plateau. Integrating these results with previous geological and geophysical observations, the researchers propose an evolutionary model for the outward growth of the plateau across the western Qilian Mountains on its northeastern margin (Figure 1).

This result enriches research on the crustal structure of the northeastern margin of the Qinghai-Tibet Plateau. It is significant not only for the study of crustal deformation mechanisms on the plateau's northeastern margin, but also as a reference for understanding crustal deformation mechanisms across the Qinghai-Tibet Plateau as a whole.

Credit: 
Science China Press

Liver cancer cells manipulate stromal cells involved in fibrosis to promote tumor growth

image: The interaction between tumor cells and stellate cells in the tumor microenvironment

Image: 
Osaka University

Osaka, Japan - Hepatocellular carcinoma (HCC), frequently seen in patients with liver cirrhosis caused by alcohol abuse or chronic viral hepatitis, is the most common form of liver cancer worldwide and the third most common cause of cancer-related death, with a notoriously poor prognosis. At present, surgery is the most effective treatment for HCC, but it is successful only in the 10-20% of cases where cancer cells have not spread beyond the liver.

Given the lack of treatment options for HCC, a group of researchers led by Osaka University decided to focus on specific cells and processes that occur in the area around liver tumors in the hope of finding a novel target for drug development.

The results of their study were published in a recent issue of Gastroenterology.

"Hepatic stellate cells (HSCs) are normal liver cells that play a role in the formation of scar tissue in response to liver damage," explains co-author of the study Hayato Hikita. "High levels of activated HSCs have been reported in the tumor microenvironment and are associated with a poor prognosis in HCC patients. However, no one had examined the interaction between HSCs and cancer cells in the liver."

When the researchers cultured liver cancer cells together with HSCs, they observed a significant increase in the number of cancer cells, suggesting that the HSCs somehow promoted cancer cell growth. Interestingly though, inhibition of autophagy (a cellular process primarily designed to remove damaged or unwanted cellular components) in the HSCs prevented the proliferation of cancer cells.

Using a mouse model of liver cancer and an analysis of gene expression, the researchers made the startling discovery that the cancer cells actually induced autophagy in the HSCs, which in turn caused the HSCs to secrete a protein called GDF15, which promoted tumor growth.

"When we examined liver samples from HCC patients with and without tumors, we found that the tumor tissue samples had much higher levels of GDF15," says senior author Tetsuo Takehara. "Most importantly though, when we then examined the association between GDF15 expression and clinical outcome, we found that patients with higher levels of GDF15 had a poorer prognosis than those with only low levels of GDF15 expression, which really highlighted the role of GDF15 in HCC progression."

Building on the findings of this study, novel therapies targeting GDF15 expression by HSCs are an exciting new prospect for the treatment of HCC.

Credit: 
Osaka University

Hydroxychloroquine blood levels predict clotting risk in patients with lupus

The antimalarial drug hydroxychloroquine is frequently prescribed to treat symptoms of the autoimmune disease lupus. In addition to decreasing disease flares, the drug can also prevent blood clots, which are a major problem in individuals with lupus. A new study in Arthritis & Rheumatology shows that monitoring patients' blood levels of hydroxychloroquine can predict their clotting risk.

Among the 739 patients studied, 38 (5.1%) developed clots. Average hydroxychloroquine blood levels were lower in patients who developed clots, and clotting rates were reduced by 12% for every 200 ng/mL increase in the most recent hydroxychloroquine blood level.
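As a rough illustration of what that association implies, consider the following back-of-the-envelope sketch, which assumes (as a simplification) that the reported 12% reduction compounds multiplicatively with each 200 ng/mL increase:

```python
# Hypothetical illustration only: relative clotting rate at a given
# hydroxychloroquine blood level, assuming the reported 12% reduction
# per 200 ng/mL compounds multiplicatively.

def relative_clot_rate(level_ng_ml: float) -> float:
    steps = level_ng_ml / 200.0
    return 0.88 ** steps

# A patient at 1,000 ng/mL vs. one at 200 ng/mL:
print(relative_clot_rate(1000) / relative_clot_rate(200))  # ~0.60, i.e. ~40% lower rate
```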

The finding may help clinicians determine the optimal dosing of hydroxychloroquine in patients with lupus.

"Hydroxychloroquine blood levels can be used to monitor adherence, benefits, and risks in lupus," said lead author Michelle Petri, MD, MPH, of the Johns Hopkins University School of Medicine.

Credit: 
Wiley

Living alone may increase risk of dying after hip fracture

Individuals face a higher risk of dying following hip fractures. A new study published in the Journal of Bone and Mineral Research has found that living alone after experiencing a hip fracture may further elevate this risk.

For the study, researchers examined information on hip fractures from all hospitals in Norway from 2002 to 2013, and they combined the data with the 2001 National Population and Housing Census.

During 12.8 years of follow-up in 12,770 men and 22,067 women with hip fractures at ages 50 to 79 years, higher rates of death were seen in both men and women living alone versus those living with a partner (a 37% higher risk in men and a 23% higher risk in women).

Credit: 
Wiley

Mindfulness-based cognitive therapy affects self-criticism and self-assurance in individuals with depression

Findings from a recent study of individuals with depression suggest that Mindfulness-Based Cognitive Therapy (MBCT) can improve how patients feel about themselves in difficult situations in ways that may help protect against relapse of depressive symptoms. The findings are published in Counselling and Psychotherapy Research.

For the study, 68 individuals were randomized to MBCT or a waiting list. Patients who received MBCT were more likely to experience reductions in feelings of self-inadequacy and improvements in self-reassurance. Also, individuals with improvements in self-reassurance were less likely to experience depressive relapse within 2 years after the MBCT intervention.

"Self-criticism makes people vulnerable to depression. This study shows that MBCT can influence how people relate to themselves, and that being supportive towards oneself protects against depressive relapse," said corresponding author Elisabeth Schanche, PhD, of the University of Bergen, in Norway.

Credit: 
Wiley

Cattle grazing and soybean yields

image: Little corn residue remains after concentrated grazing in the high stocking density treatment the day that cattle were moved from the fields in March of 2020.

Image: 
Morgan Grabau

By late fall, much of the Midwest is a pleasing landscape of dry, harvested corn fields. It makes for a bucolic rural scene on highway drives. But the corn litter that's left over doesn't seem useful, at least to untrained eyes.

But to those in the know, that corn residue is a valuable resource. Scattered leaves, husks, kernels, and cobs can serve as food to grazing cattle. When managed well, corn residue can increase farm income, provide affordable food for cattle, and efficiently use the land to feed people.

Morgan Grabau, a member of the American Society of Agronomy, studies the interactions of cattle grazing and crop productivity. She recently presented her research at the virtual ASA-CSSA-SSSA Annual Meeting.

"Corn residue is an under-used resource. Only 15% of the corn residue acres in the central U.S. are grazed," says Grabau.

One big concern farmers have about cattle grazing corn residue is soil compaction. If cattle compact the soil too much, future crops might not grow well. Addressing the issue of soil compaction is the main focus of Grabau's work.

In the past, Grabau's research team has shown that compaction isn't too bad during fall and winter grazing. When the soil is dry and frozen, it resists stamping cattle hooves. "My research was focused on the effect of grazing in the spring when the soil is thawed and wet," she explains.

Grabau studied two different grazing systems. In one system, researchers let a small number of cattle graze corn fields for 45 days starting in mid-February. The other system tripled the number of cattle but cut grazing time to just 15 days in March. This way, the total amount of grazing was equal. But the time spent on wet fields varied, which could affect how the soil responds to all that trampling.
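A quick sanity check with hypothetical herd sizes (the study's actual numbers are not given here) shows why the total grazing pressure is equal in the two systems:

```python
# Hypothetical herd sizes illustrating why total grazing pressure is equal:
# tripling the herd while cutting the grazing window to one third keeps
# animal-days constant.

herd_long, days_long = 10, 45     # low stocking density, mid-February start
herd_short, days_short = 30, 15   # triple the cattle, March only

assert herd_long * days_long == herd_short * days_short  # 450 animal-days each
```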

The researchers studied corn fields in Nebraska, where around half of the corn fields are grazed after harvest. The team measured various soil properties that contribute to compaction and the yield of the soybeans planted in the fields the following season after cattle were done grazing. The team repeated the experiment over two years.

"Much like previous fall grazing studies, minimal effects were seen on soil properties and yield due to spring grazing, regardless of the number of cattle and area grazed," says Grabau.

The soybean productivity of the fields following grazing did show some changes. The highly concentrated grazing for just 15 days actually increased yields slightly.

"This yield increase could be due to more residue removed, causing warmer soil temperatures for plants to grow," Grabau says.

The cattle did cause some soil compaction. But their effects were limited to the surface level of fields.

"Compaction isn't permanent," Grabau says. "Soil can loosen up again as it dries and saturates over and over, and microbial activity in the soil also reduces compaction."

Fortunately, soybean seedlings had no problem establishing themselves in the soil after grazing even with some surface compaction present.

"Even when we created a worst-case scenario, grazing in the spring when the ground was wet, compaction was minimal and subsequent soybean yields were not negatively affected," Grabau says.

Although Grabau says that fall and winter grazing is probably still the best solution, farmers shouldn't be afraid of grazing cattle in the spring.

"The integration of crops and livestock is a beneficial production system," says Grabau. "Grazing cattle on corn residue can be a great way to make even more food for human consumption from corn fields, as both the corn grain and plant residue can be used as feed for livestock."

Credit: 
American Society of Agronomy

How market incumbents can navigate disruptive technology change

Researchers from the University of Texas at San Antonio and the University of Southern California published a new paper in the Journal of Marketing that examines the difficult choices industry incumbents and new entrants face during times of potentially disruptive technological change.

The study, forthcoming in the Journal of Marketing, is titled "Leapfrogging, Cannibalization, and Survival during Disruptive Technological Change: The Critical Role of Rate of Disengagement" and is authored by Deepa Chandrasekaran, Gerard Tellis, and Gareth James.

In July 2020, Tesla became the world's most valuable automaker, surpassing Toyota in market value for the first time. Ironically, it was Toyota that in 1997 released the Prius, the world's first mass-produced hybrid electric vehicle. In 2006, Tesla Motors, an upstart entrant, bet its future on fully electric cars. Incumbents dismissed the effort as futile because of the high entry barriers for auto production, the high cost of production in California, and the challenges of establishing charging stations. Toyota, in contrast, faced hard choices - invest in hybrids, all-electrics, or both? - and bet its future on hybrids.

This example illustrates that during times of potentially disruptive technological change, both industry incumbents and new entrants face difficult choices. For incumbents, the critical dilemma is whether to cannibalize their own successful offerings and introduce the new (successive) technology, survive with their old offerings, or invest in both. To address this dilemma, they need to know whether disruption is inevitable and if so, how much of their existing sales will be cannibalized over time, or whether both old and new technologies may, in fact, exist in tandem (coexist). The entrant's dilemma is whether to target a niche to avoid incumbent reaction or target the mass market and incur the wrath of the incumbent.

The study's research team posits that to effectively manage disruption, companies must answer the following questions: First, when does an old technology coexist with a new, successive technology, versus going into an immediate decline? And if the two coexist, how can one account for that coexistence in an empirical model? Second, how can one estimate the extent of cannibalization and leapfrogging of an old technology by a new technology over time? Third, can consumer segments explain coexistence, cannibalization, and leapfrogging in successive technologies and, if so, in which segments?

To answer these questions, the researchers developed a generalized model of the diffusion of successive technologies. A key feature of the generalized model is the rate of disengagement from the old technology, which is not forced to equal the rate of adoption of the successive technology, thus allowing both technologies to coexist. The key finding is that technological disruption is frequent, with dominant incumbents failing in the face of takeoff of a new technology. However, disruption is neither always quick nor universal because new technologies sometimes coexist as partial substitutes of the old technologies. As Chandrasekaran explains, "Our generalized model of diffusion of successive technologies can help marketers capture disruption or coexistence due to the rate of disengagement from the old technology, which can vary from the rate of adoption of the new technology. This model enables a superior fit to data on technological succession over prior multi-generational models that do not include such flexibility."
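The paper's exact specification is not reproduced here, but a toy Bass-style simulation conveys the central idea: when the disengagement rate from the old technology is decoupled from the adoption rate of the new one, the two technologies can coexist. All parameter values below are invented for illustration:

```python
import numpy as np

# Toy two-technology diffusion sketch (not the authors' actual model).
# Users of the old technology disengage at their own rate `delta`,
# decoupled from the adoption rate of the new technology, so both
# user bases can coexist when `delta` is small.

T = 60                      # periods
old = np.zeros(T); new = np.zeros(T)
old[0] = 0.50               # old technology starts with 50% penetration
p, q = 0.01, 0.30           # Bass-style innovation/imitation for the new tech
delta = 0.03                # per-period disengagement from the old tech

for t in range(1, T):
    adopt = (p + q * new[t-1]) * (1.0 - new[t-1])   # new-tech adoption
    new[t] = new[t-1] + adopt
    old[t] = old[t-1] * (1.0 - delta)               # slow disengagement

print(f"After {T} periods: old={old[-1]:.2f}, new={new[-1]:.2f} (coexistence)")
```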

The study also identifies four adopter segments that account for competition between successive technologies: leapfroggers correlate with the growth of the new technology, switchers and opportunists account for the cannibalization of the old technology, and dual users account for the coexistence of both technologies. The generalized model can capture variations in segment sizes across technologies and markets. For example, leapfroggers form a dominant component of adopters in the early life cycle of a new technology in developing markets while dual users do so in developed markets.

The model can provide important signals about disruption and survival by estimating cannibalization versus coexistence and forecasting the evolution of four critical consumer segments from aggregate data. "Incumbents often wait until the market for the new technology is large enough to be profitable before committing resources to its development. Our analysis suggests that managers should be careful not to underestimate cannibalization by switchers, especially when they dominate dual users, or the growth of new technologies via leapfroggers (especially in developing countries)," says Tellis. In addition, despite its frequent occurrence, disruption is not a given when a new successive technology enters the market. Thus, managers do not have to make a stark choice between the two technologies. Disruption may be averted by effectively targeting dual users and by carefully examining the factors driving the prolonged coexistence of the old technology.

Credit: 
American Marketing Association

2D CaCl crystals with +1 calcium ions displaying unexpected metallicity and ferromagnetism

image: (a) Schematic drawings of the sample preparation processes. (b) (i) Cryo-EM image of the Ca-Cl crystals in the ultra-thin reduced graphene oxide (rGO) membrane. (ii) Diffraction pattern of a typical crystal structure by cryo-EM in electron diffraction mode. (iii) Fourier transform of the entire bright-field image showing the same hexagonal lattice as in (ii). (c) Atomic ratio of Ca to Cl as a function of etching time, measured by XPS during argon-ion etching of a dried Ca-Cl@rGO membrane. (d) One stable structure from molecular model I of CaCl crystal modules adsorbed on a graphene sheet, from theoretical computations. (e) Electrical resistivity measured with a multimeter, with two electrodes contacting the top and bottom surfaces of the dried rGO and GO membranes, respectively. (f) Room-temperature ferromagnetism of the dried Ca-Cl@rGO membrane. (g) Heterojunction behavior of the dried Ca-Cl@rGO membrane. (h) Piezoelectricity-like property of the dried Ca-Cl@rGO membrane under ambient conditions.

Image: 
Science China Press

Calcium ions are present in rocks, bones, shells, biominerals, geological deposits, ocean sediments, and many other important materials. They also play major roles in the retention of carbon dioxide in natural waters, in water hardness, and in signal transduction and tissue generation. As one of the alkaline earth metals, the calcium atom has two valence electrons according to the octet rule. Until now, the only known valence state of calcium ions under ambient conditions was +2, and the corresponding calcium-bearing crystals are insulating.

Using cryo-electron microscopy, scientists have reported the direct observation of two-dimensional (2D) CaCl crystals on reduced graphene oxide (rGO) membranes under ambient conditions, in which the calcium ions are exclusively monovalent (i.e., +1). Remarkably, these 2D CaCl crystals display metallic rather than insulating properties. More interestingly, the researchers experimentally demonstrated room-temperature ferromagnetism, a resulting graphene-CaCl heterojunction, the coexistence of piezoelectricity and metallicity, and a distinct hydrogen storage and release capability under ambient conditions.

It should be noted that metallic materials conventionally do not show piezoelectricity. The unexpected piezoelectricity-like behavior of the metallic CaCl crystals arises from the unusual 2D CaCl structure: the structure is metallic because of the monovalent behavior of the Ca ions, yet it contains two elements (Ca and Cl) that respond differently to compressive or tensile strain. The 2D CaCl crystals are therefore a novel material combining metallic character with a piezoelectric-like property, and could enable novel applications such as atomic-scale transistors and nanotransistor devices.

As far as we know, room-temperature ferromagnetism has never before been observed for a main-group metal element. Theoretical study suggests that its possible origin lies in edge or defect effects of the CaCl crystals, where Ca+ ions carry an unpaired valence electron; it is thus expected that other metal elements could exhibit room-temperature ferromagnetism by forming correspondingly abnormal 2D crystals.

Theoretical studies show that the formation of such an abnormal crystal is driven by strong cation-π interactions between the Ca cations and the aromatic rings of the graphene surfaces. Since strong cation-π interactions also exist between other metal cations (such as Mg2+, Fe2+, Co2+, Cu2+, Cd2+, Cr2+, and Pb2+) and graphitic surfaces, similar crystals with abnormal valences of other metal cations are expected.

These findings not only represent a breakthrough for 2D crystals - with their abnormal cation-anion ratio, novel cation valence, and unexpected conductivity - but also lay the groundwork for applications in materials science, biology, chemistry, and physics. The properties and behaviors of these 2D crystals overturn general assumptions about this widely distributed everyday element, and they should attract attention and prompt thought about exciting applications in various fields.

These properties and behaviors of the 2D crystals will also greatly expand the applications of functionalized graphene. Further, considering the wide distribution of metallic cations and carbon on Earth, such nanoscale "special" compounds with previously unrecognized properties may be ubiquitous in nature.

Credit: 
Science China Press

On the road to invisible solar panels: How tomorrow's windows will generate electricity

image: The solar cell created by the team is transparent, allowing its use in a wide range of applications

Image: 
Joondong Kim from Incheon National University

Five years after the Paris climate agreement, all eyes are on the world's progress on the road to a carbon-free future. A crucial part of this goal involves the energy transition from fossil fuels to renewable sources such as sun, water, wind, and wave energy. Among these, solar energy has always held the greatest hope in the scientific community, as the most reliable and abundant energy source on Earth. In recent decades, solar cells have become cheaper, more efficient, and more environment friendly. However, current solar cells tend to be opaque, which prevents their wider use and integration into everyday materials, confining them to rooftops and remote solar farms.

But what if next-generation solar panels could be integrated into windows, buildings, or even mobile phone screens? That is the hope of Professor Joondong Kim from the Department of Electrical Engineering at Incheon National University, Korea. In a recent study published in the Journal of Power Sources, he and his colleagues detail their latest invention: a fully transparent solar cell. "The unique features of transparent photovoltaic cells could have various applications in human technology," says Prof. Kim.

The idea of transparent solar cells is well known, but translating it into practice, as these scientists have done, is a crucial new step. At present, what makes a solar cell opaque is its semiconductor layers, those responsible for capturing light and translating it into an electrical current. Hence, Prof. Kim and his colleagues looked at two potential semiconductor materials, identified by previous researchers for their desirable properties.

The first is titanium dioxide (TiO2), a well-known semiconductor already widely used to make solar cells. On top of its excellent electrical properties, TiO2 is an environment-friendly and non-toxic material. It absorbs UV light (a part of the light spectrum invisible to the naked eye) while letting through most of the visible light range. The second material investigated to make this junction was nickel oxide (NiO), another semiconductor known to have high optical transparency. As nickel is one of the most abundant elements on Earth, and its oxide can easily be manufactured at low industrial temperatures, NiO is also a great material for eco-friendly cells.

The solar cell prepared by the researchers was composed of a glass substrate and a metal oxide electrode, on top of which they deposited thin layers of the semiconductors (TiO2 first, then NiO) and a final coating of silver nanowires, acting as the other electrode in the cell. They ran several tests to evaluate the device's absorbance and transmittance of light, as well as its effectiveness as a solar cell.

Their findings were encouraging; with a power conversion efficiency of 2.1%, the cell's performance was quite good, given that it targets only a small part of the light spectrum. The cell was also highly responsive and worked in low-light conditions. Furthermore, more than 57% of visible light was transmitted through the cell's layers, giving the cell its transparent appearance. In the final part of their experiment, the researchers demonstrated how their device could be used to power a small motor. "While this innovative solar cell is still very much in its infancy, our results strongly suggest that further improvement is possible for transparent photovoltaics by optimizing the cell's optical and electrical properties," suggests Prof. Kim.
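For a rough sense of scale, a back-of-the-envelope estimate (assuming the standard test irradiance of 1,000 W/m2, a value not taken from the study) converts that efficiency into output power:

```python
# Back-of-the-envelope output estimate for a transparent cell at the
# reported 2.1% power conversion efficiency. The irradiance value is a
# standard test-condition assumption, not a figure from the study.

irradiance_w_per_m2 = 1000.0   # AM1.5 standard test condition (assumed)
pce = 0.021                    # reported power conversion efficiency

window_area_m2 = 1.0
power_w = irradiance_w_per_m2 * pce * window_area_m2
print(f"~{power_w:.0f} W per square meter of window under full sun")  # ~21 W
```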

Now that the researchers have demonstrated the practicality of a transparent solar cell, they hope to further improve its efficiency in the near future. Only further research can tell whether such cells will indeed become a reality, but for all intents and purposes, this new technology opens a (quite literal) window into the future of clean energy.

Credit: 
Incheon National University

Machine learning improves particle accelerator diagnostics

image: The Continuous Electron Beam Accelerator Facility, a DOE User Facility, features a unique particle accelerator that nuclear physicists use to explore the heart of matter.

Image: 
DOE's Jefferson Lab

Operators of the primary particle accelerator at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility are getting a new tool to help them quickly address issues that can prevent it from running smoothly. A new machine learning system has passed its first two-week test, correctly identifying glitchy accelerator components and the type of glitches they're experiencing in near-real-time.

An analysis of the results of the first field test of the custom-built machine learning system was recently published in Physical Review Accelerators and Beams.

The Continuous Electron Beam Accelerator Facility, a DOE User Facility, features a unique particle accelerator that nuclear physicists use to explore the heart of matter. CEBAF is powered by superconducting radiofrequency cavities, which are structures that enable CEBAF to impart energy to beams of electrons for experiments.

"The heart of the machine is these SRF cavities, and quite often, these will trip. When they trip, we'd like to know how to respond to those trips. The trick is understanding more about the trip: which cavity has tripped and what kind of fault it was," said Chris Tennant, a Jefferson Lab staff scientist in the Center for Advanced Studies of Accelerators.

Expert accelerator scientists review information on these faults and can use that to determine where the fault started and what type of fault it is, thus informing CEBAF operators on the best way to recover from the fault and mitigate future ones. However, that expert review takes time that operators don't have when experiments are underway.

In late 2019, Tennant and a team of CEBAF accelerator experts set out to build a machine learning system to perform that review in real-time.

They worked with several different groups to design and build from scratch a custom data acquisition system that pulls information on cavity performance from a digital low-level RF system installed on the newest sections of the particle accelerator, which include about one-fifth of the SRF cavities in CEBAF. The low-level RF system constantly measures the field in the SRF cavities and tweaks the signal for each one to ensure that they operate optimally.

When a cavity faults, the machine learning data acquisition system pulls 17 different signals for each cavity from the digital low-level RF system for analysis.

"We're leveraging information-rich data and turning it into actionable information," he said.

These same information-rich data are used by accelerator experts to help identify faulting cavities and causes. These past analyses were used to train the machine learning system prior to deployment.
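The article does not describe the model itself, but the workflow it outlines - labeled examples from past expert analyses, 17 signals per cavity, two questions to answer (which cavity, which fault type) - maps onto a conventional supervised-learning recipe. The sketch below is an assumption-laden illustration, not the lab's actual code; the feature extraction and classifier choice are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch only: train one classifier per question
# ("which cavity faulted first?" and "what fault type?") on features
# extracted from the 17 RF signals recorded per cavity at each fault.

def extract_features(waveforms: np.ndarray) -> np.ndarray:
    """waveforms: (n_events, n_signals, n_samples) -> simple summary stats."""
    return np.concatenate(
        [waveforms.mean(axis=2), waveforms.std(axis=2), waveforms.max(axis=2)],
        axis=1,
    )

rng = np.random.default_rng(1)
waveforms = rng.standard_normal((300, 17, 256))   # stand-in for archived fault events
cavity_labels = rng.integers(0, 8, size=300)      # which cavity tripped first
fault_labels = rng.integers(0, 5, size=300)       # expert-assigned fault type

X = extract_features(waveforms)
cavity_model = RandomForestClassifier(n_estimators=200).fit(X, cavity_labels)
fault_model = RandomForestClassifier(n_estimators=200).fit(X, fault_labels)
```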

The new system was installed and tested during CEBAF operations over an initial two-week period in early March 2020.

"For that two weeks, we had a few hundred faults that we were able to analyze, and we found that our machine learning models were accurate to 85% for which cavity faulted first and 78% in identifying the type of fault, so this is about as well as a single subject matter expert," Tennant explained.

This near-real-time feedback means that CEBAF operators can take immediate steps to mitigate problems that arise in the machine during experimental runs, hopefully preventing smaller problems from turning into bigger ones that reduce experiments' runtime.

"The idea is eventually, the subject matter experts won't need to spend all their time looking at the data themselves to identify faults," he said.

The next step for Tennant and his team is to analyze data from a second and longer test period that took place in late summer. If the system performed as well as the first test indicates, the team hopes to begin designs for extending the system to include older SRF cavities in CEBAF.

The foundation for this project was laid by work led by Anna Shabalina, a Jefferson Lab staff member and principal investigator on a proposal funded by Jefferson Lab's Laboratory Directed Research and Development program for fiscal year 2020. The project was later selected by DOE for a $1.35 million grant to leverage machine learning to revolutionize experimentation and operations at user facilities in the coming years.

"This was a proof-of-principle project. It was somewhat riskier, because several years ago, when this project was proposed, none of us on the team knew anything about machine learning. We just sort of jumped in," Tennant said. "So, sometimes supporting those higher-risk/higher-reward projects really pays off."

Credit: 
DOE/Thomas Jefferson National Accelerator Facility

Imminent sudden stratospheric warming to occur, bringing increased risk of snow over coming weeks

image: The stratospheric potential vorticity field on 10th February 2018. The Stratospheric Polar Vortex is about to split in two, and the weakening of the vortex was followed around two weeks later by a severe cold air outbreak over Europe known as the Beast from the East. Data from ERA-Interim reanalysis (Dee et al., 2011).

Image: 
University of Bristol

A new study led by researchers at the Universities of Bristol, Exeter, and Bath helps to shed light on the winter weather we may soon have in store following a dramatic meteorological event currently unfolding high above the North Pole.

Weather forecasting models are predicting with increasing confidence that a sudden stratospheric warming (SSW) event will take place today, 5 January 2021.

The stratosphere is the layer of the atmosphere from around 10-50 km above the Earth's surface. SSW events are some of the most extreme atmospheric phenomena, in which polar stratospheric temperatures can increase by up to 50°C over the course of a few days. Such events can bring very cold weather, which often results in snowstorms.

The infamous 2018 "Beast from the East" is a stark reminder of what an SSW can bring. The disturbance in the stratosphere can be transmitted downward, and if it reaches the Earth's surface, it can shift the jet stream, leading to unusually cold weather across Europe and northern Asia. The signal can take anywhere from a few days to several weeks to reach the surface.

The study, published in the Journal of Geophysical Research and funded by the Natural Environment Research Council (NERC), involved the analysis of 40 observed SSW events which occurred over the last 60 years. Researchers developed a novel method for tracking the signal of an SSW downward from its onset in the stratosphere to the surface.

Findings in the paper, "Tracking the stratosphere-to-surface impact of Sudden Stratospheric Warmings," suggest that split events tend to be associated with colder weather over northwest Europe and Siberia.

Lead author of the study, Dr Richard Hall, said there was an increased chance of extreme cold, and potentially snow, over the next week or two:

"While an extreme cold weather event is not a certainty, around two thirds of SSWs have a significant impact on surface weather. What's more, today's SSW is potentially the most dangerous kind, where the polar vortex splits into two smaller "child" vortices."

"The extreme cold weather that these polar vortex breakdowns bring is a stark reminder of how suddenly our weather can flip. Even with climate change warming our planet, these events will still occur, meaning we must be adaptable to an ever more extreme range of temperatures," said Dann Mitchell, Associate Professor of Atmospheric Science at the University of Bristol and co-author of the study.

"Our study quantifies for the first time the probabilities of when we might expect extreme surface weather following a sudden stratospheric warming (SSW) event. These vary widely, but importantly the impacts appear faster and stronger following events in which the stratospheric polar vortex splits in two, as is predicted in the currently unfolding event. Despite this advance many questions remain as to the mechanisms causing these dramatic events, and how they can influence the surface, and so this is an exciting and important area for future research," said Dr William Seviour, senior lecturer at the Department of Mathematics and Global Systems Institute, University of Exeter, and co-author of the study.

Credit: 
University of Bristol

Repeated ketamine infusions reduce PTSD symptom severity

image: Adriana Feder, MD, Associate Professor of Psychiatry at the Icahn School of Medicine at Mount Sinai and lead author of the study

Image: 
Mount Sinai Health System

Repeated intravenous (IV) ketamine infusions significantly reduce symptom severity in individuals with chronic post-traumatic stress disorder (PTSD) and the improvement is rapid and maintained for several weeks afterwards, according to a study conducted by researchers from the Icahn School of Medicine at Mount Sinai. The study, published January 5 in the American Journal of Psychiatry, is the first randomized, controlled trial of repeated ketamine administration for chronic PTSD and suggests this may be a promising treatment for PTSD patients.

"Our findings provide insight into the treatment efficacy of repeated ketamine administration for PTSD, an important next step in our quest to develop novel pharmacologic interventions for this chronic and disabling disorder, as a large number of individuals are not sufficiently helped by currently available treatments," says Adriana Feder, MD, Associate Professor of Psychiatry at the Icahn School of Medicine at Mount Sinai and lead author of the study. "The data suggests repeated IV ketamine is a promising treatment for people who suffer from PTSD and provides evidentiary support to warrant future studies to determine how we can maintain this rapid and robust response over time."

Prior to the current study, Mount Sinai researchers conducted the first proof-of-concept, randomized, controlled trial of a single dose of intravenous ketamine for PTSD, which showed significant and rapid PTSD symptom reduction 24 hours post-infusion. First approved by the U.S. Food and Drug Administration as an anesthetic agent in 1970, ketamine acts as an antagonist of the N-methyl-d-aspartate (NMDA) receptor, an ionotropic glutamate receptor in the brain. In contrast, widely used antidepressants target different neurotransmitters - serotonin, norepinephrine, and dopamine - and can take weeks to even months to work. These drugs are considered ineffective in at least one third of cases, and only partially effective in an additional third.

"The data presented in our current study not only replicates, but also builds on our initial findings about ketamine for PTSD, indicating that in addition to being rapid, ketamine's effect can be maintained over several weeks. PTSD is an extremely debilitating condition and we are pleased that our discovery may lead to a treatment option for so many who are in need of relief from their suffering," said Dennis S. Charney, MD, Anne and Joel Ehrenkranz Dean of the Icahn School of Medicine at Mount Sinai and President of Academic Affairs for the Mount Sinai Health System and senior author of the paper.

For the current study, participants were randomly assigned to receive six infusions of ketamine, administered three times per week over two consecutive weeks, compared to six infusions of the psychoactive placebo control midazolam (chosen because its pharmacokinetic parameters and nonspecific behavioral effects are similar to those of ketamine) administered and evaluated over the same schedule. Individuals in this study had severe and chronic PTSD from civilian or military trauma, with a median duration of 14 years, and nearly half of the sample was taking concomitant psychotropic medications. The primary traumas reported by participants included sexual assault or molestation, physical assault or abuse, witnessing violent assault or death, having survived or responded to the 9/11 attacks, and combat exposure. All study participants were assessed at baseline, at week 1 and week 2, as well as on each infusion day, by teams of trained study raters who administered the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) and the Montgomery-Asberg Depression Rating Scale (MADRS), standard rating scales for the assessment of PTSD and depression.

Significantly more participants in the ketamine group (67 percent) attained a reduction in symptoms of at least 30 percent from baseline at week two than those in the midazolam group (20 percent). Furthermore, ketamine infusions were associated with marked improvements across three of the four PTSD symptom clusters - intrusions, avoidance, and negative alterations in cognitions and mood. In the subsample of ketamine responders, improvement in PTSD symptoms was rapid, observed 24 hours after the first infusion, and was maintained for a median of 27.5 days following the primary outcome assessment day. In addition to PTSD symptom improvement, the ketamine group exhibited markedly greater reduction in comorbid depressive symptoms than the midazolam group, which is notable given the high comorbidity of depression in individuals with PTSD. Study findings further suggested that repeated ketamine infusions are safe and generally well-tolerated in individuals with chronic PTSD.
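The response criterion used in that comparison - a reduction of at least 30 percent in symptom score from baseline - is straightforward to state precisely; here is a minimal sketch with invented scores:

```python
# Minimal illustration of the trial's response criterion: a participant
# "responds" if their PTSD symptom score drops by at least 30% from
# baseline to the week-2 assessment. The scores below are invented.

def is_responder(baseline_score: float, week2_score: float) -> bool:
    return (baseline_score - week2_score) / baseline_score >= 0.30

print(is_responder(baseline_score=40.0, week2_score=26.0))  # True  (35% drop)
print(is_responder(baseline_score=40.0, week2_score=32.0))  # False (20% drop)
```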

"Future studies may include administering additional doses over time and examining repeated ketamine infusions combined with trauma-focused psychotherapy, to help us determine how we can maintain this robust response over the long term," added Dr. Feder. "We want people suffering with PTSD to know that hope is on the horizon and we are working diligently to collect the information that will help bring them the relief they so desperately need."

Drs. Charney and Feder are named co-inventors on an issued patent in the United States, and several issued patents outside the U.S., filed by the Icahn School of Medicine at Mount Sinai for the use of ketamine as a therapy for PTSD.

This work was funded by a NARSAD Independent Investigator Award from the Brain & Behavior Research Foundation (PI Dr. Feder), by a generous donation from Mr. Gerald Greenwald and Mrs. Glenda Greenwald, and by Mount Sinai Innovation Partners through the i3 Accelerator, a $10 million fund providing nascent Mount Sinai discoveries with the investment necessary to fast-track technology development to reach patients sooner. Additional funding for this study was provided by the Ehrenkranz Laboratory for Human Resilience, a component of the Depression and Anxiety Center for Discovery and Treatment at ISMMS.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Protecting the global food supply chain

image: UD assistant professor Kyle Davis examines rice in a field in the Himalayan foothills.

Image: 
Photo courtesy of Kyle Davis

As the world grows increasingly globalized, one of the ways that countries have come to rely on one another is through a more intricate and interconnected food supply chain. Food produced in one country is often consumed in another country -- with technological advances allowing food to be shipped between countries that are increasingly distant from one another.

This interconnectedness has its benefits. For instance, if the United States imports food from multiple countries and one of those countries abruptly stops exporting food to the United States, there are still other countries that can be relied on to supply food. But, as the coronavirus COVID-19 global pandemic has made abundantly clear, it also leaves the food supply chain -- all the steps involved in bringing food from farms to people's tables across the world -- exposed to potential shocks to the system.

A new study published in Nature Food led by the University of Delaware's Kyle Davis looked at how to ensure that food supply chains are still able to function under these types of environmental shocks and highlighted key areas where future research should be focused. Co-authors on the study include Shauna Downs, assistant professor at Rutgers University's School of Public Health, and Jessica A. Gephart, assistant professor in the Department of Environmental Science at American University.

Davis said the motivation behind the paper was to understand current knowledge on environmental disruptions in food supply chains and to investigate evidence that disruptions in one step of the food supply chain impact subsequent stages. The steps in the global food supply chain are described in the paper as food production, storage, processing, distribution and trade, retail, and consumption.

"Does a disruption in food production get passed through different steps and ultimately impact distribution and trade, all the way down to the consumers?" asked Davis, assistant professor in the Department of Geography and Spatial Sciences in UD's College of Earth, Ocean and Environment and the Department of Plant and Soil Sciences in UD's College of Agriculture and Natural Resources who is also a resident faculty member with UD's Data Science Institute. "If there's a shock to agriculture on the other side of the world, will you see the effects in your grocery store?"

The environmental disruptions covered in the paper include events like floods, droughts, and extreme heat, as well as other phenomena like natural hazards, pests, disease, algal blooms, and coral bleaching.

Davis said that this work is especially timely -- given the unprecedented effects that the COVID-19 pandemic has had on the entire food supply chain -- and highlights the importance of understanding how to make global food supply chains function properly under stress.

"COVID-19 has affected all steps in the supply chain simultaneously, from not having enough seasonal workers to harvest the crops to meat processing plants temporarily closing because workers get sick, to hoarding behaviors and runs on grocery stores," Davis said. "We've also seen many people losing their jobs, and as a result, they may not be able to purchase certain foods anymore."

Researchers have focused on understanding how temperature and precipitation affect staple crops at the production step in the supply chain, Davis said, but how that impacts the rest of the steps in the food supply chain has not been researched thoroughly. Because of this, we don't have a good grasp of how a suite of disruptions to a variety of food items ultimately impacts consumption, food security, and nutrition.

To address these gaps in knowledge, the researchers identified key areas for future research: 1) to understand the shape of a supply chain, meaning its relative number of farmers, distributors, retailers and consumers to identify possible vulnerabilities; 2) to evaluate how simultaneous shocks -- such as droughts in two different places -- impact the whole supply chain; and 3) to quantify the ability for substitutions to occur within supply chains, like switching cornmeal for flour if there is a wheat shortage.

Ultimately, Davis said this work can help policy makers and businesses make food systems more capable of predicting and absorbing unprecedented shocks.

"As climate change and other sudden global events like pandemics exercise greater influence on food systems," Davis said, "we will need to continue building resilience into our food supply chain so that we're able to absorb a disruption that may be bigger than what we've seen in the past but still maintain the function of the supply chain -- getting food from field to fork."

Credit: 
University of Delaware