Tech

Engineers put tens of thousands of artificial brain synapses on a single chip

MIT engineers have designed a "brain-on-a-chip," smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors -- silicon-based components that mimic the information-transmitting synapses in the human brain.

The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to "remember" stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.

Their results, published today in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices -- electronics that are based on a new type of circuit that processes information in a way that mimics the brain's neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today's supercomputers can handle.

"So far, artificial synapse networks exist as software. We're trying to build real neural network hardware for portable artificial intelligence systems," says Jeehwan Kim, associate professor of mechanical engineering at MIT. "Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time."

Wandering ions

Memristors, or memory resistors, are an essential element in neuromorphic computing. In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse -- the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.

A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.
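
To make the contrast concrete, here is a minimal toy model (a sketch for illustration only, not the MIT team's device) of a binary switch versus an idealized memristor whose programmable conductance lets a single element represent many values.

```python
import numpy as np

# Toy comparison: a binary switch vs. a multi-level "memristor" weight.
# This is an illustrative model only; real devices map voltage history
# to conductance through far more complex physics.

def binary_response(voltage, threshold=0.5):
    """A conventional switch: output is 0 or 1 depending on a threshold."""
    return 1.0 if voltage >= threshold else 0.0

def memristor_response(voltage, conductance):
    """An idealized memristor: output current scales with a stored
    (analog) conductance, so one device can represent many values."""
    return conductance * voltage  # Ohm's law with a programmable weight

# A single read voltage applied to devices programmed into different
# conductance states (e.g. 16 levels instead of 2).
levels = np.linspace(0.0, 1.0, 16)
currents = [memristor_response(0.8, g) for g in levels]
print(f"binary device:   {binary_response(0.8)}")
print(f"16-level device: {np.round(currents, 3)}")
```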

Like a brain synapse, a memristor would also be able to "remember" the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable -- a feat that normally involves multiple transistors and capacitors.

Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.

Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a "switching medium," or space between the electrodes. When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a "conduction channel" to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.

Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.

The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it's difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.

Borrowing from metallurgy

Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.

"Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium," Kim says.

Engineers typically use silver as the material for a memristor's positive electrode. Kim's team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.

The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.

"It acts as a sort of bridge, and stabilizes the silver-silicon interface," Kim says.

To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.

As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip, then modulated the conductance of each memristor in proportion to the brightness of its pixel.
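
The pixel-to-conductance mapping can be sketched in a few lines; the following is an illustrative toy model with made-up conductance ranges and a random stand-in image, not the authors' code.

```python
import numpy as np

# Illustrative sketch (not the authors' code): map each pixel of a
# grayscale image onto the conductance of one memristor in an array,
# then "read" the image back by applying a uniform read voltage.

def program_array(image, g_min=1e-6, g_max=1e-4):
    """Scale normalized pixel intensities [0, 1] into a conductance range (siemens)."""
    return g_min + image * (g_max - g_min)

def read_array(conductances, v_read=0.1):
    """Read currents are proportional to the stored conductances."""
    return conductances * v_read

# A hypothetical 8x8 grayscale patch standing in for one tile of the image.
rng = np.random.default_rng(0)
image = rng.random((8, 8))

g = program_array(image)
currents = read_array(g)

# Recovering the image from the read currents simply reverses the mapping.
recovered = (currents / 0.1 - 1e-6) / (1e-4 - 1e-6)
print("max reconstruction error:", np.abs(recovered - image).max())
```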

The chip reliably produced a crisp image of the shield and was able to "remember" the image and reproduce it many times over, unlike chips made of other materials.

The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT's Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.

"We're using artificial synapses to do real inference tests," Kim says. "We would like to develop this technology further to have larger-scale arrays to do image recognition tasks. And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud."

Credit: 
Massachusetts Institute of Technology

Ancient asteroid impacts created the ingredients of life on Earth and Mars

image: Single stage propellant gun used for the simulation of impact-induced reactions at NIMS.

Image: 
Yoshihiro Furukawa

A new study reveals that asteroid impact sites in the ocean may possess a crucial link in explaining the formation of the essential molecules for life. The study discovered the emergence of amino acids that serve as the building blocks for proteins - demonstrating the role of meteorites in bringing life's molecules to Earth, and potentially Mars.

There are two explanations for the origins of life's building molecules: extraterrestrial delivery, such as via meteorites; and endogenous formation. The presence of amino acids and other biomolecules in meteorites points to the former.

Researchers from Tohoku University, National Institute for Materials Science (NIMS), Center for High Pressure Science & Technology Advanced Research (HPSTAR), and Osaka University simulated the reactions involved when a meteorite crashes into the ocean. To do this, they investigated the reactions between carbon dioxide, nitrogen, water, and iron in a laboratory impact facility using a single stage propellant gun. Their simulation revealed the formation of amino acids such as glycine and alanine. These amino acids are direct constituents of proteins, which catalyze many biological reactions.

The team used carbon dioxide and nitrogen as the carbon and nitrogen sources because these gases are regarded as the two major components in the atmosphere on the Hadean Earth, which existed more than 4 billion years ago.

Corresponding author from Tohoku University, Yoshihiro Furukawa, explains, "Making organic molecules from reduced compounds like methane and ammonia is not difficult, but they are regarded as minor components in the atmosphere at that time." He adds, "The finding of amino acid formation from carbon dioxide and molecular nitrogen demonstrates the importance of making life's building blocks from these ubiquitous compounds."

The hypothesis that an ocean once existed on Mars also raises interesting avenues for exploration. Carbon dioxide and nitrogen are likely to have been the major constituent gases of the Martian atmosphere when the ocean existed. Therefore, impact-induced amino acid formation also provides a possible source of life's ingredients on ancient Mars.

Furukawa says, "Further investigations will reveal more about the role meteorites played in bringing more complex biomolecules to Earth and Mars."

Credit: 
Tohoku University

Researchers advance fuel cell technology

image: Professor Su Ha and PhD graduate Qusay Bkour used an inexpensive catalyst made from nickel and molybdenum nanoparticles to improve fuel cell technology.

Image: 
WSU

Washington State University researchers have made a key advance in solid oxide fuel cells (SOFCs) that could make the highly energy-efficient and low-polluting technology a more viable alternative to gasoline combustion engines for powering cars.

Led by PhD graduate Qusay Bkour and Professor Su Ha in the Gene and Linda Voiland School of Chemical Engineering and Bioengineering, the researchers have developed a unique and inexpensive nanoparticle catalyst that allows the fuel cell to convert logistic liquid fuels such as gasoline to electricity without stalling out during the electrochemical process. The research, featured in the journal Applied Catalysis B: Environmental, could result in highly efficient gasoline-powered cars that produce low emissions of carbon dioxide, the gas that contributes to global warming.

"People are very concerned about energy, the environment, and global warming," said Bkour. "I'm very excited because we can have a solution to the energy problem that also reduces the emissions that cause global warming."

Fuel cells offer a clean and highly efficient way to convert the chemical energy in fuels directly into electrical energy. They are similar to batteries in that they have an anode, cathode and electrolyte. However, unlike batteries, which only deliver electricity they have previously stored, fuel cells can deliver a continuous flow of electricity as long as they have fuel.

Because they run on electrochemical reactions instead of making a piston do mechanical work, fuel cells can be more efficient than the combustion engines in our cars. When hydrogen is used as fuel, their only waste product is water.

Despite the great promise of hydrogen fuel cell technology, however, storing high-pressure hydrogen gas in fuel tanks creates significant economic and safety challenges. There is little hydrogen gas infrastructure in the U.S., and the technology's market penetration is very low.

"We don't have readily available fuel cells that can run on a logistic liquid fuel such as gasoline," Bkour said.

Unlike pure hydrogen fuel cells, the developed SOFC technology can run on a wide variety of liquid fuels, such as gasoline, diesel, or even bio-based diesel fuels, and doesn't require the use of expensive metals in its catalyst. Cars powered by gasoline SOFCs could use existing gas stations.

Fuel cells that run on gasoline, however, tend to build up carbon within the cell, stopping the conversion reaction. Other chemicals that are common in liquid fuels, such as sulfur, also stop the reactions and deactivate the fuel cell.

"The carbon-induced catalyst deactivation is one of the main problems associated with the catalytic reforming of liquid hydrocarbons," Bkour said.

For their SOFC, the WSU team used an inexpensive catalyst made from nickel and then added nanoparticles of the element molybdenum. In tests of the molybdenum-doped catalyst, the fuel cell was able to run for 24 hours straight without failing. The system was resistant to carbon build-up and sulfur poisoning. In contrast, a plain nickel-based catalyst failed within an hour.

Liquid fuel cell technology has tremendous opportunities in various power-hungry markets, including transportation applications. The researchers are now building bridges with the automotive industry to develop fuel cells that can run under real-world, longer-lasting conditions.

Credit: 
Washington State University

Recycling old genes to get new traits -- How social behavior evolves in bees

image: Bees (Megalopta genalis) become queens by subjugating their daughters, who go out to forage for food, while the queen stays at home.

Image: 
Callum Kingwell

A team working at the Smithsonian Tropical Research Institute (STRI) found evidence to support a long-debated mode of evolution, revealing how evolution captures environmental variation to teach old genes new tricks: Sweat bees switch from solitary to social behavior, repurposing ancient sets of genes that originally evolved to regulate the development of other traits.

Honeybees, ants and other social insects are known for striking differences between workers and queens that encompass their behavior, reproduction and physical stature: from brains to brawn. These differences arise from environmental influences such as changes in diet or social interactions. But how do responses to the environment contribute to evolution? This question was posed by psychologists and biologists over a century ago and then generally neglected because it is very difficult to test.

More recently it was given serious attention by an emeritus staff scientist at the Smithsonian Tropical Research Institute (STRI), Mary Jane West-Eberhard, in her seminal book Developmental Plasticity and Evolution (2003). West-Eberhard's work on the subject has led to a reexamination of the role of the environment in evolution.

Even though most of the 20,000 species of bees live their entire lives alone, highly sophisticated social systems have evolved multiple times. It has long been suspected that response to environmental cues may have been the initial spark that ignited the evolution of complex social lifestyles, but evidence was lacking until now.

This team based their work on Barro Colorado Island, Panama. A nocturnal sweat bee, Megalopta genalis, lives there, and its social biology has been studied intensively, because females seem to teeter on the brink of sociality. An individual bee can either live alone in a hollow stick or become a queen to lord over one, or sometimes a few, of her oppressed daughters, who go to work for her.

STRI staff scientist Bill Wcislo explained why this species can provide unique insights. "Most species are either social, or they are solitary," Wcislo said. "And for species that do vary, if you want to compare social and solitary within species, the different social forms usually live in different environments at different altitudes or latitudes, so they have different diets or other environmental factors. But because there's a single species on Barro Colorado Island that can do either the social or solitary thing, we can compare them in a place where all other things are equal: same season, same climate, same food plants, in short, the same external environment."

The team compared the sets of M. genalis genes involved in diverse processes, including those that differentiate social and solitary individuals. They found that the genes involved in regulating social behavior are largely derived from tinkering with ancient developmental processes that are central to the normal development of young and to sex determination. The way a bee responds to different environmental conditions changes the way its genes are expressed and gives evolution a chance to act on individuals that are genetically the same but behaving differently.

"If the first step of evolving a new trait is always the appearance of a new genetic variant, then we would expect to find the genes related to social behavior in this species to be limited in scope and recently evolved," said Karen Kapheim, assistant professor at Utah State University and STRI Research Associate, who led the study. "We found just the opposite."

New traits--even highly complex lifestyles like social behavior--can evolve through the recycling of old genes. This helps to solve a long-standing debate among evolutionary biologists: Is the first step of evolution always a new genetic variant resulting from mutations? Or can environmental variance lead the way, with the genetics catching up? The answer to the latter question appears to be yes.

Credit: 
Smithsonian Tropical Research Institute

Titanium oxide-based hybrid materials promising for detoxifying dyes

image: Synthesis of NPSt in the presence of TiO2.

Image: 
Kazan Federal University

Photoactive materials have become extremely popular in a large variety of applications in the fields of photocatalytic degradation of pollutants, water splitting, organic synthesis, photoreduction of carbon dioxide, and others. Elza Sultanova, co-author of the paper, is engaged in researching the catalytic properties of photoactive materials based on macrocycles. The project is funded by the Russian Science Foundation and headed by Senior Research Associate Alexander Ovsyannikov. Employees of the Department of Organic Chemistry of Kazan Federal University, Professor Igor Antipin and Associate Professor Vladimir Burilov, also take part in the research.

Earlier in the course of this research, hollow polymer nanocontainers were obtained on the basis of a macrocyclic compound -- a viologen cavitand -- copolymerized with styrene. These nanocapsules are able to thermo-responsively encapsulate substrates of various structures and stabilize various metal nanoparticles on their surface, thus forming catalytically active composites. All stages of the study were published in highly rated international journals (Catal. Sci. Technol., ChemPlusChem, Chem. Commun.). In the present paper, nanosized composites were obtained for the first time, consisting of titanium oxide encapsulated in the cavity of a polymer matrix built from covalently bound viologen cavitands, together with nanoparticles of noble metals (palladium, platinum, gold) stabilized on the surface of the polymer matrix. The structure of the resulting nanocomposites was characterized by a set of physicochemical methods.

The main point of studying these hybrid composites is the possibility of using them in photocatalysis with the sun as a light source. Titanium oxide itself, although it is the most popular photocatalyst of our time, makes limited use of sunlight, since sunlight contains only 5-8% ultraviolet radiation.

The use of the hybrid composites as photocatalysts was demonstrated on a model reaction, the photodegradation of a dye (methylene blue) in water under direct autumn sunlight at a temperature of minus 2 °C. This model reaction is convenient because it is easily monitored by UV spectroscopy, as well as visually (the color changes from blue to colorless). Simply encapsulating titanium oxide in the polymer capsule already increases the photodegradation efficiency 3.5-fold under sunlight compared with the pure oxide. When noble metals are added to the composite, the efficiency of photocatalysis rises to 94% thanks to the synergistic effect of the individual components. These results are a promising basis for using the composites to photodegrade toxic dyes to carbon dioxide and water.

The group has developed stable organo-inorganic hybrid nanocomposites, which include a viologen-containing polymer matrix, encapsulated titanium oxide and stabilized metal nanoparticles on the surface. These composites can be used as effective photocatalysts using sunlight in the decomposition of toxic dyes to safe compounds that do not require any further chemical or physical processing.

Researchers plan to use the obtained photocatalyst based on a viologen-containing polymer and titanium oxide for the photogeneration of hydrogen from water. Work will also continue on the effectiveness of the hybrid composites as catalysts for the photodegradation of various pollutants (organic dyes and phenols) using sunlight as the energy source.

Credit: 
Kazan Federal University

ANCA-associated vasculitis: The ADVOCATE Study

ANCA-associated vasculitis (AAV) is a systemic disease involving the formation of special autoantibodies (so-called anti-neutrophil cytoplasmic antibodies/ANCA) and vascular inflammation. The term covers several diseases that can involve the kidneys, lungs, upper respiratory tract, heart, skin and nervous system; potentially life-threatening courses of disease are also possible.

Treatment is with immunosuppressive therapy, which can lead to infections, among other known side effects. Modern immunosuppressants (e.g. rituximab, an anti-CD20 monoclonal antibody) do not block the entire immune system as corticoids, for example, do, but only parts of it, so other pathways of the immune system continue to work.

A role in AAV pathogenesis is also played by the complement system, especially complement factor C5a. Neutrophil leukocytes carry immunostimulatory C5a receptors (C5aR). Avacopan (formerly CCX168) is an orally administered, selective C5aR antagonist that inhibits C5a-induced activation of immune cells and thus AAV - as already demonstrated in two clinical Phase II trials.

The Phase III ADVOCATE trial evaluated the safety and efficacy of avacopan - also with regard to whether lower doses of glucocorticoids are needed with avacopan. Patients were randomized 1:1 and received, over a total of 52 weeks, either the glucocorticoid prednisone (n=164) or avacopan (n=166), in combination with a) cyclophosphamide (oral or intravenous) followed by azathioprine or b) four infusions of rituximab (RTX). Patients were stratified on the basis of treatment (RTX i.v. or orally administered cyclophosphamide), the specific type of ANCA, and newly diagnosed versus relapsing AAV. Response to treatment (remission, the primary endpoint) was defined as a BVAS of 0 (disease activity according to the "Birmingham Vasculitis Activity Score") plus complete prednisone tapering at least four weeks before week 26. Sustained remission was present if there was no relapse from week 26 to 52.

At Week 26, 72.3% of subjects in the avacopan group had achieved remission, compared with 70.1% in the prednisone group.

"AAV must be treated with immunosuppressants. However, the side effects of these substances can be severe - especially at higher corticosteroid doses", explains Professor David Jayne, Cambridge. "With avacopan, a drug that could be available in the future a reduction in corticoid use and to more sustained remission is achieved- in the trial, at least 10% more patients were still in remission after one year. The benefit of avacopan in patients with renal involvement was remarkable and it was well-tolerated."

"Research is developing increasingly well-targeted immunosuppressive or immunomodulatory drugs for many areas of medicine", adds Maria Jose Soler Romeo, Chair of the Paper Selection Committee of the 2020 ERA-EDTA Congress. "This is so important, because potentially life-threatening diseases such as AAV often require treatments that themselves pose risks - new immunosuppressive, specifically targeting substances are therefore urgently needed in order to improve therapies further and avoid the high dose steroids side effects."

Credit: 
ERA – European Renal Association

Systemic lupus erythematosus (SLE)

Systemic lupus erythematosus (SLE) is a chronic, inflammatory, autoimmune disease, in which damage is caused to multiple organs and tissues by the formation and deposition of immune complexes (antigen-antibody complexes). The kidneys are also affected in about half of the patients when immune complexes accumulate in the glomeruli, resulting in a glomerulonephritis called lupus nephritis (LN). The severity of LN varies considerably, ranging from unnoticed, to acute and/or chronic terminal kidney failure despite intensive therapy.

LN is a determining factor for SLE outcomes. Patients with LN have a significantly higher mortality risk than patients with SLE without kidney involvement. If remission is achieved therapeutically in cases of LN, the 10-year survival rate increases significantly. Successful therapy is therefore of great importance.

Patients with LN are treated with immunosuppressive and anti-inflammatory therapies. The standard medications used include corticosteroids, cyclophosphamide, calcineurin inhibitors, azathioprine and mycophenolic acid. Another therapeutic approach is to neutralize "soluble human B lymphocyte stimulator protein" (BLyS), which is overexpressed in patients with SLE and which stimulates the growth of B lymphocytes. Belimumab is a human monoclonal antibody that binds to BLyS and blocks its activity on B cells.

Data from the BLISS-LN study [1] - a randomised, double-blind, placebo-controlled trial that included 448 adult patients with active LN and presented today as a "Late Breaking Clinical Trial" at the ERA-EDTA Congress - showed that after 2 years of treatment the addition of belimumab to standard LN therapy resulted in a significantly better primary renal response at 104 weeks (43% vs 32% with placebo, p=0.0311; composite primary endpoint defined as urine protein creatinine ratio [uPCR] ≤0.7; estimated glomerular filtration rate [eGFR] no more than 20% below pre-flare value or ≥60 ml/min/1.73 m²; no rescue therapy). The addition of belimumab to standard therapy also resulted in significantly more complete renal responses after 2 years: 30% vs 19.7% with placebo (p=0.0167). During the study, patients treated with belimumab had a 50% lower risk vs placebo of experiencing renal events that are associated with increased risk of poor renal prognosis (p=0.0014).
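
For illustration, the composite endpoint described above can be written as a simple check. This is a sketch based only on the criteria as reported here; the variable names are invented, and the trial's exact operational definitions may differ.

```python
def primary_renal_response(upcr, egfr, egfr_preflare, rescue_therapy):
    """Illustrative check of the composite criteria as described above.

    upcr            urine protein/creatinine ratio
    egfr            estimated GFR (ml/min/1.73 m^2)
    egfr_preflare   eGFR before the renal flare
    rescue_therapy  True if rescue therapy was required
    """
    proteinuria_ok = upcr <= 0.7
    egfr_ok = (egfr >= 0.8 * egfr_preflare) or (egfr >= 60)
    return proteinuria_ok and egfr_ok and not rescue_therapy

# Hypothetical example values, for illustration only.
print(primary_renal_response(upcr=0.5, egfr=72, egfr_preflare=85, rescue_therapy=False))  # True
print(primary_renal_response(upcr=1.2, egfr=72, egfr_preflare=85, rescue_therapy=False))  # False
```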

Two different standard LN therapy regimens were used: 118 patients received cyclophosphamide-based induction therapy followed by azathioprine for maintenance of remission, and 328 patients received mycophenolate mofetil (MMF) for both induction and maintenance of remission. Benefits of belimumab were demonstrated on the background of both standard therapies, although the treatment effect on renal response was smaller in patients who received induction with cyclophosphamide followed by maintenance with azathioprine. The reduction in renal events associated with increased risk of poor renal prognosis was demonstrated on the background of both standard therapies.

"MMF is already used in many patients. It has been shown to be equivalent to cyclophosphamide in the induction therapy of LN, and superior to azathioprine in the maintenance phase. Adding belimumab can further improve the treatment results", explains Dr Brad Rovin, from the Ohio State University, Division of Nephrology, Columbus, United States, who presented the study results at the ERA-EDTA congress.

The study showed no safety signals; the incidence of side effects was similar in both groups.

Credit: 
ERA – European Renal Association

Lightning fast algorithms can lighten the load of 3D hologram generation

video: (left) Different images at depths (a) and (b) (see right) show how the distribution of light over space forms a truly 3D image. (right) Schematic of holography setup. The calculated hologram is displayed on a spatial light modulator while laser light is directed to reflect off its surface, interfere with the original beam and form a 3D image at the camera.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have developed a new way of calculating simple holograms for heads-up displays (HUDs) and near-eye displays (NEDs). The method is up to 56 times faster than conventional algorithms and does not require power-hungry graphics processing units (GPUs), running on normal computing cores like those found in PCs. This opens the way to developing compact, power-efficient, next-gen augmented reality devices, including 3D navigation on car windshields and eyewear.

The term hologram may still have a sci-fi ring to it, but holography, the science of making records of light in 3D, is used everywhere, from microscopy and fraud prevention on banknotes to state-of-the-art data storage. Everywhere, that is, except for its most obvious offering: truly 3D displays. The deployment of truly 3D displays that don't need special glasses is yet to become widespread. Recent advances have seen virtual reality (VR) technologies make their way into the market, but the vast majority rely on optical tricks that convince the human eye to see things in 3D. This is not always feasible and limits their scope.

One of the reasons for this is that generating the hologram of arbitrary 3D objects is a computationally heavy exercise. This makes every calculation slow and power-hungry, a serious limitation when you want to display large 3D images that change in real-time. The vast majority require specialized hardware like graphics processing units (GPUs), the energy-guzzling chips that power modern gaming. This severely limits where 3D displays can be deployed.

Thus, a team led by Assistant Professor Takashi Nishitsuji looked at how holograms were calculated. They realized that not all applications needed a full rendering of 3D polygons. By solely focusing on drawing the edges around 3D objects, they succeeded in significantly reducing the computational load of hologram calculations. In particular, they could avoid using fast Fourier transforms (FFTs), the intensive math routines powering holograms of full polygons. The team combined simulation data with real experiments by displaying their holograms on a spatial light modulator (SLM) and illuminating them with laser light to produce a real 3D image. At high resolution, they found that their method could calculate holograms up to 56 times faster, and that the images compared favorably to those made using slower, conventional methods. Importantly, the team only used a normal PC computing core with no standalone graphics processing unit, making the whole process significantly less resource hungry.
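
As a rough illustration of the general idea (building a hologram from a sparse set of edge points rather than FFT-rendering filled polygons), the sketch below superposes Fresnel-approximated point-source contributions from points sampled along the edge of a square. It is a generic point-based method with assumed parameters (wavelength, pixel pitch, object distance), not the team's published algorithm.

```python
import numpy as np

# Illustrative point-source hologram calculation (not the team's algorithm):
# instead of FFT-based rendering of filled polygons, superpose the phase
# contributions of a small set of object points sampled along an edge.

wavelength = 532e-9          # green laser, metres (assumed)
k = 2 * np.pi / wavelength
pitch = 8e-6                 # SLM pixel pitch, metres (assumed)
nx = ny = 256                # hologram resolution

# Hologram-plane coordinates.
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# Hypothetical object: points sampled along the edge of a 2 mm square, 0.2 m away.
t = np.linspace(0, 1, 50, endpoint=False)
side = 2e-3
edge_x = np.concatenate([t, np.ones_like(t), 1 - t, np.zeros_like(t)]) * side - side / 2
edge_y = np.concatenate([np.zeros_like(t), t, np.ones_like(t), 1 - t]) * side - side / 2
z_obj = 0.2

# Superpose Fresnel-approximated spherical waves from each edge point.
field = np.zeros((ny, nx), dtype=complex)
for px, py in zip(edge_x, edge_y):
    r = ((X - px) ** 2 + (Y - py) ** 2) / (2 * z_obj)
    field += np.exp(1j * k * r)

# Phase-only hologram suitable for display on a spatial light modulator.
hologram_phase = np.angle(field)
print(hologram_phase.shape, hologram_phase.min(), hologram_phase.max())
```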

Faster calculations on simpler cores means lighter, more compact, power-efficient devices that can be used in a wider range of settings. The team have their sights set on heads-up displays (HUDs) on car windshields for navigation, and even augmented reality eyewear to relay instructions on hands-on technical procedures, both exciting prospects for the not too distant future.

Credit: 
Tokyo Metropolitan University

Environmental noise changes evolutionary cooperation between cellular components, model shows

Cells are massive factories containing a multitude of substations, each dedicated to a specific task, all working to keep the overarching organism alive. Until now, researchers have questioned how such diverse components can evolve in tandem -- especially when each component can evolve in a variety of ways.

Two researchers based in Tokyo, Japan, have developed a statistical physics model to demonstrate how such evolution is possible. The results were published on May 26 in Physical Review Letters.

Their work is based on the idea that the potential evolutionary trajectories of each cellular component are, in principle, high dimensional, but that the function of each component actually evolves in a way that makes the cell as a whole the most fit. The result is low dimensionality, in which each piece can be affected by the other pieces as a consequence of the evolution.

"How can such complex systems adapt to environment and evolve by somehow controlling such large degrees of freedom?" said Ayaka Sakata, an associate professor at the Institute of Statistical Mathematics. "Recent experiments have suggested that physical changes due to adaptation and evolution are highly constrained in a low-dimensional subspace. How does a drastic dimension reduction emerge?"

Sakata and co-author Kunihiko Kaneko, a professor of theoretical biology at the University of Tokyo, used a spin-glass model to understand how each piece of the puzzle evolves individually to improve the system as a whole. The model describes how an ordered system -- in which the pattern repeats in a stable fashion -- can transition to a disordered system -- in which the connections are stable but random -- as a material changes from nonmagnetic to magnetic. The spins, an intrinsic property of electrons in such materials, become aligned under certain conditions, a phenomenon that can be mapped onto biology, with the activity of genes taking the place of the spins.

"By formulating the problem in terms of the statistical physics of a spin-glass model, we demonstrate that the dimensional reduction emerges through the evolution of robustness to noise," Sakata said.

The noise is the unpredictable variation in the environment that can cause changes to the cellular components themselves. Computer simulations of evolution under high noise led to random, disorganized changes, while low noise led to too much variability in the components. Evolution under a moderate level of noise led to low dimensionality, in which the variety of the components is restricted, a characteristic considered to be a result of evolution that is robust to this noise level.
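
To give a flavor of this kind of setup (and only that), here is a heavily simplified toy in which the couplings of a small spin system are evolved under different levels of environmental noise while selecting for a target spin pattern. Every parameter is invented for illustration, and the sketch does not reproduce the paper's model or its dimensional-reduction analysis.

```python
import numpy as np

# Toy illustration only: evolve the coupling matrix J of a small spin system
# under environmental noise, selecting for a target spin ("gene activity")
# pattern. All parameters are invented; this is not the authors' model.

rng = np.random.default_rng(1)
n = 20                              # number of spins, standing in for genes
target = rng.choice([-1, 1], n)     # hypothetical fittest activity pattern

def relax(J, noise, steps=200):
    """Relax spins by asynchronous updates with noisy local fields."""
    s = rng.choice([-1, 1], n)
    for _ in range(steps):
        i = rng.integers(n)
        field = J[i] @ s + noise * rng.normal()
        s[i] = 1 if field > 0 else -1
    return s

def fitness(J, noise, trials=5):
    """Average overlap of the relaxed state with the target pattern."""
    return np.mean([relax(J, noise) @ target / n for _ in range(trials)])

def evolve(noise, generations=30, pop=10):
    """Mutation-selection loop over a population of coupling matrices."""
    population = [rng.normal(0, 0.1, (n, n)) for _ in range(pop)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda J: fitness(J, noise), reverse=True)
        parents = ranked[: pop // 2]
        population = parents + [p + rng.normal(0, 0.02, (n, n)) for p in parents]
    return max(fitness(J, noise) for J in population)

for sigma in (0.0, 0.5, 2.0):       # low, moderate and high environmental noise
    print(f"noise = {sigma}: best fitness ~ {evolve(sigma):.2f}")
```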

"Although the present statistical physics model is highly simplified, it gives a theoretical basis for dimension reduction in biological systems, in which robustness to noise is also essential," Sakata said. "In fact, the present model can be interpreted as the evolution of protein to have a certain function."

Credit: 
Research Organization of Information and Systems

Protecting the kidneys from failing

image: MAFB immunohistochemistry: normal control (left), primary FSGS (right).
MAFB expression in podocytes is decreased in human primary FSGS.

Image: 
University of Tsukuba

Tsukuba, Japan - Focal segmental glomerulosclerosis (FSGS) is a common cause of steroid-resistant nephrotic syndrome, a type of kidney disease that is accompanied by massive urinary protein loss and that may progress to end-stage renal disease. In a new study, researchers from the University of Tsukuba uncovered a new protective role of the protein MafB in FSGS.

The glomeruli are the blood-filtering units of the kidneys, which ensure that only fluid from the blood and small molecules (but not larger molecules, like proteins) are excreted in the urine. Specialized cells, called podocytes, play a critical role in maintaining this filtration barrier. FSGS is a disease affecting the podocytes, and results in a thickening and functional impairment of the glomeruli that may also progress to end-stage renal disease. The protein MafB has been shown to be important for the development of podocytes, but the exact role it plays in podocyte maintenance is not well established.

"Focal segmental glomerulosclerosis is one of the most common cause of nephrotic syndrome in adults," says corresponding author of the study Professor Satoru Takahashi. "The challenge that this disease poses is that it is often resistant to steroids, which are the conventional treatment for nephrotic syndrome. We wanted to investigate the potential role of MafB in the development and therapy of focal segmental glomerulosclerosis."

To achieve their goal, the researchers first investigated kidney biopsies of FSGS patients and found that the amount of MafB in podocytes was significantly decreased, suggesting that MafB plays a role in the development of FSGS. To understand the molecular mechanism of MafB function in the kidneys, the researchers turned to a mouse model, in which MafB expression was specifically knocked out in podocytes. They found that these mice developed nephrotic syndrome and FSGS. By investigating the expression of almost 40,000 genes via so-called RNA-sequencing, the researchers found that MafB-deficient mice produced reduced amounts of other proteins that are known to be important for podocyte function, demonstrating a critical molecular signaling pathway that ensures proper podocyte and glomerular barrier function.

Could MafB be used as a novel therapeutic option for FSGS? To address this question, the researchers administered adriamycin, an agent known to cause FSGS, to mice that were designed to produce significantly higher levels of MafB in podocytes than normal mice. Kidney damage and nephrotic syndrome were less pronounced in mice over-producing MafB, suggesting a protective role of MafB in FSGS. The researchers then made use of all-trans retinoic acid (atRA), an agent known to increase MafB expression. When they injected atRA into adriamycin-treated normal mice, they found normal levels of MafB in podocytes and a significantly reduced extent of FSGS. To further show that atRA prevents FSGS by acting on MafB, the researchers showed that atRA failed to inhibit FSGS in mice lacking MafB.

"These are striking results that show how MafB plays a central role in the pathogenesis of focal segmental glomerulosclerosis," says Professor Takahashi. "Our findings provide new insights into a potential novel therapeutic target for this disease."

Credit: 
University of Tsukuba

Telephone interventions could be used to reduce symptoms of cancer

Telephone interventions could be used to successfully treat symptoms of cancer such as fatigue, depression and anxiety, new research in the Cochrane Library reports. This could help patients receive the care they need during the current Covid-19 pandemic, when face-to-face access to medical professionals is limited.

During this unique study researchers from the University of Surrey investigated the effectiveness of telephone interventions by medical professionals offering help and support in treating symptoms of cancer. People with cancer often experience a variety of symptoms such as depression, anxiety, sexually related issues and fatigue caused by the disease and its treatment. If not properly treated these can lead to additional problems including difficulties in carrying out everyday tasks, poor sleep, and poor quality of life.

Reviewing 32 previous studies in the field with a total of 6,250 participants, researchers found that telephone interventions, usually undertaken by nurses (on average three to four calls per intervention), have the potential to reduce symptoms of depression, feelings of anxiety, fatigue and emotional distress. Evidence of the usefulness of telephone interventions for other symptoms, such as uncertainty, pain, sexually related symptoms, dyspnoea, and general symptom experience, was limited - mainly due to few studies being conducted in these areas.

Professor Emma Ream, Director of Health Sciences Research at the University of Surrey, said: "Due to increasing pressures on cancer services in the NHS and the current disruption to services due to Covid-19, it is important that we explore different ways to deliver care to those who need it.

"Telephone interventions delivered by healthcare professionals are one way to do this. Offering care to patients in their own homes is convenient for them, and can make them more comfortable and possibly more open about their feelings when speaking to professionals. They ultimately will reduce foot traffic in hospitals which is very important at the moment in reducing risks of contracting Covid-19 virus. More research is needed to confirm the effectiveness of such interventions."

Credit: 
University of Surrey

Microglia in the olfactory bulb have a nose for protecting the brain from infection

image: When virus (labeled in green) enters the nasal passages, its spread is abruptly halted just before entering the CNS (blue oval structures at the top of the image)

Image: 
Image courtesy of McGavern lab

Researchers at the National Institute of Neurological Disorders and Stroke (NINDS), a part of the National Institutes of Health, have identified a specific, front-line defense that limits viral infection to the olfactory bulb and protects the olfactory bulb's neurons from damage due to the infection. Neurons in the nose respond to inhaled odors and send this information to a region of the brain referred to as the olfactory bulb. Although the location of nasal neurons and their exposure to the outside environment make them an easy target for infection by airborne viruses, viral respiratory infections rarely make their way from the olfactory bulb to the rest of the brain, where they could cause potentially fatal encephalitis. The study was published in Science Immunology.

Taking advantage of special viruses that can be tracked with fluorescent microscopy, the researchers led by Dorian McGavern, Ph.D., senior investigator at NINDS, found that a viral infection that started in the nose was halted right before it could spread from the olfactory bulb to the rest of the central nervous system.

"Airborne viruses challenge our immune system all the time, but rarely do we see viral infections leading to neurological conditions," said Dr. McGavern. "This means that the immune system within this area has to be remarkably good at protecting the brain."

Additional experiments showed that microglia, immune cells within the central nervous system, took on an underappreciated role of helping the immune system recognize the virus and did so in a way that limited the damage to neurons themselves. This sparing of neurons is critical, because unlike cells in most other tissues, most neuronal populations do not come back.

Because of this, the central nervous system has evolved to include several defense mechanisms designed to keep pathogens out. However, when airborne viruses are inhaled, they travel through the nasal passages and interact with a tissue called the olfactory epithelium, which is responsible for our sense of smell. Neurons at the edge of the olfactory system extend small projections through the bone lining the nasal cavity. These projections enter the brain, giving it access to odors present in the air. Neurons in the olfactory epithelium also offer an easy way for viruses to bypass traditional central nervous system barriers by providing a direct pathway to the brain.

"If a virus infects the processes of neurons that dangle within the airway, there is a chance for this virus to enter the brain, and ultimately cause encephalitis or meningitis," said Dr. McGavern. "We are interested in understanding immune responses that develop at the interface between nasal olfactory neurons, which end in the olfactory bulb, and the rest of the brain."

Dr. McGavern's team was able to show that CD8 T cells, which are part of the immune system responsible for controlling viruses, are very important in protecting the brain after infection of nasal tissue. Using advanced microscopy, his group watched in real time how CD8 T cells protected the brain from a nasal virus infection.

Interestingly, the CD8 T cells did not appear to interact directly with neurons, the predominately infected cell population. They instead engaged microglia, which are central nervous system immune cells that act a bit like garbage collectors by clearing cellular debris and dead cell material. When a viral infection occurs, the microglia appear to take up virus material from the surrounding environment and present it to the immune system as though they had become infected.

In this way, infected olfactory neurons can "hand off" virus particles to microglia, which were then detected by the T cells. The T cells then respond by releasing antiviral molecules that clear the virus from neurons in a way that does not kill the cells. Because microglia are a renewable cell type, this type of interaction makes sense from an evolutionary standpoint.

"The immune system has developed strategies to favor the preservation of neurons at all costs," said Dr. McGavern. "Here, we show that microglia can 'take the blow' from neurons by engaging T cells, which then allows the antiviral program to play out."

Considerable attention has been paid to respiratory viral infections of late due to the current COVID-19 pandemic. Dr. McGavern noted that, while that virus was not studied in these experiments, some of the symptoms it produces suggest that the same mechanism described here could be in play.

"One of the interesting symptoms associated with infection by novel coronavirus is that many people lose their sense of smell and taste. This suggests that the virus is not only a respiratory pathogen, but likely targets or disrupts olfactory sensory neurons as well."

It is important to note that widespread infection of the olfactory sensory neurons, whether by the novel coronavirus, the virus used in this study, or any other similar virus, will likely disrupt our sense of smell. However, unlike other neurons in the central nervous system, these sensory neurons that begin in the nose and end in the brain are capable of regenerating after an infection is cleared.

"The immune response we describe does not protect olfactory sensory neurons nor the sense of smell," explained Dr. McGavern. "This is not necessarily a long-term issue, because those sensory neurons can be replaced once the virus is dealt with. What is critical is to protect the brain and central nervous system from encephalitis or meningitis--our sense of smell can often be repaired over time."

Dr. McGavern continued by saying that given the importance of microglia in stimulating the antiviral response, factors that can lead to their depletion or loss of function could increase susceptibility to central nervous system infection.

Credit: 
NIH/National Institute of Neurological Disorders and Stroke

Research shows promising advances toward lower-cost, durable smart window technology

video: Researchers at the University of Colorado Boulder have developed an improved method for controlling smart tinting on windows that could make them cheaper, more effective and more durable than current options on the market. Smart window technology allows users to adjust the amount of sunlight and heat entering through home or office windows without blocking views. Tinting allows for more natural light through home windows while still maintaining privacy and has positive implications for energy reduction.

Image: 
Michael McGehee

Researchers at the University of Colorado Boulder have developed an improved method for controlling smart tinting on windows that could make them cheaper, more effective and more durable than current options on the market. The research, led by Professor Mike McGehee in the College of Engineering and Applied Science, is described in a new paper this week in Joule and uses a reversible metal electrodeposition process that is different from the current industry standards.

"What we are doing is building an electrochemical cell. We have a transparent electrode and an electrode with metal ions. By switching the voltage, the thin plate metal blocks the light," he said. "It's not at all how other people are achieving the same effect."

The paper explains in detail how metal can be electroplated onto a transparent electrode to block light and then stripped to make the window transparent again by manipulating the voltage. It specifically explores how various electrolytes can be used with different supporting anions to achieve the desired results.

McGehee said smart window technology allows users to adjust the amount of sunlight and heat entering through home or office windows without blocking views. Tinting allows for more natural light through home windows while still maintaining privacy and has positive implications for energy reduction and HVAC control in the home or office. Despite the appeal, dynamic windows have yet to achieve extensive commercialization because of many of the problems this paper addresses.

For example, this new process ultimately results in a more desirable neutral color of glass than other technologies and allows for any transparency adjustment, all the way from 80 percent tinted down to 0 percent, or fully transparent, whereas many of the windows on the market can only provide up to 70 percent tinting. This transition can also be done quickly, with a 60 percent change in contrast happening in less than 3 minutes.

The final product is also likely to be less expensive to create than existing technologies. McGehee said potential cost savings were hard to gauge but producing windows with this technology doesn't require large special tools and has a high throughput - meaning the glass can be manufactured rapidly.

PhD candidate and McGehee lab member Tyler Hernandez is the first author on the paper, and the team has already made a 1-square-foot window using this process. They are currently running stability and other tests, and the initial results indicate long-term durability, with no evidence of the electrode etching that degrades overall performance and is a big drawback of other versions on the market.

Car manufacturers are also interested in the technology, while airplane manufacturer Boeing already uses electrochromic windows on its 787 Dreamliner.

McGehee speculates that other application areas might include cycling glasses or ski goggles that shift with the quickly changing light conditions.

"This is a question and process my group has been looking into for some time now," he said. "This paper addresses many of the problems this technology has faced, and we think there is a lot of opportunity going forward."

Credit: 
University of Colorado at Boulder

Is e-cigarette use associated with relapse among former smokers?

What The Study Did: Researchers examined whether the use of electronic cigarettes among former cigarette smokers was associated with an increased risk of smoking relapse, using nationally representative survey data.

Authors: Wilson M. Compton, M.D., M.P.E., of the National Institute on Drug Abuse in Bethesda, Maryland, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2020.4813)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

More evidence of no survival benefit in COVID-19 patients receiving hydroxychloroquine

A study of electronic medical records from US Veterans Health Administration medical centers has found that hydroxychloroquine--with or without azithromycin--did not reduce the risk of ventilation or death and was associated with longer length of hospital stay. This analysis, published June 5 in the journal Med, is the first in the US to report data on hydroxychloroquine outcomes for COVID-19 from a nationwide integrated health system.

The study included data from 807 people hospitalized with COVID-19 at Veterans Affairs medical centers around the country. About half, 395 patients, did not receive hydroxychloroquine at any time during their hospitalization. Among those who did, 198 patients were treated with hydroxychloroquine and 214 were treated with both hydroxychloroquine and azithromycin. Most of the patients given hydroxychloroquine, about 86%, received it before being put on a mechanical ventilator.

After adjustment for clinical characteristics, the risk of death from any cause was higher in the hydroxychloroquine group but not in the hydroxychloroquine + azithromycin group when these were compared with the no-hydroxychloroquine group. The researchers also found that the length of hospital stay was 33% longer in the hydroxychloroquine group and 38% longer in the hydroxychloroquine + azithromycin group than in the no-hydroxychloroquine group. Pre-existing conditions such as cardiovascular disease, chronic obstructive pulmonary disease, and diabetes were relatively common and similar across all groups.

The researchers--from the Columbia VA Health Care System, the University of South Carolina, and the University of Virginia School of Medicine--reported that their study has strengths that earlier studies have not had. For example, because it employed data from comprehensive electronic medical records (the VA Informatics and Computing Infrastructure, or VINCI), rather than administrative health insurance claims, they were able to apply rigorously identified covariates and outcomes. Additionally, because the data came from an integrated national healthcare system, the findings were less susceptible to biases that might occur in a single-center or regional study.

But they also acknowledge limitations: the median age in their study, about 70 years, was similar to that in other studies of hospitalized patients, but because the patients were older, the findings might not apply to younger people with COVID-19, although a quarter of the patients ranged from 22 to 60 years old. Additionally, the patients in the study were overwhelmingly male, nearly 96%, reflecting the demographics of veterans. The researchers also note that the findings don't provide insight into the use of these drugs in the outpatient setting or as prophylaxis, but they add that the FDA and the U.S. National Institutes of Health have both advised against the use of hydroxychloroquine outside of clinical trials.

Credit: 
Cell Press