Tech

How is the brain programmed for computer programming?

image: Overview of the study: MRI data were collected from 30 subjects with different levels of programming expertise while they performed a program categorization task, in which subjects indicated the functional category of each Java code snippet by pressing buttons. The results suggest that the functional categories of source code can be decoded from the human brain and that multiple brain regions have representations of source code that are fine-tuned in proportion to behavioral performance in program comprehension.

Image: 
Takatomi Kubo

Countries around the world are seeing a surge in the number of computer science students. Enrolment in related university programs in the U.S. and Canada tripled between 2006 and 2016, and Europe has seen rising numbers too. At the same time, children are starting to code at ever younger ages, as governments in many countries push K-12 computer science education. Despite the growing popularity of computer programming, little is known about how our brains adapt to this relatively new activity. A new study by researchers in Japan examined the brain activity of thirty programmers with diverse levels of expertise and found that seven regions of the frontal, parietal and temporal cortices in expert programmers' brains are fine-tuned for programming. The finding suggests that higher programming skill is built upon fine-tuned brain activity across a network of multiple distributed brain regions.

"Many studies have reported differences between expert and novice programmers in behavioural performance, knowledge structure and selective attention. What we don't know is where in the brain these differences emerge," says Takatomi Kubo, an associate professor at Nara Institute of Science and Technology, Japan, and one of the lead authors of the study.

To answer this question, the researchers observed groups of novice, experienced, and expert programmers. While undergoing functional MRI (fMRI), the programmers were shown 72 different code snippets and asked to place each one into one of four functional categories. As expected, programmers with higher skills categorized the snippets more accurately. A subsequent searchlight analysis revealed that the amount of category information represented in seven brain regions increased with the skill level of the programmer: the bilateral inferior frontal gyrus pars triangularis (IFG Tri), left inferior parietal lobule (IPL), left supramarginal gyrus (SMG), left middle and inferior temporal gyri (MTG/IT), and right middle frontal gyrus (MFG).
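
The searchlight analysis rests on multivoxel pattern decoding: if a classifier trained on a region's activity patterns can label held-out trials above chance, the region carries category information. Below is a minimal sketch of that logic in Python, using simulated voxel patterns and a nearest-centroid decoder rather than the study's actual pipeline; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(signal, n_voxels=50, n_trials=40, n_categories=4):
    """Leave-one-out nearest-centroid decoding of simulated voxel patterns.

    Each category has a fixed spatial pattern; each trial is that pattern
    scaled by `signal` (a stand-in for how sharply a region represents the
    categories) plus independent noise."""
    patterns = rng.standard_normal((n_categories, n_voxels))
    X, y = [], []
    for c in range(n_categories):
        for _ in range(n_trials):
            X.append(signal * patterns[c] + rng.standard_normal(n_voxels))
            y.append(c)
    X, y = np.array(X), np.array(y)
    correct = 0
    for i in range(len(y)):  # leave-one-out cross-validation
        train = np.arange(len(y)) != i
        centroids = np.array([X[train & (y == c)].mean(axis=0)
                              for c in range(n_categories)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

# A "novice-like" region (weak signal) decodes near chance (0.25 for four
# categories); an "expert-like" region (strong signal) decodes well.
print(decode_accuracy(0.05), decode_accuracy(0.8))
```

The searchlight variant simply repeats this decoding in a small sphere of voxels centered on every location in the brain, mapping where information lives.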

"Identifying these characteristics in expert programmers' brains offers a good starting point for understanding the cognitive mechanisms behind programming expertise. Our findings illuminate the potential set of cognitive functions constituting programming expertise," Kubo says.

More specifically, the left IFG Tri and MTG are known to be associated with natural language processing and, in particular, semantic knowledge retrieval in a goal-oriented way. The left IPL and SMG are associated with episodic memory retrieval. The right MFG and IFG Tri are functionally related to stimulus-driven attention control.

"Programming is a relatively new activity in human history and the mechanism is largely unknown. Connecting the activity to other well-known human cognitive functions will improve our understanding of programming expertise. If we get more comprehensive theory about programming expertise, it will lead to better methods for learning and teaching computer programming," Kubo says.

Credit: 
Nara Institute of Science and Technology

Listening to the call of the wild: Tracking deer movements using sound

image: Researchers from The University of Tokyo have designed a new type of system using listening devices to detect and track deer positions in the wild

Image: 
Institute of Industrial Science, the University of Tokyo

Tokyo, Japan -- In the marshland of Japan's Oze National Park, keeping track of the deer population has been a difficult and time-consuming task for park rangers. Now their lives could get much easier, thanks to a novel technique for tracking deer movements using unmanned listening devices, developed by researchers at the Institute of Industrial Science, a part of The University of Tokyo.

Monitoring deer numbers is important in Oze and other national parks in Japan because deer are not native to the ecosystem and can have damaging effects on it. Current methods of monitoring deer populations range from traditional techniques such as counting droppings to photographing deer at night using automated cameras or from above during the day using unmanned aerial vehicles (UAVs). Each of these methods has its drawbacks and limitations--for example, the thick forest cover in some parts of the national park makes it difficult to see the deer from above using UAVs.

"The problem with using recording devices to estimate the size of deer populations in the past was that it was difficult to avoid counting the same deer multiple times--by setting up a grid of listening stations, we are able to triangulate the position of each deer with precision and track its movements," says Tadanobu Okumura, one of the researchers who developed the technology.

The researchers built a prototype listening station that is powered by solar panels and automatically synchronizes its internal clock with GPS satellites. Because the recordings from all of the stations are synchronized, the differences in the time it takes the sound of a deer to reach each station can be used to determine the animal's location precisely using a triangulation technique.
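
That triangulation step can be sketched as a time-difference-of-arrival (TDOA) search: each synchronized station timestamps the same call, and the position whose predicted inter-station delays best match the measured ones is the estimate. The station layout, true deer position, and brute-force grid search below are illustrative assumptions, not details from the study.

```python
import math

C = 343.0  # approximate speed of sound in air, m/s

# Hypothetical layout of four listening stations (metres).
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def arrival_times(src):
    """Absolute arrival times at each GPS-synchronised station."""
    return [math.dist(src, s) / C for s in stations]

def locate(times, step=1.0):
    """Grid-search for the point whose predicted time differences of
    arrival (relative to station 0) best match the measured ones."""
    tdoa = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 100.0:
        y = 0.0
        while y <= 100.0:
            d = [math.dist((x, y), s) / C for s in stations]
            err = sum((d[i] - d[0] - tdoa[i]) ** 2
                      for i in range(len(stations)))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

deer = (30.0, 70.0)            # true (unknown) deer position
est = locate(arrival_times(deer))
print(est)
```

In the field, the measured timestamps carry noise, which is why the prototype's accuracy was about five metres on a playground and about fifteen metres in marshland.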

"When we tested our prototype in an experimental setting in the playground of The University of Tokyo, we were able to pinpoint the location of a sound within five meters. In a second trial under more realistic conditions in the marshland at Oze National Park, it was possible to locate a sound to within about fifteen meters," explains Kazuo Oki, who also worked on the project. During a two-hour trial in Oze, the system picked up 72 distinct deer calls.

This prototype is a first step toward building a system that can be installed in the wild and monitored remotely. In the muddy wetlands of Oze, this could make the task of counting deer a lot easier.

Credit: 
Institute of Industrial Science, The University of Tokyo

Dalian coherent light source reveals the origin of interstellar medium S2 fragments

image: Researchers Directly Observed the C + S2 Channel in CS2 Photodissociation

Image: 
DICP

Studying the creation and evolution of sulfur-containing compounds in outer space is essential for understanding interstellar chemistry. CS2 is believed to be one of the most important such molecules in comet nuclei, interstellar dust, and ices. CS and S2 are the photodissociation fragments of CS2.

Forty years ago, the International Ultraviolet Explorer satellite observed the emission spectra of CS and S2 species, but not of CS2, from several comets. Yet the photodissociation mechanism of CS2 molecules remained unclear, and S2 fragments had never been observed experimentally.

Recently, a team led by Prof. YUAN Kaijun from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences (CAS), in cooperation with Prof. WANG Xing'an's group from the University of Science and Technology of China, observed the C+S2 product channel from CS2 photodissociation for the first time, using a homemade Time-Sliced Velocity Map Ion Imaging (TS-VMI) experimental setup based on the Dalian Coherent Light Source (DCLS).

This study, published in The Journal of Physical Chemistry Letters on January 11, 2021, provides direct experimental evidence for the origin of the interstellar medium S2 fragments observed previously.

The researchers investigated the two-photon ultraviolet (UV) and one-photon vacuum ultraviolet (VUV) photodissociation dynamics of CS2 molecules via the VUV free-electron laser (FEL) at DCLS. They directly observed the C+S2 product channel from CS2 photodissociation and obtained images of the electronically ground/excited states of S2 products with vibrational excitation.

Moreover, the researchers analyzed the product scattering anisotropy parameter β and found that the electronically excited states of the central atom of the CS2 molecule play an important role in the isomerization and photodissociation processes.

This research demonstrated that interstellar medium S2 fragments could be directly generated from CS2 photodissociation.

"Given the similarity of OCS studied in our previous works and CS2 in this work, we believe that the central-atom elimination channel is more general than expected in the photodissociation of triatomic molecules," stated Prof YUAN.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Efficient fluorescent materials and OLEDs for the NIR

image: (a) Molecular structure of the l-PN(THS) oligomer series. (b) Band diagram for the materials employed in the OLEDs. TFB (Poly[(9,9-dioctylfluorenyl-2,7-diyl)-alt-(4,4′-(N-(4-sec-butylphenyl)diphenylamine)]) and F8BT molecular structures are illustrated respectively above and below the relative band diagrams. (c) OLED architecture including ITO patterned glass substrate, poly(3,4-ethylene dioxythiophene) doped with poly(styrene sulfonate) (PEDOT:PSS) hole-transport layer, TFB electron/exciton blocking layer, F8BT:l-P6(THS) NIR light-emitting layer and Ca/Al cathode.

Image: 
by Alessandro Minotto, Ibrahim Bulut, Alexandros G. Rapidis, Giuseppe Carnicella, Maddalena Patrini, Eugenio Lunedei, Harry L. Anderson, and Franco Cacialli

The ability to manipulate near-infrared (NIR) radiation has the potential to enable a plethora of technologies not only for the biomedical sector (where the semitransparency of human tissue is a clear advantage) but also for security (e.g. biometrics) and ICT (information and communication technology), with the most obvious application being to (nearly or in)visible light communications (VLCs) and related ramifications, including the imminent Internet of Things (IoT) revolution. Compared with inorganic semiconductors, organic NIR sources offer cheap fabrication over large areas, mechanical flexibility, conformability, and, potentially, bio-compatibility. However, the emission efficiency of organic emitters in the NIR is hindered by the detrimental effects of certain types of aggregation/packing of the emitters in the solid state and by the generally observed increase of non-radiative rates upon reduction of the energy gap (EG), i.e. the so-called "energy-gap law" (EG-law) for radiationless transitions. Hybrid organic/inorganic innovative materials such as perovskite methylammonium lead halide and quantum dots may offer a high external quantum efficiency (EQE) alternative, but their heavy-metal content will prevent their use in most applications, especially biocompatible or wearable ones. Toxicity issues can also affect phosphorescent materials incorporating toxic heavy elements.

In a new paper published in Light: Science & Applications, an international team of scientists led by Professor Franco Cacialli at University College London and Professor Harry Anderson at the University of Oxford reports novel non-toxic, heavy-metal-free organic NIR emitters and OLEDs characterised by emission peaking at ~850 nm and a maximum external quantum efficiency (EQE) of 3.8%.

The authors use optical spectroscopy to elucidate how it is possible to leverage the increasing spatial extent of excited states with oligomer length to favourably manipulate the competition between radiative and nonradiative processes (quantified by the radiative and nonradiative rates, kr and knr respectively), while simultaneously suppressing aggregation. Surprisingly, instead of the photoluminescence quantum yield (PLQY) decreasing with oligomer length (and thus with reducing gap), a steady increase and eventual saturation of the PLQY is observed at around the hexamer (l-P6(THS)). This behaviour can be understood by considering that in these systems conjugated triple-bond-based bridges between the porphyrins allow effective intra-molecular electronic coupling among the macrocycles, and so enable the radiative (singlet) excited state (exciton) to delocalize over increasing portions of the molecule. This forces an increasing mismatch of the spatial extent of the radiative (singlet) and of the non-radiative (triplet) excitons, in view of the intrinsically localized nature of the triplets. Such a mismatch is expected to suppress intersystem crossing (ISC) between singlets and triplets and therefore the non-radiative rate (knr). In addition, exciton delocalization is also expected to favour decoupling from vibrational ladders (and thus circumvent the EG-law). Remarkably, the growth of the nonradiative rate as a function of the decrease of the energy gap (forced by the increased oligomer length) is characterized in these systems by a logarithmic slope an order of magnitude smaller than in previous studies. Aggregation, meanwhile, is suppressed by bulky trihexylsilyl side chains attached to the porphyrins: through steric hindrance they limit π-π interactions and so prevent aggregation quenching (see chemical structure in Figure 1).
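
The competition described above can be written as PLQY = kr / (kr + knr), with the energy-gap law making knr grow exponentially as the gap shrinks. The short Python sketch below uses invented rates and slopes purely to illustrate why an order-of-magnitude-smaller logarithmic slope preserves the quantum yield at NIR energy gaps; none of the parameter values come from the paper.

```python
import math

def plqy(k_r, k_nr):
    """Photoluminescence quantum yield: the fraction of excitons
    that decay radiatively."""
    return k_r / (k_r + k_nr)

def k_nr_gap_law(e_gap_ev, k0=5e7, gamma=5.0, e_ref=2.0):
    """Energy-gap law: the non-radiative rate grows exponentially as
    the gap shrinks below a reference gap (parameters illustrative)."""
    return k0 * math.exp(gamma * (e_ref - e_gap_ev))

k_r = 1e8     # radiative rate in s^-1 (illustrative)
e_gap = 1.45  # ~850 nm emission expressed as an energy gap in eV

# A typical EG-law slope versus one an order of magnitude smaller,
# as reported for these porphyrin oligomers:
for gamma in (5.0, 0.5):
    y = plqy(k_r, k_nr_gap_law(e_gap, gamma=gamma))
    print(f"gamma={gamma}: PLQY ~ {y:.2f}")
```

With the steep slope the non-radiative channel dominates at the NIR gap; with the shallow slope most excitons still decay radiatively.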

The basic photophysics and materials design breakthrough was confirmed by incorporating an F8BT:l-P6(THS) blend in OLEDs, with which an average EQE of 1.1% and a maximum EQE of 3.8% at a peak wavelength of 850 nm were demonstrated (Figure 2). A novel quantitative model was also developed to analyse the results; it implicates triplet-to-singlet conversion processes (e.g. reverse intersystem crossing and/or thermally activated delayed fluorescence) in the EQE values beyond the apparent limit imposed by spin statistics.

The EQEs presented in the paper are, to the best of the authors' knowledge, the highest reported so far in this spectral range from a "heavy-metal-free" fluorescent emitter.

The authors summarise the significance of their work by saying:

"Not only do our results demonstrate milder increases of knr with (reducing) EG than in the literature, but, most importantly, they also provide a general strategy for designing high-luminance NIR emitters."

"In the short term, they may enable further development of OLEDs in this challenging spectral range for a wide range of potential applications spanning from the life-sciences (biochemical wearable sensors, in vivo sub-surface bio-imaging, to name just two), security (e.g. biometrics), horticulture, and (in)visible light communications (iVLC), a serious contestant to alleviate the bandwidth demands of the imminent Internet-of-thing (IoT) revolution."

"More importantly, and in perspective, these findings are significant to a range of disciplines."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

New ion trap to create the world's most accurate mass spectrometer

image: Skoltech scientists pin hopes on a new ion trap to create the world's most accurate mass spectrometer

Image: 
Evgeny Nikolaev and Anton Lioznov/Skoltech

Mass spectrometers are widely used to analyze highly complex chemical and biological mixtures. Skoltech scientists have developed a new version of a Fourier-transform ion cyclotron resonance (FT-ICR) mass spectrometer, which uses the rotation frequencies of ionized molecules in strong magnetic fields to measure their masses with high accuracy. The team has designed an ion trap that ensures the utmost resolving power in ultra-strong magnetic fields. The research was published in the journal Analytical Chemistry.

The ion trap is shaped like a cylinder made up of electrodes, with electric and magnetic fields generated inside. The exact masses of the test sample's ions can be determined from their rotation frequencies. The electrodes must create a harmonized field of a particular shape so that the ions rotate predictably. A trap with such a field is called a Dynamically Harmonized Cell (DHC).
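
The frequency-to-mass relation behind this measurement is, for an ideal trap, the cyclotron equation f = qB / (2πm): measuring the rotation frequency f in a known field B gives the ion's mass-to-charge ratio. A quick illustrative sketch (the 500 Da ion, single charge, and 21 T field are example values, not figures from the paper):

```python
import math

Q_E = 1.602176634e-19       # elementary charge, C
DALTON = 1.66053906660e-27  # unified atomic mass unit, kg

def cyclotron_freq_hz(mass_da, charge, b_tesla):
    """Unperturbed ion cyclotron frequency: f = qB / (2*pi*m)."""
    return charge * Q_E * b_tesla / (2 * math.pi * mass_da * DALTON)

def mass_da(freq_hz, charge, b_tesla):
    """Invert the relation: a measured frequency yields the mass."""
    return charge * Q_E * b_tesla / (2 * math.pi * freq_hz * DALTON)

# A hypothetical singly charged 500 Da ion in a 21 T magnet:
f = cyclotron_freq_hz(500.0, 1, 21.0)
print(f)                      # cyclotron frequency in Hz
print(mass_da(f, 1, 21.0))    # round-trip recovers the mass
```

Because frequency can be measured extremely precisely, any field imperfection or space-charge shift in the trap translates directly into mass error, which is why the cell geometry matters so much.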

The DHC was invented in 2011 by Evgeny Nikolaev, a Professor at the Skoltech Center for Computational and Data-Intensive Science and Engineering (CDISE). Although the cell's field is in reality highly complex and not harmonized, to ions rotating rapidly in the magnetic field it appears harmonized thanks to an averaging effect, hence the cell's name. So far the best trap in terms of spectrum measurement accuracy, the DHC has been widely used in research and commercial mass spectrometers with high demands on accuracy, and it is integrated into the strongest-field mass spectrometer at the National High Magnetic Field Laboratory in Tallahassee, FL.

Super-strong magnets cost tens of millions of dollars. Mass measurement accuracy is expected to increase linearly with magnetic field strength, but in practice the relationship is non-linear and the gain in accuracy is much slower than expected.

The scientists hypothesized that the non-linearity occurs because the level of vacuum in the cell is insufficient, no matter how advanced the pumps are. They therefore developed a trap with both ends open for easy evacuation of residual gases and named it the 'Zig-Zag Cell'.

"Right now, our lab is manufacturing the new cell which we will use for experiments to check whether our assumptions and theoretical predictions are correct, and if they are, the trap will put the linear relationship between mass spectrum measurement accuracy and magnetic field strength back in place, thus ensuring higher accuracy at very high values of magnetic field strength. The fact that the accuracy increases with an increase in magnetic field strength means that the trap will potentially help create the most accurate mass spectrometer of all," says Anton Lioznov, a PhD student at Skoltech.

According to the study lead, Professor Evgeny Nikolaev, mass spectrometers with a new type of cell will ensure higher accuracy for biological samples and complex mixtures, such as oil, where even existing mass spectrometers of this type with the DHC can detect up to 400,000 compounds.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Skoltech team developed on-chip printed 'electronic nose'

image: The e-nose matrix board with eight sensors

Image: 
Skoltech

Skoltech researchers and their colleagues from Russia and Germany have designed an on-chip printed 'electronic nose' that serves as a proof of concept for low-cost and sensitive devices to be used in portable electronics and healthcare. The paper was published in the journal ACS Applied Materials & Interfaces.

The rapidly growing fields of the Internet of Things (IoT) and advanced medical diagnostics require small, cost-effective, low-powered yet reasonably sensitive and selective gas-analytical systems like so-called 'electronic noses.' These systems can be used for noninvasive diagnostics of human breath, such as diagnosing chronic obstructive pulmonary disease (COPD) with a compact sensor system also designed at Skoltech. Such devices work a lot like actual noses -- say, yours -- by using an array of sensors to detect the complex signature of a gaseous compound.

One approach to creating these sensors is by additive manufacturing technologies, which have achieved enough power and precision to produce the most intricate devices. Skoltech senior research scientist Fedor Fedorov, Professor Albert Nasibulin, research scientist Dmitry Rupasov, and their collaborators created a multisensor 'electronic nose' by printing nanocrystalline films of eight different metal oxides onto a multielectrode chip (oxides of manganese, cerium, zirconium, zinc, chromium, cobalt, tin, and titanium). The Skoltech team came up with the idea for this project.

"For this work, we used microplotter printing and true solution inks. There are a few things that make it valuable. First, the printing resolution is close to the distance between electrodes on the chip, which is optimized for more convenient measurements. We show these technologies are compatible. Second, we managed to use several different oxides, enabling more orthogonal signals from the chip resulting in improved selectivity. We can also speculate that this technology is reproducible and easy to be implemented in industry to obtain chips with similar characteristics, and that is really important for the 'e-nose' industry," Fedorov explained.

In subsequent experiments, this 'nose' was able to sniff out the differences between several alcohol vapors (methanol, ethanol, isopropanol, and n-butanol), which are chemically very similar and hard to tell apart, at low concentrations in the air. Since methanol is extremely toxic, detecting it in beverages and differentiating between methanol and ethanol can save lives. To process the data, the team used linear discriminant analysis (LDA), a pattern recognition algorithm, but other machine learning algorithms could also be used for this task.
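
LDA separates the vapor classes by modelling each one as a cloud of sensor fingerprints with a shared covariance; classifying a new measurement then amounts to assigning it to the nearest class mean in Mahalanobis distance. Here is a self-contained sketch of that idea on simulated eight-sensor data; the response values are invented and this is not the study's dataset or exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
alcohols = ["methanol", "ethanol", "isopropanol", "n-butanol"]

# Simulated fingerprints: each vapor gives a characteristic mean
# response on the 8 metal-oxide sensors, plus measurement noise.
means = rng.uniform(0.2, 1.0, size=(4, 8))

def sample(c, n):
    return means[c] + 0.05 * rng.standard_normal((n, 8))

X_train = np.vstack([sample(c, 30) for c in range(4)])
y_train = np.repeat(np.arange(4), 30)

# LDA-style classifier: class means plus pooled within-class covariance.
class_means = np.array([X_train[y_train == c].mean(axis=0)
                        for c in range(4)])
centered = X_train - class_means[y_train]
cov_inv = np.linalg.inv(centered.T @ centered / (len(X_train) - 4))

def classify(x):
    """Assign x to the class mean with the smallest Mahalanobis distance."""
    d = [(x - m) @ cov_inv @ (x - m) for m in class_means]
    return int(np.argmin(d))

X_test = np.vstack([sample(c, 10) for c in range(4)])
y_test = np.repeat(np.arange(4), 10)
acc = np.mean([classify(x) == y for x, y in zip(X_test, y_test)])
print(f"held-out accuracy: {acc:.2f}")
```

The same structure scales directly: more oxides on the chip give more dimensions in which the vapor clouds can be pulled apart, which is why printing several different oxides improves selectivity.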

So far, the device operates at rather high temperatures of 200-400 degrees Celsius. Still, the researchers believe that new quasi-2D materials such as MXenes, graphene, and so on could be used to increase the sensitivity of the array and ultimately allow it to operate at room temperature. The team will continue working in this direction, optimizing the materials used to lower power consumption.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Wood formation can now be followed in real-time -- and possibly serve the climate of tomorrow

image: Visualization of cell wall patterns.

Image: 
Dr. René Schneider

A genetic engineering method makes it possible to observe how woody cell walls are built in plants. The new research in wood formation, conducted by the University of Copenhagen and others, opens up the possibility of developing sturdier construction materials and perhaps more climate efficient trees.

The ability of certain tree species to grow taller than 100 meters is a feat of complex biological engineering. Besides the right amounts of water and light, this incredible ability requires cell walls built sturdily enough to keep a tree both upright and able to withstand the tremendous pressure created as water is sucked up from its roots into its leaves.

This ability is made possible by what are known as the secondary cell walls of plants. Secondary cell walls, also known as xylem or wood, are built according to a few distinct and refined patterns that allow wall strength to be maintained while still allowing connecting cells to transport water from one to the other.

How these wall patterns are built has been a bit of a mystery. Now, the mystery is starting to resolve. For the first time, it is possible to observe the process of woody cell wall pattern formation within a plant--and in real-time no less. A team of international researchers, including Professor Staffan Persson of the University of Copenhagen, has found a way to monitor this biological process live, under the microscope. The scientific article for the study is published in the renowned journal, Nature Communications.

This opens up the possibility of manipulating the building process and perhaps even making plant xylem stronger, a point we will return to below.

A genetic trick makes it possible to observe wood formation in real-time

Because wood forms in tissue that is buried deep inside the plant, and microscopes work best on the surfaces of objects, the wood-forming process is difficult to observe. To witness it in action, researchers needed to apply a genetic trick. By modifying the plants with a genetic switch, they were able to turn on wood formation in all cells of the plant--including those on the surface. This allowed them to observe the wood formation under a microscope in detail and in real time.

Cell walls consist mainly of cellulose, which is produced by enzymes located on the surface of all plant cells. Generally speaking, the process involves the orderly arrangement of protein tubes - known as microtubules - at the surface of the cells. The microtubules serve as tracks along which the wall producing enzymes deposit construction material to the wall.

"One can imagine the construction process as a train network, where the train would represent the cellulose-producing enzymes, as they move forward while producing cellulose fibers. The microtubules, or rails, steer the direction of the proteins, just like train rails. Interestingly, during the formation of woody cell walls these "rails" need to change their organization completely to make patterned walls - a process that we now can follow directly under our microscopes" explains Staffan Persson, of UCPH's Department of Plant and Environmental Sciences.

Taller trees and sturdier building materials

"We now have a better understanding of the mechanisms that cause the microtubules to rearrange and form the patterns. Furthermore, we can simulate the wall pattern formation on a computer. The next step is to identify ways that allows us to make changes to the system. For example, by changing patterns," suggests Persson.

First author, Dr Rene Schneider chimes in:

"Changing the patterns can alter the ways in which a plant grows or distributes water within it, which can then go on to influence a plant's height or biomass. For example, if you could create trees that grow differently by accumulating more biomass, it could potentially help slow the increase in carbon dioxide in the atmosphere."

In the longer term, Persson suggests that one of the most obvious applications is to manipulate biological processes to develop stronger or different woody construction materials.

"If we can change both the chemical composition of cell walls, which we and other researchers are already working on, and the patterns of cell walls, we can probably alter wood strength and porosity. Sturdier construction materials made of wood don't just benefit the construction industry, but the environment and climate as well. They have a smaller carbon footprint, a longer service life and can be used for manifold purposes. In some cases, they may even be able to replace energy-intensive materials like concrete," says Persson, whose background as a construction engineer compliments his plant biology background.

He also points to potential applications in the development of cellulose-based nanomaterials. These are gaining ground in medicine, where cellulose-based nanomaterials can be used to efficiently transport pharmaceuticals throughout the body.

However, both Staffan Persson and René Schneider underscore that these applications will first require further knowledge of how secondary walls can be manipulated, and the implementation of that knowledge in trees or other useful crop plants, since much of the research is conducted in model plant organisms such as thale cress.

FACTS:

The secondary cell walls of plants are located between the primary cell wall and the cell surface. They are typically formed when the thinner primary cell wall stops growing. In addition to cellulose, one of the main components of the cell wall is lignin, which fills in the cavities in the cellulose and thereby strengthens the wall even more.

Secondary cell walls have two major patterns, formed in two distinct stages of cell life. The first is a spiral pattern which twists around the cell, making the walls stronger while still allowing the cells to continue to grow. The second pattern is perforated by numerous small holes and doesn't allow for cell growth. However, as cells connect with each other via the holes, they are able to transport water from one cell to the next, and so on.

The scientific article for the study is published in the renowned journal, Nature Communications: https://www.nature.com/ncomms/

The study's first author is René Schneider, affiliated with the University of Melbourne in Australia and the Max Planck Institute for Molecular Plant Physiology in Germany. In addition to the University of Copenhagen's Staffan Persson, research was also conducted by researchers from Wageningen University, Netherlands; Nara Institute of Science and Technology, Japan; University of Tasmania, Australia and Shanghai Jiao Tong University in China.

Professor Staffan Persson was recruited from the University of Melbourne, Australia in July 2020 as a recipient of the Villum Investigator and NNF Laureate Research Grant. Persson is the new head of the Copenhagen Plant Science Center (CPSC) and a DNRF Chair at the Department of Plant and Environmental Sciences.

Credit: 
University of Copenhagen - Faculty of Science

Link between dual sensory loss and depression

People with combined vision and hearing loss are nearly four times more likely to experience depression and more than three times more likely to suffer chronic anxiety, according to a new study published in the journal Frontiers in Psychology and led by Anglia Ruskin University (ARU).

Researchers analysed a health survey of 23,089 adults in Spain and found that while people with either vision loss or hearing loss alone were more likely to report depression than those without sensory loss, the risk rose to 3.85 times higher when respondents reported problems with both senses combined.

The study also found people with combined vision and hearing loss were 3.38 times more likely than the general population to report chronic anxiety.

It is understood to be the first study looking at the risk of depression in people with combined vision and hearing loss.

Lead author Professor Shahina Pardhan, Director of the Vision and Eye Research Institute at ARU, said: "Difficulties with seeing and hearing affect many aspects of everyday life. They can affect the ability to work, interact with others and carry out physical activity, all of which are important for emotional wellbeing. Our study has found a significantly increased risk of mental health issues like depression and chronic anxiety when people suffer both vision and hearing loss."

Co-author Dr Guillermo López-Sánchez added: "These findings show the importance of appropriate treatment for sensory loss as well as timely intervention for mental health issues. The strong link to mental health that we have found shows these issues cannot be ignored by health authorities and action must be taken to ensure the best possible care for those with sensory loss."

ARU, along with vision and hearing charities, is calling on the Government to support a UK National Eye Health and Hearing Study, which would provide robust and expansive data on the UK's sensory health needs.

Credit: 
Anglia Ruskin University

New treatment helps patients with a spinal cord injury

image: Targeted epidural electrical stimulation of the spinal cord allows spinal cord injury patients to regain control over their blood pressure - a lesser-known but frequent consequence of such injuries.

Image: 
EPFL / MCV

An international team of scientists headed by Grégoire Courtine at EPFL and CHUV and Aaron Phillips at the University of Calgary has developed a treatment that can dramatically improve the lives of patients with a spinal cord injury.

VIDEO: https://www.youtube.com/watch?v=UGXnuHgDWFU

"A serious and underrecognized result of these injuries is unstable blood pressure, which can have devastating consequences that reduce quality of life and are life threatening. Unfortunately, there are no effective therapies for unstable blood pressure after spinal cord injury". said Dr. Aaron Phillips, co-lead author of the study (see affiliations below). "We created the first platform to understand the mechanisms underlying blood pressure instability after spinal cord injury."

Their findings, published today in Nature, build on research that has already enabled several paraplegics to walk again through epidural electrical stimulation (EES). But instead of targeting the region of the spinal cord that produces leg movements, the researchers delivered EES in the region containing the neural circuits that regulate blood pressure. In addition, they adapted the stimulation protocol in real time based on measurements taken by a blood-pressure monitor implanted in an artery. The monitor measures blood pressure continuously and adapts the instructions sent to a pacemaker, which in turn delivers electrical pulses over the spinal cord. The stimulation is biomimetic, since it recapitulates the natural activation of the body's hemodynamic system. "The stimulation compensates for the broken communication line between the patient's central nervous system and sympathetic nervous system," says Courtine.
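
The closed loop described here, from monitor to controller to stimulator, can be sketched as a simple feedback simulation. Everything below (the first-order plant model, the gains, the target pressure) is invented for illustration and bears no relation to the actual clinical algorithm.

```python
# Minimal sketch of a monitor-driven stimulation loop: a blood-pressure
# reading feeds a proportional controller that adjusts the amplitude
# sent to the implanted pulse generator. All parameters are invented.

TARGET = 90.0      # target mean arterial pressure, mmHg
GAIN = 0.05        # proportional controller gain (illustrative)
BP_PER_AMP = 4.0   # assumed mmHg rise per unit of stimulation amplitude

def simulate(bp=60.0, amp=0.0, steps=200):
    """First-order plant: BP relaxes toward a baseline that the
    stimulation raises; the controller closes the loop each step."""
    for _ in range(steps):
        error = TARGET - bp                   # monitor reading vs target
        amp = max(0.0, amp + GAIN * error)    # adjust stimulation amplitude
        bp += 0.1 * ((60.0 + BP_PER_AMP * amp) - bp)  # plant response
    return bp, amp

final_bp, final_amp = simulate()
print(round(final_bp, 1))
```

The real system must additionally cope with sensor delay, posture changes and safety limits, but the core idea of continuously steering stimulation from the measured pressure is the same.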

The research team initially tested their method in preclinical rodent and nonhuman primate models in order to understand the mechanisms that disrupt blood pressure modulation after spinal cord injury, and to identify where and how the stimulation patterns should be applied to obtain the desired hemodynamic responses. Jocelyne Bloch, the neurosurgeon who heads the .NeuroRestore research center with Courtine and who carried out the surgical implants, was surprised at how quickly the stimulation protocol worked. "It was impressive to see the blood pressure rise to the target level immediately after the stimulation was applied," she says.

After these initial tests, the scientists tried their method on a human patient.

"I suffered from daily episodes of low blood pressure, especially in the morning and evening," says Richi, 38 years old. "But since I've had the implant, it happens much less often - maybe once every couple of weeks." Himself a surgeon, Richi lost the use of all four limbs after a sport accident. "Those daily episodes of hypotension were a real burden. They also disturbed my vision and prevented me from performing even simple everyday tasks. The electrical stimulation treatment provided a huge relief - much more effective than medication."

One of the physicians working with Richi, Dr. Sean Dukelow, states: "Since using this system, Richi was able to completely stop all drugs he was using to manage blood pressure instability. This has been transformative, and over the long-term may reduce Richi's risk of cardiovascular disease."

The team intends to continue its research thanks to a large grant received from the US Defense Advanced Research Projects Agency (DARPA). At the same time, Onward (formerly GTX Medical) - a startup based at EPFL Innovation Park and in the Netherlands - will develop and market clinical devices based on the team's discoveries.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Chinese spice helps unravel the mysteries of human touch

New insight into how human brains detect and perceive different types of touch, such as fluttery vibrations and steady pressures, has been revealed by UCL scientists with the help of the ancient Chinese cooking ingredient, Szechuan pepper.

Humans have many different types of receptor cells in the skin that allow us to perceive different types of touch. For more than a century, scientists have puzzled over whether touch signals from each type of receptor are processed independently by the brain, or whether these different signals interact before reaching conscious perception.

For the study, published in Proceedings of the Royal Society B, UCL researchers took a novel approach to this question by stimulating one type of touch receptor chemically, and another type mechanically. This bypasses the problem of different mechanical touch stimuli potentially interacting within the skin, with unknown effects on the receptors.

Instead, the UCL team used hydroxy-α-sanshool, a bioactive compound of Szechuan pepper responsible for the characteristic tingling quality of Szechuan cuisine, to stimulate the touch receptors responsible for the sensation of fluttery vibration.

In the study, consisting of 42 participants, hydroxy-α-sanshool was applied to a small skin area on the lip. Once participants started to experience a tingling sensation, they were asked to note the strength of the tingling sensation.

Next, the researchers applied a steady pressure stimulus to different locations on the upper and lower lips. Participants reported their subjective perception of the intensity of the tingling sensation by rating it relative to the initial sensation before pressure was applied.

Across several tests, the tingling sensation caused by hydroxy-α-sanshool was dramatically reduced by steady pressure. The intensity of the tingling sensation decreased as the steady pressure increased, and also decreased as the site of steady pressure was moved closer to the site where sanshool was applied.

Lead author Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience), said: "Scientists had previously described how 'touch inhibits pain', but our work provides novel evidence that one kind of touch can inhibit another kind of touch.

"Our results suggest that the touch system for steady pressure must inhibit the touch system for fluttery vibration at some level in the nervous system.

"The inhibition between these signals may explain how the brain produces a single perception of touch, despite the wide range of signals transmitted by the different types of sensory receptor in the skin."

Credit: 
University College London

Machine-learning to predict the performance of organic solar cells

image: Gradient-based organic solar cell

Image: 
ICMAB

Imagine looking for the optimal configuration to build an organic solar cell made from different polymers. How would you start? Does the active layer need to be very thick, or very thin? Does it need a large or a small amount of each polymer? Knowing how to predict the specific composition and cell design that would result in optimum performance is one of the greatest unresolved problems in materials science, in part because device performance depends on multiple factors. Now, researchers from the Universitat Rovira i Virgili (URV), specialized in artificial intelligence, have collaborated with researchers from the Institute of Materials Science of Barcelona (ICMAB), specialized in materials for energy applications, to combine the experimental data points they gather with artificial-intelligence algorithms, enabling an unprecedented ability to predict the performance of organic solar cells.

ICMAB researchers, led by Mariano Campoy-Quiles, have generated multiple datasets using a new experimental method that packs a large number of data points into a single sample, speeding up data collection compared to conventional methods. Machine-learning models then learn from those datasets to predict the performance of further materials, such as novel organic semiconductors synthesized in the group of Prof. Martin Heeney at Imperial College London.

This study may be the first of many in the field combining artificial intelligence and high-throughput experiments to predict the optimum conditions of certain materials and devices.

Obtaining multiple experimental data points

One of the key aspects of this study is that the researchers can generate large, meaningful datasets with minimal experimental effort - an important requirement for obtaining accurate and reliable machine-learning models and predictions. They use a methodology based on combinatorial screening, generating samples with gradients in the parameters that most affect the performance of organic solar cells (i.e., composition and thickness).

"When using a conventional method, a sample provides you with information about only one point. However, using our methodology we can obtain between 10 and 1000 times more points. This allows, on the one hand, to evaluate the photovoltaic potential of a material about 50 times faster than with conventional methods. On the other hand, it provides large statistics and a huge set of data (hundreds of thousands of points) that allow us to reliably train different artificial intelligence algorithms" says Mariano Campoy-Quiles, ICMAB researcher and co-author of this study.

Artificial Intelligence algorithms to predict the behavior

"Within the broad field of AI, in this work we apply machine-learning, which is a term that gathers all sort of algorithms which confer machines (i.e. computers) the ability to learn from a given set of data, yet not necessarily to take autonomous decisions. Here, we exploit the more statistical vision of AI to draw predictive models from our large experimental datasets" explains Xabier Rodríguez- Martínez, ICMAB researcher and first author of the study.

Artificial intelligence algorithms in the field of materials science are mainly used to look for behavior patterns and to develop predictive models of the behavior of a family of materials for a given application. To do so, an algorithm is first trained by exposing it to real data, producing a model. The model is then validated on other data points not used to create it, but drawn from the same category of materials. Once validated, the model is applied to predict the behavior of similar materials that belong to neither the training set nor the validation set.
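The train/validate/predict workflow above can be illustrated with a toy example. The sketch below is a minimal stand-in, not the study's actual method: the data are synthetic, the material names are invented, and ordinary least squares replaces the real machine-learning models. What it does show is the leave-one-material-out pattern of testing a model on a material that was absent from its training set.

```python
# Hedged sketch of a train/validate/predict loop with held-out materials.
# All numbers, names and the model itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, slope):
    # Synthetic measurements: (thickness, composition) -> efficiency,
    # with a small amount of noise. Purely invented physics.
    X = rng.uniform(0, 1, size=(n, 2))
    y = slope * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.01, n)
    return X, y

materials = {"mat_A": make_samples(50, 1.0),
             "mat_B": make_samples(50, 1.1),
             "mat_C": make_samples(50, 0.9)}

def fit(X, y):
    # Least-squares fit with an intercept column.
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Leave-one-material-out: train on two materials, predict the held-out one,
# mimicking "predicting materials not in the training set".
rmses = {}
for held_out in materials:
    X_tr = np.vstack([materials[m][0] for m in materials if m != held_out])
    y_tr = np.concatenate([materials[m][1] for m in materials if m != held_out])
    coef = fit(X_tr, y_tr)
    X_te, y_te = materials[held_out]
    rmses[held_out] = float(np.sqrt(np.mean((predict(coef, X_te) - y_te) ** 2)))

print({k: round(v, 3) for k, v in rmses.items()})
```

A low error on the held-out material is the signal that the model generalizes beyond its training set, which is the extrapolation the study aims for.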

In this specific study, AI algorithms are trained with thousands of points obtained with the high-throughput method to evaluate and predict the different factors that determine the efficiency of an organic solar cell. "The use of AI algorithms was particularly challenging in this case," explains Roger Guimerà, ICREA Professor and researcher from URV's Department of Chemical Engineering and co-author of the study, "because of the volume and complexity of the data, and because the ultimate goal is to extrapolate to new materials that have never been tested."

Achievements and future challenges

This work represents two great achievements. The first is the development of AI models that predict how efficiency depends on many of the organic solar cell parameters; the predictions are accurate even for materials that were not in the training set.

"The second important point is that thanks to AI, we have determined which are the physical parameters that affect to a greater extent this behavior" says Mariano Campoy-Quiles, and adds "In particular, we have seen that the most critical parameters that determine the optimum composition are the electronic gap of each material, as well as how balanced the charge transport is in each one."

Researchers believe that the results and the methodology developed in this study are very important to guide theoretical researchers as to what to take into account when developing future analytical models that attempt to determine the efficiency of a given system.

"Our next challenge is to understand much more complex systems. The more complex the system, the more useful AI can be" concludes Campoy-Quiles.

Credit: 
Universitat Rovira i Virgili

Food export restrictions by a few countries could send global food crop prices skyrocketing

Recent events such as the Covid-19 pandemic, locust infestations, drought and labour shortages have disrupted food supply chains, endangering food security in the process. A recent study published in Nature Food shows that trade restrictions and stockpiling of supplies by a few key countries could create global food price spikes and severe local food shortages during times of threat.

'We quantified the potential effects of these co-occurring global and local shocks on food security worldwide,' explains Aalto University Associate Professor Matti Kummu. The results of this research have critical implications for how we should prepare for future events like Covid-19, he says.

The researchers modelled future scenarios to investigate the impact that export restrictions and local production shocks of rice, wheat, and maize would have on their supply and price. These three crops form the backbone of global trade in staple crops and are essential for food security across the globe.

The results show that restrictions by only three key exporters of each crop would increase the price of wheat by 70%, while maize and rice would rise by 40% and 60% respectively. Combined with the potential local shocks that occurred last year, prices would nearly double.
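As a back-of-the-envelope illustration (not the study's actual model), two shocks of this size compound multiplicatively to roughly double prices; the local-shock multipliers below are invented purely for the sake of the arithmetic.

```python
# Illustrative arithmetic only: compounding a trade-restriction shock with a
# hypothetical local production shock. The export_shock figures come from the
# article; the local_shock multipliers are assumptions for illustration.
export_shock = {"wheat": 1.70, "rice": 1.60, "maize": 1.40}
local_shock = {"wheat": 1.15, "rice": 1.25, "maize": 1.40}  # invented

for crop in export_shock:
    combined = export_shock[crop] * local_shock[crop]
    print(f"{crop}: +{(combined - 1) * 100:.0f}%")
```

With multipliers in this range, each crop ends up roughly twice its baseline price, matching the article's "nearly double" summary.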

Kummu explains: 'This is the result of an increasingly interconnected world, in which the majority of countries are dependent on imported food and, so, vulnerable to this kind of shock.'

'We saw that trade restrictions by only a few key actors can create large short-term price spikes in the world market export price of grains, which can lead to food insecurity in import-dependent countries,' explains Postdoctoral Researcher Theresa Falkendal, Potsdam Institute for Climate Impact Research.

If they suddenly lost more than one-third of their annual grain supply, many low-income and lower-middle-income countries in Africa and Asia would be unable to cover the deficit from their domestic reserves and would need alternative grain sources to survive.

'It's important to realise that food security depends on both local and remote conditions, and imprudent policy decisions in the rich part of the world can plunge people into real hardship in poorer parts of the world,' states Falkendal.

But shock scenarios such as those modelled by the researchers, and the risks they bring, may become commonplace due in part to global warming.

The Covid-19 pandemic's disruption of global agricultural supply chains, along with locusts destroying crops and livelihoods in the Horn of Africa and South Asia, has had a devastating effect on food security.

'To help prevent such devastation in the future, we need proactive strategies, like reducing food waste, changing the diet towards more plant-based protein sources, and increasing the yields sustainably particularly in the most vulnerable countries,' says Kummu.

'While sustainable design of agricultural systems is important, it must go hand-in-hand with efforts to improve political decisions and accountability,' says Michael J. Puma, research scientist and fellow at the Center for Climate Systems Research, Earth Institute, Columbia University.

These solutions would ease a lot of pressure on resources that are needed for food production and help improve the self-sufficiency of low-income and middle-income countries.

Thus, timely and coordinated international responses are needed to minimise threats to food security especially to low-income and middle-income countries which lack the resources and purchasing power of larger nations, to ensure affordable staple grains for the world's poorest citizens, and to avert a humanitarian crisis.

'It's essential that humanitarian institutions strengthen their efforts to support democratic accountability around the world, which will ultimately help us to avoid severe food insecurity and famine,' concludes Puma.

Credit: 
Aalto University

New concept for rocket thruster exploits the mechanism behind solar flares

image: PPPL physicist Fatima Ebrahimi in front of an artist's conception of a fusion rocket

Image: 
Elle Starkman (PPPL Office of Communications) and ITER

A new type of rocket thruster that could take humankind to Mars and beyond has been proposed by a physicist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL).

The device would apply magnetic fields to cause particles of plasma, electrically charged gas also known as the fourth state of matter, to shoot out the back of a rocket and, because of the conservation of momentum, propel the craft forward. Current space-proven plasma thrusters use electric fields to propel the particles.

The new concept would accelerate the particles using magnetic reconnection, a process found throughout the universe, including the surface of the sun, in which magnetic field lines converge, suddenly separate, and then join together again, producing lots of energy. Reconnection also occurs inside doughnut-shaped fusion devices known as tokamaks.

"I've been cooking this concept for a while," said PPPL Principal Research Physicist Fatima Ebrahimi, the concept's inventor and author of a paper detailing the idea in the Journal of Plasma Physics. "I had the idea in 2017 while sitting on a deck and thinking about the similarities between a car's exhaust and the high-velocity exhaust particles created by PPPL's National Spherical Torus Experiment (NSTX)," the forerunner of the laboratory's present flagship fusion facility. "During its operation, this tokamak produces magnetic bubbles called plasmoids that move at around 20 kilometers per second, which seemed to me a lot like thrust."

Fusion, the power that drives the sun and stars, combines light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei that represents 99% of the visible universe -- to generate massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

Current plasma thrusters that use electric fields to propel the particles can produce only a low specific impulse, or exhaust speed. But computer simulations performed on PPPL computers and at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory in Berkeley, California, showed that the new plasma thruster concept can generate exhaust with velocities of hundreds of kilometers per second, 10 times faster than those of other thrusters.

That faster velocity at the beginning of a spacecraft's journey could bring the outer planets within reach of astronauts, Ebrahimi said. "Long-distance travel takes months or years because the specific impulse of chemical rocket engines is very low, so the craft takes a while to get up to speed," she said. "But if we make thrusters based on magnetic reconnection, then we could conceivably complete long-distance missions in a shorter period of time."
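The link between exhaust velocity and mission time can be made concrete with the ideal (Tsiolkovsky) rocket equation, which is standard rocketry rather than anything specific to this paper: for a fixed propellant mass ratio, the achievable change in velocity scales linearly with exhaust velocity. The mass ratio and exhaust speeds below are arbitrary illustrative values.

```python
# Worked example (illustration only): the ideal rocket equation,
# dv = v_e * ln(m0 / mf), with assumed numbers.
import math

def delta_v(exhaust_velocity_km_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v for a given exhaust velocity
    and initial-to-final mass ratio m0/mf."""
    return exhaust_velocity_km_s * math.log(mass_ratio)

mass_ratio = 5.0  # hypothetical initial/final mass ratio

# ~20 km/s is the plasmoid speed the article cites; "hundreds of km/s"
# (here 200 km/s) represents the 10x faster reconnection-driven exhaust.
conventional = delta_v(20.0, mass_ratio)
reconnection = delta_v(200.0, mass_ratio)

print(round(conventional, 1), round(reconnection, 1))
```

A tenfold increase in exhaust velocity buys a tenfold increase in delta-v for the same propellant fraction, which is why faster exhaust translates into shorter long-distance missions.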

There are three main differences between Ebrahimi's thruster concept and other devices. The first is that changing the strength of the magnetic fields can increase or decrease the amount of thrust. "By using more electromagnets and more magnetic fields, you can in effect turn a knob to fine-tune the velocity," Ebrahimi said.

Second, the new thruster produces movement by ejecting both plasma particles and magnetic bubbles known as plasmoids. The plasmoids add power to the propulsion, and no other thruster concept incorporates them.

Third, unlike current thruster concepts that rely on electric fields, the magnetic fields in Ebrahimi's concept allow the plasma inside the thruster to consist of either heavy or light atoms. This flexibility enables scientists to tailor the amount of thrust for a particular mission. "While other thrusters require heavy gas, made of atoms like xenon, in this concept you can use any type of gas you want," Ebrahimi said. Scientists might prefer light gas in some cases because the smaller atoms can get moving more quickly.

This concept broadens PPPL's portfolio of space propulsion research. Other projects include the Hall Thruster Experiment, which was started in 1999 by PPPL physicists Yevgeny Raitses and Nathaniel Fisch to investigate the use of plasma particles for moving spacecraft. Raitses and students are also investigating the use of tiny Hall thrusters to give small satellites called CubeSats greater maneuverability as they orbit the Earth.

Ebrahimi stressed that her thruster concept stems directly from her research into fusion energy. "This work was inspired by past fusion work and this is the first time that plasmoids and reconnection have been proposed for space propulsion," Ebrahimi said. "The next step is building a prototype!"

Credit: 
DOE/Princeton Plasma Physics Laboratory

Scientists 'farm' natural killer cells in novel cancer fighting approach

image: Two large natural killer immune cells are surrounded by their much smaller exosomes on the NK-GO microfluidic chip developed at the University of Michigan.

Image: 
Image courtesy of Yoon-Tae Kang and Zeqi Niu.

Building on the promise of emerging therapies to deploy the body's "natural killer" immune cells to fight cancer, researchers at the University of Michigan Rogel Cancer Center and U-M College of Engineering have gone one step further.

They've developed what is believed to be the first systematic way to catch natural killer cells and get them to release cancer-killing packets called exosomes. These nano-scale exosomes are thousands of times smaller than natural killer cells -- or NK cells for short -- and thus better able to penetrate cancer cells' defenses.

A proof-of-concept study in blood samples from five patients with non-small cell lung cancer demonstrated that the approach was able to capture natural killer cells on a microfluidic chip and use them to "farm" the NK exosomes.

The multidisciplinary team, which included U-M engineers and oncologists, further demonstrated that the exosomes could effectively kill circulating tumor cells in cell cultures, according to findings published in Advanced Science.

"Exosomes are small sacs of proteins and other molecules that are naturally released by almost every type of cell in the body," says Yoon-Tae Kang, Ph.D., a research fellow in chemical engineering and co-lead author of the study. "In this case, we wanted to expand our understanding of NK exosomes and try to harness their cancer-killing potential."

Compared to NK cells, NK exosomes are more stable and easier to modify for therapeutic purposes, he says. The system also has potential to help diagnose and monitor cancer, the study notes.

Harnessing the power of NK cells has long presented a tantalizing possibility for researchers. Unlike T cells, NK cells don't have to be primed by invader-specific antigens in order to fight off intruders.

"One major bottleneck of NK cell-based therapies, however, is that after the injection of the NK cells into patients, they're not good at infiltrating into the tumor microenvironment," says co-lead author Zeqi Niu, a doctoral student in chemical engineering. "The NK cell-derived exosomes contain the same cancer-killing molecules but they're much, much smaller and better able to penetrate into the tumors."

The technology

While a small number of previous studies had examined NK exosomes' ability to kill cancer cells, there hadn't been a systematic approach to capturing patient-derived NK cells and using them to generate NK exosomes, adds co-senior study author Sunitha Nagrath, Ph.D., an associate professor of chemical engineering at U-M and member of the Rogel Cancer Center.

"The advantage of exosomes is that they're organic, native to your body," she says. "We didn't have to fabricate them. The beauty of the approach is being able to capture the NK cells on a microfluidic chip, incubate them on the chip for a short period of time and then collect the exosomes that are released by the cells. Otherwise, trying to isolate exosomes directly from the blood would be like looking for tiny pebbles in a room packed full of stuff."

The NK exosome harvesting system combines three technologies. First, the researchers capture the NK cells on a graphene oxide microfluidic chip developed at U-M. The cells are then incubated on the chip, prompting them to release exosomes. The exosomes are captured by tiny magnetic beads, called ExoBeads, which are coated with exosome-specific antibodies; the beads are removed from the chip and the NK exosomes separated from them in a different process. In parallel, the team used a liquid biopsy system called Labyrinth, also developed at U-M, to isolate circulating tumor cells from patient blood samples and evaluate correlations between NK cells, NK exosomes and circulating tumor cells.

These early-stage research efforts examined samples from five patients with lung cancer and two healthy controls.

"What we found was that the more NK cells were present in the sample, the fewer circulating tumor cells there were," Niu says. "We also found that the more tumor cells were in a sample, the more NK exosomes were present -- which is a clue that the presence of the cancer is stimulating the NK cells to produce the cancer-fighting exosomes."

Additional experiments showed that NK exosomes derived from patient samples were indeed able to kill circulating tumor cells in cell cultures.

A circulating tumor cell is one that has broken away from a primary tumor and begun traveling through the bloodstream. These can then seed new cancers elsewhere in the body through a process known as metastasis.

"When we first started on this research, we weren't sure if we were going to be able to generate and harvest the NK exosomes using a chip and ExoBeads," Nagrath says. "Or if we'd be able to collect them efficiently enough for potential therapeutic use. The NK cell capture process, the process of separating the NK exosomes from the beads, creating an assay to evaluate the exosomes' ability to kill cancer cells -- there were many components that had to be optimized for this approach to be a success, and each one presented its own challenges."

Exosomes also have good potential to be further engineered and optimized to be more potent and efficient against cancer cells, the researchers note.

Next steps

Additional validation and work will be needed to develop the technology toward potential future clinical use, Kang says.

The number of patient samples included in the study was limited by the exigencies of the COVID-19 pandemic, which resulted in temporary laboratory closures across the U-M campus. The researchers plan to continue the work with a larger number of patient samples. They're also working with collaborators to test the efficacy of the exosomes against tumors in vivo -- in mouse models of cancer.

Additional work is being done on the use of the technology to capture immune cells from the blood to provide doctors with additional information about a patient's cancer. This would be different from most liquid biopsy technology, which focuses on capturing and evaluating the circulating tumor cells.

The researchers also stressed the centrality of the collaboration between physicians and engineers in moving the research forward.

"Immunotherapy approaches that target immune checkpoints have revolutionized cancer therapy for several cancers. However, only a small minority of patients exhibit a durable clinical benefit and cure," says co-senior author Nithya Ramnath, MBBS, an oncologist and professor of internal medicine at Michigan Medicine. "Preliminary studies with NK cells either by themselves or in combination with immune checkpoint inhibitors have, however, have shown only modest results. NK-derived exosomes, on the other hand, are able to home into tumors more effectively. The current paper represents a technological advance in the ability to harvest not only NK cells, but also NK derived exosomes that could help inform future immunotherapeutic approaches."

Credit: 
Michigan Medicine - University of Michigan