Tech

Predicting epilepsy from neural network models

Within the staggeringly complex networks of neurons which make up our brains, the electric currents they convey display intricate dynamics. To better understand how these networks behave, researchers in the past have developed models which aim to mimic their dynamics. In some rare circumstances, their results have indicated that 'tipping points' can occur, where the systems abruptly transition from one state to another: events now commonly thought to be associated with episodes of epilepsy. In a new study published in EPJ B, researchers led by Fahimeh Nazarimehr at the University of Technology, Tehran, Iran, show how these dangerous events can be better predicted by accounting for branches in networks of neurons.

The team's findings could give researchers a better understanding of suddenly occurring episodes including epilepsy and asthma attacks, and may enable them to develop better early warning systems for patients who suffer from them. To this end, the study considered how the dynamics of neuron activity are influenced by branches in the networks they form. Previous models have shown that these dynamics will often slow down at these points, yet so far they have been unable to predict how the process unfolds in larger, more complex networks of neurons.

Nazarimehr's team improved on these techniques using updated models, where the degree to which adjacent neurons influence each other's dynamics can be manually adjusted. In addition, they considered how the dynamics of complex neuron networks compare with those of isolated cells. Together, these techniques enabled the researchers to better predict where branching occurs and, subsequently, how the network's dynamics are affected. Their results represent an advance in our understanding of the brain's intricate structure, and of how the dynamics of the electric currents it contains can be directly related to instances of epilepsy.
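The 'slowing down' signature that underpins such early-warning schemes can be demonstrated on a toy system. The sketch below is a minimal illustration, not the model used in the study: it simulates a noisy system drifting toward a saddle-node tipping point and shows how the lag-1 autocorrelation of its fluctuations rises as the transition approaches.

```python
# Toy illustration of "critical slowing down" as an early-warning signal.
# Hypothetical sketch; not the model used by Nazarimehr and colleagues.
import numpy as np

rng = np.random.default_rng(0)

def simulate(a, n=4000, dt=0.05, noise=0.03):
    """Euler-Maruyama simulation of dx/dt = a + x - x^3 + noise.
    For this potential a saddle-node bifurcation sits near a = 0.385;
    as 'a' approaches it, recovery from perturbations slows down."""
    x = -1.0
    xs = np.empty(n)
    for i in range(n):
        x += (a + x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        xs[i] = x
    return xs

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of the fluctuations: it creeps toward 1
    as the tipping point nears (slower recovery from noise kicks)."""
    d = xs - xs.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

# Push the control parameter toward the bifurcation and watch the signal rise
for a in [0.0, 0.2, 0.3, 0.35]:
    print(f"a = {a:.2f}  lag-1 autocorrelation = {lag1_autocorr(simulate(a)):.3f}")
```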

Credit: 
Springer

SMART researchers design portable device for fast detection of plant stress

image: Portable leaf-clip Raman sensor being used at TLL to detect nutrient stress in leafy vegetables.

Image: 
Singapore-MIT Alliance for Research and Technology (SMART)

Portable device allows rapid detection of deficiency in nitrogen - a nutrient critical for plant health

When tested on popular vegetables such as spinach and kai lan, the device was also able to detect levels of other metabolites, allowing measurement of a wider range of plant stress phenotypes such as drought, heat/cold, saline and light stress

New tool offers economical, sustainable, and environmentally-friendly method to fight food insecurity

Singapore, 8 December 2020 - Researchers from the Disruptive & Sustainable Technologies for Agricultural Precision (DiSTAP) Interdisciplinary Research Group (IRG) of the Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, and Temasek Life Sciences Laboratory (TLL) have designed a portable optical sensor that can monitor whether a plant is under stress. The device offers farmers and plant scientists a new tool for early diagnosis and real-time monitoring of plant health in field conditions.

Precision agriculture is an important strategy for tackling growing food insecurity through sustainable farming practices, but it requires new technologies for rapid diagnosis of plant stresses before the onset of visible symptoms and subsequent yield loss. SMART's new portable Raman leaf-clip sensor is a useful tool in precision agriculture, allowing early diagnosis of nitrogen deficiency in plants, which can be linked to premature leaf deterioration and loss of yield.

In a paper titled "Portable Raman leaf-clip sensor for rapid detection of plant stress" published in the prestigious journal Scientific Reports, SMART DiSTAP and TLL scientists explain how they designed, constructed, and tested the leaf clip that allows the optical sensor to probe the leaf chemistry and establish the stress state.

"Our findings showed that in vivo measurements using the portable leaf-clip Raman sensor under full-light growth conditions were consistent with measurements obtained with a benchtop Raman spectrometer on leaf-sections under laboratory conditions," says MIT Professor Rajeev Ram, co-Lead author of the paper and Principal Investigator at DiSTAP. "We demonstrated that early diagnosis of nitrogen deficiency - a critical nutrient and the most important component of fertilizers - in living plants is possible with the portable sensor."

While the study mainly looked at measuring nitrogen levels in plants, the device can also be used to detect levels of other plant stress phenotypes such as drought, heat and cold stress, saline stress, and light stress. The wide range of plant stressors that can be detected by these leaf-clip Raman probes, together with their simplicity and speed, makes them ideal for field use by farmers to ensure crop health.

"While we have focused on the early and specific diagnosis of nitrogen deficiency using the leaf-clip sensor, we were able to measure peaks from other metabolites that are also clearly observed in popular vegetables such as Kailan, Lettuce, Choy Sum, Pak Choi, and Spinach," says Dr. Chung Hao Huang, co-first author of the paper and Postdoctoral Fellow at TLL.

The team believes their findings can help farmers maximise crop yield while ensuring minimal negative impact on the environment, including minimising pollution of aquatic ecosystems by reducing nitrogen runoff and infiltration into the water table.

"The sensor was demonstrated on multiple vegetable varieties and supports the effort to produce nutritious, low-cost vegetables as part of the Singapore 30 by 30 initiative," says Professor Nam-Hai Chua, co-Lead Principal Investigator at DiSTAP, Deputy Chairman at TLL and co-Lead author of the study. "Extension of this work to a wider variety of crops may contribute globally to improved crop yields, greater climate resiliency, and mitigation of environmental pollution through reduced fertilizer use."

Credit: 
Singapore-MIT Alliance for Research and Technology (SMART)

Natural reward theory could provide new foundation for biology

image: Dr Owen Gilbert, researcher at the Department of Integrative Biology at the University of Texas at Austin (USA) and author of the recent paper suggesting the natural reward theory of evolution.

Image: 
LE Gilbert

A link between evolution over short time frames (microevolution) and long time frames (macroevolution) that could open new approaches to understanding some of biology's deepest questions is proposed by Dr Owen Gilbert of the Department of Integrative Biology at the University of Texas at Austin (USA) in a new paper, published in the open-access, peer-reviewed journal Rethinking Ecology.

In his work, Gilbert suggests that there is an alternative non-random force of evolution, which acts synergistically with natural selection and leads to the increased innovativeness, or advancement, of life with time. The novel concept complements Darwin's theory of evolution and addresses the questions it has left unanswered.

"This could solve the mystery of why life has become more innovative with time," points out Gilbert.

Rather than assume that natural selection applies to long-preserved units like species or clades, or that natural selection works for the long-term goal of "fitness maximization," Gilbert reworked the foundations of evolutionary theory to show that there is room for another non-random force of evolution, natural reward.

Gilbert distinguishes the genetic units and time frames of short-term and long-term evolution. Whereas natural selection alters gene frequencies within species, Gilbert argues, natural reward alters the total abundance of entire genetic systems, including genetic codes, gene networks, and genetic regulatory modules shared by species and higher taxa. Gilbert proposes that natural reward also applies to cycles of invention, expansion and extinction, which happen over thousands to millions of generations, and which, when repeated, extend into deep evolutionary time.

"All previous theories of macroevolution assumed that natural selection is the only non-random force," Gilbert said. "This meant that researchers had to either extrapolate from microevolution to macroevolution, or assign foresight to natural selection--which everyone knows is an adulteration of the theory."

"A main advantage of invoking natural reward as a separate force is that it means natural selection can be used to explain the stepwise origin of complex traits, without assigning omniscience to natural selection." Forming an analogy to economics, Gilbert argues that natural selection plays the role of nature's blind inventor, creating complex "inventions" without an eye to the broader market, while natural reward acts as nature's blind entrepreneur, spreading complex inventions to the markets or environments that immediately demand them.

"With this framework, it becomes possible to clearly separate problems of origin and success, which have long been muddled," Gilbert said. "The result is new insights on major problems of biology."

In the light of the natural reward theory, Gilbert reviews questions of the evolution of evolvability, why sexual reproduction is widespread, the fixation of a single genetic code, and the factors causing apparently sudden bursts of evolutionary change. Gilbert also investigates the question of whether the mammalian replacement of dinosaurs may be considered an advancement of life, culminating with a brief review of the cause of success of human economic systems.

"Only time will tell if the theory of natural reward is correct," Gilbert said. "Existing data show, however, that its main assumptions are justified and that the theory holds promise in yielding new insights on major biological problems."

In his conclusion, Gilbert summarizes the main implication of the natural reward theory, "... advancement is explained as an expected outcome of two deterministic evolutionary forces, natural selection and natural reward, acting together without foresight for the future."

Credit: 
Pensoft Publishers

Beavers may help amphibians threatened by climate change

VANCOUVER, Wash. - The recovery of beavers may have beneficial consequences for amphibians because beaver dams can create the unique habitats that amphibians need.

That finding was reported by four WSU Vancouver scientists in a paper published in the journal Freshwater Biology. The research took place in the Gifford Pinchot National Forest of the Cascade Range, where the researchers identified 49 study sites either with or without beaver dams. The researchers found that the beaver-dammed sites had 2.7 times higher amphibian species richness than the undammed sites.

Certain types of amphibians, particularly those that develop more slowly, such as red-legged frogs and northwestern salamanders, were detected almost exclusively in dammed sites.

"Beaver-dammed wetlands support more of the amphibian species that need a long time to develop in water as larvae before they are able to live on land as adults," said Jonah Piovia-Scott, assistant professor in the School of Biological Sciences and one of the authors of the article.

Beavers, once abundant in the Pacific Northwest, were hunted nearly to extinction in the 19th century. But, in an effort to improve wildlife habitat and mitigate the effects of climate extremes, some land managers are relocating beavers into places they occupied in the past, and beavers' numbers are slowly recovering, which is also benefiting amphibians, according to the study.

Red-legged frogs and northwestern salamanders are also the species most threatened by climate change, which is projected to bring drier summer conditions to streams and wetlands in the Cascade Range. By expanding existing ponds and increasing the time before they dry up, beaver dams are allowing such species more time to reproduce and develop.

"Beavers may be a key component of ecological resilience to climate change in these ecosystems," Piovia-Scott said.

In addition to Piovia-Scott, the authors of the study are Kevan Moffett, assistant professor in the School of the Environment; John Romansic, former postdoctoral scholar in the School of Biological Sciences; and Nicolette Nelson, former graduate student in the School of Biological Sciences.

Credit: 
Washington State University

Breakthrough material makes pathway to hydrogen use for fuel cells under hot, dry conditions

image: Researchers have developed a proton conductor for fuel cells based on polystyrene phosphonic acids that maintain high protonic conductivity at high temperatures without water.

Image: 
Los Alamos National Laboratory, University of Stuttgart (Germany), University of New Mexico, and Sandia National Laboratories

LOS ALAMOS, N.M., Dec. 7, 2020 - A collaborative research team, including Los Alamos National Laboratory, University of Stuttgart (Germany), University of New Mexico, and Sandia National Laboratories, has developed a proton conductor for fuel cells based on polystyrene phosphonic acids that maintain high protonic conductivity up to 200 °C without water. They describe the material advance in a paper published this week in Nature Materials. Hydrogen produced from renewable, nuclear, or fossil fuels with carbon capture, utilization, and storage can help to decarbonize industries and provide environmental benefits, energy resilience, and flexibility across multiple sectors of the economy. To that end, fuel cells are a promising technology that converts hydrogen into electricity through an electrochemical process, emitting only water.

"While the commercialization of highly efficient fuel-cell electric vehicles has successfully begun," said Yu Seung Kim, project leader at Los Alamos, "further technological innovations are needed for the next-generation fuel cell platform evolving towards heavy-duty vehicle applications. One of the technical challenges of current fuel cells is the heat rejection from the exothermic electrochemical reactions of fuel cells.

"We had been struggling to improve the performance of high-temperature membrane fuel cells after we had developed an ion-pair coordinated membrane in 2016," said Kim. "The ion-pair polymers are good for membrane use, but the high content of phosphoric acid dopants caused electrode poisoning and acid flooding when we used the polymer as an electrode binder."

In current fuel cells, the heat rejection requirement is met by operating the fuel cell at a high cell voltage. To achieve an efficient fuel-cell powered engine, the operating temperature of fuel cell stacks must increase to at least the engine coolant temperature (100 °C).

"We believed that phosphonated polymers would be a good alternative, but previous materials could not be implemented because of undesirable anhydride formation at fuel cell operating temperatures. So we have focused on preparing phosphonated polymers that do not undergo the anhydride formation. Kerres' team at the University of Stuttgart was able to prepare such materials by introducing fluorine moiety into the polymer. It is exciting that we have now both membrane and ionomeric binder for high-temperature fuel cells," said Kim.

Ten years ago, Atanasov and Kerres developed a new synthesis for a phosphonated poly(pentafluorostyrene), consisting of two steps: (i) polymerization of pentafluorostyrene via radical emulsion polymerization and (ii) phosphonation of this polymer by a nucleophilic phosphonation reaction. Surprisingly, this polymer showed proton conductivity higher than that of Nafion in the temperature range above 100 °C, along with unexpectedly excellent chemical and thermal stability up to more than 300 °C.

Atanasov and Kerres shared their development with Kim at Los Alamos, whose team in turn developed high-temperature fuel cells to use with the phosphonated polymers. With the integration of a membrane electrode assembly using LANL's ion-pair coordinated membrane (Lee et al., Nature Energy 1, 16120, 2016), fuel cells employing the phosphonated polymer exhibited an excellent power density (1.13 W/cm2 under H2/O2 conditions, with >500 h stability at 160 °C).

What's next? "Reaching over 1 W/cm2 power density is a critical milestone that tells us this technology may successfully go to commercialization," said Kim. Currently, the team is pursuing commercialization of the technology through the Department of Energy's ARPA-E program and the Hydrogen and Fuel Cell Technologies Office within the Office of Energy Efficiency and Renewable Energy (EERE).

Credit: 
DOE/Los Alamos National Laboratory

Measurements of tree height can help cycad conservation decisions

image: The tall, tree-like cycad species known as Cycas micronesica occupies coastal areas of numerous islands in Micronesia but has become endangered due to several non-native insect species that feed on the plants. Recent research shows that measurements of height growth help inform conservation decisions.

Image: 
University of Guam

A multi-national research team has exploited long-term data sets that span 2001 to 2018 to reveal the utility of tree height quantifications in informing conservation decisions of an arborescent cycad species. The field work was led by the University of Guam and targeted Cycas micronesica from the Micronesian Islands of Guam, Tinian, and Yap as the model species. The findings were reported in the journal Plant Signaling & Behavior, appearing online in Volume 15, Issue 12 (doi: 10.1080/15592324.2020.1830237).

"Combining results from various insular habitats enabled a greater understanding of how plant age, sex, and environment influenced height growth," said co-author Murukesan Krishnapillai from the Yap campus of the College of Micronesia-FSM. "Our findings support the contention that cycads do not pursue rapid primary growth as a means of thriving as healthy populations."

The study illuminated a disparity in growth between male and female cycad trees, a phenomenon that has been observed in botanic gardens but never before reported from native habitats. Healthy male trees exhibited height growth that was greater than 3 cm per year, but height growth of healthy female trees was constrained to about 2 cm per year. The authors attributed these sex differences to a greater availability of non-structural resources to support primary growth in the male trees.

"Another interesting finding was a greater rate of growth for plants in managed gardens than for plants in natural settings," Krishnapillai said. "While this phenomenon has been discussed anecdotally, we are the first to use a highly replicated empirical approach to verify this for a cycad species."

Indeed, the height growth rate of garden plants was triple that of plants in natural forest communities. The authors attributed these differences to plant competition for resources in the forest settings and the lack of such competition in the garden settings.

In addition to the reporting of height increment data, the authors discussed the relevance of the new knowledge for informing conservation decisions.

"For example, we showed that height growth of Guam's threatened cycad trees suffering from years of insect damage was much less than that of healthy trees," said co-author Patrick Griffith, executive director of the Montgomery Botanical Center.

The threatened trees exhibited height growth that was about half that of healthy trees.

"If conservation agencies are able to implement effective mitigation actions to recover the plant populations in the future, this new knowledge about height growth rate may be useful for quantifying the resulting recovery of plant health," said Griffith, who also serves as co-chair of the Cycad Specialist Group, a global network for cycad conservation expertise. "I see a great potential for these findings to help conserve other threatened cycads worldwide."

Perhaps the most striking outcomes of the study were the applications of height increment data to population-level interpretations. Invasions of non-native insect herbivores that feed on cycad trees began to occur on Guam in 2003, and the resulting years of plant damage have preferentially killed the smallest demographic groups. The authors used their data to estimate a complete loss of about 70 years of recruitment as of January 2020, indicating that rebuilding a healthy plant population will require more than 70 years after conservation mitigation actions effectively stop the herbivore threats.

Empirical metrics are required for conservationists to quantify the extent of plant population recovery after invasive threat mitigation. The study identified the height index as a valuable metric for determining the success of any future conservation interventions.

Credit: 
University of Guam

How clean electricity can upgrade the value of captured carbon

image: University of Toronto Engineering PhD candidate Geonhui Lee works on an electrolyzer in the lab of Professor Ted Sargent.

Image: 
Photo: Marit Mitchell

A team of researchers from University of Toronto Engineering has created a new process for converting carbon dioxide (CO2) captured from smokestacks into commercially valuable products, such as fuels and plastics.

"Capturing carbon from flue gas is technically feasible, but energetically costly," says Professor Ted Sargent, who serves as U of T's Vice-President, Research and Innovation. "This high energy cost is not yet overcome by compelling market value embodied in the chemical product. Our method offers a path to upgraded products while significantly lowering the overall energy cost of combined capture and upgrade, making the process more economically attractive."

One technique for capturing carbon from smokestacks -- the only one that has been used at commercial-scale demonstration plants -- is to use a liquid solution containing substances called amines. When flue gas is bubbled through these solutions, the CO2 within it combines with the amine molecules to make chemical species known as adducts.

Typically, the next step is to heat the adducts to temperatures above 150 °C in order to release the CO2 gas and regenerate the amines. The released CO2 gas is then compressed so it can be stored. These two steps, heating and compression, account for up to 90% of the energy cost of carbon capture.

Geonhui Lee, a PhD candidate in Sargent's lab, pursued a different path. Instead of heating the amine solution to regenerate CO2 gas, she is using electrochemistry to convert the carbon captured within it directly into more valuable products.

"What I learned in my research is that if you inject electrons into the adducts in solution, you can convert the captured carbon into carbon monoxide," says Lee. "This product has many potential uses, and you also eliminate the cost of heating and compression."

Compressed CO2 recovered from smokestacks has limited applications: it is usually injected underground for storage or to enhance oil recovery.

By contrast, carbon monoxide (CO) is one of the key feedstocks for the well-established Fischer-Tropsch process. This industrial technique is widely used to make fuels and commodity chemicals, including the precursors to many common plastics.

Lee developed a device known as an electrolyzer to carry out the electrochemical reaction. While she is not the first to design such a device for the recovery of carbon captured via amines, she says that previous systems had drawbacks in terms of both their products and overall efficiency.

"Previous electrolytic systems generated pure CO2, carbonate, or other carbon-based compounds which don't have the same industrial potential as CO," she says. "Another challenge is that they had low throughput, meaning that the rate of reaction was low."

In the electrolyzer, the carbon-containing adduct has to diffuse to the surface of a metal electrode, where the reaction can take place. Lee's early experiments showed that the chemical properties of the solution were hindering this diffusion, which in turn inhibited her target reaction.

Lee was able to overcome the problem by adding a common chemical, potassium chloride (KCl), to the solution. Though it doesn't participate in the reaction, the presence of KCl greatly speeds up the rate of diffusion.

The result is that the current density -- the rate at which electrons can be pumped into the electrolyzer and turned into CO -- can be 10 times higher in Lee's design than in previous systems. The system is described in a new paper published today in Nature Energy.

Lee's system also demonstrated high faradaic efficiency, a term that refers to the proportion of injected electrons that end up in the desired product. At a current density of 50 milliamperes per square centimetre (mA/cm2), the faradaic efficiency was measured at 72%.
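For readers who want the arithmetic: faradaic efficiency ties the electric charge passed to the amount of product formed. A back-of-the-envelope sketch, assuming the two-electron reduction of captured CO2 to CO and using the figures quoted above, looks like this:

```python
# Back-of-the-envelope CO production from the figures quoted above.
# Assumes the 2-electron reduction CO2 + 2H+ + 2e- -> CO + H2O.
F = 96485.0          # Faraday constant, C per mol of electrons
j = 0.050            # current density, A/cm^2 (50 mA/cm^2)
fe = 0.72            # faradaic efficiency: fraction of electrons ending up in CO
n = 2                # electrons transferred per CO molecule

co_rate = j * fe / (n * F)              # mol CO per second per cm^2 of electrode
print(f"CO production: {co_rate:.2e} mol s^-1 cm^-2")
print(f"             = {co_rate * 3600 * 28.01:.3f} g of CO per hour per cm^2")
```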

While both the current density and efficiency set new records for this type of system, there is still some distance to go before it can be applied on a commercial scale.

Credit: 
University of Toronto Faculty of Applied Science & Engineering

A study predicts smooth interaction between humans and robots

According to a new study by Tampere University in Finland, making eye contact with a robot may have the same effect on people as eye contact with another person. The results predict that interaction between humans and humanoid robots will be surprisingly smooth.

With the rapid progress in robotics, it is anticipated that people will increasingly interact with so-called social robots in the future. Despite the artificiality of robots, people seem to react to them socially and ascribe human attributes to them. For instance, people may perceive different qualities - such as knowledgeability, sociability, and likeability - in robots based on how they look and/or behave.

Previous surveys have been able to shed light on people's perceptions of social robots and their characteristics, but the very central question of what kind of automatic reactions social robots evoke in us humans has remained unanswered. Does interacting with a robot cause similar reactions as interacting with another human?

Researchers at Tampere University investigated the matter by studying the physiological reactions that eye contact with a social robot evokes. Eye contact was chosen as the topic of the study for two major reasons. First, previous results have shown that certain emotional and attention-related physiological responses are stronger when people see the gaze of another person directed to them compared to seeing their averted gaze. Second, directing the gaze either towards or away from another person is a type of behaviour related to normal interaction that even current social robots are quite naturally capable of.

In the study, the research participants were face to face with another person or a humanoid robot. The person and the robot looked either directly at the participant, making eye contact, or averted their gaze. At the same time, the researchers measured the participants' skin conductance, which reflects the activity of the autonomic nervous system; the electrical activity of the cheek muscle, which reflects positive affective reactions; and heart rate deceleration, which indicates the orienting of attention.

The results showed that all the above-mentioned physiological reactions were stronger for eye contact than for averted gaze, whether shared with another person or with a humanoid robot. Eye contact with the robot, as with another human, focused the participants' attention, raised their level of arousal and elicited a positive emotional response.

"Our results indicate that the non-linguistic, interaction-regulating cues of social robots can affect humans in the same way as similar cues presented by other people. Interestingly, we respond to signals that have evolved over the course of evolution to regulate human interaction even when these signals are transmitted by robots. Such evidence allows us to anticipate that as robot technology develops, our interaction with the social robots of the future may be surprisingly seamless," says doctoral researcher Helena Kiilavuori.

"The results were quite astonishing for us, too, because our previous results have shown that eye contact only elicits the reactions we perceived in this study when the participants know that another person is actually seeing them. For example, in a video conference, eye contact with the person on the screen does not cause these reactions if the participant knows that his or her own camera is off, and the other person is unable to see him or her. The fact that eye contact with a robot produces such reactions indicates that even though we know the robot is a lifeless machine, we treat it instinctively as if it could see us. As if it had a mind which looked at us," says Professor of Psychology Jari Hietanen, director of the project.

Credit: 
Tampere University

Newly discovered Greenland plume drives thermal activities in the Arctic

image: A seismic station on the Greenland Ice Sheet installed by the authors. Snow accumulation in one year is ~1.5 m, and the solar panels become buried in the snow. Snow removal and maintenance are done manually by several people.

Image: 
Genti Toyokuni

A team of researchers has deepened our understanding of the melting of the Greenland ice sheet. They discovered a flow of hot rock, known as a mantle plume, rising from the core-mantle boundary beneath central Greenland that melts the ice from below.

The results of their two-part study were published in the Journal of Geophysical Research.

"Knowledge about the Greenland plume will bolster our understanding of volcanic activities in these regions and the problematic issue of global sea-level rising caused by the melting of the Greenland ice sheet," said Dr. Genti Toyokuni, co-author of the studies.

The North Atlantic region is awash with geothermal activity. Iceland and Jan Mayen contain active volcanoes with their own distinct mantle plumes, whilst Svalbard - a Norwegian archipelago in the Arctic Ocean - is a geothermal area. However, the origin of these activities and their interconnectedness has largely been unexplored.

The research team discovered that the Greenland plume rose from the core-mantle boundary to the mantle transition zone beneath Greenland. The plume also has two branches in the lower mantle that feed into other plumes in the region, supplying heat to active regions in Iceland and Jan Mayen and the geothermal area in Svalbard.

Their findings were based on measurements of the 3-D seismic velocity structure of the crust and whole mantle beneath these regions. To obtain the measurements, they used seismic tomography: numerous seismic wave arrival times were inverted to obtain 3-D images of the underground structure. The method works similarly to a CT scan of the human body.
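In outline, such an inversion resembles a large linear least-squares fit: each arrival time is modeled as the sum, over the grid cells a ray crosses, of path length times local slowness (the reciprocal of velocity), and the slowness field that best explains all the arrival times is solved for. The toy sketch below illustrates this general principle on a small synthetic grid; it is not the authors' code or parameterization.

```python
# Toy travel-time tomography: recover a slowness (1/velocity) anomaly
# from ray travel times by damped linear least squares.
# Illustrates the general principle only -- not the authors' method.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_rays = 25, 200          # 5x5 grid of cells, 200 rays

# G[i, j] = length of ray i inside cell j (random sparse paths here;
# in a real study these come from tracing rays through the Earth)
G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) < 0.3)

s_true = np.full(n_cells, 0.25)    # background slowness, s/km (4 km/s)
s_true[12] += 0.05                 # a slow (hot) anomaly in the central cell

t_obs = G @ s_true + 0.001 * rng.standard_normal(n_rays)  # noisy arrival times

# Damped least squares: minimize ||G s - t||^2 + damp * ||s - s0||^2
damp = 0.1
s0 = np.full(n_cells, 0.25)
A = G.T @ G + damp * np.eye(n_cells)
s_est = np.linalg.solve(A, G.T @ t_obs + damp * s0)

print(f"true anomaly cell slowness: {s_true[12]:.3f}, recovered: {s_est[12]:.3f}")
```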

Toyokuni was able to utilize seismographs he installed on the Greenland ice sheet as part of the Greenland Ice Sheet Monitoring Network. Set up in 2009, the project brings together researchers from 11 countries. The US-Japan joint team is primarily responsible for the construction and maintenance of the three seismic stations on the ice sheet.

Looking ahead, Toyokuni hopes to explore the thermal process in more detail. "This study revealed the larger picture, so examining the plumes at a more localized level will reveal more information."

Credit: 
Tohoku University

Grasping an object - model describes complete movement planning in the brain

image: A rhesus macaque (Macaca mulatta) wearing a data glove for detailed hand and arm tracking.

Image: 
Ricarda Lbik / German Primate Center

Every day we effortlessly make countless grasping movements. We take a key in our hand, open the front door by operating the door handle, then pull it closed from the outside and lock it with the key. What comes naturally to us is based on a complex interaction between our eyes, different regions of the brain and, ultimately, the muscles of the arm and hand. Neuroscientists at the German Primate Center (DPZ) - Leibniz Institute for Primate Research in Göttingen have now succeeded for the first time in developing a model that seamlessly represents the entire planning of a movement, from seeing an object to grasping it. Comprehensive neural and motor data from grasping experiments with two rhesus monkeys provided the decisive basis for developing the model: an artificial neural network that, when fed with images showing certain objects, can simulate the processes and interactions in the brain that process this information. The neuronal data from the artificial network model were able to explain the complex biological data from the animal experiments and thus demonstrate the validity of the functional model. In the long term, the model could be used to develop better neuroprostheses, for example to bridge the damaged nerve connection between brain and extremities in paraplegia and thus restore the transmission of movement commands from the brain to arms and legs (PNAS).

Rhesus monkeys, like humans, have a highly developed nervous and visual system as well as dexterous hand motor control. For this reason, they are particularly well suited for research into grasping movements. From previous studies in rhesus monkeys it is known that the interaction of three brain areas is responsible for grasping a targeted object. Until now, however, there has been no detailed model at the neural level to represent the entire process from the processing of visual information to the control of arm and hand muscles for grasping that object.

In order to develop such a model, two male rhesus monkeys were trained to grasp 42 objects of different shapes and sizes, presented to them in random order. The monkeys wore a data glove that continuously recorded the movements of arm, hand and fingers. In each trial, the object to be grasped was first briefly illuminated while the monkeys looked at a red dot below it; after a blink signal, the monkeys performed the grasping movement with a short delay. These conditions provide information about the times at which the different brain areas are active in order to generate the grasping movement and the associated muscle activations based on the visual signals.

In the next step, images of the 42 objects, taken from the perspective of the monkeys, were fed into an artificial neural network on the computer whose functionality mimicked the biological processes in the brain. The network model consisted of three interconnected stages, corresponding to the three cortical brain areas of the monkeys, and provided meaningful insights into the dynamics of the brain networks. After appropriate training with the behavioral data of the monkeys, the network was able to precisely reflect the grasping movements of the rhesus monkeys: it could process images of recognizable objects and reproduce the muscle dynamics required to grasp the objects accurately.
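As a rough structural analogy, such a model can be sketched as three recurrent stages wired in sequence, with visual features entering the first stage and muscle activations read out from the last. The toy implementation below is hypothetical; the layer sizes, names, and use of simple RNN cells are illustrative choices, not the DPZ group's published architecture.

```python
# Hypothetical sketch of a three-stage recurrent network mapping visual
# features to muscle activations, loosely mirroring the three cortical
# areas described above. Sizes and names are invented for illustration.
import torch
import torch.nn as nn

class GraspingModel(nn.Module):
    def __init__(self, n_visual=64, n_hidden=100, n_muscles=30):
        super().__init__()
        # three interconnected recurrent stages, applied in sequence
        self.visual_stage   = nn.RNN(n_visual, n_hidden, batch_first=True)
        self.planning_stage = nn.RNN(n_hidden, n_hidden, batch_first=True)
        self.motor_stage    = nn.RNN(n_hidden, n_hidden, batch_first=True)
        self.muscle_readout = nn.Linear(n_hidden, n_muscles)

    def forward(self, visual_features):
        # visual_features: (batch, time, n_visual), e.g. CNN features of the object
        h1, _ = self.visual_stage(visual_features)
        h2, _ = self.planning_stage(h1)
        h3, _ = self.motor_stage(h2)
        return self.muscle_readout(h3)   # (batch, time, n_muscles)

model = GraspingModel()
fake_input = torch.randn(8, 50, 64)      # 8 trials, 50 time steps each
muscles = model(fake_input)
print(muscles.shape)                     # torch.Size([8, 50, 30])
```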

The results obtained using the artificial network model were then compared with the biological data from the monkey experiment. It turned out that the neural dynamics of the model were highly consistent with the neural dynamics of the cortical brain areas of the monkeys. "This artificial model describes for the first time in a biologically realistic way the neuronal processing from seeing an object for object recognition, to action planning and hand muscle control during grasping", says Hansjörg Scherberger, head of the Neurobiology Laboratory at the DPZ, and he adds: "This model contributes to a better understanding of the neuronal processes in the brain and in the long term could be useful for the development of more efficient neuroprostheses."

Credit: 
Deutsches Primatenzentrum (DPZ)/German Primate Center

Split wave

image: Switching points of the brain are simulated with magnetic waves, which are specifically generated and divided using nonlinear processes within microscopically small vortex discs.

Image: 
HZDR/Sahneweiß/H. Schultheiß

Neural networks are some of the most important tools in artificial intelligence (AI): they mimic the operation of the human brain and can reliably recognize texts, language and images, to name but a few. So far, they run on traditional processors in the form of adaptive software, but experts are working on an alternative concept, the "neuromorphic computer". In this case, the brain's switching points - the neurons - are not simulated by software but reconstructed in hardware components. A team of researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now demonstrated a new approach to such hardware - targeted magnetic waves that are generated and divided in micrometer-sized wafers. Looking to the future, this could mean that optimization tasks and pattern recognition could be completed faster and more energy efficiently. The researchers have presented their results in the journal Physical Review Letters (DOI: 10.1103/PhysRevLett.125.207203).

The team based its investigations on a tiny disc of the magnetic material nickel-iron, with a diameter of just a few micrometers. A gold ring is placed around this disc: when an alternating current in the gigahertz range flows through it, it emits microwaves that excite so-called spin waves in the disc. "The electrons in the nickel-iron exhibit a spin, a sort of whirling on the spot rather like a spinning top," explains Helmut Schultheiß, head of the Emmy Noether Group "Magnonics" at HZDR. "We use the microwave impulses to throw the electron top slightly off course." The electrons then pass on this disturbance to their respective neighbors, which causes a spin wave to shoot through the material. Information can be transported highly efficiently in this way without having to move the electrons themselves, which is what occurs in today's computer chips.

Back in 2019, the Schultheiß group discovered something remarkable: under certain circumstances, the spin wave generated in the magnetic vortex can be split into two waves, each with a reduced frequency. "So-called non-linear effects are responsible for this," explains Schultheiß's colleague Lukas Körber. "They are only activated when the irradiated microwave power crosses a certain threshold." Such behavior suggests spin waves as promising candidates for artificial neurons because there is an amazing parallel with the workings of the brain: these neurons also only fire when a certain stimulus threshold has been crossed.
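The parallel with a firing neuron can be made concrete in a few lines of code. The toy model below captures only the shared input-output behavior, silence below a threshold and an abrupt onset above it, with made-up numbers; it is not a simulation of the underlying magnon physics.

```python
# Toy threshold "neuron": output appears only above a critical input power,
# echoing how the spin wave splits only above a microwave power threshold.
# Made-up numbers for illustration; not a simulation of the spin-wave physics.

THRESHOLD_MW = 1.0   # assumed critical microwave power (arbitrary units)

def split_amplitude(power_mw: float) -> float:
    """Amplitude of the frequency-halved waves: zero below threshold,
    growing with the overdrive above it (a square-root onset is typical
    of such nonlinear instabilities)."""
    overdrive = power_mw - THRESHOLD_MW
    return overdrive ** 0.5 if overdrive > 0 else 0.0

for p in [0.5, 0.9, 1.0, 1.2, 2.0]:
    print(f"input power {p:.1f} -> split-wave amplitude {split_amplitude(p):.2f}")
```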

Microwave decoy

At first, however, the scientists were unable to control the division of the spin wave very precisely. Körber explains why: "When we sent the microwave into the disc, there was a time lag before the spin wave divided into two new waves. And this was difficult to control." So the team had to devise a workaround, which they have now described in Physical Review Letters: in addition to the gold ring, a small magnetic strip is attached close to the magnetic wafer. A short microwave signal generates a spin wave in this strip, which can interact with the spin wave in the wafer and thus act as a kind of decoy. The spin wave in the strip causes the wave in the wafer to divide faster. "A very short additional signal is sufficient to make the split happen faster," Körber explains. "This means we can now trigger the process and control the time lag."

This also means that, in principle, spin-wave wafers have been proven suitable for artificial hardware neurons - they switch similarly to nerve cells in the brain and can be directly controlled. "The next thing we want to do is build a small network with our spin wave neurons," Helmut Schultheiß announces. "This neuromorphic network should then perform simple tasks such as recognizing straightforward patterns."

Facial recognition and traffic optimization

Pattern recognition is one of the major applications of AI. Facial recognition on a smartphone, for instance, obviates the need for a password. In order for it to work, a neural network must be trained in advance, which involves huge computing power and massive amounts of data. Smartphone manufacturers transfer this network to a special chip that is then integrated in the cell phone. But the chip has a weakness: it is not adaptive, so it cannot recognize faces wearing a Covid mask, for example.

A neuromorphic computer, on the other hand, could also deal with situations like this: in contrast to conventional chips, its components are not hard wired but function like nerve cells in the brain. "Because of this, a neuromorphic computer could process big volumes of data at once, just like a human - and very energy efficiently at that," Schultheiß enthuses. Apart from pattern recognition, the new type of computer could also prove useful in another economically relevant field: for optimization tasks such as high-precision smartphone route planners.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

Quick and sensitive identification of multidrug-resistant germs

video: Researchers from the University of Basel have developed a sensitive testing system that allows the rapid and reliable detection of resistance in bacteria. The system is based on tiny, functionalized cantilevers that bend due to binding of sample material. In the analyses, the system was able to detect resistance in a sample quantity equivalent to 1-10 bacteria.

Image: 
Swiss Nanoscience Institute, University of Basel

Researchers from the University of Basel have developed a sensitive testing system that allows the rapid and reliable detection of resistance in bacteria. The system is based on tiny, functionalized cantilevers that bend due to binding of sample material. In the analyses, the system was able to detect resistance in a sample quantity equivalent to 1-10 bacteria.

Bacteria that are no longer susceptible to various antibiotics pose a significant threat to our health. In the event of a bacterial infection, physicians require rapid information about potential resistance so that they can respond quickly and correctly.

Cantilever systems as an alternative

Traditional methods for detecting resistance are based on cultivating bacteria and testing their sensitivity to a spectrum of antibiotics. These methods take 48 to 72 hours to deliver results, and some strains of bacteria are difficult to cultivate. Molecular biological tests are a great deal faster and work by amplifying resistance genes or specific short sequences of genetic material by polymerase chain reaction (PCR), but even this method doesn't deliver satisfactory results for every bacterium.

An alternative comes in the form of methods using tiny cantilevers, which bend when RNA molecules bind to their surface, for example -- and this bending can then be detected. RNA molecules are "transcripts" of genes and can be used as instructions for building proteins. In addition, RNA molecules can be used to detect resistance genes in the genetic material of bacteria.

No need for labelling or amplification

Writing in the journal Global Challenges, a team of scientists from the Department of Physics, the Department of Biomedicine and the Swiss Nanoscience Institute (SNI) at the University of Basel have now presented a cantilever testing system that allowed them to detect RNA from a single antibiotic resistant bacterium. With the new cantilever system, it is not necessary to amplify or label the samples for analysis.

The researchers began by attaching sequences of three genes associated with vancomycin resistance to the cantilevers and then exposed these prepared cantilevers to a flow of RNA extracted from bacteria. If RNA molecules from the resistance genes were present, the matching RNA fragments would bind to the cantilevers, causing them to undergo nanoscale deflection that could be detected using a laser.
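Differential read-out against a reference lever is a common way to make such measurements robust: a reference cantilever coated with a non-matching sequence picks up thermal drift and unspecific binding, which then cancel in the difference signal. The sketch below illustrates this general scheme with made-up numbers; it is a hypothetical illustration, not the Basel group's analysis code.

```python
# Generic differential cantilever read-out: binding is called when the
# sensing-minus-reference deflection exceeds a noise-based threshold.
# Assumed numbers; not the analysis code from the Basel study.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 300, 600)                      # seconds of sample flow

drift = 0.02 * t                                  # common-mode thermal drift, nm
reference = drift + 0.5 * rng.standard_normal(t.size)
binding = 8.0 / (1 + np.exp(-(t - 120) / 20))     # nm, RNA hybridization signal
sensing = drift + binding + 0.5 * rng.standard_normal(t.size)

differential = sensing - reference                # drift cancels here

baseline = differential[t < 60]
threshold = baseline.mean() + 5 * baseline.std()  # 5-sigma detection criterion
detected = differential[-50:].mean() > threshold
print(f"final differential deflection: {differential[-50:].mean():.1f} nm "
      f"-> {'resistance RNA detected' if detected else 'no binding detected'}")
```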

A clear signal even with point mutations

This method allowed the detection of not only resistance genes, but also individual point mutations associated with them. To study this, the researchers used point mutations coupled to genes responsible for resistance to ampicillin and other beta-lactam antibiotics.

"The big advantage of the method we've developed is its speed and sensitivity," says Dr. François Huber, first author of the paper. "We succeeded in detecting tiny quantities of specific RNA fragments within five minutes." In the case of single mutations, the detected RNA quantities corresponded to about 10 bacteria. When it came to detecting entire resistance genes, the researchers obtained a clear signal even with an amount of RNA that corresponded to a single bacterium.

"If we can detect specific genes or mutations in the genome of bacteria, then we know what antibiotic resistance the bacteria will exhibit," explains Professor Adrian Egli from University Hospital Basel, whose team played an essential role in the study. "Our work in the hospital would benefit from this kind of reliable and sensitive information about the resistance of pathogens."

Credit: 
Swiss Nanoscience Institute, University of Basel

Bend, don't break: new tool enables economic glass design

image: The user can easily adapt their original concept to create an impressive glass façade fabricable with cold bending.

Image: 
© Ruslan Guseinov / IST Austria

Curved glass façades can be stunningly beautiful, but traditional construction methods are extremely expensive. Panes are usually made with "hot bending", where glass is heated and formed using a mold or specialized machines, an energy-intensive process that generates excess waste in the form of individual molds. Cold-bent glass is a cheaper alternative in which flat panes of glass are bent and fixed to frames at the construction site. However, given the fragility of the material, coming up with a form that is both aesthetically pleasing and manufacturable is extremely challenging. Now, an interactive, data-driven design tool allows architects to do just that.

Created by a team of scientists from IST Austria, TU Wien, UJRC, and KAUST, the software allows users to interactively manipulate a façade design and receive immediate feedback on the fabricability and aesthetics of the panelization - a very convenient way of navigating the various realizations of the designer's intentions. The software is based on a deep neural network trained on special physical simulations to predict glass panel shapes and fabricability. In addition to allowing users to interactively adapt an intended design, it can automatically optimize a given design, and it can be easily integrated into an architect's usual workflow. The software and research results were presented at SIGGRAPH Asia 2020.

Hot-bent and cold-bent glass

Hot-bent glass has been in use since the 19th century, though it was not until the 1990s that it became generally available. Still, the process remains prohibitively expensive and the logistics of transporting bent glass are complicated. An alternative, cold-bent glass, was developed around ten years ago. It is cheaper to make and easier to transport, and its geometric and visual quality is better than that of hot-bent glass. The technique also allows architects to make use of special types of glass and to accurately estimate the deformation stress on the panels.

The issue is that designing cold-bent glass façades poses an enormous computational problem. Ruslan Guseinov, IST Austria postdoc and co-first author, explains: "While it is possible to calculate when an individual panel will break, or provide a safety margin for additional loads, working with the full façade - which often comprises thousands of panels - is simply too complex for conventional design tools." Moreover, recomputing stresses and shapes with traditional computational methods after every design change would take far too long to be usable.

Enabling a new technology

Thus, the team's goal was to create software that would allow a (non-expert) user to interactively edit a surface while receiving real-time information on the bent shape and the associated stresses for each individual panel. They decided on a data-driven approach: the team ran more than a million simulations to build a database of possible curved glass shapes, represented in a computer-aided design (CAD) format conventional in architecture. Then, a deep neural network (DNN) was trained on this data. This DNN precisely predicts one or two possible glass panel shapes for a given quadrangular boundary frame; these can then be used in a façade sketched by an architect.

That the DNN predicted several shapes was "one of the most surprising aspects of the DNN," adds Konstantinos Gavriil, co-first author and researcher at TU Wien. "We knew that a given boundary does not uniquely define the panel, but we didn't anticipate that the DNN would be able to find multiple solutions, even though it had never seen two alternative panels for a single boundary." From the set of solutions, the program selects the pane geometry that best fits the façade design, taking into account characteristics such as smoothness of frames and reflections.
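In spirit, such a surrogate works like any regression network: the quadrilateral boundary frame is encoded as a fixed-length feature vector, and the network returns shape coefficients together with a stress estimate that can be checked against a safe limit in real time, fast enough for interactive editing. The sketch below is a schematic stand-in with invented layer sizes, encoding, and stress limit, not the published model (which, as noted above, can also propose alternative panel shapes for a single boundary).

```python
# Schematic stand-in for a fabricability surrogate: boundary-frame features
# in, panel-shape coefficients and a peak-stress estimate out.
# Invented sizes, encoding, and stress limit -- not the published model.
import torch
import torch.nn as nn

class PanelSurrogate(nn.Module):
    def __init__(self, n_frame=24, n_shape=16):
        # n_frame: boundary encoding, e.g. sampled positions/tangents of the
        # four frame edges; n_shape: coefficients of the bent panel surface
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_frame, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_shape + 1),   # shape coefficients + peak stress
        )

    def forward(self, frame):
        out = self.net(frame)
        return out[..., :-1], out[..., -1]   # (shape, predicted peak stress)

surrogate = PanelSurrogate()
frames = torch.randn(1000, 24)               # one façade = many panels at once
shapes, stresses = surrogate(frames)

STRESS_LIMIT = 1.0                            # assumed safe limit (normalized)
print(f"panels flagged as unsafe: {(stresses > STRESS_LIMIT).sum().item()}")
```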

The user can then adapt their model to reduce stress and otherwise improve the overall appearance. If this proves too difficult, the user can automatically optimize the design at any time, which gives a "best fit" solution that significantly reduces the number of infeasible panels. In the end, either all panels can be safely constructed, or the user can choose to hot bend a few of them. Once the user is satisfied with the form, the program exports the flat panel shapes and frame geometries necessary for the construction of the façade.

Accuracy and efficiency

To test the accuracy of the simulations, the team manufactured frames and glass panels, including panels under extremely high stress. In the worst case, they observed a minuscule deviation from the predicted shapes (less than the panel thickness), and all panels were fabricable as expected. The team further verified that the data-driven model faithfully (and efficiently) reproduced the output of the simulations.

"We believe we have created a novel, practical system that couples geometric and fabrication-aware design and allows designers to efficiently find a balance between economic, aesthetic, and engineering criteria," concludes Bernd Bickel, professor at IST Austria. In the future, the program could be expanded to include additional features for practical architectural design, or be used to explore different materials and more complex mechanical models.

Credit: 
Institute of Science and Technology Austria

Participation in competitive sport in adolescence brings midlife health benefits to women

image: According to a new study, females who participate in competitive sport during adolescence have better fitness at midlife than do females with no competitive sport background in adolescence.

Image: 
University of Jyväskylä

Females who participate in competitive sport during adolescence have better fitness at midlife than do females with no competitive sport background in adolescence, reveals a study conducted at the University of Jyväskylä. Higher lean mass and bone density and better physical performance at midlife were associated with competitive sport participation at the age of 13 to 16 years. The study also found that bone density was lower if a woman had had her first period at age 14 years or older.

The findings emphasize the link between adolescent competitive sport participation and body composition, bone health and physical performance later in life.

"The relationship between adolescent participation in competitive sport and midlife health benefits were seen even if we took midlife physical activity into account," says Suvi Ravi, a PhD student and the corresponding author at the Faculty of Sport and Health Sciences.

"We also investigated if individuals engaged in competitive sport in adolescence had more musculoskeletal problems than those with no competitive sport background in adolescence, but did not found any association between those factors," Ravi explains.

Another main finding of the study was that women whose first period had occurred at age 14 years or older had lower bone density than women whose first period had started at age 12 or younger. In women under menopausal age, this association was found even when physical activity in adolescence and at midlife was taken into account.

"It seems that a later age for a female's first period is associated with lower bone density regardless of competitive sport participation in adolescence," Ravi says. "It is known that some female athletes have their first period well above the average menarcheal age, so it is important to pay attention to girls whose menarche is delayed. Nowadays it is thought that menarche is belated if periods have not occurred by the age of 15 years."

This study was part of the Estrogenic Regulation of Muscle Apoptosis (ERMA) study conducted at the University of Jyväskylä and the Gerontology Research Center and led by Academy Research Fellow Eija Laakkonen. Nearly 1,100 (n = 1098) women between the ages of 47 and 55 participated in this part of the ERMA project, in which the associations between adolescent physical activity, age at menarche and midlife characteristics were investigated.

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

New geological findings from eastern Fennoscandia add new dimensions to the history of the European Ice Age

image: Map showing the localities in NW Europe and Fennoscandia where deposits representing the different warm intervals during the last 420,000 years of the Ice Age are preserved. The grey ellipsoid indicates the location of the revised localities in the study. The thick grey W-E zone shows the approximate position of the inferred northern limit of boreal pine forest in the Late Brörup temperate interval c. 100,000 years ago.

Image: 
Matti Räsänen

In Finland, the majority of the glacial and warm interval records have been interpreted to represent only the last, Weichselian, glacial cycle that took place 11,700-119,000 years ago. Finnish researchers have now revised the crucial part of the existing stratigraphic documentation in southern Finland. The new findings show that a considerable part of the warm interval records extends further back in time than earlier thought. The new results change the established conceptions about glacial history in the area.

The new study conducted at the University of Turku has examined geological stratigraphic sequences in southern and central Finland. The material collected during the study was compared with corresponding stratigraphic sequences in Fennoscandia, the Baltic countries and Europe.

- One of the studied warm interval records may be circa 300,000-400,000 years old. The forests in southern Finland were then composed of Scots pine and Norway spruce and contained larch, fir and possibly some species related to the present-day Strobus pines, says Professor of Geology Matti E. Räsänen.

A major part of the revised warm interval records is, however, attributed to the Röpersdorf-Schöningen interglacial circa 200,000 years ago. The study led by Räsänen has, for the first time, managed to reconstruct the paleogeography, vegetation and climate of this regional interglacial in Fennoscandia. During this interglacial period, ocean levels were nearly 20 m lower than today, and the Gulf of Bothnia hosted freshwater lakes surrounded by boreal pine forests.

- This is why Finland had a continental climate with warmer summers and colder winters than today. The forests were dominated by Scots pine, and Siberian spruce was growing even in southern Finland. Several species that nowadays grow on the East European Plain and in Southeast Europe were also growing in southern Finland, explains Räsänen.

During the Eemian interglacial 119,000-131,000 years ago, ocean levels were four to six metres higher than today and the Baltic basin was well connected to oceans.

- The dinoflagellate, silicoflagellate and diatom microfossils discovered from the stratigraphic sequences show detailed evidence of the widespread intermixing of continental fresh and marine waters within the shallow Eemian sea coastal waters.

Beginning of the Last Ice Age Cooler than Thought

Most importantly, the research results change the established conceptions about the nature of the temperate Brörup interval at the beginning of the last, Weichselian, glacial cycle circa 100,000 years ago. The findings from Björkö Island in the UNESCO World Heritage Site of the Kvarken Archipelago suggest that during this interval, central and southern Finland supported open birch forest tundra, which was later invaded by spruce, but not boreal pine forests as earlier thought.

- The northern limit of pine forests seems to have been located at the Gulf of Riga in the Baltic region. The climate in southern Finland was thus considerably cooler than previously thought. These results are important as they provide background information for the modelling of future climate, concludes Räsänen.

Credit: 
University of Turku