Tech

A weather station for epileptic seizures

image: In common with weather disturbances, there are several time scales in epileptic brain activity. These can be used to predict the onset of a seizure one to several days in advance.

Image: 
© UNIGE/Mélanie Proix

A third of epilepsy sufferers are resistant to treatment for this neurological disease, which affects 1% of the population. The onset of seizures is unpredictable, and attempts to forecast it have been fruitless since the 1970s. The unforeseeable nature of the disease means patients are forced to take medication and/or adjust their lifestyles. Neuroscientists from the University of Geneva (UNIGE) and the University Hospital of Bern (Inselspital) - working with the University of California, San Francisco (UCSF) and Brown University in Providence - have succeeded in developing a technique that can predict seizures between one and several days in advance. By recording neuronal activity over at least six months using a device implanted directly in the brain, it is possible to detect individual cycles of epileptic activity and provide information about the probability of a future seizure. This approach, published in the journal Lancet Neurology, is remarkably reliable, and prospective clinical trials are now in the pipeline.

An epileptic brain can switch suddenly from a physiological state to a pathological state, characterised by a disturbance of neuronal activity that can cause, among other things, the convulsions typical of an epileptic seizure. How and why the brain swaps one state for the other is still poorly understood, with the result that the onset of a seizure is difficult, if not impossible, to predict. «Specialists worldwide have been trying for over 50 years to predict seizures a few minutes in advance, but with limited success,» explains Timothée Proix, a researcher in the Department of Fundamental Neurosciences in UNIGE's Faculty of Medicine. Seizures do not appear to be preceded by any obvious warning signs that would make prediction easier, and their frequency varies from once a year to once a day, depending on the individual.

«It's a huge problem for patients», begins Maxime Baud, a neurologist at Inselspital. «This unpredictability is associated with a permanent threat that obliges patients to take medication on a daily basis. And in many cases, it prevents them from participating in certain sports. Living with this hanging over you can also affect your mental health». Existing treatments are often difficult to bear: they depend on drugs with numerous potential side effects to reduce neuronal excitability and sometimes involve neurosurgery to remove the epileptic focus, i.e. the starting point of the brain seizures. Moreover, a quarter of patients do not respond to these treatments, meaning they have to learn to manage the chronicity of their disease.

Weather forecasting

Epileptic activity can be measured using cerebral electrical activity data recorded by electroencephalography. This can be used to identify interictal discharges - evanescent discharges that appear in between seizures without directly causing them. «We observe clinically that epileptic seizures recur in clusters and cyclically. To ascertain whether the interictal discharges can explain these cycles and forecast the onset of a seizure, we analysed the data in greater detail», continues Dr Baud.

To do this, Baud collaborated with Vikram Rao, a neurologist at UCSF, to obtain neuronal activity data collected over several years using devices implanted long-term in the brains of patients with epilepsy. After confirming that there were indeed cycles of cerebral epileptic activity, the scientists turned their attention to statistical analysis. This approach helped highlight a phenomenon known as the «pro-ictal state», in which the probability of the onset of a seizure is high. «As with weather disturbances, there are several time scales in epileptic brain activity», points out Dr Baud. «The weather is influenced by the cycle of the seasons or day and night. On an intermediate scale, when a weather front approaches, the probability of rain increases for several days and is therefore easier to predict. These three scales of cyclic regulation also exist for epilepsy.»

The right timeframe

The electrical activity in the brain is a reflection of the cellular activity of its neurons, more precisely their action potentials, electrical signals propagating along the neural network to transmit information. Action potentials are well known to neuroscientists, and their probability can be modelled using mathematical laws. «We adapted these mathematical models to the epileptic discharges to find out whether they heralded or inhibited a seizure», explains Dr Proix. To boost the predictive reliability, recordings of brain activity over very long periods were required. Using this approach, fronts with a high probability of seizure lasting several days could be determined for a majority of patients, making it possible to predict seizures several days in advance in some. With brain activity data collected over periods of at least six months, seizure prediction is informative for two-thirds of patients.
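
To make the forecasting idea concrete, here is a minimal sketch (not the authors' published code) of how a slow, multi-day cycle extracted from interictal discharge counts could be turned into a phase-dependent seizure risk; the signal, the 20-day cycle and the seizure times below are all hypothetical.

```python
# Illustrative sketch only: estimate the phase of a slow, multi-day cycle in
# interictal discharge counts and read off the historical seizure rate at that
# phase. Data, cycle length and seizure times are hypothetical.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
hours = 24 * 180                                  # six months of hourly counts
t = np.arange(hours)
cycle = np.sin(2 * np.pi * t / (24 * 20))         # hypothetical ~20-day rhythm
discharges = rng.poisson(5 + 3 * cycle)           # modulated discharge counts

# Smooth to isolate the multidien rhythm, then extract its instantaneous phase
trend = np.convolve(discharges, np.ones(24 * 5) / (24 * 5), mode="same")
phase = np.angle(hilbert(trend - trend.mean()))   # radians in (-pi, pi]

# Hypothetical seizure times; in practice these come from the implanted device
seizures = rng.random(hours) < 0.001 * (1 + cycle)

# Historical seizure rate per phase bin doubles as the forecast for that phase
bins = np.linspace(-np.pi, np.pi, 13)
idx = np.digitize(phase, bins) - 1
risk = np.array([seizures[idx == b].mean() for b in range(12)])
print("Relative seizure risk by cycle phase:", np.round(risk / risk.mean(), 2))
```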

The analytical approach is sufficiently «light» to allow data to be transmitted in real time to a server or processed directly on a microprocessor in a device small enough to be implanted in the skull. The researchers are now working in collaboration with the Wyss Center for Bio and Neuroengineering, based at Campus Biotech in Geneva, to develop a minimally invasive brain monitoring device to record the long-term data needed to forecast seizures. The device, which slips under the skin of the scalp, could give people with epilepsy the power to plan their lives according to the likelihood of having a seizure.

Credit: 
Université de Genève

Scientists take a step towards expanding the use of magnetic fluids in medicine

image: Scientists take a step towards expanding the use of magnetic fluids in medicine and technology.

Image: 
Peter the Great St.Petersburg Polytechnic University

Magnetic fluids are used in many different areas, including medicine, electronics, mechanical engineering and ecology. This wide range of applications is explained by their useful properties. Researchers from Peter the Great St.Petersburg Polytechnic University (SPbPU), in collaboration with colleagues from Jiangsu Normal University (JSNU), discovered new effects in magnetic fluids which will increase their effectiveness for medical purposes in the future. The results were published in Springer Proceedings in Physics.

"Magnetic fluids can be used, for example, in surgery. If a magnetic fluid is injected into a vein or artery, and a permanent magnet is located in the place of the incision, a "plug" of the magnetic fluid will block the blood flow after the incision. These magnetic fluids are naturally diluted in the body fluids, which leads to the formation of the large aggregates and its eventual deposition. It may result in the capillary blockage and other negative effects. In this regard, it is important to investigate the aggregation stability during dilution in order to prevent the sticking of particles," said Elina Nepomnyashchaya, an employee of the Laboratory for Laser Photometry and Spectroscopy SPbPU.

Under normal conditions, various stabilizers are used to prevent particles in magnetic fluids from sticking together. The research group at the Polytechnic University used spectral analysis in the visible and ultraviolet range to assess the stability of magnetic fluids. Such research had never been conducted before for these particles; most studies in this field focus on the infrared range, because magnetic fluids can be used in optical fiber lines.

However, studies in the visible and ultraviolet range are useful for evaluating the optical properties of particles in magnetic fluids and for detecting their aggregation, which occurs when dilution disturbs the fluid's stability.

The group is currently working on a method of stabilizing magnetic fluids that could be used safely in medicine, and aims to determine a safe composition and concentration of magnetic fluids for medical purposes.

Future studies will assess the aggregation stability of magnetic fluids upon dilution and under the influence of the magnetic fields used to deliver particles in the body.

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

Researchers make 'high vis vests' to help monitor bee behaviour

video: Video showing the movement of tagged bees in a garden, as detected by the monochrome Raspberry Pi camera

Image: 
Video by Michael Smith

A team of researchers from the University of Sheffield and The Bumblebee Conservation Trust have been trialling new, low-cost ways to monitor bee species in the UK, by dressing bees in high visibility retroreflective vests. This novel research will be presented at the British Ecological Society's virtual Festival of Ecology.

Researchers attached retroreflective tags to seven species of wild bee and to a commercially bred UK bumblebee subspecies. The foraging behaviour and 3D flight paths of the bees were then monitored using the web interface of a custom-built, real-time tracking system.

Tracking bees in the wild is a critical part of understanding their ecology, allowing scientists to deduce their foraging and navigational behaviour, as well as their nest preferences.

Currently, it is very difficult and expensive to monitor bee populations. Commonly used methods such as harmonic radars are biased toward larger species, such as bumblebees, which are large enough to withstand the weight of the radar's tag. As such, there are several unknowns regarding the behaviour of the UK's smaller bee species.

Michael Smith, lead author and computer scientist at the University of Sheffield, said, "Finding the bee itself is difficult, and finding wild bee nests in the first place is massively difficult and time-consuming, especially for rarer or less-known species. This tool hopefully will make finding them far easier, making these studies a practical approach."

The system proved successful in monitoring seven wild species (over 100 individuals), across two field sites in the UK, including a wildflower patch at the University of Sheffield. This involved smaller-bodied species such as honeybees and the solitary leafcutter bees.

The tracking system was able to detect bees from up to 40 metres away and tags were still detected a week after deployment. The actual retroreflective tag is made of the same fabric as cycling high visibility vests.

Retroreflective materials such as high vis jackets are useful because when light hits them, it bounces back to the source. So, the researchers used a camera with a flash to take a photo of the bee, and the bee in its retroreflective vest appears as a tiny bright dot.

Michael Smith, said, of the pilot test, "We surprisingly found one of our buff-tailed bumblebees several metres up in a pine tree nearby, about 33 metres from the tracking system. It's not somewhere we would usually have looked, eliminating some human biases and motivating the system's use for re-observation studies."

In addition to their durability, the researchers found no significant difference in the length of foraging time or number of flowers visited between tagged and non-tagged individuals. These results suggest that methods such as this could be used to safely monitor bees across their lifespan.

The bees were captured with a net and transferred into a queen marking pot, commonly used by beekeepers, and then immobilised using cold air, allowing the tags to be safely and non-invasively deployed.

The tracking system is built out of off-the-shelf low-cost components and consists of a camera with a global electronic shutter, a flash and a Raspberry Pi computer. The electronic shutter allows for a very short exposure, which lets the light from the flash illuminate the scene, rather than the sun.

A machine learning model was trained to automatically identify a tag within an image frame and to learn the difference between real tags and various false positives. The whole system can then, in real time, detect the appearance of a bee in the field of the camera or discard false positives, such as a piece of pollen.
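
As an illustration of the detection step, the sketch below is not the published system, which uses a trained machine learning model; it simply finds candidate tags as small, bright, compact blobs in a flash image using OpenCV, with a synthetic frame and hand-picked thresholds standing in for real data.

```python
# Minimal sketch (not the published system): find candidate retroreflective
# tags as small, bright, compact blobs in a flash image. A synthetic frame
# stands in for a real camera capture.
import cv2
import numpy as np

frame = np.zeros((480, 640), dtype=np.uint8)            # stand-in flash image
cv2.circle(frame, (120, 200), 2, 255, -1)                # a tag-like bright dot
cv2.circle(frame, (400, 90), 3, 255, -1)                 # another candidate
cv2.rectangle(frame, (500, 300), (560, 360), 255, -1)    # large glare, not a tag

_, bright = cv2.threshold(frame, 240, 255, cv2.THRESH_BINARY)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)

candidates = []
for i in range(1, n):                                    # label 0 is background
    area = stats[i, cv2.CC_STAT_AREA]
    w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
    if 2 <= area <= 60 and max(w, h) / max(min(w, h), 1) < 3:
        candidates.append(tuple(np.round(centroids[i], 1)))

print(f"{len(candidates)} candidate tag(s) at:", candidates)
# The real pipeline then passes each candidate patch to a trained classifier
# to separate genuine tags from false positives such as pollen or glare.
```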

By using a system capable of real-time detection, researchers can manually search for the bee, corroborate whether the tracking system has correctly detected a real bee, and identify which individual has been detected.

Richard Comont, Science Manager of The Bumblebee Conservation Trust, said "Being able to track bees from easy-to-find foraging sites back to the hard-to-find nest gives us the chance to find more nests, and nests much earlier in the life cycle. That means that it's much easier to establish nest site requirements, which can be taken into account when doing conservation work."

There is still room to improve the method. The camera's range is limited to line of sight and a distance of 40 m with the default wide-angle lens and flash, and in the current prototype tagged bees appear as identical white dots.

Michael Smith said "Given the wider changes in landscape management at policy level, being able to provide answers to foraging and nesting needs of key insect pollinators is increasingly important."

However, this work is a significant advance. Richard Comont said: "We currently know very little about the home life of bees away from captive colonies in labs - a huge omission for this declining group."

Future research from the group will involve using the tracking system to find new nests and training the model to distinguish between coloured filters on the retroreflective tags, allowing individual tagged bees to be identified remotely. The low cost of tracking systems such as this can allow for the scale-up of automated pollinator monitoring to address data gaps.

Michael Smith's poster will be available on-demand until the 18th of January 2021 at the Festival of Ecology. This work is unpublished and has not been through the peer-review process yet. This online conference will bring together 1,200 ecologists from more than 50 countries to discuss the most recent breakthroughs in ecology.

Credit: 
British Ecological Society

Novel crystalline oxide may solve the problem of overheating in composite materials

video: This interactive video shows the atomic waltz in the crystalline lattice, helping to visualise this fascinating mechanism of atomic rearrangement in response to heat.

Image: 
Toshihiro Isobe

Scientists at Tokyo Institute of Technology recently synthesized a novel material that displays unique thermal expansion properties. Their method enables the production of a unique crystalline oxide containing zirconium, sulfur, and phosphorus that exhibits two distinct mechanisms of negative thermal expansion. This is the first known material to show this property, and its application may help avoid damage to composite materials, such as computer chip components, facing unexpected temperature changes.

Most materials tend to expand when heated, as the atoms move apart. The expansibility of materials under heat is measured using the coefficient of thermal expansion (CTE). Most of the current industry-grade materials have a positive CTE, making them perform poorly when subjected to more 'extreme' temperatures. However, some materials experience the opposite effect, shrinking at higher temperatures. This unusual process, known as negative thermal expansion, may help solve the problem of heat damage to composite materials.

A team of scientists at the Tokyo Institute of Technology led by Associate Prof. Toshihiro Isobe has been researching materials with negative CTE. As Dr. Isobe explains, "Negative thermal expansion behavior can be primarily attributed to two types of mechanisms, phase transition and framework-type mechanism." Both mechanisms have found industrial applications, and each has its pros and cons. Phase transition-type materials have large negative CTEs but narrow usable temperature ranges, which limits their operational use, particularly as fillers in composite materials. Framework-type materials, on the other hand, show thermal shrinkage over a wide temperature range, but because their absolute CTE values are small, they are required in large quantities to achieve the desired result. For years, scientists have been searching for a suitable compromise between the two, but materials able to undergo both mechanisms of negative thermal expansion had never been reported, until now.

In their new study, published in NPG Asia Materials, Dr. Isobe and his team report a method to synthesize a novel crystalline oxide made of zirconium, sulfur, and phosphorus, and describe its characteristics. This crystal, the chemical formula for which is Zr2SP2O12, is described by Dr. Isobe as "a negative CTE material which displays both transition- and framework-type mechanisms when heated."

The scientists found that, while Zr2SP2O12 exhibits both mechanisms of negative thermal expansion mentioned earlier, one might be dominant at a given temperature. For instance, between 393 K (roughly 120°C) and 453 K (roughly 180°C), the material shrank rapidly and some of the structural units were deformed, which indicates a phase transition. However, above and below this temperature range, the contraction was not as pronounced, and the researchers instead observed small changes in the length and angle of bonds between atoms, a characteristic of the framework-type mechanism.
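
For readers unfamiliar with the quantity involved, the following sketch shows how a linear CTE is estimated from length-versus-temperature measurements; the numbers are invented to illustrate a negative CTE and are not data for Zr2SP2O12.

```python
# Illustrative only: estimate a linear coefficient of thermal expansion (CTE)
# from length-versus-temperature data, alpha = (1/L0) * dL/dT. The values
# below are made up to show a negative CTE, not measurements of Zr2SP2O12.
import numpy as np

T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])   # temperature, K
L = np.array([10.000, 9.998, 9.993, 9.985, 9.983])  # sample length, mm

slope, intercept = np.polyfit(T, L, 1)               # dL/dT from a linear fit
alpha = slope / L[0]                                  # per kelvin
print(f"CTE = {alpha:.2e} /K (negative => the sample contracts on heating)")
```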

The researchers also noted an interesting phenomenon. They found that the crystals containing fewer sulfur atoms in the lattice were more easily deformed during the phase transition (120-180°C), resulting in a larger contraction of the material (higher negative CTE). This can help in producing Zr2SP2O12 crystals with the desired CTE for specific applications.

This novel crystalline material and the mechanism of its production could pave the way for the synthesis of compounds with a similar dual mechanism. This way, material engineers would be able to select compounds with specific properties to tailor the performance of manufactured materials to specific operational conditions.

Credit: 
Tokyo Institute of Technology

Researchers reveal link between cryptocurrency coding and market behavior

A new study by Dr Andrea Baronchelli, Reader in City's Department of Mathematics, published in the journal Science Advances, has revealed a connection between the coding of cryptocurrencies and their market behaviour.

Dr Baronchelli and his colleagues analysed 297 cryptocurrencies whose code is stored on GitHub and whose daily trading volume averaged more than US$100k over their lifetime.

The study demonstrates that 4 percent of developers - considered a significant fraction - contribute to the code of two or more cryptocurrencies. It further questions the transparency surrounding the coding process which creates individual cryptocurrencies.

"In our paper, we challenge the view that open code grants transparency to cryptocurrencies, even accepting that literate users do check it carefully", says Dr Baronchelli, in 'From code to market: Network of developers and correlated returns of cryptocurrencies'.

Noting the 'Code is law' operating principle in cryptocurrency generation, Dr Baronchelli says the security, transferability, availability and other properties of a crypto-asset are determined by the code through which it is created. If the code is open source, as is the case for most cryptocurrencies, this principle should prevent manipulation and grant transparency to users and traders. However, this approach treats cryptocurrencies as isolated entities and neglects the possible connections between them.

He maintains that the whole network of cryptocurrencies should be considered both by regulators and by professional investors aiming to maximise portfolio diversification.

Dr Baronchelli and his colleagues discovered that 1668 out of the 2225 cryptocurrencies listed in CoinMarketCap as of 9 June 2019 shared their source code on GitHub. They then queried the GitHub Archive dataset storing all events on public repositories from 2011, through Google BigQuery. This step provided them with all events related to the development of cryptocurrency GitHub projects.

The authors specifically queried two types of events: "push events" and accepted "pull request events". Finally, they removed all events triggered by GitHub apps (software designed to maintain and update the repositories), and removed from their dataset GitHub profiles whose name included the term "bot" so as to exclude noise from users that identified or were reported to be non-human. The authors also collected cryptocurrency daily price, exchange volume and market capitalisation from three different web sources: CoinGecko, CryptoCompare and CoinMarketCap (the latter only until the end of July 2018 due to updates in the website regulations).
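
The query below sketches what such an extraction might look like against the public githubarchive dataset on BigQuery. It is illustrative rather than the authors' exact code: the table and field names follow the commonly documented schema and should be verified, and the additional filtering of accepted pull requests and GitHub Apps described above would require inspecting the event payloads.

```python
# Sketch of the kind of query involved (not the authors' exact code): pull
# push events and pull-request events for one repository from the public
# GitHub Archive dataset on BigQuery.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()         # requires Google Cloud credentials
query = """
    SELECT type, actor.login AS developer, created_at
    FROM `githubarchive.year.2018`
    WHERE repo.name = 'bitcoin/bitcoin'                    -- example repository
      AND type IN ('PushEvent', 'PullRequestEvent')
      AND NOT REGEXP_CONTAINS(LOWER(actor.login), r'bot')  -- crude bot filter
"""
for row in client.query(query).result():
    print(row["developer"], row["type"], row["created_at"])
```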

Dr Baronchelli says his work has broad implications, given the primacy of code as an important societal regulator that challenges traditional institutions, from national laws to financial markets:

"Cryptocurrencies are open source digital objects traded as financial assets that allow, at least theoretically, everyone to directly shape both an asset structure and its market behaviour. Our study, identifying a simple event in the development space that anticipates a corresponding behaviour in the market, establishes a first direct link between the realms of coding and trading. In this perspective, we anticipate that our results will be of interest to researchers investigating how code and algorithms may affect the non-digital realm and spark further research in this direction."

Credit: 
City St George’s, University of London

New curriculum improves students' understanding of electric circuits in schools

FRANKFURT / TÜBINGEN. Life without electricity is no longer imaginable. Whether it is a smartphone, a hair-dryer or a ceiling lamp - the technical accomplishments we hold dear all require electricity. Every child at school learns that electricity can only flow in a closed electric circuit. But what is actually the difference between current and voltage? Why is a plug socket a potential death-trap while a simple battery is not? And why does a lamp connected to a power strip not become dimmer when a second lamp is plugged in?

Research into physics education has revealed that even after the tenth grade, many secondary school students cannot answer such fundamental questions about simple electric circuits, despite their teachers' best efforts. Against this backdrop, Jan-Philipp Burde, who recently became a junior professor at the University of Tübingen, developed an innovative curriculum for simple electric circuits as part of his doctoral thesis, supervised by Prof. Thomas Wilhelm at Goethe University. The curriculum builds specifically on the everyday experiences of the students. In contrast to the approaches taken to date, it aims from the very outset to help students develop an intuitive understanding of voltage. In analogy to air pressure differences that cause an air stream (e.g. from an inflated air mattress), voltage is introduced as an "electric pressure difference" that causes an electric current. A comparative study with 790 school pupils at secondary schools in Frankfurt showed that the new curriculum led to a significantly improved understanding of electric circuits compared to traditional physics teaching. Moreover, the participating teachers also stated that using the new curriculum fundamentally improved their teaching.
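
A short worked example shows why the power-strip question has the answer it does: lamps on a strip sit in parallel, so each continues to see the full supply voltage. This is a generic illustration, not part of the curriculum materials.

```python
# A small worked example: two identical lamps plugged into the same power
# strip are in parallel, so each still sees the full supply voltage and draws
# the same current as before.
V = 230.0            # supply voltage in volts (European mains)
R_lamp = 529.0       # resistance of one 100 W lamp, R = V^2 / P

for n_lamps in (1, 2):
    I_per_lamp = V / R_lamp                 # unchanged by the other lamp
    P_per_lamp = V * I_per_lamp
    print(f"{n_lamps} lamp(s): {I_per_lamp:.3f} A and {P_per_lamp:.0f} W each")
# Brightness per lamp stays the same; only the total current drawn from the
# socket increases. In a series circuit the outcome would be different.
```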

Credit: 
Goethe University Frankfurt

New discovery brings analogue spintronic devices closer

image: This EM image of an experimental device as used in this study shows graphene (light green) with boron nitride (blue) on top. Measurements were made across the orange lines.

Image: 
S. Omar, University of Groningen

The observation of nonlinearity in electron spin-related processes in graphene makes it easier to transport, manipulate and detect spins, and to convert spin signals into charge signals. It also allows analogue operations such as amplitude modulation and spin amplification. This brings spintronics to the point where regular electronics was after the introduction of the first transistors. These results by University of Groningen physicists were published in the journal Physical Review Applied on 17 December.

Spintronics is a type of electronics that uses the spin of electrons (a magnetic moment that can have the values 'up' or 'down') to transport signals. Spin transport in the 2D carbon material graphene is excellent; however, manipulation of spins is not. This requires the addition of ferromagnets (for spin injection and detection) or heavy-atom materials with high spin-orbit coupling, which allow the manipulation of spins.

Nonlinear

Scientists from the University of Groningen have now shown that nonlinear effects that are particular to electron spin can be achieved using 2D boron nitride. Previously, they had already shown that injecting a current through a boron nitride bilayer, to which a small DC bias current was applied, resulted in a very high spin polarization, which means that there is a large difference between the numbers of spin-up and spin-down electrons. They have now shown that the polarization increase can be attributed to nonlinear processes that influence the electron spins.

The nonlinearity means that two spin signals multiply, rather than add up (which would be a linear effect). Furthermore, in the nonlinear regime, spin signals can be measured without using ferromagnets. Earlier, all these effects were either absent or very weak in a typical graphene spintronic device. 'All because of this nonlinear effect, which increases in proportion with the bias current,' says Siddhartha Omar, a former postdoctoral researcher at the University of Groningen and first author of the paper. 'Polarization can even reach 100 per cent. Since it is nonlinear, you give less and get more during the injection when this current is applied.'
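
The practical consequence of a multiplicative response can be illustrated with a generic signal-processing analogy (this is not a model of the device physics): multiplying a fast carrier by a slow control signal produces an amplitude-modulated output with sidebands, something a purely linear, additive system cannot do.

```python
# Generic signal-processing analogy, not a model of the spintronic device:
# when two signals combine multiplicatively (the nonlinear case) the result
# is an amplitude-modulated carrier, whereas a linear system only superposes.
import numpy as np

t = np.linspace(0.0, 1.0, 10_000)
carrier = np.sin(2 * np.pi * 100 * t)       # fast "spin" signal
message = np.sin(2 * np.pi * 5 * t)         # slow control signal

linear = carrier + message                  # linear regime: simple superposition
nonlinear = (1 + 0.5 * message) * carrier   # multiplicative regime: AM sidebands

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
peaks = freqs[np.argsort(spectrum)[-3:]]
print("Dominant frequencies (Hz):", np.sort(np.round(peaks)))  # ~95, 100, 105
```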

Neuromorphic

In the study, Omar and his colleagues in the Physics of Nanodevices group at the Zernike Institute for Advanced Materials, University of Groningen, show applications of the nonlinear effect for basic analogue operations, such as essential elements of amplitude modulation on pure spin signals. 'We believe that this can be used to transport spin over larger distances. The larger spin signal also makes spin-charge conversion easier and that means that we no longer need ferromagnets to detect them.'

The ability to modulate a spin signal, rather than just switch it on or off, also makes it easier to construct spintronic devices. Omar: 'They could be used in spin-based neuromorphic computing, which uses switches that can have a range of values, rather than just 0 or 1.' It also seems possible to create a spin current amplifier, which produces a large spin current with a small bias voltage. 'It may be there already, but we still have to prove it,' says Omar.

Spintronics

All these effects were measured both at low temperatures and at room temperature and could be used in applications such as nonlinear circuit elements in the fields of advanced spintronics. 'Spintronics is now at the point where regular electronics was after the introduction of the first transistors. We could now build real spintronic devices,' concludes Omar.

Credit: 
University of Groningen

Inverted fluorescence

Fluorescence usually entails the conversion of light at shorter wavelengths to light at longer wavelengths. Scientists have now discovered a chromophore system that goes the other way around. When excited by visible light, the fluorescent dyes emit light in the ultraviolet region. According to the study published in the journal Angewandte Chemie, such light upconversion systems could boost the light-dependent reactions for which efficiency is important, such as solar-powered water splitting.

Fluorescent dyes absorb light at shorter wavelengths (high energy, e.g. blue light) and emit light at longer wavelengths (low energy, e.g. red light). Upconversion of light is much more difficult to achieve. Upconversion means that a fluorescent dye is excited with radiation in the visible range but emits in the ultraviolet. Such dyes could be used to run high-energy catalytic reactions such as solar-powered water splitting just using normal daylight as an energy source. Such dyes would expand the range of available excitation energy.

Nobuhiro Yanai and colleagues at Kyushu University, Japan, are exploring multi-chromophore systems for their ability to upconvert fluorescence light. Yanai explains how upconversion works: "Fluorescence upconversion occurs when two chromophore molecules, which have been excited in the triplet state by a sensitizer, collide. This collision annihilates the sensitized energy and lifts the chromophores to a higher energy level. From there, they emit the energy as radiation."
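
A back-of-envelope calculation shows what upconversion means energetically: because two sensitized triplets pool their energy, the emitted photon can carry more energy than the absorbed one. The wavelengths and triplet energy used below are generic illustrations, not values from the paper.

```python
# Back-of-envelope illustration (generic values, not the paper's data):
# photon energy E = h*c/lambda. In triplet-triplet annihilation upconversion,
# two sensitized triplet excitons pool their energy, so the emitting state can
# sit above the energy of the absorbed visible photon, giving UV output.
h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

absorbed = photon_energy_eV(450)    # blue excitation, ~2.76 eV
emitted = photon_energy_eV(340)     # UV emission, ~3.65 eV
print(f"absorbed {absorbed:.2f} eV -> emitted {emitted:.2f} eV per photon")
print("energy budget: two triplets supply up to", 2 * 2.0,
      "eV if each triplet stores ~2.0 eV")  # hypothetical triplet energy
```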

In practice, however, it is difficult to achieve effective upconverting chromophore designs--existing systems need high-intensity radiation and still do not achieve more than ten percent efficiency. "The main reason for the low efficiency is that the sensitizer chromophore molecules also absorb much of the upconverted light, which is then lost," Yanai says.

In contrast, the donor-acceptor chromophore pair developed by Yanai and colleagues exhibits energy levels that are so finely adjusted that it achieved a record-high 20 percent upconversion efficiency. Almost no back-absorption and low nonradiative loss occurred. The novel chromophore pair consisted of an iridium-based donor, which was an established sensitizer, and a naphthalene-derived acceptor, which was a novel compound.

Low back-absorption and low nonradiative losses mean that the intensity of the exciting radiation can be low. The researchers reported that solar irradiance was sufficient to achieve high upconversion efficiency. Even indoor applications were possible using artificial light. The authors held an LED lamp above an ampoule filled with the chromophore solution and measured the intensity of the emitted UV light.

Credit: 
Wiley

World's first transmission of 1 Petabit/s using a single-core multimode optical fiber

image: Previous high capacity demonstrations in multi-mode fibers

Image: 
National Institute of Information and Communications Technology (NICT)

Points

A world record transmission of 1 petabit per second in a multimode optical fiber increases the current record data rate in multimode optical fibers by more than 2.5 times.

Wideband optical transmission in fibers supporting as many as 15 modes is demonstrated for the first time, enabled by mode multiplexers and a transmission fiber optimized for high optical bandwidth.

This demonstration advanced high-density and large capacity transmission in optical fibers that can be produced with standard methods.

Abstract

A group of researchers from the Network System Research Institute of the National Institute of Information and Communications Technology (NICT, Japan) led by Georg Rademacher, NOKIA Bell Labs (Bell Labs, USA) led by Nicolas K. Fontaine, and Prysmian Group (Prysmian, France) led by Pierre Sillard succeeded in the world's first transmission exceeding 1 petabit per second in a single-core multi-mode optical fiber. This increases the current record transmission in a multi-mode fiber by a factor of 2.5.

To date, transmission experiments in optical fibers supporting a large number of modes have been limited to small optical bandwidths. In this study, we demonstrated the possibility of combining highly spectrally efficient wideband optical transmission with an optical fiber guiding 15 modes that had a cladding diameter in agreement with the current industry standard of 0.125 mm. This was enabled by mode multiplexers and an optical fiber that supported wideband transmission of more than 80 nm over a distance of 23 km. The study highlights the large potential of single-core multi-mode fibers for high capacity transmission using fiber manufacturing processes similar to those used in the production of standard multi-mode fibers.

The results of this study were accepted for the post-deadline session at the 46th European Conference on Optical Communication (ECOC 2020).

Background

Over the past decade, intensive research has been carried out worldwide to increase the data rates of optical transmission systems using space-division multiplexing, in order to accommodate exponentially increasing data transmission requirements. Compared to multi-core optical fibers, multi-mode fibers can support a higher spatial signal density and are easier to manufacture. However, using multi-mode fibers for high capacity space-division multiplexed transmission requires computationally intensive digital signal processing. These requirements increase with the number of transmission modes, and realizing transmission systems that support a large number of fiber modes is an active field of research.

Achievements

At NICT, a transmission experiment was designed and carried out that utilized the transmission fiber made by Prysmian and mode multiplexers developed by Bell Labs. A wideband transceiver subsystem was developed at NICT to transmit and receive several hundred highly spectrally efficient WDM channels of high signal quality. The novel mode multiplexers were based on a multi-plane light conversion process in which the light of 15 input fibers was reflected multiple times on a phase plate to match the modes of the transmission fiber. The transmission fiber was 23 km long and had a graded-index design. It was based on existing multi-mode fiber designs that were optimized for wideband operation and had a cladding diameter of 0.125 mm and a coating diameter of 0.245 mm, both adhering to the current industry standard. The transmission system demonstrated the first transmission exceeding 1 petabit per second in a multi-mode fiber, increasing the current record demonstration by a factor of 2.5.

When the number of modes in a multi-mode fiber transmission system increases, the computational complexity of the required MIMO digital signal processing increases as well. However, the transmission fiber used here had a small modal delay, which simplified the MIMO processing, and it maintained this low modal delay over a large optical bandwidth. As a result, we could demonstrate the transmission of 382 wavelength channels, each modulated with 64-QAM signals. The success of large-capacity transmission using a single-core multimode optical fiber, which offers high spatial signal density and straightforward manufacturing, is expected to advance high-capacity multimode transmission technology for future optical transmission systems.
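
Since the symbol rate and coding overhead are not stated in this summary, a rough calculation can instead solve for the net per-channel rate the demonstration implies, given 382 wavelength channels, 15 modes and 6 bits per 64-QAM symbol.

```python
# Rough capacity arithmetic (the symbol rate and FEC overhead are not given in
# this summary, so we solve for the net symbol rate the demo would imply).
channels = 382          # WDM wavelength channels
modes = 15              # spatial modes of the fiber
bits_per_symbol = 6     # 64-QAM carries log2(64) = 6 bits per symbol
target = 1e15           # 1 petabit per second

net_symbol_rate = target / (channels * modes * bits_per_symbol)
print(f"required net rate ~ {net_symbol_rate / 1e9:.1f} Gbaud per channel per mode")
# ~29 Gbaud; a real system runs a higher raw symbol rate to cover coding overhead.
```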

Future Prospects

In the future, we would like to pursue the possibility of extending the distance of large-capacity multi-mode transmission and integrating it with multi-core technology to establish the foundation of future optical transmission technology with large capacity.

The paper on the results of this experiment was presented at the 46th European Conference on Optical Communication (ECOC 2020, December 6-10, 2020), one of the largest international conferences on optical fiber communication. It was planned to be held in Brussels, Belgium, but had to be conducted virtually due to the novel coronavirus pandemic. The paper received a very high evaluation and was adopted for presentation in a special session for the latest research (post-deadline papers) that took place on 10 December.

Credit: 
National Institute of Information and Communications Technology (NICT)

Discovery finds a cellular building block acts as a gel, not liquid as previously believed

University of Alberta researchers have found an answer to a fundamental question in genomic biology that has eluded scientists since the discovery of DNA: Within the nucleus of our cells, is the complex package of DNA and proteins called chromatin a solid or a liquid?

In a study published in the journal Cell, the research team, led by Department of Oncology professor Michael Hendzel and collaborator Jeffrey Hansen from Colorado State University, found that chromatin is neither a solid nor a liquid, but something more like a gel.

Previously, fields such as biochemistry operated under the assumption that chromatin and other elements of the nucleus operated in a liquid state, Hendzel said. This new understanding of the physical properties of chromatin challenges that idea, and could lead to a more accurate understanding of how the genome is encoded and decoded.

"We all know the difference between water and ice, and we all understand that if you want to tie two things together, for example, you can't do it with a liquid. You need a rope, something that has mechanical strength," said Hendzel, who is also a member of the Cancer Research Institute of Northern Alberta (CRINA). "That's what we're talking about here. Right now, all of our understanding of gene regulation is largely based on the assumption of freely moving proteins that find DNA and whose accessibility is only regulated by the blocking of that movement. So this research could potentially lead to very different kinds of ways of understanding gene expression."

"Another way to look at it is that bone, muscle and connective tissue all have very different physical properties, and if those physical properties break down somehow, it's almost always associated with disease," said Alan Underhill, associate professor in the Department of Oncology, CRINA member and contributor to the study. "In the case of chromatin, it's about scaling this principle down to the level of the cell nucleus, because it is all connected."

"What we're seeing here bridges the biochemistry of cellular contents and the underlying physics, allowing us to get at the organizational principles--not just for cells, but the entire body," he added.

All of our chromosomes are made from chromatin, which is half histone (or structural) proteins and half DNA, organized into long strings with bead-like structures (nucleosomes) on them. Inside the nucleus of a cell, the chromatin fibre interacts with itself to condense into a chromosome. The chromatin fibre also supports gene expression and replication of chromosomal DNA. Although there is some understanding of the structures that make up a nucleus, how those structures are organized and the full extent of how the structures interact with each other is not well known.

The team's findings bridge research done over the past 50 years on chromatin gels produced in the laboratory to demonstrate its existence in living cells, which has major implications for interpreting their elastic and mechanical properties, Hendzel explained.

For example, recent studies have shown that the deformability of chromatin in cancer cells is an important determinant of their ability to squeeze through small spaces to travel outside a tumour and metastasize elsewhere in the body--something that is much easier to explain if chromatin is gel-like rather than a liquid. Cancer cells do that by chemically changing the histone part of the chromatin to make it less sticky, Hendzel said.

Based on the new research, this can now be explained as a process that reduces the strength of the gel, making it more deformable and enabling cancer cells to spread through the body. Defining how this gel state is regulated could lead to new approaches to prevent metastasis by finding drugs that maintain the chromatin gel in a more rigid state.

A better understanding of chromatin could also affect cancer diagnosis, Underhill said.

"The texture and appearance of chromatin is something pathologists have used to do clinical assessment on tumour samples from patients," he said. "It's really looking at how the chromatin is organized within the nucleus that allows them to make insight into that clinical diagnosis. So now that's a process that we can reframe in a new context of the material state of the chromatin."

Hendzel said he is confident the discovery of the gel-like state of chromatin will provide a guiding principle for future research seeking to understand how the material properties of chromatin shape the function of the nucleus to ensure the health of cells and the organisms they make up.

"One of the most significant things to me is that this research highlights how limited our knowledge is in this area," he said. "Currently, we are focused on testing the widely held belief that the physical size of molecules determines their ability to access the DNA. Our ongoing experiments suggest that this too may be incorrect, and we are quite excited about learning new mechanisms that control access to DNA based on the properties of the chromatin gel and the liquid microenvironments that assemble around it."

"I think it forces us to go back and look at what's in textbooks and reinterpret a lot of that information in the context of whether 'this is a liquid,' or 'this is a gel' in terms of how the process actually takes place," added Underhill. "That will have a lot of impact on how we actually think about things moving forward and how we design experiments and interpret them."

Credit: 
University of Alberta Faculty of Medicine & Dentistry

New path to rare earth mineral formation has implications for green energy and smart tech

image: First author Adrienn Maria Szucs with Professor Juan Diego Rodriguez-Blanco in Trinity's Museum Building

Image: 
Trinity College Dublin

Researchers from Trinity College Dublin have shed new light on the formation mechanisms of a rare earth-bearing mineral that is in increasingly high demand across the globe for its use in the green energy and tech industries.

Their discovery has important economic implications because there are no substitute alternatives to these rare earth elements (REEs), which are indispensable due to their ability to form small and very powerful magnets essential for smart devices and low-carbon energy generation (e.g., electronics, wind turbines, hybrid cars).

Most REEs are exploited in carbonatite deposits (the largest known carbonatite is the Bayan Obo in China), but scientists still debate how and why they form due to their complicated mineralogy, element composition and geologic history.

There are more than 250 known REE-bearing minerals, but only three are economically viable and exploited commercially. Bastnäsite is likely the primary valuable mineral for REEs in the world and was the focus of the Trinity team's study.

By considering how water containing REEs interacted with calcite, a mineral that is ubiquitous in nature and often present in hydrothermal environments, the team discovered a new pathway by which bastnäsite formed.

Adrienn Maria Szucs, PhD Candidate, Trinity, is the first author of the study, which has just been published by the international journal Crystal Growth & Design. She said:

"The fact that we need more REEs urges us to find out more about the geochemical behaviour of these precious elements. Simply, we need to know a lot more about REEs, and how and why they form, if we want more of them.

"The crystallisation pathway we discovered reveals that in some rare earth-bearing deposits the origin of bastnäsite could be simply a consequence of the interaction of calcite with rare earth-rich fluids. This is not the only reaction that forms bastnäsite but the discovery is particularly important because calcite is found everywhere and is also the most stable calcium carbonate in nature. As a result, it suggests it should be possible to support the formation of bastnäsite under the right conditions."

Juan Diego Rodriguez-Blanco, Ussher Assistant Professor in Nanomineralogy at Trinity, and funded investigator in the Irish Centre for Research in Applied Geosciences (iCRAG), is the Principal Investigator. He said:

"The use of REEs for high-tech products is continually increasing over the years, and therefore the demand for them is also shooting up. This has generated significant geopolitical competition because many REEs have become very valuable.

"Unfortunately, extracting and refining REEs is both financially and environmentally expensive, so work like this is important for bettering our understanding of formation mechanisms of bastnäsite, which in turn helps us improve existing extraction and refinement methods."

Credit: 
Trinity College Dublin

Still paying for a broken smartphone display? Now, it is automatically fixed

image: Schematic of preparing PBF via LbL assembly

Image: 
Korea Institute of Science and Technology(KIST)

The smartphone display repair costs that have caused so many people grief may no longer be something to worry about. A research team in South Korea has developed a smartphone display material that can heal itself after damage.

The Korea Institute of Science and Technology (KIST) announced that the collaborative research team led by Dr. Yong-Chae Jung, head of the center at the Institute of Advanced Composites Materials, and Professor Hak-soo Han from Yonsei University (President: Seoung-Hwan Suh) has developed a self-healing colorless electronic material that can repair cracks and restore functions damaged in the material.

Colorless polyimide (CPI) has outstanding mechanical, electrical, and chemical properties. It is as transparent as glass, has high tensile strength, and does not scratch even after being folded hundreds of thousands of times. It is therefore widely commercialized in mobile products such as foldable and flexible displays, and is also used across industries from aerospace to photovoltaic cells. Because CPI is used so widely, constant efforts are made to improve its durability against cracks caused by various operating environments and damage caused by continuous exposure to electromagnetic waves. Several research teams have tried to address these issues by adding additives or coating a hard protective layer on the surface, but they were unable to prevent damage to the underlying material.

In order to support quick self-healing from cracks and damaged functions while maintaining the benefits of colorless polyimide, the KIST-Yonsei University collaborative research team developed a self-healing colorless polyimide using linseed oil extracted from the seeds of the flax plant. Linseed oil hardens easily at a room temperature of 25 °C and is therefore widely used as a coating material for preserving works of art.

The KIST team fabricated linseed oil-loaded microcapsules and then created healing layers by mixing the microcapsules with silicone and coating them onto colorless polyimide. In the resulting material, the microcapsules break when mechanical damage occurs; the linseed oil leaks out, flows to the damaged area and hardens, healing it. This self-healing capability has the advantage of acting locally, only where the damage occurs.

Self-repairing materials developed so far could only be implemented in soft materials and required high-temperature heating to repair. The material developed through this collaboration, by contrast, self-heals even though it is a hard material, and it does so at room temperature without high-temperature heat. The healing process is also accelerated by humidity and UV light; over 95% recovery from damage was achieved within just 20 minutes.

"There is a significance in that we were able to develop a self-healing colorless polyimide that can radically solve the physical properties and lifespan of damaged polymer materials and that we presented the application range of the material, such as flexible displays and electronic material devices,"

Credit: 
National Research Council of Science & Technology

Tiny quantum computer solves real optimisation problem

image: Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.

Image: 
Yen Strandqvist/Chalmers University of Technology (for photo montage)

Quantum computers have already managed to surpass ordinary computers in solving certain tasks - unfortunately, totally useless ones. The next milestone is to get them to do useful things. Researchers at Chalmers University of Technology, Sweden, have now shown that they can solve a small part of a real logistics problem with their small, but well-functioning quantum computer.

Interest in building quantum computers has gained considerable momentum in recent years, and feverish work is underway in many parts of the world. In 2019, Google's research team made a major breakthrough when their quantum computer managed to solve a task far more quickly than the world's best supercomputer. The downside is that the solved task had no practical use whatsoever - it was chosen because it was judged to be easy to solve for a quantum computer, yet very difficult for a conventional computer.

Therefore, an important task is now to find useful, relevant problems that are beyond the reach of ordinary computers, but which a relatively small quantum computer could solve.

"We want to be sure that the quantum computer we are developing can help solve relevant problems early on. Therefore, we work in close collaboration with industrial companies", says theoretical physicist Giulia Ferrini, one of the leaders of Chalmers University of Technology's quantum computer project, which began in 2018.

Together with Göran Johansson, Giulia Ferrini led the theoretical work when a team of researchers at Chalmers, including an industrial doctoral student from the aviation logistics company Jeppesen, recently showed that a quantum computer can solve an instance of a real problem in the aviation industry.

All airlines are faced with scheduling problems. For example, assigning individual aircraft to different routes represents an optimisation problem, one that grows very rapidly in size and complexity as the number of routes and aircraft increases.

Researchers hope that quantum computers will eventually be better at handling such problems than today's computers. The basic building block of the quantum computer - the qubit - is based on completely different principles than the building blocks of today's computers, allowing them to handle enormous amounts of information with relatively few qubits.

However, due to their different structure and function, quantum computers must be programmed in other ways than conventional computers. One proposed algorithm that is believed to be useful on early quantum computers is the so-called Quantum Approximate Optimization Algorithm (QAOA).

The Chalmers research team has now successfully executed this algorithm on their quantum computer - a processor with two qubits - and shown that it can solve the problem of assigning aircraft to routes. In this first demonstration, the result could be easily verified as the scale was very small - it involved only two airplanes.
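
To give a flavour of the kind of problem involved, the classical toy below encodes a two-aircraft, two-route assignment as the sort of binary (QUBO-style) optimisation that QAOA targets and solves it by brute force. The cost values and penalty weight are invented, and this is not the Chalmers/Jeppesen formulation.

```python
# Classical toy, showing the kind of binary optimisation QAOA targets (not the
# Chalmers/Jeppesen formulation): assign 2 aircraft to 2 routes, encoding
# "each route gets exactly one aircraft" and "each aircraft flies at most one
# route" as quadratic penalties, then brute-forcing the 2^4 bitstrings.
from itertools import product

cost = [[1.0, 3.0],     # cost[a][r]: cost of aircraft a flying route r
        [2.0, 1.5]]
P = 10.0                # penalty weight for violated constraints

def energy(x):          # x[a][r] in {0, 1}
    e = sum(cost[a][r] * x[a][r] for a in range(2) for r in range(2))
    for r in range(2):                              # each route exactly once
        e += P * (sum(x[a][r] for a in range(2)) - 1) ** 2
    for a in range(2):                              # each aircraft at most once
        s = sum(x[a][r] for r in range(2))
        e += P * s * (s - 1)
    return e

best = min(product((0, 1), repeat=4),
           key=lambda bits: energy([[bits[0], bits[1]], [bits[2], bits[3]]]))
print("optimal assignment bits (a0->r0, a0->r1, a1->r0, a1->r1):", best)
```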

With this feat, the researchers were first to show that the QAOA algorithm can solve the problem of assigning aircraft to routes in practice. They also managed to run the algorithm one level further than anyone before, an achievement that requires very good hardware and accurate control.

"We have shown that we have the ability to map relevant problems onto our quantum processor. We still have a small number of qubits, but they work well. Our plan has been to first make everything work very well on a small scale, before scaling up," says Jonas Bylander, senior researcher responsible for the experimental design, and one of the leaders of the project of building a quantum computer at Chalmers.

The theorists in the research team also simulated solving the same optimisation problem for up to 278 aircraft, which would require a quantum computer with 25 qubits.

"The results remained good as we scaled up. This suggests that the QAOA algorithm has the potential to solve this type of problem at even larger scales," says Giulia Ferrini.

Surpassing today's best computers would, however, require much larger devices. The researchers at Chalmers have now begun scaling up and are currently working with five quantum bits. The plan is to reach at least 20 qubits by 2021 while maintaining the high quality.

Credit: 
Chalmers University of Technology

Organic molecules on a metal surface... a machinist's best friend

image: Purdue University innovators are working on technologies to make it easier to cut metals.

Image: 
Purdue University/Erin Easterling

WEST LAFAYETTE, Ind. - How can you improve the cutting of "gummy" metals? Purdue University innovators have come up with an answer - and their findings may help in manufacturing products and reducing component failures.

The researchers previously showed that the application of a permanent marker or Sharpie, glue or adhesive film made it easier to cut metals such as aluminum, stainless steels, nickel, copper and tantalum for industrial applications. Marking the metal surface to be machined with ink or an adhesive dramatically reduced the force of cutting, leaving a clean cut in seconds. Now, they have discovered how these films produce the effect.

"We have found that you only need the organic film from the markers or glue to be one molecule thick for it to work," said Srinivasan Chandrasekar, Purdue professor of industrial engineering. "This ultra-thin film helps achieve smoother, cleaner and faster cuts than current machining processes. It also reduces the cutting forces and energy, and improves the outcomes for manufacturing across industries such as biomedical, energy, defense and aerospace."

The research is published in Science Advances. The study involves a collaboration between researchers at Purdue, Osaka University (Japan) and the Indian Institute of Science (India). The research is supported by the National Science Foundation and U.S. Department of Energy.

If a significant improvement can be made to the machinability of gummy metals or alloys - that is how well they cut, drill or grind - then there is potential to lower the cost of products, improve their performance or enable new and improved product designs.

The researchers found, using organic monolayer films created by molecular self-assembly, that the molecule chain length and its adsorption to the metal surface are key to realizing these improvements. By using the "right" organic molecules, the metal is locally embrittled resulting in improved machining.

"We are also learning through our discovery more about how environmental factors influence failure of metals," said Anirudh Udupa, a lead author on the study and a researcher in Purdue's School of Industrial Engineering. "As we decipher how the organic molecular films improve the machinability of these metals, the better also is our understanding of common environment-assisted failures in metals, such as stress-corrosion cracking, hydrogen embrittlement and liquid metal embrittlement."

The Purdue innovators worked with the Purdue Research Foundation Office of Technology Commercialization to patent this technology.

Credit: 
Purdue University

AI-powered microscope could check cancer margins in minutes

image: A new microscope called DeepDOF uses artificial intelligence to quickly and inexpensively image all of the cells in large tissue sections (left) at high resolution with minimal preparation, eliminating the costly and time-consuming process of mounting thin tissue slices on slides (right).

Image: 
Photo by Brandon Martin/Rice University

HOUSTON - (Dec. 17, 2020) - When surgeons remove cancer, one of the first questions is, "Did they get it all?" Researchers from Rice University and the University of Texas MD Anderson Cancer Center have created a new microscope that can quickly and inexpensively image large tissue sections, potentially during surgery, to find the answer.

The microscope can rapidly image relatively thick pieces of tissue with cellular resolution, and could allow surgeons to inspect the margins of tumors within minutes of their removal. It was created by engineers and applied physicists at Rice and is described in a study published this week in the Proceedings of the National Academy of Sciences.

"The main goal of the surgery is to remove all the cancer cells, but the only way to know if you got everything is to look at the tumor under a microscope," said Rice's Mary Jin, a Ph.D. student in electrical and computer engineering and co-lead author of the study. "Today, you can only do that by first slicing the tissue into extremely thin sections and then imaging those sections separately. This slicing process requires expensive equipment and the subsequent imaging of multiple slices is time-consuming. Our project seeks to basically image large sections of tissue directly, without any slicing."

Rice's deep learning extended depth-of-field microscope, or DeepDOF, makes use of an artificial intelligence technique known as deep learning to train a computer algorithm to optimize both image collection and image post-processing.

With a typical microscope, there's a trade-off between spatial resolution and depth-of-field, meaning only things that are the same distance from the lens can be brought clearly into focus. Features that are even a few millionths of a meter closer or further from the microscope's objective will appear blurry. For this reason, microscope samples are typically thin and mounted between glass slides.
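
The scale of this trade-off can be seen from the textbook estimates that lateral resolution improves roughly as λ/(2·NA) while depth of field shrinks roughly as λ/NA². The sketch below uses illustrative numbers, not DeepDOF's design parameters.

```python
# Textbook estimate of the trade-off described above (illustrative numbers,
# not DeepDOF's design values): lateral resolution improves linearly with the
# numerical aperture (NA), but depth of field collapses roughly as 1/NA^2.
wavelength_um = 0.55                      # green light, micrometres

for NA in (0.1, 0.25, 0.5):
    resolution = wavelength_um / (2 * NA)            # Abbe limit, ~lambda/(2 NA)
    depth_of_field = wavelength_um / (NA ** 2)       # classical DOF scaling
    print(f"NA={NA}: resolution ~{resolution:.2f} um, DOF ~{depth_of_field:.1f} um")
```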

Slides are used to examine tumor margins today, and they aren't easy to prepare. Removed tissue is usually sent to a hospital lab, where experts either freeze it or prepare it with chemicals before making razor-thin slices and mounting them on slides. The process is time-consuming and requires specialized equipment and workers with skilled training. It is rare for hospitals to have the ability to examine slides for tumor margins during surgery, and hospitals in many parts of the world lack the necessary equipment and expertise.

"Current methods to prepare tissue for margin status evaluation during surgery have not changed significantly since first introduced over 100 years ago," said study co-author Ann Gillenwater, M.D., a professor of head and neck surgery at MD Anderson. "By bringing the ability to accurately assess margin status to more treatment sites, the DeepDOF has potential to improve outcomes for cancer patients treated with surgery."

Jin's Ph.D. advisor, study co-corresponding author Ashok Veeraraghavan, said DeepDOF uses a standard optical microscope in combination with an inexpensive optical phase mask costing less than $10 to image whole pieces of tissue and deliver depths-of-field as much as five times greater than today's state-of-the-art microscopes.

"Traditionally, imaging equipment like cameras and microscopes are designed separately from imaging processing software and algorithms," said study co-lead author Yubo Tang, a postdoctoral research associate in the lab of co-corresponding author Rebecca Richards-Kortum. "DeepDOF is one of the first microscopes that's designed with the post-processing algorithm in mind."

The phase mask is placed over the microscope's objective to modulate the light coming into the microscope.

"The modulation allows for better control of depth-dependent blur in the images captured by the microscope," said Veeraraghavan, an imaging expert and associate professor in electrical and computer engineering at Rice. "That control helps ensure that the deblurring algorithms that are applied to the captured images are faithfully recovering high-frequency texture information over a much wider range of depths than conventional microscopes."

DeepDOF does this without sacrificing spatial resolution, he said.

"In fact, both the phase mask pattern and the parameters of the deblurring algorithm are learned together using a deep neural network, which allows us to further improve performance," Veeraraghavan said.

DeepDOF uses a deep learning neural network, an expert system that can learn to make humanlike decisions by studying large amounts of data. To train DeepDOF, researchers showed it 1,200 images from a database of histological slides. From that, DeepDOF learned how to select the optimal phase mask for imaging a particular sample and it also learned how to eliminate blur from the images it captures from the sample, bringing cells from varying depths into focus.

"Once the selected phase mask is printed and integrated into the microscope, the system captures images in a single pass and the ML (machine learning) algorithm does the deblurring," Veeraraghavan said.

Richards-Kortum, Rice's Malcolm Gillis University Professor, professor of bioengineering and director of the Rice 360° Institute for Global Health, said DeepDOF can capture and process images in as little as two minutes.

"We've validated the technology and shown proof-of-principle," Richards-Kortum said. "A clinical study is needed to find out whether DeepDOF can be used as proposed for margin assessment during surgery. We hope to begin clinical validation in the coming year."

Credit: 
Rice University