Tech

'Spear and shield' inspire high-toughness microstructure

A team led by Prof. NI Yong and Prof. HE Linghui from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) designed a discontinuous fibrous Bouligand (DFB) architecture, a combination of Bouligand and nacreous staggered structures. Systematic bending experiments on 3D-printed single-edge notched specimens with this architecture indicated that total energy dissipation is insensitive to initial crack orientation and peaks at critical pitch angles. The study was published in PNAS on June 22.

The evolutionary contest between "spear" predators and "shield" prey in the biological world has shown that tailoring microstructure is an important way for structural materials to obtain extraordinary mechanical properties. The "spear" of the mantis shrimp, with its Bouligand-type microstructure, and the "shield" of the abalone, with its nacreous staggered architecture, are of particular research interest.

The researchers drew inspiration from the structural arrangement in the exoskeleton of aggressive crustaceans, where chitin-protein nanofibrils of finite characteristic length are arranged in overlapped arrays to form lamellae, and those lamellae are stacked in a twisted plywood pattern. Although structure is not the only factor at play (the mechanical properties, size, and geometry of the materials also matter in the natural competition between mantis shrimps and abalone shells), they hypothesized that a structure combining the toughening mechanisms of crack twisting and crack bridging endows the mantis shrimp with remarkable fracture resistance as well as crack-orientation insensitivity.

To test the hypothesis, they used 3D printing to fabricate a DFB architecture combining Bouligand and nacreous staggered structures and examined how its fracture resistance depends on controlled architectural parameters. They also developed a fracture mechanics model to elucidate the mechanisms behind the crack-orientation insensitivity and maximum energy dissipation of the DFB architecture.

The results identified a hybrid fracture mode, arising from the competition between energy dissipation by crack twisting and by crack bridging in the DFB architecture, as the origin of the maximum fracture energy at a critical pitch angle.

This finding sheds light on how nature evolves materials with exceptional fracture toughness and crack-orientation insensitivity. The design strategies and parameter-selection principles provided here enable the fabrication of highly fracture-resistant fibrous composite systems that adapt to loads in various orientations.

This study not only reveals the microstructural origin of the excellent fracture toughness of biomaterials, but also provides new biomimetic design ideas and parameter-selection principles for preparing high-performance advanced composite materials.

Credit: 
University of Science and Technology of China

3D magnetotelluric imaging reveals magma recharging beneath Weishan volcano

image: Cartoon interpretation of magma reservoirs beneath Weishan volcano.

Image: 
GAO Ji et al.

A collaborative research team from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) and the China Geological Survey (CGS) has obtained, for the first time, a high-resolution 3D resistivity model extending to approximately 20 km depth beneath the Weishan volcano in the Wudalianchi volcanic field (WVF). The study, published in Geology, revealed images of potential magma chambers and estimated their melt fractions.

The WVF, in northeastern China, comprises 14 volcanoes, the most recent of which erupted about 300 years ago, and is one of the largest active volcanic areas in the country. Volcanic activity is hazardous to human life and has severe environmental consequences, so characterizing the magmatic system beneath the volcanoes is important for understanding the nature of eruptions.

In conjunction with the Center for Hydrogeology and Environmental Geology, CGS, Prof. ZHANG Jianghai's group from the School of Earth and Space Sciences, USTC, used magnetotelluric (MT) methods to image magma reservoirs beneath Weishan volcano and obtained a high-resolution spatial resistivity distribution down to 20 km depth. Their results showed vertically distributed low-resistivity anomalies that are narrowest in the middle, interpreted as magma reservoirs in both the upper and middle crust, linked by very thin vertical channels for magma upwelling.

Meanwhile, in cooperation with the Institute of Geodesy and Geophysics of CAS, they combined the velocity model from ambient noise tomography (ANT) with the resistivity model from MT imaging to estimate that the melt fractions of the magma reservoirs in the upper and middle crust reliably exceed about 15%. This suggests that an even deeper source is recharging the magma chambers and keeping the melt fraction increasing, indicating that the volcano is still active.

Considering the significant melt fractions and the earthquakes and tremors that occurred around the magma reservoirs several years ago, the Weishan volcano is likely in an active stage with magma recharging. Although the melt fraction has not reached the eruption threshold (~40%), monitoring capabilities need to be increased to better forecast potential future eruptions.

Overall, this study reveals that the volcanoes in northeastern China may be in an active stage. This poses a serious threat to people and the environment, so proper monitoring is required to forecast the associated hazards.

Credit: 
University of Science and Technology of China

Spraying ethanol on nanofiber masks makes them reusable

image: Evaluation of breathing comfort by infrared thermal camera: (a) air and moisture
transmission and (b) CO2 transmission

Image: 
Hyung Joon Cha (POSTECH)

As the COVID-19 pandemic spreads around the world, masks have become an essential personal hygiene product, the first line of defense protecting one's respiratory system against the viruses and germs that spread through droplets in the air. It is wasteful to dispose of masks after a single use, but worrisome to reuse them. In light of this, a Korean-Japanese research team recently released findings that compare and analyze the performance and functional differences of the filters used in masks after they are cleaned with ethanol.

Professor Hyung Joon Cha and doctoral candidates Jaeyun Lee and Yeonsu Jeong of the Department of Chemical Engineering at POSTECH, together with Professor Ick-Soo Kim and doctoral candidates Sana Ullah and Azeem Ullah of Shinshu University in Japan, jointly analyzed the filtration efficiency, airflow rate, and surface and morphological properties of mask filters after cleaning treatments.

The research team verified the results using two types of cleaning procedure: the first sprayed 75% ethanol on the mask filter and then air-dried it; the second saturated the filter in a 75% ethanol solution and then air-dried it.

A study of the melt-blown filters commonly used in N95 masks and of nanofiber filters produced by electrospinning found that spraying ethanol three or more times on the two materials, or dipping them in the ethanol solution for more than five minutes, effectively inhibits pathogens that can remain inside the mask filter.

The filtration efficiency of both materials on first use was measured at 95% and above, indicating that the wearer's respiratory system is effectively protected. It was also confirmed that the surfaces of both materials do not allow water to adhere well, sufficiently hindering wetting by moisture or saliva (droplets).

However, the filtration efficiency of the melt-blown filter decreased by up to 64% when it was cleaned with ethanol solution and reused. By contrast, nanofiber filters maintained nearly consistent, high filtration efficiency even after being reused 10 times through ethanol-spray cleaning or 24-hour immersion in ethanol solution.

The joint research team attributed this difference to the decrease in static electricity in the filters after cleaning. Melt-blown filters rely in part on the electrostatic charge of the surface to filter particles. Nanofiber filters, however, do not rely on static electricity; they filter according to the morphological properties and pore size of the surface, and are not deformed by ethanol.

Moreover, nanofiber filters have better breathability because they have much higher heat and carbon dioxide emission capabilities compared to the melt-blown filters. Biosafety tests using human skin and vascular cells also confirmed no cytotoxicity.

In summary, both mask filters have similar filtration performance in their initial use, but only nanofiber filters can be reused multiple times through a simple ethanol cleaning process.

POSTECH Professor Hyung Joon Cha explained the significance of the research by commenting, "This research is an experiment that has verified the biosafety of nanofiber masks - which have recently become a hot issue - and their ability to maintain filtration efficiency after cleaning."

In addition, Professor Ick-Soo Kim of Shinshu University added, "I hope the nanofiber masks will help everyone as a means of preventing contagion in the possible second or third wave of COVID-19."

Credit: 
Pohang University of Science & Technology (POSTECH)

A focused approach to imaging neural activity in the brain

image: Using a new calcium indicator that accumulates in the cell bodies of neurons (boxes at right), MIT neuroscientists are able to more accurately image neuron activity. Traditional calcium indicators (boxes at left) can generate crosstalk that blurs the images.

Image: 
Howard Gritton, Boston University

CAMBRIDGE, MA -- When neurons fire an electrical impulse, they also experience a surge of calcium ions. By measuring those surges, researchers can indirectly monitor neuron activity, helping them to study the role of individual neurons in many different brain functions.

One drawback to this technique is the crosstalk generated by the axons and dendrites that extend from neighboring neurons, which makes it harder to get a distinctive signal from the neuron being studied. MIT engineers have now developed a way to overcome that issue, by creating calcium indicators, or sensors, that accumulate only in the body of a neuron.

"People are using calcium indicators for monitoring neural activity in many parts of the brain," says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT. "Now they can get better results, obtaining more accurate neural recordings that are less contaminated by crosstalk."

To achieve this, the researchers fused a commonly used calcium indicator called GCaMP to a short peptide that targets it to the cell body. The new molecule, which the researchers call SomaGCaMP, can be easily incorporated into existing workflows for calcium imaging, the researchers say.

Boyden is the senior author of the study, which appears today in Neuron. The paper's lead authors are Research Scientist Or Shemesh, postdoc Changyang Linghu, and former postdoc Kiryl Piatkevich.

Molecular focus

The GCaMP calcium indicator consists of a fluorescent protein attached to a calcium-binding protein called calmodulin, and a calmodulin-binding protein called M13 peptide. GCaMP fluoresces when it binds to calcium ions in the brain, allowing researchers to indirectly measure neuron activity.

"Calcium is easy to image, because it goes from a very low concentration inside the cell to a very high concentration when a neuron is active," says Boyden, who is also a member of MIT's McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.

The simplest way to detect these fluorescent signals is with a type of imaging called one-photon microscopy. This is a relatively inexpensive technique that can image large brain samples at high speed, but the downside is that it picks up crosstalk between neighboring neurons. GCaMP goes into all parts of a neuron, so signals from the axons of one neuron can appear as if they are coming from the cell body of a neighbor, making the signal less accurate.

A more expensive technique called two-photon microscopy can partly overcome this by focusing light very narrowly onto individual neurons, but this approach requires specialized equipment and is also slower.

Boyden's lab decided to take a different approach, by modifying the indicator itself, rather than the imaging equipment.

"We thought, rather than optically focusing light, what if we molecularly focused the indicator?" he says. "A lot of people use hardware, such as two-photon microscopes, to clean up the imaging. We're trying to build a molecular version of what other people do with hardware."

In a related paper that was published last year, Boyden and his colleagues used a similar approach to reduce crosstalk between fluorescent probes that directly image neurons' membrane voltage. In parallel, they decided to try a similar approach with calcium imaging, which is a much more widely used technique.

To target GCaMP exclusively to cell bodies of neurons, the researchers tried fusing GCaMP to many different proteins. They explored two types of candidates -- naturally occurring proteins that are known to accumulate in the cell body, and human-designed peptides -- working with MIT biology Professor Amy Keating, who is also an author of the paper. These synthetic proteins are coiled-coil proteins, which have a distinctive structure in which multiple helices of the proteins coil together.

Less crosstalk

The researchers screened about 30 candidates in neurons grown in lab dishes, and then chose two -- one artificial coiled-coil and one naturally occurring peptide -- to test in animals. Working with Misha Ahrens, who studies zebrafish at the Janelia Research Campus, they found that both proteins offered significant improvements over the original version of GCaMP. The signal-to-noise ratio -- a measure of the strength of the signal compared to background activity -- went up, and activity between adjacent neurons showed reduced correlation.

In studies of mice, performed in the lab of Xue Han at Boston University, the researchers also found that the new indicators reduced the correlations between activity of neighboring neurons. Additional studies using a miniature microscope (called a microendoscope), performed in the lab of Kay Tye at the Salk Institute for Biological Studies, revealed a significant increase in signal-to-noise ratio with the new indicators.

"Our new indicator makes the signals more accurate. This suggests that the signals that people are measuring with regular GCaMP could include crosstalk," Boyden says. "There's the possibility of artifactual synchrony between the cells."

In all of the animal studies, they found that the artificial, coiled-coil protein produced a brighter signal than the naturally occurring peptide that they tested. Boyden says it's unclear why the coiled-coil proteins work so well, but one possibility is that they bind to each other, making them less likely to travel very far within the cell.

Boyden hopes to use the new molecules to try to image the entire brains of small animals such as worms and fish, and his lab is also making the new indicators available to any researchers who want to use them.

"It should be very easy to implement, and in fact many groups are already using it," Boyden says. "They can just use the regular microscopes that they already are using for calcium imaging, but instead of using the regular GCaMP molecule, they can substitute our new version."

Credit: 
Massachusetts Institute of Technology

Computational model decodes speech by predicting it

The brain analyses spoken language by recognising syllables. Scientists from the University of Geneva (UNIGE) and the Evolving Language National Centre for Competence in Research (NCCR) have designed a computational model that reproduces the complex mechanism employed by the central nervous system to perform this operation. The model, which brings together two independent theoretical frameworks, uses the equivalent of neuronal oscillations produced by brain activity to process the continuous sound flow of connected speech. The model functions according to a theory known as predictive coding, whereby the brain optimizes perception by constantly trying to predict the sensory signals based on candidate hypotheses (syllables in this model). The resulting model, described in the journal Nature Communications, enabled the live recognition of thousands of syllables contained in hundreds of sentences spoken in natural language. This has validated the idea that neuronal oscillations can be used to coordinate the flow of syllables we hear with the predictions made by our brain.

"Brain activity produces neuronal oscillations that can be measured using electroencephalography," begins Anne-Lise Giraud, professor in the Department of Basic Neurosciences in UNIGE's Faculty of Medicine and co-director of the Evolving Language NCCR. These are electromagnetic waves that result from the coherent electrical activity of entire networks of neurons. There are several types, defined according to their frequency. They are called alpha, beta, theta, delta or gamma waves. Taken individually or superimposed, these rhythms are linked to different cognitive functions, such as perception, memory, attention, alertness, etc.

However, neuroscientists do not yet know whether they actively contribute to these functions and how. In an earlier study published in 2015, Professor Giraud's team showed that the theta waves (low frequency) and gamma waves (high frequency) coordinate to sequence the sound flow in syllables and to analyse their content so they can be recognised.

The Geneva-based scientists developed a spiking neural network computer model based on these physiological rhythms, whose performance in sequencing live (on-line) syllables was better than that of traditional automatic speech recognition systems.

The rhythm of the syllables

In their first model, the theta waves (between 4 and 8 Hertz) made it possible to follow the rhythm of the syllables as they were perceived by the system. Gamma waves (around 30 Hertz) were used to segment the auditory signal into smaller slices and encode them. This produces a "phonemic" profile linked to each sound sequence, which could be compared, a posteriori, to a library of known syllables. One of the advantages of this type of model is that it spontaneously adapts to the speed of speech, which can vary from one individual to another.
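As a rough illustration of the first step, band-pass filtering a signal's amplitude envelope in the theta range can recover the syllabic rhythm. The sketch below is a toy stand-in written as plain Python signal processing, not the authors' spiking-network model; the sampling rate, filter order, and the synthetic 5 Hz "syllable" modulation are all assumed for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # sampling rate in Hz (assumed)

def theta_envelope(signal, fs, low=4.0, high=8.0):
    """Extract the amplitude envelope, then keep its theta-band (4-8 Hz) part."""
    envelope = np.abs(hilbert(signal))                 # broadband envelope
    b, a = butter(2, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, envelope)                    # theta-filtered envelope

# Synthetic "speech": a 5 Hz syllable-like rhythm amplitude-modulating a carrier.
t = np.arange(0, 2.0, 1 / fs)
speech = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)
theta = theta_envelope(speech, fs)

# The dominant frequency of the filtered envelope recovers the ~5 Hz rate.
spectrum = np.abs(np.fft.rfft(theta))
peak_hz = np.fft.rfftfreq(theta.size, 1 / fs)[spectrum.argmax()]
```

Because this kind of tracker locks onto whatever modulation rate is present, a slower or faster speaker simply shifts the recovered peak, which mirrors the model's spontaneous adaptation to speech rate.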

Predictive coding

In this new article, to stay closer to the biological reality, Professor Giraud and her team developed a new model that incorporates elements from another theoretical framework, independent of the neuronal oscillations: "predictive coding". "This theory holds that the brain functions so optimally because it is constantly trying to anticipate and explain what is happening in the environment by using learned models of how outside events generate sensory signals. In the case of spoken language, it attempts to find the most likely causes of the sounds perceived by the ear as speech unfolds, on the basis of a set of mental representations that have been learned and that are being permanently updated," says Dr. Itsaso Olasagasti, a computational neuroscientist in Giraud's team, who supervised the new model implementation.

"We developed a computer model that simulates this predictive coding," explains Sevada Hovsepyan, a researcher in the Department of Basic Neurosciences and the article's first author. "And we implemented it by incorporating oscillatory mechanisms."

Tested on 2,888 syllables

The sound entering the system is first modulated by a theta (slow) wave that resembles what neuron populations produce. It makes it possible to signal the contours of the syllables. Trains of (fast) gamma waves then help encode the syllable as and when it is perceived. During the process, the system suggests possible syllables and corrects the choice if necessary. After going back and forth between the two levels several times, it discovers the right syllable. The system is subsequently reset to zero at the end of each perceived syllable.
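The predict-compare-update loop described above can be caricatured in a few lines. This is a deliberately minimal sketch, not the published spiking-network model: the syllable "templates", the prediction-error measure, and the belief-update rule are all invented for illustration:

```python
import numpy as np

# Candidate syllable hypotheses as hypothetical "phonemic" feature vectors.
templates = {"ba": np.array([1.0, 0.0, 0.0]),
             "da": np.array([0.0, 1.0, 0.0]),
             "ga": np.array([0.0, 0.0, 1.0])}

def recognise(observation, steps=20, lr=0.5):
    """Repeatedly compare each hypothesis to the input and reweight beliefs."""
    beliefs = {s: 1.0 / len(templates) for s in templates}  # flat prior
    for _ in range(steps):
        # Prediction error: squared distance between template and observation.
        errors = {s: np.sum((t - observation) ** 2) for s, t in templates.items()}
        # Boost hypotheses that predict the input well, then renormalise.
        for s in beliefs:
            beliefs[s] *= np.exp(-lr * errors[s])
        z = sum(beliefs.values())
        beliefs = {s: b / z for s, b in beliefs.items()}
    return max(beliefs, key=beliefs.get)  # the winning syllable; reset afterwards

noisy_da = np.array([0.1, 0.9, 0.2])  # noisy evidence for "da"
winner = recognise(noisy_da)
```

The "reset to zero" at the end of each syllable corresponds here to discarding the final beliefs and starting the next recognition from the flat prior.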

The model has been successfully tested using 2,888 different syllables contained in 220 sentences, spoken in natural language in English. "On the one hand, we succeeded in bringing together two very different theoretical frameworks in a single computer model," explains Professor Giraud. "On the other, we have shown that neuronal oscillations most likely rhythmically align the endogenous functioning of the brain with signals that come from outside via the sensory organs. If we put this back in predictive coding theory, it means that these oscillations probably allow the brain to make the right hypothesis at exactly the right moment."

Credit: 
Université de Genève

Function-based sequencing technique permits analysis of just a single bacterial cell

image: Design of the RAGE chip

Image: 
XU Teng

A new function-based sequencing technique using optical tweezers and gravity is letting researchers analyze bacterial cells one by one. The study, conducted by researchers from the Qingdao Institute of Bioenergy and Bioprocess Technology (QIBEBT) of the Chinese Academy of Sciences, was published in Small on June 9.

Bacterial cells are so tiny that it has been very difficult to analyze just one bacterial cell, or bacterium, at a time. As a result, lots of them, sometimes millions at a time, have to be analyzed simultaneously. This tells us a lot about the group as a whole, but it prevents researchers from being able to investigate the link between a single bacterium's genotype, or complete set of genes, and its phenotype, or the set of characteristics that result from the interaction of its genes and the environment.

A simple way to think about the distinction between genotype and phenotype is to note that while a single corn plant's genotype might allow it to grow three feet tall, if not much fertilizer is applied, then the corn plant's phenotype might be that it only grew to be two feet tall.  

Analysis of the link between genotype and phenotype is straightforward for such a large organism, and very useful too.

Similar insights into the genotype-phenotype relationship of a single bacterium, not least with respect to infectious disease, have long been sought but have been hindered by a bacterium's size, which is typically just a few millionths of a meter in length.

Researchers from the Single-Cell Center at QIBEBT have developed a bacteria-profiling technique called Raman-Activated Gravity-driven single-cell Encapsulation and Sequencing, or RAGE sequencing. In the technique, the phenotypes of individual cells are analyzed one by one, then carefully packaged in a 'picoliter microdroplet' (a trillionth of a liter) that is exported and indexed in a one cell per test-tube manner ready for gene sequencing later on. 

The process involves a RAGE 'chip' of two quartz layers bonded together and that have an inlet hole, oil well, and micro-channel etched into them. 'Optical tweezers', or a highly focused laser beam that produces an attractive or repulsive force, manipulates the bacterium in liquid through the channel, assisted by gravity.

The form, structure and metabolic features of the bacterium - essentially its phenotype - are then investigated via a detection window using 'Raman spectroscopy', an analytical technique that exploits the interaction of light with the chemical bonds within a material.

"Finally the bacterium is encapsulated in the microdroplet, which is then transferred to a tube for gene sequencing or cultivation of the cell," said Prof. MA Bo, corresponding author of the study.

The microdroplet packaging is extremely important, as it allows the very small amount of DNA in a single bacterial cell to be amplified in a very even way, a key challenge for decoding its genome fully, according to XU Teng, a graduate student on the team that developed the method.  

"We are able to, directly from a urine sample, obtain antibiotic resistance features and an essentially complete genome sequence simultaneously from precisely one cell. This offers the highest possible resolution for bacterial diagnosis and drug treatment," said Prof. XU Jian, another corresponding author of the study. 

Based on this technology, the researchers have developed an instrument called CAST-R to support rapid antibiotic selection and genome sequencing of pathogens, all at the level of one cell. This instrument means much faster and more precise antibiotic treatment, and much higher sensitivity in tracking and fighting bacterial antibiotic resistance, which is a major threat to the future of human society.  

Credit: 
Chinese Academy of Sciences Headquarters

Extensive review of spin-gapless semiconductors: Next-generation spintronics candidates

image: The band structures of parabolic and Dirac type SGS materials with spin-orbital coupling, which leads to the quantum anomalous Hall effect.

Image: 
FLEET

A University of Wollongong team has published an extensive review of spin-gapless semiconductors (SGSs).

Spin-gapless semiconductors (SGSs) are a new class of zero-gap materials with fully spin-polarised electrons and holes.

The study tightens the search for materials that would allow for ultra-fast, ultra-low energy 'spintronic' electronics with no wasted dissipation of energy from electrical conduction.

The defining property of SGS materials relates to their 'bandgap', the gap between the material's valence and conduction bands, which determines their electronic properties.

In general, one spin channel (ie, one of the spin directions, up or down) is semiconducting with a finite band gap, while the other spin channel has a closed (zero) band gap.

In a spin-gapless semiconductor (SGS), conduction and valence band edges touch in one spin channel, and no threshold energy is required to move electrons from occupied (valence) states to empty (conduction) states.

This property gives these materials unique properties: their band structures are extremely sensitive to external influences (eg, pressure or magnetic field).

Most SGS materials are ferromagnetic, with high Curie temperatures.

The band structures of the SGSs can have two types of energy-momentum dispersions: Dirac (linear) dispersion or parabolic dispersion.

The new review investigates both Dirac and the three sub-types of parabolic SGSs in different material systems.

In Dirac-type SGSs, electron mobility is two to four orders of magnitude higher than in classical semiconductors. Because very little energy is needed to excite electrons in an SGS, charge concentrations are very easily 'tuned', for example by introducing a new element (doping) or by applying a magnetic or electric field (gating).
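The two dispersion types are easy to state quantitatively. The following sketch (with an assumed graphene-like Fermi velocity and a free-electron effective mass, not values from the review) shows that in both the linear and the parabolic case, the conduction and valence bands of the gapless spin channel touch at k = 0:

```python
import numpy as np

hbar = 1.0546e-34   # reduced Planck constant, J*s
v_f = 1.0e6         # assumed Fermi velocity, m/s (graphene-like)
m_eff = 9.109e-31   # assumed effective mass, kg (free-electron mass)

k = np.linspace(-1e9, 1e9, 201)  # wavevector, 1/m

# Dirac (linear) dispersion: E = +/- hbar * v_f * |k|, bands touch at k = 0.
E_dirac_cond = hbar * v_f * np.abs(k)
E_dirac_val = -E_dirac_cond

# Parabolic dispersion: E = +/- hbar^2 k^2 / (2 m), bands also touch at k = 0.
E_parab_cond = (hbar * k) ** 2 / (2 * m_eff)
E_parab_val = -E_parab_cond

# In both cases the band gap in this spin channel is zero.
gap_dirac = E_dirac_cond.min() - E_dirac_val.max()
gap_parab = E_parab_cond.min() - E_parab_val.max()
```

The other spin channel, by contrast, would carry a finite semiconducting gap, which is what makes the carriers fully spin-polarised.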

The Dirac type spin gapless semiconductors exhibit fully spin polarized Dirac cones and offer a platform for spintronics and low-energy consumption electronics via dissipationless edge states driven by the quantum anomalous Hall effect.

"Potential applications of SGSs in next-generation spintronic devices are outlined, along with low-energy electronics and optoelectronics with high speed and low energy consumption," according to Professor Xiaolin Wang, Director of the Institute for Superconducting and Electronic Materials, UoW, and a theme leader at FLEET.

Since spin-gapless semiconductors (SGSs) were first proposed by Professor Xiaolin Wang in 2008, efforts worldwide to find suitable candidate materials have particularly focussed on Dirac-type SGSs.

In the past decade, a large number of Dirac or parabolic type SGSs have been predicted by density functional theory, and some parabolic SGSs have been experimentally demonstrated in both monolayer and bulk materials.

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

Side effects of testicular cancer treatment predicted by machine learning

Testicular cancer is the most common cancer in young men, and the number of new cases is increasing worldwide. The survival rate is relatively high, with 95% of patients surviving after 10 years, if the cancer is detected in time and treated properly. However, the standard chemotherapy includes cisplatin, which has a wide range of long-term side effects, one of which can be nephrotoxicity.

"In testicular cancer patients, cisplatin-based chemotherapy is essential to ensure a high cure rate. Unfortunately, treatment can cause side effects, including renal impairment. However, we are not able to pinpoint who ends up having side effects and who does not," says Jakob Lauritsen from Rigshospitalet.

Patient data is key to knowledge

The researchers therefore asked the question: How far can we go in predicting nephrotoxicity risk in these patients using machine learning? First, it required some patient data.

"Using a cohort of testicular-cancer patients from Denmark, in collaboration with Rigshospitalet, we developed a machine learning predictive model to tackle this problem," says Sara Garcia, a researcher at DTU Health Technology, who, together with Jakob Lauritsen, is first author of an article published recently in JNCI Cancer Spectrum.

The high quality of Danish patient records allowed the identification of key patients, and a technology partnership between DMAC and YouDoBio facilitated DNA collection from patients at their homes using saliva kits delivered by post. The project, originally funded by the Danish Cancer Society, developed several strategies for analysing genomic and patient data, bringing forward the promise of artificial intelligence for the integration of diverse data streams.

Best predictions for low-risk patients

The model generated a risk score for an individual developing nephrotoxicity during chemotherapy and proposed key genes likely at play. Patients were classified into high-, intermediate-, and low-risk groups. For the high-risk group, the model correctly predicted 67% of affected patients, while for the low-risk group it correctly predicted 92% of the patients who did not develop nephrotoxicity.
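To make the two reported figures concrete, here is a hypothetical worked example; the patient counts are invented, not the study's data. Among patients called high-risk, 67% truly developed nephrotoxicity, and among those called low-risk, 92% did not:

```python
# Hypothetical counts of 100 patients per predicted risk group (illustrative only).
high_risk = {"affected": 67, "unaffected": 33}  # model said "high risk"
low_risk = {"affected": 8, "unaffected": 92}    # model said "low risk"

def fraction_correct(group, correct_label):
    """Fraction of a predicted risk group whose actual outcome matched."""
    return group[correct_label] / sum(group.values())

# For high-risk calls, "correct" means the patient was affected (a PPV-like figure);
# for low-risk calls, "correct" means the patient was unaffected (an NPV-like figure).
ppv_high = fraction_correct(high_risk, "affected")
npv_low = fraction_correct(low_risk, "unaffected")
```

Reading the figures this way, per predicted group rather than per outcome, matches how the article states them: the model is most reliable at reassuring patients it places in the low-risk group.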

"Understanding how and where AI technologies can be applied in clinical care is increasingly important, also for the future of responsible AI. Despite the complexity of patient data, the high quality of Danish registries and clinical research makes this a great environment for exploring new data methodologies," says Ramneek Gupta.

"Being able to predict late side-effects will ultimately give us the opportunity for preventive action and improved quality of life," adds Gedske Daugaard, who is joint senior author with Ramneek Gupta.

Credit: 
Technical University of Denmark

Al2Pt for oxygen evolution reaction in water splitting

image: Atomic QTAIM basins of platinum and aluminium (transparent) and Al-Pt bond basin (red) in the Al2Pt compound, revealing the pronounced charge transfer from Al to Pt atoms and polar character of Al-Pt atomic interactions.

Image: 
© MPI CPfS

The transition from fossil fuels to renewable energy sources depends strongly on the availability of effective systems for energy conversion and storage. With hydrogen as a carrier molecule, proton exchange membrane electrolysis offers numerous advantages, such as operation at high current densities, low gas crossover, and compact system design. However, its wide implementation is hindered by the slow kinetics of the oxygen evolution reaction (OER), whose enhancement requires low-abundance and expensive Ir-based electrocatalysts. Seeking the rational design of new types of OER electrocatalysts and addressing fundamental questions about the key reactions in energy conversion, the inter-institutional MPG consortium MAXNET Energy brought together scientists from different institutions in Germany and abroad. As a result of close and fruitful collaboration within this framework, scientists from the Chemical Metal Science department at MPI CPfS, together with experts from the Fritz Haber Institute in Berlin and MPI CEC in Muelheim an der Ruhr, developed a new concept for producing multifunctionality in electrocatalysis and successfully illustrated it with the intermetallic compound Al2Pt as a precursor for an OER electrocatalyst material.

The intermetallic compound Al2Pt (anti-CaF2 type of crystal structure) combines two characteristics important for electrocatalytic performance: (i) a reduced density of states at the Fermi level of Pt, and (ii) pronounced charge transfer from aluminium towards platinum, leading to strongly polar chemical bonding in this compound (Figure 1). These features provide inherent OER activity (Figure 2) and increased stability against complete oxidation under the harsh oxidative conditions of OER. Under OER conditions, Al2Pt undergoes restructuring in the near-surface region as a result of the self-controlled dissolution of aluminium (inset of Figure 2). The roughness and porosity of the in situ-formed near-surface microstructure compensate for the loss of specific activity. Even after an exceptionally long stability experiment (19 days) at a high current density (90 mA cm-2), the bulk material retains its structural and compositional integrity. Extending the choice of synthesis techniques, e.g. thin-film growth, and exploring the variety of intermetallic compounds set the main guidelines for future development of the proposed strategy.
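To put the 19-day stability run in perspective, the total charge passed per electrode area can be estimated directly from the quoted current density and duration. This is illustrative arithmetic only, not a figure reported in the paper:

```python
# Rough scale of the Al2Pt stability test: total charge passed per
# square centimetre at 90 mA cm^-2 over 19 days.

current_density_a_cm2 = 0.090      # 90 mA cm^-2, as quoted above
duration_s = 19 * 24 * 3600        # 19 days in seconds

charge_c_cm2 = current_density_a_cm2 * duration_s
print(f"Charge passed: {charge_c_cm2:.0f} C cm^-2")  # ≈ 147744 C cm^-2
```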

The research at the Max Planck Institute for Chemical Physics of Solids (MPI CPfS) in Dresden aims to discover and understand new materials with unusual properties.

In close cooperation, chemists and physicists (including chemists working on synthesis, experimentalists and theoreticians) use the most modern tools and methods to examine how the chemical composition and arrangement of atoms, as well as external forces, affect the magnetic, electronic and chemical properties of the compounds.

New quantum materials, physical phenomena and materials for energy conversion are the result of this interdisciplinary collaboration.

Credit: 
Max Planck Institute for Chemical Physics of Solids

Many families must 'dance' their way to COVID-19 survival -- study

Marketing managers and academics have been studying how families plan ahead and make decisions about family care and family consumption for a long time - but what happens when planning ahead is not possible? When consumers can't plan ahead...they 'dance'.

Although there has been a lot of talk about how COVID-19 has 'slowed down' family life, a new study in the Journal of Marketing Management by researchers at the University of Birmingham (UK), University of Melbourne (Australia) and Adolfo Ibanez University (Chile) argues that this is not the case for every family.

Dr Pilar Rojas Gaviria, Lecturer in Marketing at the University of Birmingham, comments: "For many families, life has become more precarious, anxious, and accelerated. Rather than a combination of strategic activities and well-planned decisions, we found that when normality is disrupted abruptly, family care looks more like an intricate improvised 'dance'."

Dr Rojas Gaviria and her colleagues note that when facing unplanned disruptions to family life, such as COVID-19, while some families may enjoy more free time because they are not commuting, others face unprecedented situations, such as disrupted careers, caring for others and suffering from the loss of income.

She comments: "We should avoid assumptions about families being affected in the same way. Many families are struggling with mental health while others are coping well. Many have lost friends or family members, others have not.

"This means that organisations should aim to better understand the needs of individual employees and their families and think about how they can support them by acknowledging that these needs are different and that they evolve through time."

In particular, Dr. Rojas Gaviria and her colleagues found that families who already deal with more intensive care needs - such as those with a family member with a chronic health condition - must 'dance' their way through unplanned disruptions such as the COVID-19 crisis.

Families strike a balance between day-to-day routines - resorting to what the researchers call 'grounding' activities - and other more creative, emotionally-laden and inspirational activities that go well beyond their daily schedules in order to counter massive disruption to their everyday life.

In their study of families living with diabetic children, they discovered how, in the midst of chaos, each family finds its own style to 'dance' through their life constraints by alternating 'grounding' and 'aerial' activities.

They also found that this process often occurs instinctively and invisibly, and is usually led by one family member who "orchestrates" resources and talents at hand to help their family develop its 'dance'.

Dr Rojas Gaviria adds: "In keeping that 'dance' going, it is essential for the family to balance 'grounding movements' with 'aerial movements' that soothe, inspire and motivate family members.

"For instance, we saw how, during the COVID-19 lockdown both 'grounding' activities - such as knitting, gardening and baking - combined with 'aerial' activities - like becoming a helping hand in the community, placing rainbows in the family home's windows, supporting local shops, fisheries and farms, or raising funds for the NHS - to comfort families and help them connect to each other, even from a distance."

Dr. Rojas Gaviria argues that there is an unmet need for public policies and support programmes that are flexible and adaptable to different moments and life circumstances, and that aim to enhance families' creative competencies.

"The aim should be helping families gather resources for movement (energy, time, focus, hope in the future) instead of telling them how to move by setting very strict rules that not everyone is able to follow. Designing a diverse set of support tools that can be offered for different circumstances and at different moments in time is a challenge for our societal systems," she adds.

Credit: 
University of Birmingham

Rapid genomic profiling of colon cancers can improve therapy selection for patients

image: New study of Idylla led by Gregory J. Tsongalis, PhD, of Dartmouth and Dartmouth-Hitchcock's Norris Cotton Cancer Center.

Image: 
Biocartis

LEBANON, NH - Genomic mutation testing is critical to therapeutic selection and management of patients with colorectal cancer. In a new multicenter study, researchers compared a new cartridge-based laboratory testing device called the Idylla™ automated system (Biocartis) to current standard-of-care testing methods. They found that average turnaround time for test results could be cut by more than 65%, from 15 days to 5 days, with some results available in a single day. The significant decrease in wait time means patients can begin appropriate treatments for colorectal cancer much sooner. The simplicity and ease of use of the new technology compared with other molecular techniques also make it suitable for integration into clinical laboratories of any size, including those with little molecular expertise.

"This is the first such study to address turnaround time for when the oncologist receives clinically actionable results from the lab," says lead author Gregory J. Tsongalis, PhD, of Dartmouth's and Dartmouth-Hitchcock's Norris Cotton Cancer Center. "Getting results to our oncologists in a timely fashion allows them to be better prepared in the selection of a therapeutic management strategy." Results of the study, "Comparison of Tissue Molecular Biomarker Testing Turnaround Times and Concordance Between Standard of Care and the Biocartis Idylla Platform in Patients With Colorectal Cancer," are newly published online in the American Journal of Clinical Pathology.

Also significant in this study is the participation of 20 laboratories of different types and sizes--not just larger academic center labs--throughout the United States and Puerto Rico, with an accrual of almost 800 colorectal cancer tissue samples. In collecting the data, the study also addressed the use of minimal tissue from the patient sample. "One of the study sites included several 1-mm tissue biopsy samples, showing that even smaller tissue-based specimens can be successfully analyzed by Idylla," notes Tsongalis. "Our results are in line with findings of other studies showing successful analyses using very small tissue amounts, including those deemed too small for standard molecular testing methods."

Next steps include looking at strategies to integrate the new platform into regular laboratory processes. "We already use something similar for our melanoma patients," notes Tsongalis. As the majority of care of cancer patients in the United States is provided in smaller medical facilities, this user-friendly system would allow those institutions to offer the most clinically actionable testing at their own hospitals with minimal hands-on time for testing and rapid result reporting.

Credit: 
Dartmouth Health

Process for 'two-faced' nanomaterials may aid energy, information tech

image: Selenium atoms, represented by orange, implant in a monolayer of blue tungsten and yellow sulfur to form a Janus layer. In the background, electron microscopy confirms atomic positions.

Image: 
Oak Ridge National Laboratory, U.S. Dept. of Energy

A team led by the Department of Energy's Oak Ridge National Laboratory used a simple process to implant atoms precisely into the top layers of ultra-thin crystals, yielding two-sided structures with different chemical compositions. The resulting materials, known as Janus structures after the two-faced Roman god, may prove useful in developing energy and information technologies.

"We're displacing and replacing only the topmost atoms in a layer that is only three atoms thick, and when we're done, we have a beautiful Janus monolayer where all the atoms in the top are selenium, with tungsten in the middle and sulfur in the bottom," said ORNL's David Geohegan, senior author of the study, which is published in ACS Nano, a journal of the American Chemical Society. "This is the first time that Janus 2D crystals have been fabricated by such a simple process."

Yu-Chuan Lin, a former ORNL postdoctoral fellow who led the study, added, "Janus monolayers are interesting materials because they have a permanent dipole moment in a 2D form, which allows them to separate charge for applications ranging from photovoltaics to quantum information. With this straightforward technique, we can put different atoms on the top or bottom of different layers to explore a variety of other two-faced structures."

This study probed 2D materials called transition metal dichalcogenides, or TMDs, that are valued for their electrical, optical and mechanical properties. Tuning their compositions may improve their abilities to separate charge, catalyze chemical reactions or convert mechanical energy to electrical energy and vice versa.

A single TMD layer is made of a ply of transition metal atoms, such as tungsten or molybdenum, sandwiched between plies of chalcogen atoms, such as sulfur or selenium. A molybdenum disulfide monolayer, for example, features molybdenum atoms between plies of sulfur atoms, structurally similar to a sandwich cookie with a creamy center between two chocolate wafers. Replacing one side's sulfur atoms with selenium atoms produces a Janus monolayer, akin to swapping one of the chocolate wafers with a vanilla one.

Before this study, turning a TMD monolayer into a two-faced structure was more a theoretical feat than an experimental accomplishment. Of the many scientific papers about Janus monolayers published since 2017, 60 reported theoretical predictions and only two described experiments to synthesize them, according to Lin. This reflects the difficulty of making Janus monolayers, due to the significant energy barriers that prevent their growth by typical methods.

In 2015, the ORNL group discovered that pulsed laser deposition could convert molybdenum diselenide to molybdenum disulfide. At the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL, pulsed laser deposition is a critical technique for developing quantum materials.

"We speculated that by controlling the kinetic energy of atoms, we could implant them in a monolayer, but we never thought we could achieve such exquisite control," Geohegan said. "Only with atomistic computational modeling and electron microscopy at ORNL were we able to understand how to implant just a fraction of a monolayer, which is amazing."

The method uses a pulsed laser to vaporize a solid target into a hot plasma, which expands from the target toward a substrate. This study used a selenium target to produce a beam-like plasma of clusters of two to nine selenium atoms, which were directed to strike pre-grown tungsten disulfide monolayer crystals.

The key to success in creating two-faced monolayers is bombarding the crystals with a precise amount of energy. Throw a bullet at a door, for example, and it bounces off the surface. But shoot the door and the bullet rips right through. Implanting selenium clusters into only the top of the monolayer is like shooting a door and having the bullet stop in its surface.

"It's not easy to tune your bullets," Geohegan said. The fastest selenium clusters, with energies of 42 electron volts (eV) per atom, ripped through the monolayer; they needed to be controllably slowed to implant into the top ply.

"What's new from this paper is we are using such low energies," said Lin. "People never explored the regime below 10 eV per atom because commercial ion sources only go down to 50 eV at best and don't allow you to choose the atoms you would like to use. However, pulsed laser deposition lets us choose the atoms and explore this energy range fairly easily."

The key to tuning the kinetic energy, Lin said, is to controllably slow the selenium clusters by adding argon gas in a pressure-controlled chamber. Limiting the kinetic energy restricts penetration into the atomically thin layer to specific depths. Injecting a pulse of atom clusters at low energy temporarily crowds and displaces atoms in a region, causing local defects and disorder in the crystal lattice. "The crystal then ejects the extra atoms to heal itself and recrystallizes into an orderly lattice," Geohegan explained. Repeating this implantation-and-healing process increases the selenium fraction in the top layer until it reaches 100%, completing the formation of a high-quality Janus monolayer.
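As a loose illustration of the cyclic implant-and-heal picture, suppose (hypothetically - this rate model is not from the paper) that each pulse converts a fixed fraction p of the remaining top-ply sulfur sites to selenium. The selenium fraction then approaches 100% geometrically:

```python
# Toy model of repeated implant-and-heal cycles: if each pulse converts a
# fixed fraction p of the remaining sulfur sites, the selenium fraction
# after n pulses is f_n = 1 - (1 - p)^n.

def se_fraction(p, n):
    """Selenium fraction of the top ply after n pulses, conversion rate p."""
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(f"after {n:3d} pulses: {se_fraction(0.05, n):.3f}")
```

With p = 0.05 per pulse, the top ply is more than 99% selenium after 100 pulses, consistent with the idea that many gentle repetitions, rather than one energetic dose, complete the Janus layer.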

Controllably implanting and recrystallizing 2D materials in this low-kinetic-energy regime is a new road to making 2D quantum materials. "Janus structures can be made in mere minutes at the low temperatures that are required for semiconductor electronic integration," Lin said, paving the way for production-line manufacturing. Next the researchers want to try making Janus monolayers on flexible substrates useful in mass production, such as plastics.

To prove that they had achieved a Janus structure, Chenze Liu and Gerd Duscher, both of the University of Tennessee, Knoxville, and Matthew Chisholm of ORNL used high-resolution electron microscopy to examine a tilted crystal to identify which atoms were in the top layer (selenium) versus the bottom layer (sulfur).

However, understanding how the process replaced sulfur atoms with larger selenium atoms -- an energetically difficult feat -- was a challenge. ORNL's Mina Yoon used supercomputers at the Oak Ridge Leadership Computing Facility, a DOE Office of Science user facility at ORNL, to calculate the energy dynamics of this uphill battle from first principles.

Further, the scientists needed to understand how energy transferred from the clusters to the lattice to create local defects. With molecular dynamics simulations, ORNL's Eva Zarkadoula showed that clusters of selenium atoms colliding with the monolayer at different energies either bounce off it, crash through it or implant in it -- consistent with the experimental results.

To further confirm the Janus structure, ORNL researchers showed that the structures had the predicted characteristics by calculating their vibrational modes and conducting Raman spectroscopy and X-ray photoelectron spectroscopy experiments.

To confirm that the plume was made of clusters, scientists used a combination of optical spectroscopy and mass spectrometry to measure molecular masses and velocities. Taken together, theory and experiment indicated that 3 to 5 eV per atom is the optimal energy for precise implantation to form Janus structures.

Credit: 
DOE/Oak Ridge National Laboratory

Simple bed-side test detects bleeding risk in patients after surgery or major injury

Impaired blood clotting after surgery or major injury can lead to severe bleeding or hemorrhage in patients if they are not rapidly treated with blood transfusions.

A team led by investigators at Massachusetts General Hospital has developed a novel, inexpensive and portable device that can quickly and accurately measure the ability of blood to properly clot (or coagulate).

Traditional laboratory tests for clotting defects take several hours to return results and often do not reflect the clotting state at the time of transfusion, but the new test--called iCoagLab--instead reports several coagulation metrics within minutes.

The iCoagLab test uses an inexpensive laser to illuminate a few drops of blood taken from a patient. Fluctuations made visible by the laser indicate whether the blood can properly clot.

In the new study, which is published in the journal Thrombosis and Haemostasis, iCoagLab was compared with a conventional test for clotting defects and applied to blood samples from 270 patients. Clotting parameters included reaction time, clot progression time, clot progression rate and maximum clot strength.

In all patients, there was good correlation between iCoagLab results and conventional test results. Analyses revealed that iCoagLab was as sensitive at detecting bleeding risk as the conventional test, and it had a higher accuracy.

The researchers have recently developed a hand-held version of the technology that will allow clinicians to simply place a drop of blood into a disposable cartridge to generate results within minutes at a patient's bedside.

"Clinicians in the operating room or the ICU often walk a thin line to maintain the delicate balance between bleeding and coagulation," said senior author Seemantini K. Nadkarni, PhD, an associate professor at Harvard Medical School and director of the Nadkarni Laboratory for Optical Micromechanics and Imaging at the Wellman Center for Photomedicine at Mass General.

"The iCoagLab innovation will likely advance clinical capability to rapidly identify patients with defective clotting at the point-of-care, assess risk of hemorrhage, and tailor treatments based on individual coagulation deficits to help prevent life-threatening bleeding in patients."

Credit: 
Massachusetts General Hospital

Unknown currents in Southern Ocean observed with the help of seals

image: A Weddell seal collects data in the ocean while swimming. This information helps not only marine science researchers but also biologists to understand the seals' habitat better. (With permission: Dan Costa.)

Image: 
Dan Costa

Using state-of-the-art ocean robots and scientific sensors attached to seals, researchers in Marine Sciences at the University of Gothenburg have for the first time observed small, energetic ocean currents in the Southern Ocean. The currents are critical in controlling the amount of heat and carbon moving between the ocean and the atmosphere - information vital for understanding our global climate and how it may change in the future.

Two new studies, one led by Associate Professor Sebastiaan Swart and the other led by Dr Louise Biddle, both working at the University of Gothenburg, use highly novel techniques to collect rare data in the ocean both under and near the sea ice surrounding Antarctica.

Ocean currents have a significant effect

These papers present, for the first time, observations of upper-ocean currents approximately 0.1-10 km in size. These currents, which are invisible to satellite and ship-based measurements, are seen to interact with strong Southern Ocean storms and with physical processes occurring under sea ice.

"Using the data collected by the seals, we're able to look at the impact these upper ocean currents have underneath the sea ice for the first time. It's a really valuable insight into what was previously completely unknown in the Southern Ocean," says Dr Louise Biddle, Department of Marine Sciences, University of Gothenburg.

Winter had been assumed to be a "quiet" time due to the dampening effect of sea ice on the ocean's surface. However, the two studies show that these upper ocean currents have a significant effect on the ocean during winter.

Unprecedented high-resolution measurements

Some of the findings by Sebastiaan Swart and his team give further insight into how these observed ocean currents work. Their study highlights that during times when there are no storms and winds are weak, upper ocean currents become much more energetic. This energy enhances the rate of ocean mixing and the transport of properties, like heat, carbon and nutrients, around the ocean and into the deep ocean.

"These new ocean robots, so-called gliders, which we control by satellite for months at a time, have allowed us to measure the ocean at unprecedented high resolution. The measurements have revealed strong physical linkages between the atmosphere and ocean. It's pretty amazing we can remotely 'steer' these robots in the most far-flung parts of the world - the ocean around Antarctica - while collecting new science data," says Associate Professor Sebastiaan Swart, Department of Marine Sciences, University of Gothenburg.

Filling a critical knowledge gap

Together, these studies improve our understanding of small-scale ocean and climate processes that have global impacts. Observations of this kind fill a critical knowledge gap in the ocean, one that affects various processes occurring at a global scale, such as ecosystems and climate.

"We are excited to grow this research capability at the University of Gothenburg. This is really a world-leading direction we should be taking to collect part of our data in marine sciences," says Sebastiaan Swart.

Credit: 
University of Gothenburg

The nature of nuclear forces imprinted in photons

image: A two-dimensional map of the "quality of gamma line fit" surface (chi^2 surface) as a function of the transition energy Eγ and the lifetime τ of the nuclear state studied. The surface minimum, marked with a cross, determines the best-fitting values of Eγ and τ, and the black line illustrates the uncertainties (errors) of these quantities. In the background of the graphic are the three detector systems used during the experiment: AGATA, PARIS and VAMOS. (Source: IFJ PAN)

Image: 
Source: IFJ PAN

IFJ PAN scientists, together with colleagues from the University of Milan (Italy) and other countries, confirmed the need to include three-nucleon interactions in the description of electromagnetic transitions in the 20O atomic nucleus. Vital for validating modern theoretical calculations of nuclear structure was the application of state-of-the-art gamma-ray detector systems and a newly developed technique for measuring femtosecond lifetimes in exotic nuclei produced in heavy-ion deep-inelastic reactions.

Atomic nuclei consist of nucleons - protons and neutrons. Protons and neutrons are systems of quarks and gluons held together by strong nuclear interactions. The physics of quarks and gluons is described by quantum chromodynamics (QCD), so we could expect that the properties of nuclear forces would also follow from this theory. Unfortunately, despite many attempts, determining the characteristics of strong interactions from QCD faces enormous computational difficulties. Nevertheless, much is known about the properties of nuclear forces - knowledge based on many years of experimentation. Theoretical models have also been developed that reproduce the basic properties of forces acting between a pair of nucleons; they make use of so-called effective nucleon-nucleon interaction potentials.

Knowing the details of the interaction between two nucleons, we would expect that describing the structure of any atomic nucleus would not be a problem. Surprisingly, it turns out that when a third nucleon is added to a two-nucleon system, the attraction between the initial two nucleons increases. Consequently, the strength of the interaction between the components of each pair of nucleons in the three-body system increases - an additional force shows up that does not seem to exist for an isolated pair. This puzzling contribution is called the irreducible three-nucleon force.
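Schematically, the hierarchy described here is often written as a nuclear Hamiltonian with successive many-body potential terms (standard notation in the field, not a formula taken from this article):

```latex
H \;=\; \sum_{i} \frac{p_i^2}{2m}
  \;+\; \sum_{i<j} V_{NN}(i,j)
  \;+\; \sum_{i<j<k} V_{NNN}(i,j,k)
```

The last term is the irreducible three-nucleon force: the part of the interaction among three nucleons that cannot be written as a sum of pairwise NN potentials.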

This situation inspired the scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences and their colleagues from the University of Milan. They realized that a perfect test for the presence of three-nucleon interactions in nuclei would be to determine the lifetimes of selected excited states in neutron-rich oxygen and carbon isotopes. Detailed analyses led to the concept of an experiment, coordinated by Prof. Silvia Leoni from the University of Milan together with Dr. Michal Ciemala and Prof. Bogdan Fornal from IFJ PAN. Researchers working at the French GANIL laboratory in Caen and other research institutions from around the world were also invited to collaborate on this project.

"The experiment focused on determining the lifetime of excited nuclear states for neutron-rich carbon and oxygen isotopes, 16C and 20O," explains Prof. Fornal. "In these nuclei, the excited states appear, which seem to be particularly sensitive to the inclusion in the calculations of the three-body interaction (nucleon-nucleon-nucleon - NNN) in addition to the two-body nuclear interaction (nucleon-nucleon - NN). In the case of the 20O nucleus, the lifetime of the second excited state 2+, calculated for only the NN interaction, should be 320 femtoseconds, while taking into account the NN and NNN interactions, the calculations give the result of 200 femtoseconds. For the lifetime of the second state 2+ in 16C, the difference is even greater: 370 femtoseconds (NN) versus 80 femtoseconds (NN + NNN)."

The experiment dedicated to measuring the lifetimes was carried out at the GANIL research center in Caen, France. Scientists used gamma radiation detectors (AGATA and PARIS) coupled to a magnetic spectrometer (VAMOS). The reaction of an 18O beam with a 181Ta target generated excited atomic nuclei of elements such as B, C, N, O and F through deep-inelastic scattering or nucleon transfer processes. In the moving nuclei under investigation, the excited quantum states decayed by emitting high-energy photons whose energy was shifted relative to the transition energy in the rest frame. This shift depends on the velocity of the photon-emitting nucleus and the angle of emission - the phenomenon described by the relativistic Doppler formula.

For nuclear level lifetimes shorter than the flight time of the excited nucleus through the target (about 300 femtoseconds), gamma quantum emission happens mostly while the nucleus is still inside the target. In the case described, the scientists measured the nucleus velocity after it had passed through the target. When this velocity is used to Doppler-correct the gamma-ray energy spectrum, the spectral lines are Gaussian for long excited-state lifetimes; for lifetimes of 100 to 200 femtoseconds the lines develop an asymmetric component, and for lifetimes below 100 femtoseconds they are shifted entirely toward lower energies.
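The lifetime sensitivity can be illustrated with simple exponential-decay arithmetic: for a target transit time of about 300 femtoseconds, the fraction of nuclei decaying inside the target differs noticeably between the two predicted 2+ lifetimes in 20O. This is a back-of-the-envelope sketch, not an analysis from the paper:

```python
import math

def decayed_in_target(tau_fs, transit_fs=300.0):
    """Fraction of excited nuclei decaying within the target transit time,
    assuming simple exponential decay: 1 - exp(-t / tau)."""
    return 1 - math.exp(-transit_fs / tau_fs)

print(f"tau = 320 fs (NN only):  {decayed_in_target(320):.2f}")
print(f"tau = 200 fs (NN + NNN): {decayed_in_target(200):.2f}")
```

Roughly 61% versus 78% of decays occur in-target for the two predictions, which translates into measurably different Doppler-corrected lineshapes.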

"To determine the lifetime, we conducted simulations and compared their results with the measured spectrum of gamma radiation energy," says Dr. Ciemala, the author of the concept of measuring nuclear state decay time used in the experiment. "In these studies, the method described above was applied for the first time to determine the lifetime of excited states in nuclei produced in deep-inelastic reactions. It required the development of advanced Monte Carlo simulation codes that included reaction kinematics and reproduced the measured velocity distributions of reaction products. The method used, in conjunction with the applied detection systems, brought very satisfactory results."

The described research allowed scientists, for the first time, to measure lifetimes of tens to hundreds of femtoseconds for nuclear states created in deep-inelastic reactions - in this case the second 2+ state in the 20O nucleus, for which a lifetime of 150 femtoseconds was obtained. The validity of the new method was demonstrated by determining lifetimes for excited states in the 19O nucleus that agreed perfectly with literature data. It should be stressed that the lifetime of the second 2+ state in 20O obtained in this work agrees with theoretical predictions only when two- and three-body interactions are taken into account at the same time. This leads to the conclusion that the measured quantities provided by electromagnetic transitions, obtained using precise gamma spectroscopy, can be very good probes for assessing the quality of ab initio calculations of nuclear structure.

"This developed pioneering procedure will help us measure the lifetime of excited states for very exotic nuclei far from the stability valley, which can be created in deep-inelastic reactions using high-intensity radioactive beams, which will soon be available, for example, at the INFN Laboratori Nazionali di Legnaro near Padua in Italy," argues Prof. Fornal. "The information obtained will be essential for nuclear astrophysics and certainly contribute to the progress in understanding the formation of atomic nuclei in the rapid neutron-capture process in supernova explosions or the merging of neutron stars that has recently been observed by measuring gravitational waves in coincidence with gamma radiation."

Credit: 
The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences