Tech

Brain mapping study suggests motor regions for the hand also connect to the entire body

Mapping different parts of the brain and determining how they correspond to thoughts, actions, and other neural functions is a central area of inquiry in neuroscience. Previous studies using fMRI scans and EEG have allowed researchers to rough out brain areas connected with different types of neural activity, but they have not allowed for mapping the activity of individual neurons.

Now in a paper publishing March 26 in the journal Cell, investigators report that they have used microelectrode arrays implanted in the brains of two people to map out motor functions down to the level of the single nerve cell. The study revealed that an area believed to control only one body part actually operates across a wide range of motor functions. It also demonstrated how different neurons coordinate with each other.

"This research shows for the first time that an area of the brain previously thought to be connected only to the arm and hand has information about the entire body," says first author Frank Willett, a postdoctoral fellow in the Neural Prosthetics Translational Laboratory at Stanford University and the Howard Hughes Medical Institute. "We also found that this area has a shared neural code that links all the body parts together."

The study, a collaboration between neuroscientists at Stanford and Brown University, is part of BrainGate2, a multisite pilot clinical trial focused on developing and testing medical devices to restore communication and independence in people affected by neurological conditions like paralysis and locked-in syndrome. A major focus of the Stanford team has been developing ways to restore the ability of these people to communicate through brain-computer interfaces (BCIs).

The new study involved two participants who have chronic tetraplegia--partial or total loss of function in all four limbs. One of them has a high-level spinal cord injury and the other has amyotrophic lateral sclerosis. Both have electrodes implanted in the so-called hand knob area of the motor cortex of their brains. This area--named in part for its knoblike shape--was previously thought to control movement in the hands and arms only.

The investigators used the electrodes to measure the action potentials of single neurons while the participants attempted certain tasks--for example, lifting a finger or turning an ankle--and examined which neurons recorded by the microelectrode arrays became active. They were surprised to find that the hand knob area was activated not only by movements of the hand and arm, but also of the leg, face, and other parts of the body.

"Another thing we looked at in this study was matching movements of the arms and legs," Willett says, "for example, moving your wrist up or moving your ankle up. We would have expected the resulting patterns of neural activity in motor cortex to be different, because they are a completely different set of muscles. We actually found that they were much more similar than we would have expected." These findings reveal an unexpected link between all four limbs in motor cortex that might help the brain to transfer skills learned with one limb to another one.

Willett says that the new findings have important implications for the development of BCIs to help people who are paralyzed to move again. "We used to think that to control different parts of the body, we would need to put implants in many areas spread out across the brain," he notes. "It's exciting, because now we can explore controlling movements throughout the whole body with an implant in only one area."

One important potential application for BCIs is allowing people who are paralyzed or have locked-in syndrome to communicate by controlling a computer mouse or other device. "It may be that we can connect different body movements to different types of computer clicks," Willett says. "We hope we can leverage these different signals more accurately to enable someone who can't talk to use a computer, since neural signals from different body parts are easier for a BCI to tease apart than those from the arm or hand alone."

Credit: 
Cell Press

New fabrication approach paves way to low cost mid-infrared lasers useful for sensing

WASHINGTON -- For the first time, researchers have fabricated high-performance mid-infrared laser diodes directly on microelectronics-compatible silicon substrates. The new lasers could enable the widespread development of low-cost sensors for real-time, accurate environmental sensing for applications such as air pollution monitoring, food safety analysis, and detecting leaks in pipes.

"Most optical chemical sensors are based on the interaction between the molecule of interest and mid-infrared light," said research team leader Eric Tournié from the University of Montpellier in France. "Fabricating mid-infrared lasers on microelectronics-compatible silicon can greatly reduce their cost because they can be made using the same high-volume processing techniques used to make the silicon microelectronics that power cell phones and computers."

The new fabrication approach is described in Optica, The Optical Society's (OSA) journal for high impact research. The work was conducted at the EXTRA facilities and as part of the REDFINCH consortium, which is developing miniaturized, portable low-cost optical sensors for chemical detection in both gases and liquids.

"For this project, we are working upstream by developing photonic devices for future sensors," said Tournié. "At a later stage, these new mid-infrared lasers could be combined with silicon photonics components to create smart, integrated photonic sensors."

Industry-compatible fabrication

Laser diodes are made of semiconductor materials that convert electricity into light. Mid-infrared light can be created using a type of semiconductor known as III-V. For about a decade, the researchers have been working on depositing III-V semiconductor material on silicon using a method known as epitaxy.

Although the researchers previously demonstrated lasers on silicon substrates, those substrates were not compatible with industry standards for microelectronics fabrication. When using industry-compatible silicon, differences in the material structures of the silicon and the III-V semiconductor cause defects to form.

"A particular defect called an anti-phase boundary is a device killer because it creates short-circuits," said Tournié. "In this new work, we developed an epitaxial approach that prevents these defects from reaching the active part of a device."

The researchers also improved the process used to fabricate the laser diode from the epitaxial material. As a result, they were able to create an entire laser structure on an industry-compatible silicon substrate with a single run of an epitaxial tool.

High-performance lasers

The researchers demonstrated the new approach by producing mid-infrared laser diodes that operated in continuous-wave mode and exhibited low optical losses. They now plan to study the lifetime of the new devices and how that lifetime relates to the fabrication and operation mode of the devices.

They say that once their method is fully mature, epitaxy of lasers on large silicon substrates (up to 300 millimeters across) using silicon microelectronics tools will improve control of the fabrication process. This will, in turn, further reduce laser fabrication costs and enable the design of new devices. The new lasers could also be combined with passive silicon photonics integrated circuits or CMOS technology to create small, low cost, smart photonic sensors for gas and liquid measurements with high sensitivity.

"The semiconductor material we work with allows fabrication of lasers or photodetectors operating in a broad spectral range, from 1.5 microns (telecom band) to 25 microns (far infrared)," said Tournié. "Our fabrication method can be applied in any field where one needs to integrate III-V semiconductors on silicon platforms. For example, we have already fabricated quantum-cascade lasers emitting at 8 microns by applying this new epitaxial approach."

Credit: 
Optica

Depression severity, care among older adults from different racial/ethnic groups

What The Study Did: Racial and ethnic differences appear to exist in depression severity and care in this observational study of older adults who participated in a randomized clinical trial of cancer and cardiovascular disease prevention.

Authors: Olivia I. Okereke, M.D., S.M., of Massachusetts General Hospital in Boston, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2020.1606)

Editor's Note:  The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

New framework will help decide which trees are best in the fight against air pollution

A study from the University of Surrey has provided a comprehensive guide on which tree species are best for combatting air pollution that originates from our roads - along with suggestions for how to plant these green barriers to get the best results.

In a paper published in npj Climate and Atmospheric Science, air pollution experts from Surrey's Global Centre for Clean Air Research (GCARE) conducted a wide-ranging literature review of research on the effects of green infrastructure (trees and hedges) on air pollution. The review found that there is ample evidence of green infrastructure's ability to divert and dilute pollutant plumes or reduce outdoor concentrations of pollutants by direct capture, where some pollutants are deposited on plant surfaces.

As part of their critical review, the authors identified a gap in information to help people - including urban planners, landscape architects and garden designers - make informed decisions on which species of vegetation to use and, crucially, what factors to consider when designing a green barrier.

To address this knowledge gap, they identified 12 influential traits for 61 tree species that make them potentially effective barriers against pollution. Beneficial plant properties include small leaf size, high foliage density, long in-leaf periods (e.g. evergreen or semi-evergreen), and micro-characteristics such as leaf hairiness. Generally detrimental aspects of plants for air quality include wind pollination and biogenic volatile organic compound emissions. In the paper, the team emphasise that the effectiveness of a plant is determined by its environmental context - whether, for example, it will be used in a deep (typical of a city commercial centre) or shallow (typical of a residential road) street canyon or in an open road environment. To help concerned citizens with complex decisions, such as which tree is best for a road outside a school in a medium-sized street canyon, the team from Surrey has also developed a plant selection framework.
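
As a purely illustrative sketch of how such a selection framework might weigh traits against context, the toy scoring function below combines a few of the beneficial and detrimental traits mentioned above. The species, trait values, and context adjustment are hypothetical placeholders and do not come from the GCARE framework itself.

```python
# Toy trait-scoring sketch -- NOT the GCARE framework from the paper --
# illustrating how beneficial and detrimental traits might be combined,
# with context (street-canyon depth) modifying the weighting.
species = {
    # Trait values are illustrative placeholders, not data from the study.
    "silver birch": {"leaf_size_small": 1, "foliage_density": 0.8,
                     "evergreen": 0, "leaf_hairiness": 0.6,
                     "wind_pollinated": 1, "bvoc_emissions": 0.2},
    "holly":        {"leaf_size_small": 1, "foliage_density": 0.9,
                     "evergreen": 1, "leaf_hairiness": 0.1,
                     "wind_pollinated": 0, "bvoc_emissions": 0.1},
}

def score(traits, context="open_road"):
    """Combine beneficial traits minus detrimental ones; adjust for context."""
    benefit = (traits["leaf_size_small"] + traits["foliage_density"]
               + traits["evergreen"] + traits["leaf_hairiness"])
    penalty = traits["wind_pollinated"] + traits["bvoc_emissions"]
    # Hypothetical adjustment: in a deep street canyon, down-weight dense
    # canopies, since the environmental context changes what is beneficial.
    if context == "deep_canyon":
        benefit -= 0.5 * traits["foliage_density"]
    return benefit - penalty

for name, traits in species.items():
    print(name, round(score(traits, context="deep_canyon"), 2))
```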

Professor Prashant Kumar, Founding Director of GCARE at the University of Surrey, said: "We are all waking up to the fact that air pollution and its impact on human health and the health of our planet is the defining issue of our time. Air pollution is responsible for one in every nine deaths each year and this could be intensified by projected population growth.

"The use of green infrastructure as physical barriers between ourselves and pollutants originating from our roads is one promising way we can protect ourselves from the devastating impact of air pollution. We hope that our detailed guide to vegetation species selection and our contextual advice on how to plant and use green infrastructure is helpful to everyone looking to explore this option for combatting pollution."

Credit: 
University of Surrey

Experiments in mice and human cells shed light on best way to deliver nanoparticle therapy for cancer

image: Histology image of a HER2+ tumor showing accumulation of Herceptin-labeled nanoparticles (upper right, and blue in histology) in the tumor microenvironment (immune cells) and not on HER2+ cancer cells.

Image: 
Robert Ivkov, Ph.D.

Researchers in the cancer nanomedicine community debate whether tiny structures called nanoparticles can best deliver drug therapy to tumors passively (allowing the nanoparticles to diffuse into tumors and become held in place) or actively (adding a targeted anti-cancer molecule that binds to specific cancer cell receptors and, in theory, keeps the nanoparticle in the tumor longer). Now, new research on human and mouse tumors in mice by investigators at the Johns Hopkins Kimmel Cancer Center suggests the question is even more complicated.

Laboratory studies testing both methods in six models of breast cancer--five human cancer cell lines and one mouse cancer, in mice with three variants of the immune system--found that nanoparticles coated with trastuzumab, a drug that targets human epidermal growth factor receptor 2 (HER2)-positive breast cancer cells, were better retained in the tumors than plain nanoparticles, even in tumors that did not express the pro-growth HER2 protein. However, immune cells of the host exposed to the nanoparticles mounted an anti-cancer immune response, activating T cells that invaded the tumors and slowed their growth.

A description of the work will be published March 25 in Science Advances.

"It's been known for a long time that nanoparticles, when injected into the bloodstream, are picked up by scavengerlike macrophages and other immune system cells," explains senior study author Robert Ivkov, Ph.D., M.Sc., associate professor of radiation oncology and molecular radiation sciences at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins. "Many researchers in the field have been focused on trying to reduce interactions with immune cells, because they have been trying to increase the circulation time of the nanoparticles and their retention in tumor cells. But our study demonstrates that the immune cells in the tumor collect and react to the particles in such a way to stimulate an anti-cancer response. This may hold potential for advancing beyond drug delivery toward developing cancer immunotherapies."

The investigators conducted a few in vitro experiments in their study. First, they applied some plain starch-coated iron oxide nanoparticles and others coated with trastuzumab to five human breast cancer cell lines, finding that the amount of binding between the trastuzumab-coated nanoparticles and cells depended on how much the cancer cells expressed the oncogene HER2. In people, HER2-positive breast cancers are among the most resistant to standard chemotherapy. Trastuzumab, sold under the name Herceptin, targets the HER2-positive tumor cells and triggers the immune system as well.

Responses were surprisingly different in animal models, the researchers report. In separate experiments, the team used the nanoparticles in two immune-deficient strains of mice engrafted with cells from five human breast cancer cell lines -- two that were HER2 negative and three that were HER2 positive. When they studied the animals' tumors 24 hours later, they noticed that nanoparticles coated with trastuzumab were found in a concentration two to five times greater than the plain nanoparticles in all types of tumors, regardless of whether they expressed the HER2 protein. They also found that the amount of trastuzumab-coated nanoparticles was even greater (tenfold) in mice that had a fully functional immune system and were bearing mouse-derived tumors.

This led the researchers to suspect that the host animals' immune systems were interacting strongly with the nanoparticles and playing a role in determining retention of the particles in the tumor, whether or not a drug was added.

More experiments, the team reports, revealed that tumor-associated immune cells were responsible for collecting the nanoparticles, and that mice bred with an intact immune system retained more of the trastuzumab-coated nanoparticles than mice bred without a fully functioning immune system.

In addition, inflammatory immune cells in the tumors' immediate surroundings, or microenvironment, seized more of the coated nanoparticles than the plain ones. Finally, in a series of 30-day experiments, the researchers found that exposure to nanoparticles inhibited tumor growth three to five times more than controls, and increased CD8-positive cancer-killing T cells in the tumors. Surprisingly, Ivkov notes, the anti-cancer immune activating response was equally effective with exposure to either plain or trastuzumab-coated nanoparticles. Mice with defective T cells did not show tumor growth inhibition. The investigators say this demonstrated that systemic exposure to nanoparticles can cause a systemic host immune response that leads to anti-cancer immune stimulation, and does not require nanoparticles to be inside the tumors.

"Overall, our work suggests that complex interdependencies exist between the host and tumor immune responses to nanoparticle exposure," Ivkov says. "These results offer intriguing possibilities for exploring nanoparticle 'targeting' of the tumor immune microenvironment. They also demonstrate exciting new potential to develop nanoparticles as platforms for cancer immune therapies."

The investigators say they also plan to study whether the same types of immune responses can be generated for noncancer conditions, such as infectious diseases.

Credit: 
Johns Hopkins Medicine

Holographic cosmological model and thermodynamics on the horizon of the universe

image: Figure 1 shows the boundary for maximization of the entropy in the (α, ψ) plane for three values of the normalized scale factor a. ψ represents a type of density parameter for the effective dark energy and α is the exponent of the power-law term H^α. The closed circle represents the result from the fine-tuned LCDM model, i.e., (α, ψ) = (0, 0.685). Three boundaries for values of a = 0.5, 1, and 4 are shown, where a = 1 corresponds to the present time. The arrow at each boundary indicates the region that satisfies the conditions for maximization of the entropy. This region gradually extends downward as the normalized scale factor increases. However, the region does not currently exceed α = 2.

Image: 
Kanazawa University

Kanazawa, Japan - The expansion of the Universe has occupied the minds of astronomers and astrophysicists for decades. Among the cosmological models that have been suggested over the years, Lambda cold dark matter (LCDM) models are the simplest that can provide elegant explanations of the properties of the Universe, e.g., the accelerated expansion of the late Universe and structure formation. However, the LCDM model suffers from several theoretical difficulties, such as the cosmological constant problem. To resolve these difficulties, alternative thermodynamic scenarios have recently been proposed that extend the concept of black hole thermodynamics.

"Previous research implies that a certain type of universe will behave like an ordinary macroscopic system. The expansion of the Universe is considered likely to be related to thermodynamics on its horizon, based on the holographic principle," explains the study's author, Kanazawa University's Nobuyoshi Komatsu.

"I considered a cosmological model with a power-law term, assuming application of the holographic equipartition law. The power-law term is proportional to Hα, where H is the Hubble parameter and α is considered to be a free parameter (α may be related to the entanglement of the quantum fields close to the horizon)."

"I used the proposed model to study the thermodynamic properties on the horizon of the Universe, focusing on the evolutions of the Bekenstein-Hawking entropy. I found that the model satisfies the second law of thermodynamics on the horizon," says Associate Professor Komatsu.

"In addition, I used the model to examine the relaxation-like processes that occur before the last stage of the evolution of the Universe and thus enable study of the maximization of the entropy."

"Figure 1 shows the boundaries for maximization of the entropy in the (α, ψ) plane. Here, ψ represents a type of density parameter for the effective dark energy. The upper side of each boundary corresponds to the region that satisfies the conditions for maximization of the entropy. For example, the point for the fine-tuned LCDM model is found to satisfy the conditions for maximization of the entropy at the present time. In addition, the region close to this point also satisfies the conditions for maximization of the entropy, both at the present time and in the future. Cosmological models in this region are likely to be favored from a thermodynamics viewpoint," says Associate Professor Komatsu.

In addition to the reported results of the study, it is hoped that the developed model will serve to enable discussion and analysis of the wide range of currently available cosmological models from a thermodynamics perspective.

Credit: 
Kanazawa University

Under extreme heat and drought, trees hardly benefit from an increased CO2 level

image: Experimental setup: In high-tech plant chambers, Aleppo pines were exposed to increasing temperatures. (Photo: Plant Ecophysiology Lab, KIT)

Image: 
Plant Ecophysiology Lab, KIT

The increase in the CO2 concentration of the atmosphere does not compensate for the negative effects of greenhouse gas-induced climate change on trees: The more extreme drought and heat become, the less trees profit from the increased supply of carbon dioxide in terms of carbon metabolism and water use efficiency. This finding was obtained by researchers of Karlsruhe Institute of Technology (KIT) when studying Aleppo pines. Their study is reported in New Phytologist (DOI: 10.1111/nph.16471).

Due to greenhouse gas-induced climate change, trees are increasingly exposed to extreme drought and heat. How the increased CO2 concentration in the atmosphere influences the physiological reactions of trees under climate stress, however, is highly controversial. Carbon dioxide is known to be the main nutrient of plants. By photosynthesis, plants use sunlight to convert CO2 and water into carbohydrates and biomass. Periods of drought and heat, however, increase the stress level of the trees. Their roots have difficulty reaching water. To reduce evaporation losses, trees close the stomata of their leaves, as a result of which they take up less CO2 from the air.

These relationships have now been studied in more detail by the Plant Ecophysiology Lab of the Atmospheric Environmental Research Division of KIT's Institute of Meteorology and Climate Research (IMK-IFU) at KIT's Campus Alpine in Garmisch-Partenkirchen. Together with scientists of Ludwig-Maximilians-Universität München, the University of Vienna, and the Weizmann Institute of Science in Rehovot, Israel, the KIT researchers studied the impact of an increased CO2 concentration on the carbon metabolism and water use efficiency of Aleppo pines (Pinus halepensis) under drought and heat.

As reported by the researchers in New Phytologist, they cultivated Aleppo pines from seeds under atmospheric (421 ppm) as well as significantly increased (867 ppm) CO2 concentrations. The one-and-a-half-year-old trees were then either watered well or left dry for a month. Afterwards, they were placed in high-tech plant chambers and exposed to temperatures gradually increasing from 25°C to 40°C over a period of ten days. During this period, the scientists continuously measured the gas and water exchange of the trees and analyzed vital metabolic products.

The findings of the study: An increased CO2 concentration reduced the water loss of the trees and increased their water use efficiency under increasing heat load. Net carbon uptake, however, decreased considerably. In addition, heat adversely affected the metabolic properties of the trees, while their metabolism hardly profited from the increased CO2 supply. The main positive effect observed was an increased stability of root proteins. "Overall, the impact of the increased CO2 concentration on the stress reactions of the trees was rather moderate. With increasing heat and drought, it decreased considerably," summarizes Dr. Nadine Rühr, Head of the Plant Ecophysiology Lab of IMK-IFU. "From this, we conclude that the increasing CO2 concentration of the atmosphere cannot compensate for the stress on trees resulting from extreme climate conditions."

Credit: 
Karlsruher Institut für Technologie (KIT)

Astronomers use slime mould to map the universe's largest structures

image: Astronomers have designed a computer algorithm, inspired by slime mould behavior, and tested it against a computer simulation of the growth of dark matter filaments in the Universe. The researchers then applied the slime mould algorithm to data containing the locations of over 37 000 galaxies mapped by the Sloan Digital Sky Survey. The algorithm produced a three-dimensional map of the underlying cosmic web structure.

Image: 
NASA, ESA, and J. Burchett and O. Elek (UC Santa Cruz)

The single-cell organism known as slime mould (Physarum polycephalum) builds complex web-like filamentary networks in search of food, always finding near-optimal pathways to connect different locations.

In shaping the Universe, gravity builds a vast cobweb-like structure of filaments tying galaxies and clusters of galaxies together along invisible bridges of gas and dark matter hundreds of millions of light-years long. There is an uncanny resemblance between the two networks, one crafted by biological evolution, the other by the primordial force of gravity.

The cosmic web is the large-scale backbone of the cosmos, consisting primarily of dark matter and laced with gas, upon which galaxies are built. Even though dark matter cannot be seen, it makes up the bulk of the Universe's material. Astronomers have had a difficult time finding these elusive strands, because the gas within them is too dim to be detected.

The existence of a web-like structure to the Universe was first hinted at in galaxy surveys in the 1980s. Since those studies, the grand scale of this filamentary structure has been revealed by subsequent sky surveys. The filaments form the boundaries between large voids in the Universe. Now a team of researchers has turned to slime mould to help them build a map of the filaments in the local Universe (within 100 million light-years of Earth) and find the gas within them.

They designed a computer algorithm, inspired by the behaviour of slime mould, and tested it against a computer simulation of the growth of dark matter filaments in the Universe. A computer algorithm is essentially a recipe that tells a computer precisely what steps to take to solve a problem.

The researchers then applied the slime mould algorithm to data containing the locations of over 37 000 galaxies mapped by the Sloan Digital Sky Survey. The algorithm produced a three-dimensional map of the underlying cosmic web structure.
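
For readers curious what a slime-mould-inspired algorithm looks like in practice, the sketch below is a minimal 2D Physarum-style agent simulation seeded at hypothetical "galaxy" positions. It is not the algorithm used in the study, only an illustration of the basic sense-steer-deposit-decay loop from which filamentary trail networks emerge.

```python
# Toy 2D Physarum-style agent simulation, loosely inspired by the slime-mould
# approach described above. This is NOT the algorithm used in the study --
# just a minimal illustration of the agent/trail mechanics (sense, steer,
# move, deposit, decay) seeded at hypothetical "galaxy" positions.
import numpy as np

rng = np.random.default_rng(0)
GRID = 200                       # trail-map resolution
N_AGENTS = 5000                  # number of slime-mould agents
SENSE_DIST, SENSE_ANGLE = 5.0, np.pi / 6
TURN_ANGLE, STEP = np.pi / 8, 1.0
DECAY = 0.95                     # trail evaporation per step

# Hypothetical "galaxy" positions act as food sources / agent seeds.
galaxies = rng.uniform(0, GRID, size=(50, 2))
agents = galaxies[rng.integers(0, len(galaxies), N_AGENTS)] + rng.normal(0, 1, (N_AGENTS, 2))
heading = rng.uniform(0, 2 * np.pi, N_AGENTS)
trail = np.zeros((GRID, GRID))

def sample(trail, pos):
    """Read trail intensity at each (x, y) position, clipped to the grid."""
    ij = np.clip(pos.astype(int), 0, GRID - 1)
    return trail[ij[:, 1], ij[:, 0]]

for _ in range(200):
    # Sense the trail ahead, to the left, and to the right of each agent.
    offsets = [-SENSE_ANGLE, 0.0, SENSE_ANGLE]
    readings = np.stack([
        sample(trail, agents + SENSE_DIST *
               np.stack([np.cos(heading + a), np.sin(heading + a)], axis=1))
        for a in offsets
    ])
    # Steer toward the strongest reading (left / straight / right).
    choice = readings.argmax(axis=0)
    heading += (choice - 1) * TURN_ANGLE
    # Move and deposit trail; wrap around the grid edges.
    agents = (agents + STEP * np.stack([np.cos(heading), np.sin(heading)], axis=1)) % GRID
    ij = agents.astype(int)
    np.add.at(trail, (ij[:, 1], ij[:, 0]), 1.0)
    # Reinforce the trail at galaxy positions and let the rest decay.
    gj = galaxies.astype(int)
    np.add.at(trail, (gj[:, 1], gj[:, 0]), 5.0)
    trail *= DECAY

# High-trail cells approximate a filamentary network linking the galaxies.
print("trail map intensity range:", trail.min(), trail.max())
```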

They then analysed the light from 350 faraway quasars catalogued in the Hubble Spectroscopic Legacy Archive. These distant cosmic flashlights are the brilliant black-hole-powered cores of active galaxies, whose light shines across space and through the foreground cosmic web. Imprinted on that light was the telltale signature of otherwise invisible hydrogen gas that the team analysed at specific points along the filaments. These target locations are far from the galaxies, which allowed the research team to link the gas to the Universe's large-scale structure.

"It's really fascinating that one of the simplest forms of life actually enables insights into the very largest-scale structures in the Universe," said lead researcher Joseph Burchett of the University of California (UC), U.S.A. "By using the slime mould simulation to find the location of the cosmic web filaments, including those far from galaxies, we could then use the Hubble Space Telescope's archival data to detect and determine the density of the cool gas on the very outskirts of those invisible filaments. Scientists have detected signatures of this gas for over half a century, and we have now proven the theoretical expectation that this gas comprises the cosmic web."

The survey further validates research that indicates intergalactic gas is organised into filaments and also reveals how far away gas is detected from the galaxies. Team members were surprised to find gas associated with the cosmic web filaments more than 10 million light-years away from the galaxies.

But that wasn't the only surprise. They also discovered that the ultraviolet signature of the gas gets stronger in the filaments' denser regions, but then disappears. "We think this discovery is telling us about the violent interactions that galaxies have in dense pockets of the intergalactic medium, where the gas becomes too hot to detect," Burchett said.

The researchers turned to slime mould simulations when they were searching for a way to visualise the theorised connection between the cosmic web structure and the cool gas, detected in previous Hubble spectroscopic studies.

Then team member Oskar Elek, a computer scientist at UC Santa Cruz, discovered online the work of Sage Jenson, a Berlin-based media artist. Among Jenson's works were mesmerizing artistic visualisations showing the growth of a slime mould's tentacle-like network of structures moving from one food source to another. Jenson's art was based on scientific work from 2010 by Jeff Jones of the University of the West of England in Bristol, which detailed an algorithm for simulating the growth of slime mould.

The research team was inspired by how the slime mould builds complex filaments to capture new food, and how this mapping could be applied to how gravity shapes the Universe, as the cosmic web constructs the strands between galaxies and galaxy clusters. Based on the simulation outlined in Jones's paper, Elek developed a three-dimensional computer model of the buildup of slime mould to estimate the location of the cosmic web's filamentary structure.

Credit: 
ESA/Hubble Information Centre

Artificial intelligence for very young brains

image: Example of segmentation produced by the tool, which separates the brain structures into cerebrospinal fluid (red), grey matter (blue) and white matter (yellow) from T2-weighted (middle column) and T1-weighted (right column) MRI images.

Image: 
CHU Sainte-Justine

Canadian scientists have developed an innovative new technique that uses artificial intelligence to better define the different sections of the brain in newborns during a magnetic resonance imaging (MRI) exam.

The results of this study -- a collaboration between researchers at Montreal's CHU Sainte-Justine children's hospital and the ÉTS engineering school -- are published today in Frontiers in Neuroscience.

"This is one of the first times that artificial intelligence has been used to better define the different parts of a newborn's brain on an MRI: namely the grey matter, white matter and cerebrospinal fluid," said Dr. Gregory A. Lodygensky, a neonatologist at CHU Sainte-Justine and professor at Université de Montréal.

"Until today, the tools available were complex, often intermingled and difficult to access," he added.

In collaboration with Professor Jose Dolz, an expert in medical image analysis and machine learning at ÉTS, the researchers were able to adapt the tools to the specificities of the neonatal setting and then validate them.

This new technique allows babies' brains to be examined quickly, accurately and reliably. Scientists see it as a major asset for supporting research that not only addresses brain development in neonatal care, but also the effectiveness of neuroprotective strategies.

In evaluating a range of tools available in artificial intelligence, CHU Sainte-Justine researchers found that these tools had limitations, particularly with respect to pediatric research. Today's neuroimaging analysis programs are primarily designed to work on "adult" MRIs. The cerebral immaturity of newborns, with an inversion of the contrasts between grey matter and white matter, complicates such analyses.

Inspired by Dolz's most recent work, the researchers proposed an artificial neural network that learns how to efficiently combine information from several MRI sequences. This methodology made it possible to better define the different parts of the brain in the newborn automatically and to establish a new benchmark for this problem.
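
As a minimal sketch of the general idea (not the authors' published architecture), the model below encodes two MRI contrasts separately and fuses their features before classifying each pixel as cerebrospinal fluid, grey matter, white matter or background; all layer sizes and the 2D setting are illustrative assumptions.

```python
# Minimal sketch (not the authors' published architecture) of a
# multi-sequence segmentation network: two MRI contrasts (e.g. T1 and T2)
# are encoded separately and their features fused before a per-pixel
# classifier predicts background / CSF / grey matter / white matter.
import torch
import torch.nn as nn

class TwoSequenceSegNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
        self.enc_t1 = encoder()     # encoder for the T1-weighted image
        self.enc_t2 = encoder()     # encoder for the T2-weighted image
        # Fusion head: concatenate the two feature maps, classify each pixel.
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),
        )

    def forward(self, t1, t2):
        fused = torch.cat([self.enc_t1(t1), self.enc_t2(t2)], dim=1)
        return self.head(fused)     # per-pixel class logits

# Usage on a dummy 2D slice pair (batch of 1, single channel, 128x128).
model = TwoSequenceSegNet()
t1 = torch.randn(1, 1, 128, 128)
t2 = torch.randn(1, 1, 128, 128)
logits = model(t1, t2)
print(logits.shape)   # torch.Size([1, 4, 128, 128])
```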

"We've decided not only to share the results of our study on open source, but also the computer code, so that brain researchers everywhere can take advantage of it, all of which benefits patients," said Dolz.

CHU Sainte-Justine is one of the most important players in the Canadian Neonatal Brain Platform and also has one of the largest neonatal units in Canada specializing in neurodevelopment. As part of the platform, research teams are implementing projects like this one with the aim of improving the long-term health of those newborns who are most vulnerable to brain injury.

"In studies to assess the positive and negative impact of different therapies on the maturation of babies' brains, we need to have the ability to quantify brain structures with certainty and reliability," Lodygensky said. "By offering the scientific community the fruits of all our discoveries, we are helping them, while generating an extraordinary benefit for at-risk newborns."

He added: "We now want to democratize this tool so that it becomes the benchmark for the study of brain structure in newborns around the world. To this end, we are continuing to work on its generalizability -- that is, its use on MRI data acquired in different hospitals."

Credit: 
University of Montreal

Samara Polytech scientist designs wind-powered generator

image: Wind-powered generator photo

Image: 
@SamaraPolytech

Professor Pavel Grachev of the Department of Theoretical and Basic Electrical Engineering at Samara Polytech has designed an electrical starter-generator for the power unit of a hybrid car. His postgraduate student Aleksey Tabachinskiy has improved the design and adapted it for wind turbines. The research results were recently published in IEEE Transactions on Industry Applications (DOI: 10.1109/TIA.2020.2964231).

The new winding design is based on conductors of irregular cross-section that varies sequentially along the winding. The innovative design reduces the mass and dimensions of the generator, as well as the losses of electromechanical conversion.

"It's the unique feature of our generator because the serial generators either meet the requirements of energy efficiency by increasing the size and weight, either reduce the size by efficiency loss", Aleksey Tabachinskiy says.

Besides the generator winding, the researchers' know-how includes the implementation of control algorithms and electronic units for robust operation over a wide range of wind-wheel speeds.

"In stand-alone application, all kinds of generator, such as in wind turbines, micro hydro power stations or on-board vehicle generator, extra problem occur for the required operation mode. The installation should just generating power, but doing that high-efficiently and consistently", Tabachinskiy explains. "The control strategy has been developed by my research supervisor in his doctoral thesis. And I have considered the functional possibilities of its implementation in electric generators with our compact windings". This design has already been patented as a key part of electric machine with integrated electronic units and liquid cooling.

Credit: 
Samara Polytech (Samara State Technical University)

Biophysics -- lifting the lid on beta-barrels

Mechanical forces play a vital role at all levels in biological systems. The contraction and relaxation of muscle cells is undoubtedly the best known example of this, but mechanosensitive proteins are actually found in virtually all cells. For example, the shear stress exerted by blood flow on the cells that line blood vessels is sensed by mechanoreceptors, which trigger signaling pathways that control vessel diameter. In their efforts to understand the molecular mechanisms that mediate such processes, scientists study the responses of mechanosensitive proteins by analyzing their behavior under mechanical stress. Many of these experiments rely on the use of the tight and highly specific interaction between biotin (a vitamin) and its binding protein streptavidin as a force gauge. In collaboration with Rafael Bernardi at the Beckman Institute in Urbana (Illinois), LMU biophysicists led by Professor Hermann Gaub have now performed a detailed analysis of the mechanical stability of this complex, which appears in the journal Science Advances. Their findings show that the geometry of the interface between the ligand and its binding pocket in the receptor has a marked impact on the stability of the complex, and that this factor must be taken into account when evaluating experimental data.

A molecular dowel

Physicists use a technique called single-molecule force spectroscopy to measure how biomolecules react to mechanical forces. In this method, the protein of interest is bracketed between two molecular tags. One of these serves to covalently attach the protein to a glass slide. The other is linked to a setup that enables the experimenter to exert a graduated force on the protein. Often, this second linker makes use of the non-covalent interaction between biotin and streptavidin. "This ligand-receptor system is the 'rawlplug' of force spectroscopy," says Steffen Sedlak, a member of Gaub's team and lead author of the paper.

The biotin tag is covalently attached to the protein itself and is thus available for high-affinity binding to streptavidin. The remarkable mechanical stability of the interaction between the ligand and its receptor is a vital factor in these experiments. Since the first force spectroscopy experiments on this system by Gaub some 25 years ago, the response of the complex to mechanical forces has been measured in laboratories all over the world. However, the results obtained in different settings have been inconsistent and have shown quite a wide range of variation. The new study set out to determine the reasons for this.

The streptavidin protein is made up of four structurally similar domains, named 'beta-barrels'. Each of these barrels can accommodate one biotin molecule - and like every self-respecting barrel, each is equipped with a lid, which closes over the bound biotin. In this conformation, the biotin is accessible only from the other end - the end that is attached to the protein of interest.

Extracting biotin from its barrel

For their experiments, Sedlak and his colleagues created several mutant variants of the streptavidin protein. In each case, only one specific barrel structure was capable of binding to biotin, while the other three remained empty. To measure the stability of binding to each individual barrel, the team measured the forces required to pull the bound biotin out of the four streptavidin proteins. "We discovered that the magnitude of the force required to extract the biotin varied depending on which of the four barrels it was sitting in - even though all four of the active binding pockets are exactly the same," Gaub explains.

Out the door or through the wall?

With the aid of complex computer simulations, the team was able to puzzle out the reason for these surprising findings. When subjected to a gradually increasing mechanical force, the biotin molecules in the different pockets apparently take up different positions before the force becomes sufficiently large to displace them. This in turn alters the geometry of subsequent force application, and this factor can account for the differences in the response of each pocket. Extraction occurs most readily when the force is able to act on the flexible lid of the beta-barrel, but if tension is exerted on any other position, the system can resist much higher stresses. "As Diogenes himself surely discovered, it's easier to exit from a barrel by lifting the lid than by loosening the staves," says Sedlak. "But the fact that this principle also holds for the action of mechanical forces on single biomolecules is not at all trivial."

Credit: 
Ludwig-Maximilians-Universität München

Concerns over 'exaggerated' study claims of AI outperforming doctors

Many studies claiming that artificial intelligence is as good as (or better than) human experts at interpreting medical images are of poor quality and are arguably exaggerated, posing a risk for the safety of 'millions of patients' warn researchers in The BMJ today.

Their findings raise concerns about the quality of evidence underpinning many of these studies, and highlight the need to improve their design and reporting standards.

Artificial intelligence (AI) is an innovative and fast moving field with the potential to improve patient care and relieve overburdened health services. Deep learning is a branch of AI that has shown particular promise in medical imaging.

The volume of published research on deep learning is growing, and some media headlines that claim superior performance to doctors have fuelled hype for rapid implementation. But the methods and risk of bias of studies behind these headlines have not been examined in detail.

To address this, a team of researchers reviewed the results of published studies over the past 10 years, comparing the performance of a deep learning algorithm in medical imaging with expert clinicians.

They found just two eligible randomised clinical trials and 81 non-randomised studies.

Of the non-randomised studies, only nine were prospective (tracking and collecting information about individuals over time) and just six were tested in a 'real world' clinical setting.

The average number of human experts in the comparator group was just four, while access to raw data and code (to allow independent scrutiny of results) was severely limited.

More than two thirds of the studies (58 of 81) were judged to be at high risk of bias (problems in study design that can influence results), and adherence to recognised reporting standards was often poor.

Three quarters (61 studies) stated that performance of AI was at least comparable to (or better than) that of clinicians, and only 31 (38%) stated that further prospective studies or trials were needed.

The researchers point to some limitations, such as the possibility of missed studies and the focus on deep learning medical imaging studies so results may not apply to other types of AI.

Nevertheless, they say that at present, "many arguably exaggerated claims exist about equivalence with (or superiority over) clinicians, which presents a potential risk for patient safety and population health at the societal level."

Overpromising language "leaves studies susceptible to being misinterpreted by the media and the public, and as a result the possible provision of inappropriate care that does not necessarily align with patients' best interests," they warn.

"Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward," they conclude.

Credit: 
BMJ Group

Scientists seek to establish community-driven metadata standards for microbiomes research

"We are living through an explosion in the availability of microbiome data," according to JP Dundore-Arias, assistant professor of plant pathology at California State University, Monterey Bay. "In agricultural systems, the proliferation of research on plant and soil microbiomes has been coupled with excitement for the potential that microbiome data may have for the development of novel, sustainable, and effective crop management strategies."

While this is an exciting development, as the collective body of microbiome data for diverse crops grows, the lack of consistency in recording data makes it harder for the data to be utilized across research projects. In a recent article published in Phytobiomes Journal, Dundore-Arias and others in his field discuss the need for agriculture-specific metadata standards for microbiome research.

"Metadata is known as 'data about other data,' or in other words, the what, where, when, and how of the data or sample. This can include, for example, the crop, the sample location, the time of sampling, crop management factors, the method of DNA extraction, and many other factors," explains Dundore-Arias. "Developing a shared consensus of what needs to be reported about a microbiome sample is critical to advancing our field."

Consistent metadata allows researchers to determine whether data from other studies can be integrated for analysis into their own research. Standard recording and sharing practices will also provide a rigorous foundation for building understanding and increase the long-term value of microbiome data within the plant health community.

Through a workshop sponsored by the National Science Foundation-funded Agricultural Microbiomes Research Coordination Network, Dundore-Arias, Emiley Eloe-Fadrosh (LBNL, DOE-JGI), Lynn Schriml (University of Maryland School of Medicine) and Linda Kinkel (University of Minnesota) began the process of engaging the research community in an open process for developing consensus on Agricultural Microbiomes Metadata Standards.

They proposed a checklist of required and desirable metadata standards, which is meant to stimulate discussion and move the community toward standardized reporting of metadata, sampling, processing, and analytical pipelines in agricultural microbiome research. After gathering feedback, the next step, according to Dundore-Arias, "is to develop, along with members of the Genomic Standards Consortium (GSC), a MIxS-Ag metadata standard and ontology that will be incorporated into the GSC MIxS collection and released to other commonly used data management platforms and repositories."

While developing this proposed list of metadata standards categories, and in the next step of developing the official metadata standards checklist for describing agricultural microbiome studies (MIxS-Ag), Dundore-Arias and colleagues have sought and will continue to seek feedback and endorsement from agricultural microbiome researchers representing diverse public and private sectors. Feedback can be given in the comments section at the bottom of the article.

Credit: 
American Phytopathological Society

Those living in rural areas, uninsured or on Medicaid less likely to receive recommended lung cancer treatment

image: This is Elizabeth A. David, MD, MAS, a cardiothoracic surgeon with Keck Medicine of USC.

Image: 
Ricardo Carrasco III

LOS ANGELES - Lung cancer is the leading cause of cancer-associated deaths in the United States. Non-small cell lung cancer (NSCLC), a group of lung cancers named for the kinds of cells found in the cancer, constitutes more than 80% of all lung cancer cases.

In NSCLC patients where the cancer has spread to one or more lymph nodes close to the lung, a condition known as pathologic N1 (pN1) disease, current guidelines recommend a two-part protocol: the surgical removal of the cancerous tissue (resection) followed by a chemotherapy regimen that contains a cocktail of cancer-fighting drugs.

However, not all pN1 patients are receiving the second part of the protocol.

In a Keck Medicine of USC retrospective study of the National Cancer Database published in The Annals of Thoracic Surgery, of almost 15,000 patients who underwent resection for pN1 disease, only slightly more than half (54.1%) received any chemotherapy. Patients were less likely to receive chemotherapy if they lived in rural areas or were on Medicaid or uninsured.

"This study shows that inequalities exist when it comes to getting the highest level of care," says Elizabeth A. David MD, MAS, a cardiothoracic surgeon with Keck Medicine and the study's lead author. "Previous studies have determined that socioeconomic status plays a role in the surgical management of patients with lung cancer, but we are the first to examine the relationship between socioeconomic status and access to chemotherapy in patients with pN1 disease."

The study also revealed that the benefit of receiving chemotherapy in this patient population is higher than generally thought. Previous research has shown that patients with pN1 disease treated with both surgery and chemotherapy increase their five-year cancer survival rate by 5.4% over those who receive only surgery. David and her colleagues found that the survival rate actually increases by 14% - almost triple the accepted number.

"While this is a significantly greater number than historically reported, due to the breadth of our study, we believe this new statistic is accurate, and we have other studies ongoing to provide more validation," says David, who is also an associate professor of clinical surgery at the Keck School of Medicine of USC.

Researchers collected data on NSCLC patients with pN1 disease from the National Cancer Database, a hospital-based oncology registry sponsored by the American College of Surgeons and the American Cancer Society that captures approximately 70% of all patients with newly diagnosed cancer in the United States.

The study authors examined multiple socioeconomic status variables of the patients -- race/ethnicity, median household income, education level, urban/rural area of residence and insurance status -- to reach their conclusion of a disparity in treatment among those in rural areas or without insurance or on Medicaid. The other socioeconomic categories did not play a role in treatment received.

While it was out of the scope of the study to determine why patients are not receiving chemotherapy, David believes that patients may be offered the treatment, but turn it down. Those in rural areas may have to travel to an urban area far from home to receive surgery and may not have the resources, such as transportation, to commit to follow-up chemotherapy treatments. For those with no insurance or on Medicaid, the cost of the chemotherapy may be a barrier to follow-up treatment.

"It is clear that as medical professionals, we need to find creative solutions to help at-risk populations receive guideline-recommended care," David says. "Lung cancer survival rates are improving and all patients, regardless of where they live or financial status, should be able to take advantage of the treatment that will give them the best chance of recovery and survival."

Credit: 
University of Southern California - Health Sciences

Eclectic rocks influence earthquake types

image: The UTIG-led expedition drilled cores from the subduction zone and revealed a surprising diversity in the rocks buried half a mile beneath the seafloor. This mash-up of rocks means a mix of weak and strong points in the Earth's crust which the scientists say influences the occurrence of earthquakes.

Image: 
IODP JRSO

New Zealand's largest fault is a jumble of mixed-up rocks of all shapes, sizes, compositions and origins. According to research from a global team of scientists, this motley mixture could help explain why the fault generates slow-motion earthquakes known as "slow slip events" as well as destructive, tsunami-generating tremors.

"One thing that really surprised us was the sheer diversity of rock types," said Laura Wallace, a research scientist at the University of Texas Institute for Geophysics (UTIG) and co-chief scientist on the expedition that retrieved rock samples from the fault. "These rocks that are being mashed up together all behave very differently in terms of their earthquake generating potential."

The finding was described in a paper published March 25, 2020, in Science Advances. It is the latest discovery to emerge from two scientific drilling expeditions in New Zealand led by scientists at The University of Texas at Austin and colleagues at institutions in New Zealand.

Subduction zones--places where one tectonic plate dives beneath another--are where the world's largest and most damaging earthquakes occur. Scientists have long debated why quakes are more powerful or more frequent at some subduction zones than at others, and whether there may be a connection with slow slip events, which can take weeks or months to unfold. Although they are not felt by people on the surface, the energy they release into the Earth is comparable to that of powerful earthquakes.

"It has become apparent only in the last few years that slow slip events happen at many different types of faults, and some at depths in the Earth much shallower than previously thought," said the paper's lead author, Philip Barnes of the New Zealand Institute for Water and Atmospheric Research (NIWA). "It's raised a lot of big questions about why they happen, and how they affect other kinds of earthquakes."

To answer these questions, Barnes, Wallace, and UTIG Director Demian Saffer led two scientific ocean drilling expeditions to a region off the coast of New Zealand, where they drilled into and recovered rocks from the vicinity of the tremors' source. UTIG is a research unit of the UT Jackson School of Geosciences.

"The earthquake and geological science community has speculated about what goes into a subduction zone where slow earthquakes occur," said Saffer, who was co-chief scientist on the second expedition. "But this was the first time we've literally held those rocks--and physical evidence for any of those ideas--in our hands."

The team drilled into the remains of a buried, ancient sea mountain, where they found pieces of volcanic rock; hard, chalky carbonate rocks; clay-like mudrocks; and layers of sediments eroded from the mountain's surface.

Kelin Wang, an expert in earthquake physics and slow slip events at the Geological Survey of Canada, said that the paper was effectively a breakthrough in understanding how the same fault can generate different types of earthquakes.

"In addition to helping us understand the geology of slow slip events this paper also helps explain how the same fault can exhibit complex slip behavior, including tsunami-generating earthquakes," said Wang, who was not part of the study.

Efforts to understand the connection between slow slip events and more destructive earthquakes are already underway. These studies, which are being led by other UTIG researchers, include detailed seismic imaging--which is similar to a geological CAT scan--of the slow slip zone in New Zealand, and an ongoing effort to track the behavior of subduction zones around the world by installing sensors on and beneath the seafloor. The goal of the work is to develop a better understanding of the events that lead up to a slow slip event versus a tsunami-generating earthquake.

"The next needed steps are to continue installing offshore instruments at subduction zones in New Zealand and elsewhere so we can closely monitor these large offshore faults, ultimately helping communities to be better prepared for future earthquakes and tsunami," said Wallace, who also works at GNS Science, New Zealand's government-funded geosciences research institute.

Credit: 
University of Texas at Austin