Tech

Humans are born with brains 'prewired' to see words

COLUMBUS, Ohio - Humans are born with a part of the brain that is prewired to be receptive to seeing words and letters, setting the stage at birth for people to learn how to read, a new study suggests.

Analyzing brain scans of newborns, researchers found that this part of the brain - called the "visual word form area" (VWFA) - is connected to the language network of the brain.

"That makes it fertile ground to develop a sensitivity to visual words - even before any exposure to language," said Zeynep Saygin, senior author of the study and assistant professor of psychology at The Ohio State University.

The VWFA is specialized for reading only in literate individuals. Some researchers had hypothesized that the pre-reading VWFA starts out being no different than other parts of the visual cortex that are sensitive to seeing faces, scenes or other objects, and only becomes selective to words and letters as children learn to read or at least as they learn language.

"We found that isn't true. Even at birth, the VWFA is more connected functionally to the language network of the brain than it is to other areas," Saygin said. "It is an incredibly exciting finding."

Saygin, who is a core faculty member of Ohio State's Chronic Brain Injury Program, conducted the study with graduate students Jin Li and Heather Hansen and assistant professor David Osher, all in psychology at Ohio State. Their results were published today in the journal Scientific Reports.

The researchers analyzed fMRI scans of the brains of 40 newborns, all less than a week old, who were part of the Developing Human Connectome Project. They compared these to similar scans from 40 adults who participated in the separate Human Connectome Project.
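A rough sense of how such a comparison works: functional connectivity is typically estimated as the correlation between the fMRI time series of two brain regions, then compared across groups. The sketch below is illustrative only; the ROI names, data handling and statistical test are assumptions, not details taken from the study.

    import numpy as np
    from scipy import stats

    def functional_connectivity(seed_ts, target_ts):
        # Functional connectivity as the Pearson correlation between two
        # ROI time series (both arrays of shape [n_timepoints]).
        r, _ = stats.pearsonr(seed_ts, target_ts)
        # Fisher z-transform so values can be averaged and compared across subjects.
        return np.arctanh(r)

    def group_connectivity(subjects):
        # subjects: list of dicts holding ROI time series under hypothetical keys.
        vals = [functional_connectivity(s["vwfa"], s["language_network"]) for s in subjects]
        return np.array(vals)

    # Tiny synthetic demonstration with random time series (real data not shown).
    rng = np.random.default_rng(0)
    toy = [{"vwfa": rng.normal(size=100), "language_network": rng.normal(size=100)} for _ in range(5)]
    print(group_connectivity(toy))
    # Group comparison would then be e.g.:
    # t, p = stats.ttest_ind(group_connectivity(newborns), group_connectivity(adults))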

The VWFA is next to another part of visual cortex that processes faces, and it was reasonable to believe that there wasn't any difference in these parts of the brain in newborns, Saygin said.

As visual objects, faces have some of the same properties as words do, such as needing high spatial resolution for humans to see them correctly.

But the researchers found that, even in newborns, the VWFA was different from the part of the visual cortex that recognizes faces, primarily because of its functional connection to the language processing part of the brain.

"The VWFA is specialized to see words even before we're exposed to them," Saygin said.

"It's interesting to think about how and why our brains develop functional modules that are sensitive to specific things like faces, objects, and words," said Li, who is lead author of the study.

"Our study really emphasized the role of already having brain connections at birth to help develop functional specialization, even for an experience-dependent category like reading."

The study did find some differences in the VWFA in newborns and adults.

"Our findings suggest that there likely needs to be further refinement in the VWFA as babies mature," Saygin said.

"Experience with spoken and written language will likely strengthen connections with specific aspects of the language circuit and further differentiate this region's function from its neighbors as a person gains literacy."

Saygin's lab at Ohio State is currently scanning the brains of 3- and 4-year-olds to learn more about what the VWFA does before children learn to read and what visual properties the region is responsive to.

The goal is to learn how the brain becomes a reading brain, she said. Learning more about individual variability may help researchers understand differences in reading behavior and could be useful in the study of dyslexia and other developmental disorders.

"Knowing what this region is doing at this early age will tell us a bit more about how the human brain can develop the ability to read and what may go wrong," Saygin said. "It is important to track how this region of the brain becomes increasingly specialized."

Credit: 
Ohio State University

Cord blood DNA can hold clues for early ASD diagnosis and intervention

A new study led by UC Davis MIND Institute researchers found a distinct DNA methylation signature in the cord blood of newborns who were eventually diagnosed with autism spectrum disorder (ASD). This signature mark spanned DNA regions and genes linked to early fetal neurodevelopment. The findings may hold clues for early diagnosis and intervention.

“We found evidence that a DNA methylation signature of ASD exists in cord blood with specific regions consistently differentially methylated,” said Janine LaSalle, lead author on the study and professor of microbiology and immunology at UC Davis.

The study, published Oct. 14 in Genome Medicine, also identified sex-specific epigenomic signatures that support the developmental and sex-biased roots of ASD.

The U.S. Centers for Disease Control and Prevention (CDC) estimates that one in 54 children is diagnosed with ASD, a complex neurological condition linked to genetic and environmental factors. It is much more prevalent in males than in females.

The role of the epigenome in DNA functioning

The epigenome is a set of chemical compounds and proteins that tell the DNA what to do. These compounds attach to DNA and modify its function. One such compound is the methyl group (CH3), which attaches to DNA in a process called DNA methylation. DNA methylation can change the activity of a DNA segment without changing its sequence. Differentially methylated regions (DMRs) are areas of DNA with significantly different methylation status between groups.
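As a loose illustration of these definitions (a minimal sketch, not the study's pipeline; the per-sample values, threshold and statistical test are assumptions), the code below computes methylation fractions and flags a region as differentially methylated when the group difference is both large and statistically significant.

    import numpy as np
    from scipy import stats

    def methylation_fraction(methylated_reads, total_reads):
        # Fraction of sequencing reads supporting methylation at each CpG site.
        return np.asarray(methylated_reads) / np.asarray(total_reads)

    def is_dmr(group_a_fracs, group_b_fracs, min_diff=0.1, alpha=0.05):
        # group_*_fracs: per-sample mean methylation over one candidate region.
        diff = np.mean(group_a_fracs) - np.mean(group_b_fracs)
        _, p = stats.mannwhitneyu(group_a_fracs, group_b_fracs, alternative="two-sided")
        return abs(diff) >= min_diff and p < alpha

    # Hypothetical example region: ASD vs. TD mean methylation per sample.
    asd = np.array([0.62, 0.58, 0.65, 0.60])
    td = np.array([0.45, 0.50, 0.48, 0.47])
    print(is_dmr(asd, td))   # True for this toy data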

The epigenome compounds do not change the DNA sequence but affect how cells use the DNA's instructions. These attachments are sometimes passed on from cell to cell as cells divide. They can also be passed down from one generation to the next. The neonatal epigenome has the potential to reflect past interactions between genetic and environmental factors during early development. They may also influence future health outcomes.

Finding factors in fetal cord blood that might predict autism

The researchers studied the development of 152 children born to mothers enrolled in the MARBLES and EARLI studies. These mothers had at least one older child with autism and were considered at high risk of having another child with ASD. Umbilical cord blood samples were collected at the children's births and preserved for analysis. At 36 months, the children received diagnostic and developmental assessments. Based on these, the researchers classified the children as "typically developing" (TD) or "with ASD."

The researchers then analyzed the umbilical cord blood samples collected at delivery. They performed whole-genome sequencing of these samples to identify an epigenomic signature, or mark, of ASD at birth, checking for any patterns of DNA methylation that could predict a future ASD diagnosis.

They split the samples into discovery and replication sets and stratified them by sex. The discovery set included samples from 74 males (39 TD, 35 ASD) and 32 females (17 TD, 15 ASD). The replication set was obtained from 38 males (17 TD, 21 ASD) and eight females (3 TD, 5 ASD).

Using the samples in the discovery set, the researchers sought to identify specific regions in the genome linked to an ASD diagnosis. They tested the DNA methylation profiles for DMRs between ASD and TD cord blood samples. They mapped the DMRs to genes and assessed them for gene function, tissue expression, chromosome location and overlap with prior ASD studies. They then compared the results between the discovery and replication sets and between males and females.
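The discovery/replication logic can be sketched in a few lines (a simplified illustration; the region IDs and the requirement of a matching direction of change are placeholders, not the authors' code): regions found in the discovery set count as replicated only if the independent replication set finds them again with the same direction of change.

    def replicated_dmrs(discovery_dmrs, replication_dmrs):
        # Each input: dict mapping a region ID to the sign of its methylation
        # change (+1 hypermethylated in ASD, -1 hypomethylated).
        return {
            region: sign
            for region, sign in discovery_dmrs.items()
            if replication_dmrs.get(region) == sign
        }

    # Hypothetical toy example.
    disc = {"chr5:100-300": +1, "chrX:50-90": -1, "chr2:10-40": +1}
    repl = {"chr5:100-300": +1, "chrX:50-90": +1}
    print(replicated_dmrs(disc, repl))   # {'chr5:100-300': 1}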

Cord blood to reveal insights into genes related to ASD

The researchers identified DMRs stratified by sex that discriminated ASD from TD cord blood samples in discovery and replication sets. They found that seven regions in males and 31 in females replicated, and 537 DMR genes in males and 1762 DMR genes in females replicated by gene association. These DMRs identified in cord blood overlapped with binding sites relevant to fetal brain development. They showed brain and embryonic expression and X chromosome location and matched with prior epigenetic studies of ASD.

“Findings from our study provide key insights for early diagnosis and intervention,” LaSalle said. “We were impressed by the ability of cord blood to reveal insights into genes and pathways relevant to the fetal brain.”

The researchers pointed out that these results will require further replication before being used diagnostically. Their study serves as an important proof of principle that the cord blood methylome is informative about future ASD risk.

Credit: 
University of California - Davis Health

FEFU scientists helped design a new type of ceramics for laser applications

image: FESEM of 3 at.% Ho3+:Y2O3-MgO composite ceramics: general view (a), and area with secondary phase (b).

Image: 
FEFU press office

Materials scientists from Far Eastern Federal University (FEFU) joined an international team of researchers to develop new nanocomposite ceramics (Ho3+:Y2O3-MgO) that can be employed in high-capacity laser systems operating in the mid-infrared (IR) range of 2-6 micrometers. These lasers are safe for human vision and have multiple applications in various sectors of the economy, including industry, atmospheric probing, medicine, and light radars (lidars). An article about the work was published in Ceramics International.

Lasers operating at wavelengths of 2-6 micrometers are of great interest for a wide range of applications, from medicine to space and industry. However, such lasers must be built from materials with high thermal conductivity and suitable mechanical and optical characteristics.

The prototype of the new optical ceramic (Ho3+:Y2O3-MgO) was fabricated from yttrium oxide nanopowders with the addition of holmium (Ho3+:Y2O3) and magnesium oxide (MgO), sintered via a special technology in the FEFU laboratory. The resulting material has increased thermal and mechanical resistance, which follows from its nearly "pore-free" structure and an average grain size of only 200 nanometers. Thanks to this, the ceramic transmits more than 75% of light at mid-IR wavelengths (up to 6 micrometers). The material also has a high microhardness of 10.7 GPa, which makes it resistant to the high temperatures produced when the laser is in operation.

Before this development, FEFU scientists had already studied the key aspects of producing ceramic nanocomposites based on nominally "pure" Y2O3-MgO. An article about it had been published in early 2020.

"In the new article, we demonstrated the possibility of developing an active laser medium based on a nanocomposite ceramic matrix that we had developed earlier. This time our goal was to choose a doping ion for the matrix and to optimize its content, and to test the luminescent properties of the new IR-transparent composite materials for potential laser applications. Having selected holmium as an alloying ion, we managed to obtain unique laser characteristics. For example, it became safe for the human vision which made it applicable in many areas, from parktronics to 3D landscaping. Potentially, by adding holmium to a ceramic matrix one could be able to develop highly concentrated laser media, i.e. to minimize the size of the laser element and the whole installation without reducing its capacity. Our work is the first in this field of ceramic studies," says Denis Kosyanov, the Head of the Science and Educational Center "Advanced Ceramic Materials" of the Department of Industrial Safety at the Polytechnic Institute of FEFU.

To obtain the material, the team applied the method of self-propagating high-temperature synthesis (SHS) to the yttrium oxide nanopowders with the addition of holmium (Ho3+:Y2O3) and magnesium oxide (MgO). After that, the powders were subjected to spark plasma sintering (SPS) at 1,300°C under 30 MPa of pressure for 5 minutes. This high-speed consolidation method is being actively developed by FEFU and the Institute of Chemistry of the Far Eastern Branch of the Russian Academy of Sciences.

According to Denis Kosyanov, for the nanocomposite ceramics to be used at an industrial scale, their light transmission in the mid-IR range should be increased from 75% to 80%. The team plans to focus on this task at the next stage of their work.

Credit: 
Far Eastern Federal University

Robotic trunk support trainer improves upper body control of children with cerebral palsy

image: Left figure: TruST is composed of four steel cables (1), connecting a pliable belt (2), with the motor/spools (3), through pulleys (4). Cable tensions are measured with springs and load cells (5). A lift table (6) is used to regulate the height of the seated child to keep the belt and cables in the horizontal plane.
Right figure: Force field characterization. A ball (blue circle) is used as a reference point and to encourage the child to reach as far as possible during the functional reach test. Children have to recover upright sitting without assistance after achieving the maximum reaching distance.

Image: 
Victor Santamaria, Moiz Khan, Tatiana Luna/Columbia Engineering

New York, NY--October 22, 2020--Cerebral palsy (CP) is the most common childhood physical disability--2.0-3.5 per 1000 births--and children born with it have impaired development and diminished control of movement and posture. In particular, children with moderate to severe bilateral CP have poor upper extremity abilities and segmental trunk control deficits, limiting independent functional sitting. Many children with CP need wheelchairs to travel long distances, and some need wheelchairs in most settings. A treatment designed to improve their sitting control abilities would greatly improve their ability to function independently, live an active physical life, and participate in social activities.

Researchers at Columbia Engineering report that their newly developed robotic Trunk Support Trainer (TruST), when combined with active practice of postural movements, improves trunk and reaching control in children with CP who have impaired sitting control. This finding is in line with their earlier study on adults with spinal cord injury who were able to expand their sitting workspace when TruST actively assisted their trunk movements.

The team then investigated the effectiveness of TruST on children with trunk control issues. They ran a two-year longitudinal pilot study on four children aged 6-14 years with CP and sitting control problems to examine how TruST technology can be used to provide an optimal amount of trunk support while the children are trained in activities and games. After completing the TruST intervention, the children showed short- and long-term improvements in postural and reaching control. Most importantly, they were able to perform all the game-oriented activities without any external help from TruST, supportive straps, or a clinician. The study was published online today in IEEE Transactions on Neural Systems and Rehabilitation Engineering.

"The ability to control the trunk in sitting posture is pivotal for everyday functions such as sitting, feeding, and social interactions," says Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine. "Our Trunk Support Trainer, which we call TruST, is an innovative robotic device that helps physical therapists to not only support the children in the region of the trunk where they suffer from weakness and incoordination but also challenge them to perform rehabilitation tasks outside their base of support to improve their movement and coordination."

TruST is a motorized cable-driven belt placed on the child's trunk that exerts active-assistive forces when the trunk moves beyond postural stability limits. This means that TruST can provide assistance that is individualized for each child and can be systematically reduced as children improve trunk control during the training. Thus, TruST addresses postural-task progression in each training session by matching the assistive-force fields to the ability of each child to control the trunk in sitting. The idea is to assist the child's motor efforts when the trunk moves beyond these stability limits by modulating the wire tensions.
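A rough sketch of this kind of assist-as-needed force field follows (illustrative only; the circular stability boundary, the gain and the function names are assumptions rather than the actual TruST controller): no force is applied while the trunk stays inside its stability limits, and a restoring force proportional to the overshoot is applied once they are exceeded.

    import numpy as np

    def assistive_force(trunk_xy, center_xy, stability_radius, gain):
        # trunk_xy, center_xy: 2D positions in the horizontal plane (metres).
        # Returns the force vector (newtons) the cables should apply.
        offset = np.asarray(trunk_xy, float) - np.asarray(center_xy, float)
        dist = np.linalg.norm(offset)
        if dist <= stability_radius:
            return np.zeros(2)              # inside the stability limits: no assistance
        overshoot = dist - stability_radius
        direction = -offset / dist          # push back toward the boundary
        return gain * overshoot * direction

    # Hypothetical usage: trunk 5 cm past a 10 cm stability boundary, 300 N/m gain.
    print(assistive_force([0.15, 0.0], [0.0, 0.0], 0.10, 300.0))   # ~[-15, 0] N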

TruST-intervention is intense, about two hours per session, but completed over a relatively short time period for a total of 12 training sessions. Children had to wear additional strapping around the waist to secure their sitting position during the first 6 training sessions. However, after the 6th session, children acquired a level of trunk control that allowed the researchers to remove the waist straps so they could sit independently for training purposes.

"We wanted to scientifically demonstrate how robotic TruST can be used to deliver an intense activity-based postural and reaching training to improve the functional sitting abilities of children with CP and trunk control problems", says Victor Santamaria, a physical therapist and associate researcher scientist in Agrawal's Robotics and Rehabilitation Laboratory, and first author of the paper.

Recent developments in robotic equipment have enabled clinicians to address engagement, repetition, and intensity as their patients practice task-oriented movements in CP. A team led by Agrawal, together with other researchers at Teachers College and the Columbia University Irving Medical Center, recently won a five-year National Institutes of Health R01 award (#1R01 HD101903-01) to conduct a randomized clinical trial.

The project--"Improving seated postural control and upper extremity function in bilateral CP with a robotic Trunk-Support-Trainer (TruST)"--will involve up to 80 children with poor trunk control. Some will use the TruST robotic rehabilitation while others will try conventional rehabilitation. This new NIH study will compare the efficacy of the motorized TruST to engage children in play-oriented practice while advancing their skill progression with static trunk support.

"Our new NIH project is a randomized clinical trial with a large sample size to study the efficacy of TruST-intervention as a unique therapeutic solution to promote seated functional abilities in children with bilateral CP," Agrawal adds.

Credit: 
Columbia University School of Engineering and Applied Science

USC leads massive new artificial intelligence study of Alzheimer's

A massive problem like Alzheimer's disease --which affects nearly 50 million people worldwide--requires bold solutions. New funding expected to total $17.8 million, awarded to USC's Mark and Mary Stevens Neuroimaging Informatics Institute and its collaborators, is one key piece of that puzzle.

The five-year National Institutes of Health-funded effort, "Ultrascale Machine Learning to Empower Discovery in Alzheimer's Disease Biobanks," or AI4AD, will develop state-of-the-art artificial intelligence methods and apply them to giant databases of genetic, imaging and cognitive data collected from AD patients. Forty co-investigators at 11 research centers will team up to leverage artificial intelligence and machine learning to bolster precision diagnostics, prognosis and the development of new treatments for the memory-robbing disease.

"Our team of experts in computer science, genetics, neuroscience and imaging sciences will create algorithms that analyze data at a previously impossible scale," says Paul Thompson, associate director of the USC INI and project leader for the new grant. "Collectively, this will enable the discovery of new features in the genome that influence the biological processes involved in Alzheimer's disease."

The project's first objective is to identify genetic and biological markers that predict an Alzheimer's diagnosis and distinguish between several subtypes of the disease. To accomplish this, the research team will apply sophisticated AI and machine learning methods to a variety of data types, including tens of thousands of brain images and whole genome sequences. The investigators will then relate these findings to the clinical progression of Alzheimer's, including in patients who have not yet developed dementia symptoms. The researchers will train AI methods on large databases of brain scans to identify patterns that can help detect the disease as it emerges in individual patients.
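As a schematic of how such a predictive model might be assembled (a deliberately simplified sketch; the feature matrix, the choice of classifier and the cross-validation scheme are assumptions, not the AI4AD methods), the example below trains a classifier on combined imaging and genetic features to separate diagnostic groups.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical feature matrix: each row is a participant, columns mix
    # imaging-derived measures (e.g. regional volumes) and genetic variants.
    X = rng.normal(size=(200, 50))
    y = rng.integers(0, 2, size=200)        # 0 = control, 1 = AD (toy labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())   # ~0.5 on random toy data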

"As we get older, each of us has a unique mix of brain changes that occur for decades before we develop any signs of Alzheimer's disease-changes in our blood vessels, the buildup of abnormal protein deposits and brain cell loss," says Thompson, who also directs the USC INI's Imaging Genetics Center. "Our new AI methods will help us determine what changes are happening in each patient, as well as drivers of these processes in their DNA, that we can target with new drugs."

The team is even creating a dedicated "Drug Repurposing Core" to identify ways to repurpose existing drugs to target newly identified segments of the genome, molecules or neurobiological processes involved in the disease.

"We predict that combining AI with whole genome data and advanced brain scans will outperform methods used today to predict Alzheimer's disease progression," says Thompson.

The AI4AD effort is part of the "Cognitive Systems Analysis of Alzheimer's Disease Genetic and Phenotypic Data" and "Harmonization of Alzheimer's Disease and Related Dementias (AD/ADRD) Genetic, Epidemiologic, and Clinical Data to Enhance Therapeutic Target Discovery" initiatives from the NIH's National Institute on Aging. These initiatives aim to create and develop advanced AI methods and apply them to extensive and harmonized rich genomic, imaging and cognitive data. Collectively, the goals of AI4AD leverage the promise of machine learning to contribute to precision diagnostics, prognostication, and targeted and novel treatments.

Thompson and his USC team will collaborate with four co-principal investigators at the University of Pennsylvania, the University of Pittsburgh and the Indiana University School of Medicine.

The researchers will also host regular training events at major AD neuroimaging and genetics conferences to help disseminate newly developed AI tools to investigators across the field.

Credit: 
University of Southern California

Galactic archaeology

image: 'Galactic archaeology' refers to the study of second-generation stars to learn about the physical characteristics of the first stars, which disappeared only tens of millions of years after the Big Bang. A computational physics study modeled for the first time faint supernovae of metal-free first stars, yielding carbon-enhanced abundance patterns for star formation. Slices of density, temperature, and carbon abundance for a 13 solar mass progenitor model at times (left to right) 0.41, 15.22, and 29.16 million years after the supernova explosion, in a box with a side of 2 kpc.

Image: 
Chiaki, et al.

No one has yet found the first stars.

They're hypothesized to have formed about 100 million years after the Big Bang out of universal darkness from the primordial gases of hydrogen, helium, and trace light metals. These gases cooled, collapsed, and ignited into stars up to 1,000 times more massive than our sun. The bigger the star, the faster it burns out. The first stars probably lived only a few million years, a drop in the bucket compared with the age of the universe, about 13.8 billion years. They're unlikely to ever be observed, lost to the mists of time.

As the metal-free first stars collapsed and exploded into supernovae, they forged heavier elements such as carbon that seeded the next generation of stars. One type of these second stars is called a carbon-enhanced metal-poor star. They're like fossils to astrophysicists. Their composition reflects the nucleosynthesis, or fusion, of heavier elements from the first stars.

"We can get results from indirect measurements to get the mass distribution of metal-free stars from the elemental abundances of metal-poor stars," said Gen Chiaki, a post-doctoral researcher in the Center for Relativistic Astrophysics, School of Physics, Georgia Tech.

Chiaki is the lead author of a study published in the September 2020 issue of the Monthly Notices of the Royal Astronomical Society. The study modeled for the first time faint supernovae of metal-free first stars, which yielded carbon-enhanced abundance patterns through the mixing and fallback of the ejected bits.

Their simulations also showed the carbonaceous grains seeding the fragmentation of the gas cloud produced, leading to formation of low-mass 'giga-metal-poor' stars that can survive to the present day and possibly be found in future observations.

"We find that these stars have very low iron content compared to the observed carbon-enhanced stars with billionths of the solar abundance of iron. However, we can see the fragmentation of the clouds of gas. This indicates that the low mass stars form in a low iron abundance regime. Such stars have never been observed yet. Our study gives us theoretical insight of the formation of first stars," Chiaki said.

The investigations of Wise and Chiaki are a part of a field called 'galactic archaeology.' They liken it to searching for artifacts underground that tell about the character of societies long gone. To astrophysicists, the character of long-gone stars can be revealed from their fossilized remains.

"We can't see the very first generations of stars," said study co-author John Wise, an associate professor also at the Center for Relativistic Astrophysics, School of Physics, Georgia Tech. "Therefore, it's important to actually look at these living fossils from the early universe, because they have the fingerprints of the first stars all over them through the chemicals that were produced in the supernova from the first stars."

"These old stars have some fingerprints of the nucleosynthesis of metal-free stars. It's a hint for us to seek the nucleosynthesis mechanism happening in the early universe," Chiaki said.

"That's where our simulations come into play to see this happening. After you run the simulation, you can watch a short movie of it to see where the metals come from and how the first stars and their supernovae actually affect these fossils that live until the present day," Wise said.

The scientists first modeled the formation of their first star, called a Population III or Pop III star, and ran three different simulations that corresponded to its mass at 13.5, 50, and 80 solar masses. The simulations solved for the radiative transfer during the star's main sequence and then after it dies and goes supernova. The last step was to evolve the collapse of the cloud of molecules spewed out by the supernova, a calculation that involved a chemical network of 100 reactions and 50 species such as carbon monoxide and water.
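A chemical network of this kind reduces to a system of coupled rate equations, one per species. The toy sketch below uses just two reactions and three species, with made-up rate constants that bear no relation to the 100-reaction network used in the study; it only shows the general form of the calculation.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy network: H + O -> OH, then OH + H -> H2O, with made-up rate constants.
    K1, K2 = 1e-2, 5e-3

    def rates(t, n):
        h, oh, h2o = n
        r1 = K1 * h            # pseudo-first-order formation of OH (oxygen folded into K1)
        r2 = K2 * h * oh       # OH + H -> H2O
        return [-r1 - r2, r1 - r2, r2]

    sol = solve_ivp(rates, (0.0, 1000.0), [1.0, 0.0, 0.0], method="LSODA")
    print(sol.y[:, -1])        # final abundances of H, OH, H2O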

The majority of the simulations ran on the Georgia Tech PACE cluster. They were also awarded computer allocations by the National Science Foundation (NSF)-funded Extreme Science and Engineering Discovery Environment (XSEDE). Stampede2 at the Texas Advanced Computing Center (TACC) and Comet at the San Diego Supercomputer Center (SDSC) ran some of the main sequence radiative transfer simulations through XSEDE allocations.

"The XSEDE systems Comet at SDSC and Stampede2 at TACC are very fast and have a large storage system. They were very suitable to conduct our huge numerical simulations," Chiaki said.

"Because Stampede2 is just so large, even though it has to accommodate thousands of researchers, it's still an invaluable resource for us," Wise said. "We can't just run our simulations on local machines at Georgia Tech."

Chiaki said he was also happy with the fast queues on Comet at SDSC. "On Comet, I could immediately run the simulations just after I submitted the job," he said.

Wise has been using XSEDE system allocations for over a decade, starting when he was a postdoc. "I couldn't have done my research without XSEDE."

XSEDE also provided expertise for the researchers to take full advantage of their supercomputer allocations through the Extended Collaborative Support Services (ECSS) program. Wise recalled using ECSS several years ago to improve the performance of the Enzo adaptive mesh refinement simulation code he still uses to solve the radiative transfer of stellar radiation and supernovae.

"Through ECSS, I worked with Lars Koesterke at TACC, and I found out that he used to work in astrophysics. He worked with me to improve the performance by about 50 percent of the radiation transport solver. He helped me profile the code to pinpoint which loops were taking the most time, and how to speed it up by reordering some loops. I don't think I would have identified that change without his help," Wise said.

Wise has also been awarded time on TACC's NSF-funded Frontera system, the fastest academic supercomputer in the world. "We haven't gotten to full steam yet on Frontera. But we're looking forward to using it, because that's even a larger, more capable resource."

Wise added: "We're all working on the next generation of Enzo. We call it Enzo-E, E for exascale. This is a total re-write of Enzo by James Bordner, a computer scientist at the San Diego Supercomputer Center. And it scales almost perfectly to 256,000 cores so far. That was run on NSF's Blue Waters. I think he scaled it to the same amount on Frontera, but Frontera is bigger, so I want to see how far it can go."

The downside, he said, is that since the code is new, it doesn't have all the physics they need yet. "We're about two-thirds of the way there," Wise said.

He said that he's also hoping to get access to the new Expanse system at SDSC, which will supersede Comet after it retires in the next year or so. "Expanse has over double the compute cores per node than any other XSEDE resource, which will hopefully speed up our simulations by reducing the communication time between cores," Wise said.

According to Chiaki, the next steps in the research are to branch out beyond the carbon features of ancient stars. "We want to enlarge our interest to the other types of stars and the general elements with larger simulations," he said.

Said Chiaki: "The aim of this study is to know the origin of elements, such as carbon, oxygen, and calcium. These elements are concentrated through the repetitive matter cycles between the interstellar medium and stars. Our bodies and our planet are made of carbon and oxygen, nitrogen, and calcium. Our study is very important to help understand the origin of these elements that we human beings are made of."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Collaboration sparks new model for ceramic conductivity

As insulators, metal oxides - also known as ceramics - may not seem like obvious candidates for electrical conductivity. While electrons zip back and forth in regular metals, their movement in ceramic materials is sluggish and difficult to detect.

image: Small polaron hopping model.

Image: 
Provided

But ceramics do exhibit a wide range of conductivities. This behavior was laid out in 1961 in the "small polaron hopping model," which described the movement of polarons - essentially electrons coupled to a lattice distortion - from one end of a material to the other.
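The model's central relation is an activated, temperature-dependent conductivity, commonly written as sigma(T) = (A / T) * exp(-Ea / (kB * T)), where Ea is the hopping activation energy. The sketch below evaluates that textbook single-barrier form and recovers Ea from an Arrhenius-style fit; the parameter values are arbitrary, and this is not the revised multi-path model the Cornell team proposes.

    import numpy as np

    K_B = 8.617333262e-5          # Boltzmann constant, eV/K

    def small_polaron_conductivity(T, A, Ea):
        # Classic single-barrier form: sigma(T) = (A / T) * exp(-Ea / (kB * T)).
        # A is a material-dependent prefactor, Ea the hopping activation energy (eV).
        return (A / T) * np.exp(-Ea / (K_B * T))

    # Hypothetical parameters: Ea is typically extracted from the slope of
    # ln(sigma * T) versus 1/T.
    T = np.linspace(300.0, 600.0, 7)
    sigma = small_polaron_conductivity(T, A=1.0e5, Ea=0.35)
    slope = np.polyfit(1.0 / T, np.log(sigma * T), 1)[0]
    print("recovered Ea (eV):", -slope * K_B)   # ~0.35, as constructed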

An interdisciplinary collaboration led by Richard Robinson, associate professor of materials science and engineering in the College of Engineering, has shown just how outdated and inaccurate that model is, especially regarding complex oxide systems. By updating the model to reflect different pathways for conduction, the team hopes its work will help researchers who are custom-tailoring the properties of metal oxides in technologies such as lithium ion batteries, fuel cells and electrocatalysis.

Their paper, "Breakdown of the Small-Polaron Hopping Model in Higher-Order Spinels," published Oct. 21 in Advanced Materials. The lead author is doctoral student Anuj Bhargava.

"This is the most commonly-used formula in the field, but it hadn't been touched in 60 years. That's a big deal because, nowadays, metal oxides are used in many applications where the performance is directly impacted by the conductivity - for example, in energy systems like electrical energy storage and generation, electrocatalysis, and in new-generation materials," Robinson said. "Many people are putting a great amount of experimental effort into oxides right now, but they haven't carefully examined how the charge carriers move in the material, and how the composition influences that conductivity.

"If we understood how electrons are conducted and we could customize the composition to have the highest conductivity, we could optimize the energy efficiency of a lot of materials out there," he said.

To get a detailed look at the way electrons move in metal oxides and how their occupation sites can affect the material's conductivity, Robinson turned to Darrell Schlom, the Herbert Fisk Johnson Professor of Industrial Chemistry. Schlom and his team used the Platform for the Accelerated Realization, Analysis, and Discovery of Interface Materials (PARADIM) and the Cornell NanoScale Science and Technology Facility (CNF) to grow and characterize thin crystalline films of manganese-doped iron oxide (MnxFe3-xO4).

Robinson's group then used the Cornell High Energy Synchrotron Source (CHESS) to determine the atomic locations and the charge state of the positively charged ions, called cations, and measured how the material's conductivity changes at different temperatures.

They brought the material to Lena Kourkoutis, associate professor in applied and engineering physics, who used advanced electron microscopy to get an atomically precise view of the crystal's substrate and compositional gradients, and confirmed the team's findings.

Lastly, Robinson's team consulted researchers at Technion - Israel Institute of Technology, who used computational methods to explain how polarons hop differently in materials based on the energy barriers and oxidation states. Their results uncovered the existence of large energetic barriers associated with "switching" conduction paths between the two different cations, and this provided the crucial final piece that was necessary to put a new formula together.

"This new finding gives us insight into something that's been overlooked. Instead of the Edisonian, trial-and-error approach of just making and testing a bunch of new materials, we can now take a more systematic approach to figuring out why the materials behave differently, especially on this really important level, which is electronic conductivity," Robinson said. "The important processes in energy materials involve conductivity, electrons coming in and out of the material. So for any application with metal oxides, conductivity is important."

Credit: 
Cornell University

Wildfires can cause dangerous debris flows

image: House damaged by debris flows generated in Los Angeles County's Mullally Canyon in response to a rainstorm on February 6, 2010.

Image: 
Susan Cannon/USGS

Wildfires don't stop being dangerous after the flames go out. Even one modest rainfall after a fire can cause a deadly landslide, according to new UC Riverside research.

"When fire moves through a watershed, it creates waxy seals that don't allow water to penetrate the soil anymore," explained environmental science doctoral student and study author James Guilinger.

Instead, the rainwater runs off the soil surface causing debris flows, which are fast-moving landslides that usually start on steep hills and accelerate as they move.

"The water doesn't behave like water anymore, it's more like wet cement," Guilinger said. "It can pick up objects as big as boulders that can destroy infrastructure and hurt or even kill people, which is what happened after the 2018 Thomas fire in Montecito."

Guilinger and his team of mentors and collaborators wanted to understand in detail how multiple storm cycles affect an area that's been burned by wildfire, since Southern California tends to have much of its rain in the same season.

The team headed to the burn scar caused by the 23,000-acre Holy Fire near Lake Elsinore to observe this phenomenon, and their results have recently been published in the Journal of Geophysical Research: Earth Surface.

"It's only recently that technology has advanced to the point that we can directly monitor soil erosion at extremely small scales," said Andrew Gray, assistant professor of watershed hydrology and Guilinger's advisor. Gray's laboratory works to understand how wildfire impacts the movement of water and sediment through landscapes after wildfire.

Even with the latest technology, the data was not easy to obtain. To deploy their ground-based laser scanner, which uses visible and infrared waves to reconstruct surfaces down to millimeter accuracy, the scientists had to climb steep hill slopes. They also deployed drones in collaboration with Nicolas Barth, assistant professor of geomorphology, in order to zoom out and see up to 10 hectares of land after the storms.

What they found is that most of the soil in channels at the bottom of valleys between hill slopes eroded during the first few rains, even though the rains were relatively modest. The channels fill with material during the years between fires as well as in response to fire, with rain then causing rapid erosion resulting in the debris flows.

"This proves the first storm events that strike an area are the most critical," Guilinger said. "You can't really mitigate them at the source. Instead, people downstream need to be aware of the dangers, and land managers need hazard modeling tools to help them respond effectively and create a plan to catch the sediment as it flows."

U.S. Geological Survey models incorporate widely available 10-meter data for watershed slopes and information about burn severity from satellite images to estimate the probability and magnitude of debris flow that would occur under a given amount of rainfall.
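Hazard models of this type are typically logistic regressions: the probability of a debris flow is a logistic function of terms that combine rainfall intensity with terrain steepness, burn severity and soil properties. The sketch below shows only the general structure; the coefficients and predictor definitions are placeholders, not the published USGS model.

    import math

    def debris_flow_probability(rain_intensity, steep_fraction, burn_fraction,
                                soil_erodibility, coeffs=(-3.6, 0.05, 0.1, 0.1, 0.1)):
        # Logistic-regression style hazard model (illustrative coefficients only).
        b0, b1, b2, b3, b4 = coeffs
        x = (b0
             + b1 * rain_intensity                      # e.g. peak 15-min rainfall, mm/h
             + b2 * steep_fraction * rain_intensity     # steep terrain interacting with rain
             + b3 * burn_fraction * rain_intensity      # moderate/high burn severity
             + b4 * soil_erodibility * rain_intensity)  # soil erodibility index
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical storm over a steep, severely burned basin.
    print(debris_flow_probability(rain_intensity=24.0, steep_fraction=0.6,
                                  burn_fraction=0.8, soil_erodibility=0.5))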

However, elevation data at the 1-meter scale is becoming more widely available in fire-prone areas like California. This more refined data could allow the researchers to extract finer-scale information, such as variations in hill slope gradient and the shape of water channels that may play a large role in controlling debris flows.

"We can use data like these and the results of studies like ours to inform dynamically updating hazard models in the future," Guilinger said. "Rather than have a single set of predictions for the entire wet season, we may be able to update these models after each storm."

Guilinger plans to use funding from the federal Joint Fire Science Program to improve upon existing hazard models.

"This could prove very useful to land managers either immediately affected by or planning to mitigate the dangerous aftermath of wildfires," he said.

Credit: 
University of California - Riverside

0.5°C matters: Seasonal contrast of rainfall becomes intense in warming target of the Paris agreement

image: Schematic of precipitation in the wet and dry seasons and the annual range under an additional 0.5°C of warming.

Image: 
Ziming Chen

The Paris Agreement in 2015 proposed a target to limit global warming to less than 2°C and pursue efforts to limit warming to less than 1.5°C. Since then, great efforts have been devoted to exploring the impacts of the 1.5°C and 2°C warming scenarios.

A recent work published in Earth's Future by a team of researchers from the Institute of Atmospheric Physics (IAP) at the Chinese Academy of Sciences has found that the seasonal cycle of precipitation is likely to enhance at stabilized 1.5°C and 2°C warming scenarios.

"Based on the output data of the Community Earth System Model low-warming experiment, we conclude that the enhancement is mainly caused by the increase in water vapor," said Ziming Chen, the first author of the study and a doctoral student from IAP.

The intensity of the seasonal cycle is defined as the difference in precipitation between the wet and dry seasons, representing the contrast in precipitation within a year. The wet and dry seasons are usually fixed as June to August and December to February, respectively, in the Northern Hemisphere, and vice versa in the Southern Hemisphere. Neither the spatial variation nor the temporal shifts in the wet and dry seasons had previously been considered.

"In our study, the intensity of the seasonal cycle is represented by the difference between mean precipitation in the wet and dry seasons for different regions and for each year," said Chen.

Chen and his collaborators at IAP found that, based on the above metric, the intensity of the seasonal cycle would increase by 3.90% and 5.27% under 1.5°C and 2°C of warming, respectively. Under the additional 0.5°C of warming, a pronounced enhancement of the seasonal cycle occurred over 22% of land regions.
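In code, the metric and the reported percentage enhancement amount to something like the following minimal sketch, using the definition quoted above; the monthly precipitation numbers are invented for illustration.

    import numpy as np

    def seasonal_cycle_intensity(wet_monthly_precip, dry_monthly_precip):
        # Intensity = mean wet-season precipitation minus mean dry-season
        # precipitation, computed per region and per year.
        return np.mean(wet_monthly_precip) - np.mean(dry_monthly_precip)

    def enhancement_percent(intensity_warm, intensity_baseline):
        return 100.0 * (intensity_warm - intensity_baseline) / intensity_baseline

    # Hypothetical monthly precipitation (mm/month) for one region.
    baseline = seasonal_cycle_intensity([180, 200, 190], [40, 35, 45])
    warmer = seasonal_cycle_intensity([190, 210, 200], [38, 33, 44])
    print(enhancement_percent(warmer, baseline))   # percent change in the wet-dry contrast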

The enhancement was associated with increased precipitation during the wet season, caused by thermodynamic responses to the increased moisture. It indicates that the contrast between the wet and dry seasons will become stronger, resulting in a more uneven distribution of freshwater resources within the year and a higher probability of flooding in the wet season.

"This study emphasizes the pronounced enhancement in seasonal cycle over land regions associated with the additional 0.5°C warming, despite the insignificant increases in the annual precipitation," added Chen. "Though the number in temperature seems small, 0.5°C still matters."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Analyzing web searches can help experts predict, respond to COVID-19 hot spots

ROCHESTER, Minn. -- Web-based analytics have demonstrated their value in predicting the spread of infectious disease, and a new study from Mayo Clinic indicates the value of analyzing Google web searches for keywords related to COVID-19.

Strong correlations were found between keyword searches on the internet search engine Google Trends and COVID-19 outbreaks in parts of the U.S., according to a study published in Mayo Clinic Proceedings. These correlations were observed up to 16 days prior to the first reported cases in some states.

"Our study demonstrates that there is information present in Google Trends that precedes outbreaks, and with predictive analysis, this data can be used for better allocating resources with regards to testing, personal protective equipment, medications and more," says Mohamad Bydon, M.D., a Mayo Clinic neurosurgeon and principal investigator at Mayo's Neuro-Informatics Laboratory.

"The Neuro-Informatics team is focused on analytics for neural diseases and neuroscience. However, when the novel coronavirus emerged, my team and I directed resources toward better understanding and tracking the spread of the pandemic," says Dr. Bydon, the study's senior author. "Looking at Google Trends data, we found that we were able to identify predictors of hot spots, using keywords, that would emerge over a six-week timeline."

Several studies have noted the role of internet surveillance in early prediction of previous outbreaks such as H1N1 and Middle East respiratory syndrome. There are several benefits to using internet surveillance methods versus traditional methods, and this study says a combination of the two methods is likely the key to effective surveillance.

The study tracked 10 keywords, chosen based on how commonly they were used and on emerging search patterns on the internet and in Google News at the time.

The keywords were:

COVID symptoms
Coronavirus symptoms
Sore throat+shortness of breath+fatigue+cough
Coronavirus testing center
Loss of smell
Lysol
Antibody
Face mask
Coronavirus vaccine
COVID stimulus check

Most of the keywords had moderate to strong correlations days before the first COVID-19 cases were reported in specific areas, with diminishing correlations following the first case.
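Conceptually, the analysis shifts the daily search-interest series earlier by a chosen lag and correlates it with daily case counts. The sketch below is illustrative only; the toy data, the 10-day lag and the use of a Pearson correlation are assumptions standing in for the study's actual methods.

    import numpy as np
    from scipy import stats

    def lagged_correlation(search_interest, case_counts, lag_days):
        # Correlate search interest from `lag_days` earlier with today's case counts.
        # Both inputs are equal-length daily series; a positive lag tests whether
        # searches lead cases.
        if lag_days > 0:
            x = search_interest[:-lag_days]
            y = case_counts[lag_days:]
        else:
            x, y = search_interest, case_counts
        r, p = stats.pearsonr(x, y)
        return r, p

    # Hypothetical toy series in which searches rise ~10 days before cases.
    rng = np.random.default_rng(1)
    searches = np.concatenate([np.linspace(0, 100, 40), np.full(20, 100)]) + rng.normal(0, 5, 60)
    cases = np.concatenate([np.zeros(10), searches[:-10] * 3]) + rng.normal(0, 5, 60)
    print(lagged_correlation(searches, cases, lag_days=10))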

"Each of these keywords had varying strengths of correlation with case numbers," says Dr. Bydon. "If we had looked at 100 keywords, we may have found even stronger correlations to cases. As the pandemic progresses, people will search for new and different information, so the search terms also need to evolve."

The use of web search surveillance data is important as an adjunct for data science teams who are attempting to predict outbreaks and new hot spots in a pandemic. "Any delay in information could lead to missed opportunities to improve preparedness for an outbreak in a certain location," says Dr. Bydon.

Traditional surveillance, including widespread testing and public health reporting, can lag behind the incidence of infectious disease. The need for more testing, and more rapid and accurate testing, is paramount. Delayed or incomplete reporting of results can lead to inaccuracies when data is released and public health decisions are being made.

"If you wait for the hot spots to emerge in the news media coverage, it will be too late to respond effectively," Dr. Bydon says. "In terms of national preparedness, this is a great way of helping to understand where future hot spots will emerge."

Mayo Clinic recently introduced an interactive COVID-19 tracking tool that reports the latest data for every county in all 50 states, and in Washington, D.C., with insight on how to assess risk and plan accordingly. "Adding variables such as Google Trends data from Dr. Bydon's team, as well as other leading indicators, have greatly enhanced our ability to forecast surges, plateaus and declines of cases across regions of the country," says Henry Ting, M.D., Mayo Clinic's chief value officer.

Dr. Ting worked with Mayo Clinic data scientists to develop content sources, validate information and correlate expertise for the tracking tool, which is in Mayo's COVID-19 resource center on mayoclinic.org.

The study was conducted in collaboration with the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. The authors report no conflicts of interest.

Credit: 
Mayo Clinic

Scientists use gene therapy and a novel light-sensing protein to restore vision in mice

image: Detail shows structure of retina, including location of a bipolar cell expressing Nanoscope's MCO1 opsin.

Image: 
National Eye Institute

A newly developed light-sensing protein called the MCO1 opsin restores vision in blind mice when attached to retina bipolar cells using gene therapy. The National Eye Institute, part of the National Institutes of Health, provided a Small Business Innovation Research grant to Nanoscope, LLC for development of MCO1. The company is planning a U.S. clinical trial for later this year.

Nanoscope's findings, reported today in the journal Gene Therapy, show that totally blind mice--meaning they have no light perception--regain significant retinal function and vision after treatment. Studies described in the report showed that treated mice were significantly faster in standardized visual tests, such as navigating mazes and detecting changes in motion.

Opsins are proteins that signal other cells as part of a cascade of signals essential to visual perception. In a normal eye, opsins are expressed by the rod and cone photoreceptors in the retina. When activated by light, the photoreceptors pulse and send a signal through other retinal neurons, the optic nerve, and on to neurons in the brain.

A variety of common eye diseases, including age-related macular degeneration and retinitis pigmentosa, damage the photoreceptors, impairing vision. But while the photoreceptors may no longer fully function, other retinal neurons, including a class of cells called bipolar cells, remain intact. The investigators identified a way for bipolar cells to take on some of the work of damaged photoreceptors.

"The beauty of our strategy is its simplicity," said Samarendra Mohanty, Ph.D., Nanoscope founder and corresponding author of a report on the mouse study that appears today in Nature Gene Therapy. "Bipolar cells are downstream from the photoreceptors, so when the MCO1 opsin gene is added to bipolar cells in a retina with nonfunctioning photoreceptors, light sensitivity is restored."

The strategy could overcome challenges that have plagued other approaches to retinal regeneration, according to the researchers. Gene replacement therapy has thus far worked principally in rare diseases that leave photoreceptors intact, such as Luxturna for Leber congenital amaurosis. Bionic eyes, such as the Argus II retinal prosthesis, require invasive surgery and wearable hardware. Other opsin replacement therapies require light to be intensified in order to reach the threshold required for signal transduction, but intense light risks further damage to the retina. Nanoscope's therapy requires a one-time injection into the eye and no hardware. MCO1 is sensitive to ambient light, so there is no need to shine strong light into the eye. And therapy with MCO1 could treat a wider range of degenerative retinal diseases, since photoreceptor survival is not required.

The researchers found no concerning safety issues in treated mice. Examination of blood and tissues found no signs of inflammation due to treatment and the therapy had no off-target effect--only bipolar cells expressed the MCO1 opsin.

Under a best-case scenario, the therapy could help patients achieve 20/60 vision, according to the researchers; however, no one knows how the restored vision will compare to normal vision.

"A clinical study in people will help us understand how signaling through bipolar cells affects vision quality; for example, how well treated eyes can pick out fast-moving objects.," said Subrata Batabyal, Ph.D., lead author of the manuscript. The therapy will likely be limited for treatment of patients with severe retinal disease.

"If this optogenetic approach using cells spared in degenerated retina can prove to be effective in vision restoration in humans, beyond light perception, it could offer a valuable alternative to the retinal prosthesis approach for people with late-stage retinitis pigmentosa," said PaekGyu Lee, Ph.D., NEI's program officer for the Small Business Innovation Research program.

Credit: 
NIH/National Eye Institute

Cognitive elements of language have existed for 40 million years

image: The chimpanzees learned that certain sounds were always followed by other specific sounds, even if they were sometimes separated by other acoustic signals.

Image: 
National Center for Chimpanzee Care in Bastrop, Texas

Humans are not the only beings that can identify rules in complex language-like constructions - monkeys and great apes can do so, too, a study at the University of Zurich has shown. Researchers at the Department of Comparative Language Science of UZH used a series of experiments based on an 'artificial grammar' to conclude that this ability can be traced back to our ancient primate ancestors.

Language is one of the most powerful tools available to humankind, as it enables us to share information, culture, views and technology. "Research into language evolution is thus crucial if we want to understand what it means to be human," says Stuart Watson, postdoctoral researcher at the Department of Comparative Language Science of the University of Zurich. Until now, however, little research has been conducted about how this unique communication system came to be.

Identifying connections between words

An international team led by Professor Simon Townsend at the Department of Comparative Language Science of the University of Zurich has now shed new light on the evolutionary origins of language. Their study examines one of the most important cognitive elements needed for language processing - that is, the ability to understand the relationship between the words in a phrase, even if they are separated by other parts of the phrase, known as a "non-adjacent dependency". For example, we know that in the sentence "the dog that bit the cat ran away", it is the dog who ran away, not the cat, even though there are several other words in between the two phrases. A comparison between apes, monkeys and humans has now shown that the ability to identify such non-adjacent dependencies is likely to have developed as far back as 40 million years ago.

Acoustic signals instead of words

The researchers used a novel approach in their experiments: They invented an artificial grammar, where sequences are formed by combining different sounds rather than words. This enabled the researchers to compare the ability of three different species of primates to process non-adjacent dependencies, even though they do not share the same communication system. The experiments were carried out with common marmosets - a monkey native to Brazil - at the University of Zurich, chimpanzees (University of Texas) and humans (Osnabrück University).

Mistakes followed by telltale looks

First, the researchers taught their test subjects to understand the artificial grammar in several practice sessions. The subjects learned that certain sounds were always followed by other specific sounds (e.g. sound 'B' always follows sound 'A'), even if they were sometimes separated by other acoustic signals (e.g. 'A' and 'B' are separated by 'X'). This simulates a pattern in human language, where, for example, we expect a noun (e.g. "dog") to be followed by a verb (e.g. "ran away"), regardless of any other phrasal parts in between (e.g. "that bit the cat").
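The rule can be made concrete with a tiny sketch (hypothetical symbols; the real stimuli were acoustic, and the functions here are placeholders): a sequence is grammatical if every 'A' is eventually followed by its dependent 'B', regardless of intervening 'X' elements.

    A, B, X = "A", "B", "X"

    def make_sequence(n_fillers, grammatical=True):
        # A ... B with intervening filler sounds; an ungrammatical sequence
        # replaces the dependent B with another filler.
        return [A] + [X] * n_fillers + ([B] if grammatical else [X])

    def violates_rule(sequence):
        # Non-adjacent dependency: every A must eventually be followed by a B.
        return A in sequence and B not in sequence[sequence.index(A) + 1:]

    print(make_sequence(2), violates_rule(make_sequence(2)))                 # grammatical, no violation
    print(make_sequence(3, False), violates_rule(make_sequence(3, False)))   # missing B, violation detected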

In the actual experiments that followed, the researchers played sound combinations that violated the previously learned rules. In these cases, the common marmosets and chimpanzees responded with an observable change of behavior; they looked at the loudspeaker emitting the sounds for about twice as long as they did towards familiar combinations of sounds. For the researchers, this was an indication of surprise in the animals caused by noticing a 'grammatical error'. The human test subjects were asked directly whether they believed the sound sequences were correct or wrong.

Common origin of language

"The results show that all three species share the ability to process non-adjacent dependencies. It is therefore likely that this ability is widespread among primates," says Townsend. "This suggests that this crucial element of language already existed in our most recent common ancestors with these species." Since marmosets branched off from humanity's ancestors around 40 million years ago, this crucial cognitive skill thus developed many million years before human language evolved.

Credit: 
University of Zurich

Genome archeologists discover path to activate immune response against cancer

image: Study first authors Dr. Parinaz Mehdipour, Dr. Sajid Marhon and Mr. Ilias Ettayebi are trainees in Dr. De Carvalho's laboratory, who discovered a path to activate an immune response to kill cancer cells like an infection.

Image: 
Courtesy of Dr. Parinaz Mehdipour, Dr. Sajid Marhon and Mr. Ilias Ettayebi.

(Toronto - Wednesday, Oct. 21, 2020) -- Ancient embedded elements in our DNA from generations past can activate a powerful immune response to kill cancer cells like an infection.

The work builds on Princess Margaret Senior Scientist Dr. De Carvalho's previous ground-breaking discovery known as viral mimicry-- the ability to cause cancer cells to behave as though they have been infected, thereby activating the immune system to fight cancer like an infection.

Dr. Daniel De Carvalho and his team have now identified silent ancient DNA elements buried in our genome that when 'reactivated' can initiate this immune response. Importantly, they have also discovered a key enzyme used by cancer cells to prevent this from happening in order to survive.

The enzyme is known as ADAR1, and it acts to prevent the cancer cells from signalling to the immune system. Dr. De Carvalho, Associate Professor, Medical Biophysics, University of Toronto, discovered that by inhibiting this enzyme, cancer cells were more sensitive to new drug therapies that induce viral mimicry.

The research is published online on October 21, 2020 in Nature, under the title "Epigenetic therapy induces transcription of inverted SINEs and ADAR1 dependency." The study's first authors are Dr. Parinaz Mehdipour, Dr. Sajid Marhon and master's student Ilias Ettayebi, trainees in Dr. De Carvalho's laboratory.

"Humans acquired a series of 'silent' repetitive elements in our DNA over millions of years of evolution, but it has been unclear why or what purpose they serve," explains Dr. De Carvalho. "As 'genome archeologists', we set out to identify the function of these 'DNA relics' and have found that under the right conditions they can be reactivated and stimulate our immune system."

Dr. De Carvalho's discovery of ADAR1 explains how some cancer cells mount a defense against this and protect themselves from our immune system.

"These findings open up a new field of cancer therapies," says Dr. De Carvalho. "It gives us the opportunity to take advantage of these ancient repetitive DNA elements to fight cancer."

Studying the potential to modulate the immune response against tumour cells is one of the most rapidly changing and exciting areas in clinical oncology.

While much knowledge has been gained about how the immune system interacts with cancer, leading to the development of novel immunotherapy drugs, there is still a large proportion of cancer patients who do not respond to immunotherapy alone.

In Dr. De Carvalho's initial discovery, epigenetic drugs were shown to reactivate these repetitive DNA elements and lead to production of double-stranded RNA, a molecular pattern that is also observed following viral infection.

This 'viral mimicry' leads to an antiviral response directed specifically against cancer cells. In this latest research, Dr. De Carvalho's lab identified the specific ancient repetitive DNA elements as SINEs (Short Interspersed Nuclear Elements). These SINEs usually lie quiet in our genome, having little effect on the host.

However, if activated by new epigenetic drugs, these SINEs produce double-stranded RNA - a marker for infection - that can ultimately be used by cells to trigger an innate immune response.

Dr. De Carvalho likens this response "to an ancient dagger that can be used against cancer."

But cancer cells are wily and have also evolved to evade detection by the immune system even under conditions where the ancient DNA sequences are activated.

Dr. De Carvalho discovered that cancer cells strike back by making more of the ADAR1 enzyme, which functions to disrupt the double-stranded RNA produced by the ancient DNA. In this way, ADAR1 prevents the cancer cells from activating the immune system.

Dr. De Carvalho and his team went on to demonstrate that deleting ADAR1 from cancer cells makes them exquisitely vulnerable to epigenetic drugs that induce the antiviral response.

"Since the ADAR1 activity is enzymatic, our work provides an exciting new target for drug development efforts for a completely new class of drugs that are able to exploit these 'ancient weapons' in our genome," explains Dr. De Carvalho.

Credit: 
University Health Network

Smile, wave: Some exoplanets may be able to see us, too

ITHACA, N.Y. - Three decades after Cornell astronomer Carl Sagan suggested that Voyager 1 snap Earth's picture from billions of miles away - resulting in the iconic Pale Blue Dot photograph - two astronomers now offer another unique cosmic perspective: Some exoplanets - planets from beyond our own solar system - have a direct line of sight to observe Earth's biological qualities from far, far away.

Lisa Kaltenegger, associate professor of astronomy at Cornell University and director of Cornell's Carl Sagan Institute; and Joshua Pepper, associate professor of physics at Lehigh University, have identified 1,004 main-sequence stars (similar to our sun) that might contain Earth-like planets in their own habitable zones - all within about 300 light-years of Earth - and which should be able to detect Earth's chemical traces of life.

The paper, "Which Stars Can See Earth as a Transiting Exoplanet?" was published in the Monthly Notices of the Royal Astronomical Society.

"Let's reverse the viewpoint to that of other stars and ask from which vantage point other observers could find Earth as a transiting planet," Kaltenegger said. A transiting planet is one that passes through the observer's line of sight to another star, such as the sun, revealing clues as to the makeup of the planet's atmosphere.

"If observers were out there searching, they would be able to see signs of a biosphere in the atmosphere of our Pale Blue Dot," she said, "And we can even see some of the brightest of these stars in our night sky without binoculars or telescopes."

Transit observations are a crucial tool for Earth's astronomers to characterize inhabited extrasolar planets, Kaltenegger said, which astronomers will start to use with the launch of NASA's James Webb Space Telescope next year.

But which star systems could find us? Holding the key to this science is Earth's ecliptic - the plane of Earth's orbit around the sun. Exoplanets located along this plane are the ones that could see Earth crossing in front of its own sun - effectively providing observers a way to discover our planet's vibrant biosphere.
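Geometrically, a distant observer can see Earth transit the sun only if it lies close enough to the ecliptic plane; the half-width of that zone is roughly arctan(R_sun / 1 AU), about a quarter of a degree. The sketch below applies that simplified cut to a star's ecliptic latitude; it ignores refinements such as grazing transits and the exact transit-zone definition used in the paper.

    import math

    R_SUN_KM = 695_700.0
    AU_KM = 149_597_870.7

    # Approximate half-width of the Earth transit zone: the band around the
    # ecliptic from which Earth can be seen crossing the solar disk.
    TRANSIT_ZONE_HALF_DEG = math.degrees(math.atan(R_SUN_KM / AU_KM))

    def can_see_earth_transit(ecliptic_latitude_deg):
        # Simplified criterion: the star's ecliptic latitude must fall inside
        # the transit zone (Earth's radius and grazing transits are ignored).
        return abs(ecliptic_latitude_deg) <= TRANSIT_ZONE_HALF_DEG

    print(round(TRANSIT_ZONE_HALF_DEG, 3))                        # ~0.266 degrees
    print(can_see_earth_transit(0.1), can_see_earth_transit(5.0))  # True False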

Pepper and Kaltenegger created the list of the thousand closest stars using NASA's Transiting Exoplanet Survey Satellite (TESS) star catalog.

"Only a very small fraction of exoplanets will just happen to be randomly aligned with our line of sight so we can see them transit." Pepper said. "But all of the thousand stars we identified in our paper in the solar neighborhood could see our Earth transit the sun, calling their attention."

"If we found a planet with a vibrant biosphere, we would get curious about whether or not someone is there looking at us too," Kaltenegger said.

"If we're looking for intelligent life in the universe, that could find us and might want to get in touch" she said, "we've just created the star map of where we should look first."

Credit: 
Cornell University

Oncotarget: quantitative ultrasound radiomics in prediction of treatment response for breast cancer

image: Figure 4: Generation of parametric and texture maps from radiofrequency data. Diagram showing flowchart of the generation of QUS parametric maps, texture, and texture derivative from radiofrequency ultrasound data. QUS: quantitative ultrasound; GLCM: grey level co-occurrence matrix.

Image: 
Correspondence to - Gregory J. Czarnota - gregory.czarnota@sunnybrook.ca

The cover for Issue 42 of Oncotarget features Figure 4, "Generation of parametric and texture maps from radiofrequency data," from the recently published paper "Quantitative ultrasound radiomics using texture derivatives in prediction of treatment response to neo-adjuvant chemotherapy for locally advanced breast cancer" by Dasgupta, et al., which investigated quantitative ultrasound-based higher-order texture derivatives for predicting the response to neoadjuvant chemotherapy in patients with locally advanced breast cancer.
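To give a sense of what a texture feature is in this context, the sketch below computes a grey level co-occurrence matrix (GLCM) and a contrast statistic from a small quantized parametric map; it is a toy, pure-NumPy illustration, not the study's texture-derivative pipeline.

    import numpy as np

    def glcm(image, levels, dx=1, dy=0):
        # Grey level co-occurrence matrix: counts how often grey level j occurs
        # at offset (dy, dx) from grey level i, then normalizes to probabilities.
        m = np.zeros((levels, levels))
        rows, cols = image.shape
        for r in range(rows - dy):
            for c in range(cols - dx):
                m[image[r, c], image[r + dy, c + dx]] += 1
        return m / m.sum()

    def glcm_contrast(p):
        # Contrast weights co-occurrences by the squared grey-level difference.
        i, j = np.indices(p.shape)
        return np.sum(p * (i - j) ** 2)

    # Toy quantized parametric map with 4 grey levels.
    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    print(glcm_contrast(glcm(img, levels=4)))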

Credit: 
Impact Journals LLC