Tech

Researchers reversibly disable brain pathway in primates

image: The researchers temporarily switched off the connection between the ventral tegmental area (VTA) and the nucleus accumbens (NAc). As a result, the monkeys' motivation to work harder for a bigger reward decreased.

Image: 
Wim Vanduffel et al. (2020)

For the first time ever, neurophysiologists at KU Leuven, Harvard and Kyoto University have succeeded in reversibly disabling a connection between two areas in the brains of primates while the animals were performing cognitive tasks and while their entire brain activity was being monitored. The disconnection had a negative impact on the motivation of the animals, but not on their learning behaviour. The study, which was published in Neuron, may eventually lead to more targeted treatments for certain brain disorders.

Learning is crucial to man's survival. The brain's reward system plays an important part in this. Babies learn to hold their cup to be able to drink on their own. As a student, you learn skills that are useful for your career. In times of COVID-19, we learn to adapt quickly to measures in order to avoid infections - or at least a fine or disapproving look. In addition to reward, motivation is another important factor when it comes to learning. Without motivation, not even the smartest student would ever obtain a degree. But what is most important: motivation or reward?

Wim Vanduffel and his colleagues investigated whether a specific brain pathway is crucial for motivation or reward-driven learning. In a first task, the animals had to put in a big effort to get a big reward, or a small effort for a smaller reward. The researchers showed two simple shapes on a screen. The primate learned, for example, that he received more orange juice when he looked at a red triangle than when he looked at a blue circle. He also had to look at the triangle for longer and, thus, put in more effort to receive the bigger reward. So, the primate had to be strongly motivated if he wanted the bigger reward.

In a second task, the primate again had to look at two shapes. In this case, however, the choice for one shape was linked to a higher probability of getting orange juice than the choice for the other one. The size of the reward was the same, but looking at one shape was rewarded more often than looking at the other shape.

Just like people, primates learn to select the stimulus that yields the most reward very quickly; they try to maximise their profits. In a later stage, the researchers altered the chances of getting a reward. Without the primate knowing, the shape that previously yielded less reward suddenly became the most profitable. Once again, as with humans, primates learn to change strategies quickly by choosing the new, more profitable stimulus.

The researchers temporarily switched off the nerve pathway between the ventral tegmental area (VTA) and the nucleus accumbens (NAc)

Decrease in motivation

Next, the researchers reversibly disabled a specific pathway. This pathway, the connection between two important cerebral nuclei of the reward system, mostly produces dopamine. The intervention had a strong effect on the motivation of the animals during the first task. Suddenly, the animals always went for the easy small reward instead of the big reward that took more effort. There was no change in behaviour during the second reinforcement-based learning task. The animals figured out which stimulus was the most profitable as quickly as the first time.

This specific pathway is, therefore, important to keep motivation up in order to make greater efforts, but not to learn about changes in links between a stimulus and a reward.

The researchers used functional MRI scans to look at the brain activity of the primates. When the pathway was disabled, they noticed a surprising increase in functional connectivity between areas of the temporal and frontal cerebral cortex: these areas became active more synchronously. The increase in connectivity was thus associated with the decrease in motivation.

First time

"This is the first time that scientists have succeeded in reversibly disabling a specific pathway between two areas in the brains of primates while they perform cognitive tasks and their whole brain activity is being monitored", explains Professor Vanduffel. In earlier studies, brain areas were usually activated or deactivated in their entirety, which has an impact on all connections of that specific area. "In the very few pathway-selective experiments published so far, monkeys did not have to perform cognitive tasks, nor was whole brain activity sampled. Contrary to the consensus so far, which is mainly based on rodent research, it appears that this pathway in the brain's reward system is more important for the motivation to make big efforts than for reward-based learning."

Psychiatry

A lack or excess of motivation plays an important role in many psychiatric disorders, including depression, compulsive behaviour, addiction, anxiety disorders, mood disorders and schizophrenia. "Our study provides new perspectives to increase or decrease the activity in specific pathways without affecting an entire area or brain system, because other pathways that originate from the same cerebral nucleus remain unaffected", says Professor Vanduffel. "This is not the case for methods in which an entire area and, therefore, all connections of that area are affected. This opens the door to much more precise interventions in brain systems and, subsequently, the development of more effective therapies for brain disorders with fewer side effects."

Credit: 
KU Leuven

Faulty brain circuits arise from abnormal fusion

image: When nematode worms were engineered to express fusogens in their neurons, the nerve cells fused together, causing behavioural impairments.

Image: 
Hilliard Lab, Queensland Brain Institute

A discovery that could rewrite the textbooks on neurons could also help us understand the basis of some neurological diseases.

For one hundred years, scientists believed that nerve cells always stayed separate and never fused, but University of Queensland (UQ) researchers have found that neurons can lose their individuality in some conditions.

Fusogens, molecules essential for the fusion of cells in tissue development, are generally not found in the nervous system apart from during some viral infections, stress or neurological diseases, such as multiple sclerosis, schizophrenia and bipolar disorder.

A team led by Professor Massimo Hilliard, Dr Rosina Giordano-Santini and Dr Zhaoyu Li at UQ's Queensland Brain Institute and Clem Jones Centre for Ageing Dementia Research (CJCADR), found that when nematode worms were engineered to express fusogens in their neurons, these nerve cells fused together, causing behavioural impairments.

"We have limited knowledge of the effect of fusogens in neurons, and what happens if neurons fuse together, so we explored these questions in the 1 mm-long nematode worm C. elegans, where we can easily visualise neurons under the microscope and manipulate their genes," Professor Hilliard said.

"When the nerve cells fused, we found that their electric circuits became coupled together and this affected the animal's behaviour--we can liken this to two rooms, each with its own light switch--a problem with the electric circuit causes one switch to turn the lights on in both rooms," Professor Hilliard said.

In the worm, the researchers fused neurons that are necessary to detect different odours, and they observed that when the nerve cells fused, the animals lost their ability to be attracted to food or repelled by dangers.

The results show a novel cause of malfunction of the brain's electrical circuits and a possible underlying cause of neurological diseases.

So where do the fusogens in the brain come from?

Dr Giordano-Santini said there are at least two possibilities for fusogens arising in neurons--one is that a neurological disease causes the nerve cells to start producing fusogens, and the other is that viruses get into brain cells and hack the cell machinery to make fusogens.

"We know that viruses can infect the brain--herpes simplex virus has been found in the brain cells of some patients with Alzheimer's disease--causing havoc and resulting in disease," Dr Giordano-Santini said.

Now the researchers are keen to find out if the fusing and disruption of the electrical circuits can also happen in mammalian cells.

"Understanding more about neuronal fusion will allow us to determine how often it occurs in human disease conditions, and eventually develop ways to prevent it or to rescue neurons from this fate," Professor Hilliard said.

Credit: 
University of Queensland

Fuel cells for hydrogen vehicles are becoming longer lasting

image: The new electrocatalyst for hydrogen fuel cells consists of a thin platinum-cobalt alloy network and, unlike the catalysts commonly used today, does not require a carbon carrier.

Image: 
Gustav Sievers

Fuel cells are gaining in importance as an alternative to battery-powered electromobility in heavy-duty transport, especially since hydrogen is a CO2-neutral energy carrier if it is obtained from renewable sources. For efficient operation, fuel cells need an electrocatalyst that accelerates the electrochemical reaction in which electricity is generated. The platinum-cobalt nanoparticle catalysts used as standard today have good catalytic properties and keep the use of rare and expensive platinum to a minimum. For the catalyst to be used in the fuel cell, its surface must consist of very small platinum-cobalt particles in the nanometer range, applied to a conductive carbon carrier material. Because both the small particles and the carbon in the fuel cell are exposed to corrosion, the cell loses efficiency and stability over time.

An international team led by Professor Matthias Arenz from the Department of Chemistry and Biochemistry (DCB) at the University of Bern has now succeeded in using a special process to produce an electrocatalyst without a carbon carrier, which, unlike existing catalysts, consists of a thin metal network and is therefore more durable. "The catalyst we have developed achieves high performance and promises stable fuel cell operation even at higher temperatures and high current density," says Matthias Arenz. The results have been published in Nature Materials. The study is an international collaboration between the DCB and, among others, the University of Copenhagen and the Leibniz Institute for Plasma Science and Technology, which also used the Swiss Light Source (SLS) infrastructure at the Paul Scherrer Institute.

The fuel cell - direct power generation without combustion

In a hydrogen fuel cell, hydrogen molecules are split so that electrical power can be generated from them directly. To this end, hydrogen is fed to an electrode, where it is split into positively charged protons and negatively charged electrons. The electrons flow off via the electrode and generate an electric current outside the cell, which can drive a vehicle motor, for example. The protons pass through a membrane that is only permeable to protons and react on the other side, on a second electrode coated with a catalyst (here a platinum-cobalt alloy network), with oxygen from the air, producing water vapor. This is discharged via the "exhaust".
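
For reference, the chemistry behind this description is the standard pair of electrode half-reactions of a proton-exchange-membrane fuel cell (textbook equations, not results from the study itself):

    \text{Anode:}\quad   \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-}
    \text{Cathode:}\quad \mathrm{O_2 + 4\,H^+ + 4\,e^- \rightarrow 2\,H_2O}
    \text{Overall:}\quad \mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O}

The new platinum-cobalt network serves as the catalyst for the slower reaction at the oxygen electrode.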

The important role of the electrocatalyst

For the fuel cell to produce electricity, both electrodes must be coated with a catalyst. Without a catalyst, the chemical reactions would proceed very slowly. This applies in particular to the second electrode, the oxygen electrode. However, the platinum-cobalt nanoparticles of the catalyst can "melt together" during operation in a vehicle. This reduces the surface of the catalyst and therefore the efficiency of the cell. In addition, the carbon normally used to fix the catalyst can corrode when used in road traffic. This affects the service life of the fuel cell and consequently of the vehicle. "Our motivation was therefore to produce an electrocatalyst without a carbon carrier that is nevertheless powerful," explains Matthias Arenz. Previous carrier-free catalysts of this kind have offered only a reduced surface area. Since the size of the surface area is crucial for the catalyst's activity and hence its performance, they were less suitable for industrial use.

Industrially applicable technology

The researchers were able to turn the idea into reality thanks to a special process called cathode sputtering. With this method, individual atoms of a material (here platinum or cobalt) are knocked out (atomized) by ion bombardment. The released gaseous atoms then condense as an adhesive layer. "With the special sputtering process and subsequent treatment, a very porous structure can be achieved, which gives the catalyst a high surface area and is self-supporting at the same time. A carbon carrier is therefore superfluous," says Dr. Gustav Sievers, lead author of the study from the Leibniz Institute for Plasma Science and Technology.

"This technology is industrially scalable and can therefore also be used for larger production volumes, for example in the automotive industry," says Matthias Arenz. This process allows the hydrogen fuel cell to be further optimized for use in road traffic. "Our findings are consequently of importance for the further development of sustainable energy use, especially in view of the current developments in the mobility sector for heavy goods vehicles," says Arenz.

Credit: 
University of Bern

New study: Eyes linger less on 'fake news' headlines

The term 'fake news' has been a part of our vocabulary since the 2016 US presidential election. As the amount of fake news in circulation grows larger and larger, particularly in the United States, it often spreads like wildfire. Consequently, there is an ever-increasing need for fact-checking and other solutions to help people navigate the oceans of factual and fake news that surround us.

Help may be on the way, via an interdisciplinary field where eye-tracking technology and computer science meet. A study by University of Copenhagen and Aalborg University researchers shows that people's eyes react differently to factual and false news headlines.

Eyes spend a bit less time on fake headlines

The researchers placed 55 test subjects in front of a screen to read 108 news headlines. A third of the headlines were fake. The test subjects were assigned a so-called 'pseudo-task' of assessing which of the news items was the most recent. What they didn't know was that some of the headlines were fake. Using eye-tracking technology, the researchers analyzed how much time each person spent reading the headlines and how many fixations the person made per headline.

"We thought that it would be interesting to see if there's a difference in the way people read news headlines, depending on whether the headlines are factual or false. This has never been studied. And, it turns out that there is indeed a statistically significant difference," says PhD fellow and lead author Christian Hansen, of the University of Copenhagen's Department of Computer Science.

His colleague and co-author from the same department, PhD fellow Casper Hansen, adds:

"The study demonstrated that our test subjects' eyes spent less time on false headlines and fixated on them a bit less compared with the headlines that were true. All in all, people gave fake news headlines a little less visual attention, despite their being unaware that the headlines were fake."

The computer scientists can't explain the difference, nor do they dare make any guesses. Nevertheless, they were surprised by the result.

The researchers used the results to create an algorithm that can predict whether a news headline is fake based on eye movements.
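
The article does not spell out the algorithm's details, but the general idea can be pictured as a classifier trained on per-headline gaze features. The sketch below is a minimal illustration only; the features (total dwell time and fixation count) and the toy data are assumptions for illustration, not the researchers' actual pipeline.

    # Minimal illustration: classifying headlines as factual or fake from gaze features.
    # Feature choice and data are assumptions, not the study's actual pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # One row per headline reading: [total dwell time in ms, number of fixations]
    X = np.array([
        [2100, 9], [2300, 10], [1900, 8],   # factual headlines: slightly more attention
        [1650, 7], [1500, 6], [1400, 6],    # fake headlines: slightly less attention
    ])
    y = np.array([0, 0, 0, 1, 1, 1])        # 0 = factual, 1 = fake

    clf = LogisticRegression()
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())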

Could support fact-checking

As a next step, the researchers would like to examine whether it is possible to measure the same differences in eye movements on a larger scale, beyond the lab - preferably using ordinary webcams or mobile phone cameras. This will, of course, require that people grant access to their cameras.

The two computer scientists imagine that eye-tracking technology could eventually help with the fact-checking of news stories, all depending upon their ability to collect data from people's reading patterns. The data could come from news aggregator website users or from the users of other sources, e.g., Feedly and Google News, as well as from social media, like Facebook and Twitter, where the amount of fake news is large as well.

"Professional fact-checkers in the media and organizations need to read through lots of material just to find out what needs to be fact-checked. A tool to help them prioritize material could be of great help," concludes Christian Hansen.

Credit: 
University of Copenhagen

UVA-led team warns negative emissions technologies may not solve climate crisis

A team led by researchers at the University of Virginia cautions that when it comes to climate change, the world is making a bet it might not be able to cover.

The team's new paper in Nature Climate Change explores how plans to avoid the worst outcomes of a warming planet could bring their own side effects.

The handful of models that the United Nations Intergovernmental Panel on Climate Change and decision makers around the world trust to develop strategies to meet carbon neutrality commitments all assume negative emissions technologies will be available as part of the solution.

Negative emissions technologies, often called NETs, remove carbon dioxide from the atmosphere. The three most widely studied approaches are bioenergy with carbon capture and storage, which entails growing crops for fuel, then collecting and burying the CO2 from the burned biomass; planting more forests; and direct air capture, an engineered process for separating CO2 from the air and storing it permanently, likely underground.

"The trouble is, nobody has tried these technologies at the demonstration scale, much less at the massive levels necessary to offset current CO2 emissions," said Andres Clarens, a professor in UVA Engineering's Department of Engineering Systems and Environment and associate director of UVA's pan-University Environmental Resilience Institute. The institute partially funded the research leading to the Nature Climate Change paper.

"Our paper quantifies their costs so we can have an honest conversation about it before we start doing this on a large scale," Clarens said.

Since the Paris Agreement to limit global warming to 1.5 degrees Celsius, hammered out by world leaders in 2015, a growing number of corporations such as BP and many institutions and governments -- including UVA and Virginia -- have committed to reaching zero carbon emissions in the next few decades. Microsoft has pledged to erase its carbon emissions since its founding in 1975.

To Clarens, an engineer who studies carbon management, and his fellow researchers, these are encouraging developments. Led by Clarens' Ph.D. student Jay Fuhrman, the group also includes economist Haewon McJeon and computational scientist Pralit Patel of the Joint Global Change Research Institute at the University of Maryland; UVA Joe D. and Helen J. Kington Professor of Environmental Sciences Scott C. Doney; and William M. Shobe, research director at the Weldon Cooper Center for Public Service and professor at UVA's Batten School of Leadership and Public Policy.

For the research, the team used an integrated model -- one of those the United Nations relies on -- called the Global Change Assessment Model. The model was developed at the University of Maryland, which partners with the Pacific Northwest National Laboratory to run the Joint Global Change Research Institute. They compared the effects of the three negative emissions technologies on global food supply, water use and energy demand. The work looked at the role having direct air capture available would have on future climate scenarios.

Biofuels and reforestation take up vast land and water resources needed for agriculture and natural areas; biofuels also contribute to pollution from fertilization. Direct air capture uses less water than planting biofuels and trees, but it still demands a lot of water and even more energy -- largely supplied by fossil fuels, offsetting some of the benefits of carbon dioxide removal. Until recently, direct air technologies also were considered too expensive to include in emissions reduction plans.

The team's analysis shows that direct air capture could begin removing up to three billion tons of carbon dioxide from the atmosphere per year by 2035 -- more than 50% of U.S. emissions in 2017, the most recent year for which reliable data was available. But even if government subsidies make rapid and widespread adoption of direct air capture feasible, we'll need biofuels and reforestation to meet CO2 reduction goals. The analysis showed staple food crop prices will still increase approximately threefold globally relative to 2010 levels and fivefold in many parts of the world where inequities in the cost of climate change already exist.

"Direct air capture can soften -- but not eliminate -- the sharpest tradeoffs resulting from land competition between farmland and land needed for new forests and bioenergy," Fuhrman and Clarens wrote in a blog accompanying the release of the paper.

The costs that remain increase with time, making determined, multipronged actions toward reducing carbon dioxide emissions and removing it from the atmosphere all the more urgent, the researchers argue.

"We need to move away from fossil fuels even more aggressively than many institutions are considering," Clarens said. "Negative emissions technologies are the backstop the UN and many countries expect will one day save us, but they will have side effects we have to be prepared for. It's a huge gamble to sit on our hands for the next decade and say, we've got this because we're going to deploy this technology in 2030, but then it turns out there are water shortages, and we can't do it."

"Before we bet the house, let's understand what the consequences are going to be," Fuhrman added. "This research can help us sidestep some of the pitfalls that could arise from these initiatives."

Credit: 
University of Virginia School of Engineering and Applied Science

Crossbreeding of Holstein cows improves fertility without detriment to milk production

image: This study establishes the Viking Red and Montbéliarde breeds as highly complementary for crossbreeding with Holsteins and well suited for milk production in high-performance dairy herds.

Image: 
A.R. Hazel, B.J. Heins, and L.B. Hansen

Philadelphia, August 24, 2020 - Since 1960, Holstein dairy cows have exhibited a substantial decline in fertility, with serious economic consequences for farmers. Genetic selection programs in the United States and elsewhere have emphasized milk production at the expense of other traits. Attention has now turned to improving these neglected traits, both for the overall well-being of cows and to improve dairy producers' profitability. In a recent article appearing in the Journal of Dairy Science, scientists from the University of Minnesota examined the effects of crossbreeding on fertility and milk production across three generations in a large sample of Holstein and crossbred cows.

Although in recent years Holstein breeding programs have made strides toward remedying the problem of diminished fertility, crossbreeding is often seen as a possible means to achieve greater and more rapid gains, while eliminating concerns about inbreeding.

"A larger response in phenotypic fertility will be experienced over a shorter period of time from crossbreeding than from selection within a pure Holstein population," explained lead author Amy Hazel, PhD, University of Minnesota, St. Paul, MN, USA. Whether this is true, and whether crossbred cows can compete with Holsteins in a high-producing commercial dairy setting, were questions that the team investigated.

Purebred Holsteins were compared with cows from a three-breed rotation of Holstein with Viking Red and Montbéliarde in this 10-year study with initial enrollment of 3,550 Holstein cows from Minnesota commercial dairies. The team found that each combination of two- and three-breed crossbred cows demonstrated significant advantages over pure Holsteins for all fertility traits at each studied lactation. This confirmed expectations, but what about the possibility that milk production might be negatively affected by crossbreeding?

"Because of the global predominance of high-producing Holsteins, some dairy producers have been concerned that crossbred cows will have poorer milk production traits," observed Prof. Hazel. "But our study found little, if any, loss of fat and protein production for crossbred cows compared with their Holstein herdmates."

As dairy producers place increased emphasis on minimizing the major expenses for cows--including feed intake, repeated inseminations, health treatments, and premature replacement--this large and carefully designed study confirms that strategic crossbreeding can improve fertility of dairy herds, reduce costs of insemination, and result in more efficient milk production, without significant losses in milk composition.

An even larger effect is expected from the longer herd life of crossbred cows compared with pure Holsteins. Although further research remains to be performed, this study establishes the Viking Red and Montbéliarde breeds as highly complementary for crossbreeding with Holsteins and well suited for milk production in high-performance dairy herds.

Credit: 
Elsevier

Researchers develop flat lens a thousand times thinner than a human hair

image: The lens can be used to produce high-resolution images with a wide field of view. It can serve as a camera lens in smartphones and can be used in other devices that depend on sensors (high-resolution wide-angle selfie obtained using the metalens).

Image: 
Augusto Martins/USP

By José Tadeu Arantes | Agência FAPESP – A lens that is a thousand times thinner than a human hair has been developed in Brazil by researchers at the University of São Paulo’s São Carlos School of Engineering (EESC-USP). It can serve as a camera lens in smartphones or be used in other devices that depend on sensors.

“In the present technological context, its applications are almost unlimited,” Emiliano Rezende Martins, a professor in EESC-USP’s Department of Electrical Engineering and Computing and last author of a published paper on the invention, told Agência FAPESP.

The paper is entitled “On Metalenses with Arbitrarily Wide Field of View” and is published in ACS Photonics. The study was supported by FAPESP via a scholarship for a research internship abroad awarded to Augusto Martins, PhD candidate and lead author of the paper.

The lens consists of a single nanometric layer of silicon on arrays of nanoposts that interact with light. The structure is printed by photolithography, a well-known technique used to fabricate transistors.

This kind of lens is known as a metalens. Metalenses were first developed ten years ago and achieve the highest resolution that is physically feasible, using an ultrathin array of tiny waveguides called a metasurface that bends light as it passes through the lens.

According to Rezende Martins, metalenses have long faced the problem that the angle of view is extremely small (less than 1°). “One way to solve the problem is to combine metalenses, forming complex structures,” he said.

Based on the realization that, in a conventional lens, an increase in the refractive index increases the field of view in proportion to the flatness of the lens, the authors designed a metalens that mimics a totally flat lens with an infinite refractive index, which could not be obtained with a conventional lens.

“Our lens has an arbitrary field of view, which ideally can reach 180° without image distortion,” Rezende Martins said. “We’ve tested its effectiveness for an angle of 110°. With wider angles of view, light energy decreases owing to the shadow effect, but this can be corrected by post-processing.”

Combining metalenses prevents super-resolution, but the resolution obtained is sufficient for all conventional applications. Martins tested the metalens with a 3D printed camera and obtained high-resolution images with a wide field of view. “So far we’ve only succeeded in photographing in green, but in the months ahead we’ll upgrade the lens so that all colors are feasible,” he said.

The article “On metalenses with arbitrarily wide field of view” can be retrieved from: pubs.acs.org/doi/10.1021/acsphotonics.0c00479.

DOI

10.1021/acsphotonics.0c00479

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Punitive sentencing led to higher incarceration rates throughout adulthood for certain birth cohorts in North Carolina

Although U.S. crime rates have dropped significantly since the mid-1990s, rates of incarceration peaked in 2008, and still remain high. The standard explanation for this pattern is that all people exposed to the criminal justice system today are treated more harshly than before. A new study using 45 years of incarceration data from North Carolina suggests an alternative explanation: this pattern is driven by the prolonged involvement in the criminal justice system by members of Generation X, who came of age during the 1980s and early 1990s.

The study, by researchers at the University at Albany SUNY, RAND Corporation, and the University of Pennsylvania, appears in Criminology, a publication of the American Society of Criminology.

"Birth cohorts who were young adults during the crime and punishment boom that occurred in the 1980s and 1990s have higher rates of incarceration throughout their lives, even after the crime-punishment wave ended," explains Shawn Bushway, a senior policy researcher at the RAND Corporation on leave from the University at Albany, who coauthored the study. "We believe that this occurs because these individuals accumulated an extended criminal history under a determinate sentencing regime - implemented in 1994 and still in effect today - that systematically increases punishment for individuals with prior convictions."

When members of this generation were convicted again in their 30's and 40's, they were much more likely to receive a prison sentence because of their prior records. One additional consequence is that the median age of people in prison has increased substantially during this time period, even for newly admitted prisoners.

The study examined individual-level data from public administrative records in North Carolina from 1972 to 2016, which included the sentencing and corrections records of 450,000 current and former state prison inmates, probationers, and parolees. The study found that the crime-punishment wave of the late 1980s and early 1990s increased rates of incarceration for all age groups during this time period. However, this shock was particularly big for Generation X who were in their peak crime years. This short-term shock became a long-term effect for Generation X, who faced increased levels of incarceration when they were convicted at later ages, because of their longer prior records. These effects existed for both Black people and White people, despite disproportionate increases in incarceration rates among Black people in the 1980s and 1990s.

"The criminal justice policies when an individual came of age as well as criminal behavior itself played a role in determining levels of criminal justice involvement in young adulthood, which generated further disadvantages in one's interaction with the criminal justice system later in life," notes Yinzhi Shen, PhD candidate at the University at Albany SUNY, who is the lead author on the study. "Policies to reduce the number of people imprisoned should pay attention to the ways in which current policies weigh prior criminal involvement."

Birth cohorts who are now in their peak years of criminal involvement - Generation Z/Zoomers - have dramatically lower incarceration rates than members of Generation X. In fact, their rates of incarceration resemble those of people who came of age in the early 1970's. These low rates of involvement should continue as they age into their late 20's and 30's, and aggregate incarceration rates should begin to drop even faster as Generation X exits the prison system, even without additional policy changes.

Among the study's limitations, the authors acknowledge, is that they were unable to investigate the behavioral mechanisms behind the differences between cohorts. In addition, because the United States does not operate under a single criminal justice system, more study is needed to determine whether similar patterns exist in other states.

Credit: 
Crime and Justice Research Alliance

UK/UPenn researchers provide insights into new form of dementia

image: Dr. Pete Nelson, M.D., Ph.D., of the Sanders-Brown Center on Aging, on July 2, 2020.

Image: 
Photo by Mark Cornelison | UKphoto

LEXINGTON, Ky. - Working with their colleagues at the University of Pennsylvania, researchers at the University of Kentucky have found that they can differentiate between subtypes of dementia-inducing brain disease. "For the first time we created criteria that could differentiate between frontotemporal dementia (FTD) and a common Alzheimer's 'mimic' called LATE disease," explained Dr. Peter Nelson of the Sanders-Brown Center on Aging at the University of Kentucky. He says they validated the criteria rigorously. The study was recently published in BRAIN: A Journal of Neurology. The first author of the paper was John L. Robinson of the University of Pennsylvania, and the corresponding author was Dr. Nelson.

This work comes in the wake of a large group effort, organized and funded by the National Institute on Aging, part of the National Institutes of Health (NIH), in which an international team of experts, including a strong contingent from the University of Kentucky, helped define LATE. LATE stands for "limbic-predominant age-related TDP-43 encephalopathy". LATE is a disease whose symptoms resemble those of Alzheimer's disease (those symptoms are referred to as "dementia"), but it is caused by different underlying processes in the brain.

LATE is important because it affects millions of people, approximately 40% of people over the age of 85. How was it recognized? Researchers around the world noticed that a large number of people who died in advanced age had symptoms of dementia without the telltale features of Alzheimer's disease ("plaques and tangles") in their brains at autopsy. Emerging research indicated that the protein TDP-43 contributed to that phenomenon.

"Dozens of different viruses and bacteria can cause pneumonia," explained Dr. Nelson. "So why would we think there is just one cause of dementia?"

With that question in mind, Nelson and colleagues set out to define diagnostic criteria and other guidelines for advancing future research into this newly-named dementia.

"We used to think that aging-related memory and thinking decline meant one thing: a disease called Alzheimer's disease. Now we know that the disease we were calling Alzheimer's disease is actually many different conditions, often in combination. This raises some questions: is it important to classify and differentiate those conditions? And, if so, how do we go about that? Cancer gives us inspiration, because in some ways that field of research is decades ahead of dementia research. In cancer, they found that different cancers are very different and respond differently to therapies, so it's worthwhile figuring out the complexity. We are now focusing likewise on the different diseases that cause dementia. A type of protein deposits in the brain, called TDP-43, is very harmful to the brain and contributes to a dementia syndrome with memory loss and thinking problems," said Nelson.

As researchers stated in this recent study, "There is a general agreement that millions of persons worldwide are affected by age-related TDP-43 proteinopathy." However, they say there are some serious gaps in the classification guidelines for the specific neurodegenerative disorders. That is why they wanted to closely compare frontotemporal dementia (FTD) and the mimic disease LATE. Ultimately, they found that the two do in fact have differentiating pathologic features.

"Until you can define a disease it's very hard to look for a cure. Now we have a better basis to help work toward a therapy," said Nelson.

With these findings now established Dr. Nelson is looking forward to working on the first clinical trial of LATE with his colleague at UK Dr. Greg Jicha. "This is a very exciting opportunity to test a medicine that could stop the disease in its tracks, and to treat our research volunteers here at U. Kentucky," explained Nelson.

Those working on the study say they are extremely grateful to the research volunteers, their families, and clinicians, as well as to the other researchers and the taxpayer/NIH support that made this work possible.

Credit: 
University of Kentucky

Hitting the nail on the head: overcoming therapeutic resistance in lung cancer

image: Dr. Robert Gemmill, a Hollings Cancer Center researcher at the Medical University of South Carolina

Image: 
Sarah Pack, Medical University of South Carolina

A protein highly expressed in lung cancer cells drives resistance to targeted therapies, report researchers at the Medical University of South Carolina in the Journal of Thoracic and Cardiovascular Surgery. In preclinical experiments, the researchers showed that inhibiting the protein caused the death of non-small cell lung cancer cells that had become resistant to therapy.

The MUSC Team was led by Chadrick E. Denlinger, M.D., who was then surgical director of the Lung Transplant Program at MUSC Health, and MUSC Hollings Cancer Center researcher Robert Gemmill, Ph.D., who is a professor emeritus in the Department of Medicine. Denlinger is now division chief of thoracic surgery at Indiana University but continues his collaboration with Gemmill.

Lung cancer accounts for a quarter of all cancer deaths, and non-small cell lung cancer makes up 84% of all lung cancer cases. Targeted therapies can be effective for a time against selected lung cancers, but resistance to these therapies soon develops.

A cancer cell is like a small factory with many moving parts working towards one common goal: survival and reproduction of the tumor at the expense of the patient.

A type of targeted drug, called a tyrosine kinase inhibitor, or TKI, works by inhibiting a specific, vital piece of machinery within the cell factory on which it is dependent. However, the factory has many fail-safes in place and can quickly rely on another piece of cellular machinery to continue to grow and survive, even in the presence of the TKI. The ability of a cancer cell to adapt to a new strategy to survive is called "genetic resistance."

When researchers developed TKIs for the treatment of cancers such as non-small cell lung adenocarcinoma (NSCLC), they had hoped they would become the "magic bullet" to treat the disease successfully.

"One of the benefits of TKIs is that they're much less toxic and are fairly beneficial -- we see a dramatic response and the tumors shrink," said Denlinger. "But a limitation is that these effects don't last very long before the cancer cells evolve new techniques to become resistant to the drug."

Due to such resistance, the survival outcomes for patients receiving TKIs are no better than those for patients receiving conventional chemotherapy. Consequently, the need to find treatments that can overcome that resistance is urgent.

Gemmill's group, which includes Cecile Nasarre, Ph.D., Anastasios Dimou, M.D., and a summer undergraduate, Rose Pagano, recently linked drug resistance in lung cancers to the expression of a cell surface co-receptor Neuropilin 2 (NRP2). Gemmill received pilot project funds from the South Carolina Clinical & Translational Research Institute for his work with NRP2.

"One of the earliest things we discovered was that the NRP2 variant protein, NRP2b, dramatically increased in lung cancer patients who became resistant to therapy," remarked Gemmill. "This gave us the first clue that it becomes upregulated in resistant tumors."

The investigators then performed a series of experiments in which they "knocked down" NRP2b from lung cancer cell lines that were capable of developing TKI resistance.

"When we knock down NRP2b, we lose the surviving drug-tolerant cells," said Gemmill. "And by reducing that population, we believe we will reduce the ability of the tumor to develop genetic resistance."

Next, they explored how NRP2b could be contributing to drug resistance in lung cancer cells. They started with GSK3, a molecule that's involved in many different activities within the cell and has been reported previously to interact with NRP2b during neuronal development. The investigators performed experiments to determine whether NRP2b interacts with GSK3B.

"You can think about GSK3B as a hammer," said Gemmill. "And this hammer has the job of hammering many different nails that are present in the cell. NRP2b is like the hand of the carpenter that directs that hammer to particular nails. NRP2b is using GSK3B as a hammer to drive very specific nails, and we want to stop that because those nails are driving tumor progression."

To better understand the specific nails that NRP2b and GSK3B are driving in lung cancer, the investigators performed experiments in which they measured how well lung cancer cells can migrate and survive in the presence of TKIs in the absence of these two players. With these experiments, they found that NRP2b needs GSK3B to promote cancer cell migration, an essential step in cancer progression, and drug resistance.

Now that the investigators have identified a mechanism by which cancer cells are becoming resistant to treatment, their next step will involve developing inhibitors. More specifically, they will try to develop inhibitors that interfere with the carpenter (NRP2) grabbing the hammer (GSK3B).

"Importantly, these inhibitors should not interfere with other functions of GSK3B, which will reduce potentially harmful off-target effects in a healthy cell," said Denlinger.

Currently, the team is working to test the toxicity and effectiveness of prototype drugs that could specifically disrupt the interaction between GSK3B and NRP2b. They are collaborating on this work with MUSC College of Pharmacy researchers Patrick M. Woster, Ph.D., chair of the Department of Drug Discovery & Biomedical Sciences, and associate professor Yuri K. Peterson, Ph.D.

"Ultimately we could find a way to improve therapy for cancer patients," said Denlinger. "A therapy that could extend the influence of TKIs and potentially reduce metastatic spread and extend the lives of patients."

Credit: 
Medical University of South Carolina

Texas A&M researchers create a contagion model to predict flooding in urban areas

Inspired by the same modeling and mathematical laws used to predict the spread of pandemics, researchers at Texas A&M University have created a model to accurately forecast the spread and recession process of floodwaters in urban road networks. With this new approach, researchers have created a simple and powerful mathematical approach to a complex problem.

"We were inspired by the fact that the spread of epidemics and pandemics in communities has been studied by people in health sciences and epidemiology and other fields, and they have identified some principles and rules that govern the spread process in complex social networks," said Dr. Ali Mostafavi, associate professor in the Zachry Department of Civil and Environmental Engineering. "So we ask ourselves, are these spreading processes the same for the spread of flooding in cities? We tested that, and surprisingly, we found that the answer is yes."

The findings of this study were recently published in Nature Scientific Reports.

The contagion model, Susceptible-Exposed-Infected-Recovered (SEIR), is used to mathematically model the spread of infectious diseases. In relation to flooding, Mostafavi and his team integrated the SEIR model with the network spread process in which the probability of flooding of a road segment depends on the degree to which the nearby road segments are flooded.

In the context of flooding, susceptible is a road that can be flooded because it is in a flood plain; exposed is a road that has flooding due to rainwater or overflow from a nearby channel; infected is a road that is flooded and cannot be used; and recovered is a road where the floodwater has receded.
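
The published model's exact transition rules and parameter values are not reproduced in this summary; the sketch below only illustrates the general idea of a discrete-time SEIR update on a road network, in which a susceptible road's chance of becoming exposed grows with the fraction of flooded neighbouring roads. The toy network and all rates are illustrative assumptions.

    # Illustrative network-SEIR sketch for flood spread on a road graph.
    # Transition rules and parameter values are assumptions, not the published model.
    import random

    random.seed(42)

    # Toy road network: road id -> neighbouring road ids
    neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

    state = {road: "S" for road in neighbours}   # S, E, I, R
    state[0] = "I"                               # initial flooded road, e.g. reported by a gauge

    BETA, SIGMA, GAMMA = 0.6, 0.5, 0.2           # exposure, flooding and recession rates (illustrative)

    for step in range(10):
        new_state = dict(state)
        for road, nbrs in neighbours.items():
            if state[road] == "S":
                flooded_fraction = sum(state[n] == "I" for n in nbrs) / len(nbrs)
                if random.random() < BETA * flooded_fraction:
                    new_state[road] = "E"        # rainwater or channel overflow reaches the road
            elif state[road] == "E" and random.random() < SIGMA:
                new_state[road] = "I"            # road is flooded and unusable
            elif state[road] == "I" and random.random() < GAMMA:
                new_state[road] = "R"            # floodwater has receded
        state = new_state
        print(step, state)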

The research team verified the model's use with high-resolution historical data of road flooding in Harris County during Hurricane Harvey in 2017. The results show that the model can monitor and predict the evolution of flooded roads over time.

"The power of this approach is it offers a simple and powerful mathematical approach and provides great potential to support emergency managers, public officials, residents, first responders and other decision makers for flood forecast in road networks," Mostafavi said.

The proposed model can achieve decent precision and recall for the spatial spread of the flooded roads.

"If you look at the flood monitoring system of Harris County, it can show you if a channel is overflowing now, but they're not able to predict anything about the next four hours or next eight hours. Also, the existing flood monitoring systems provide limited information about the propagation of flooding in road networks and the impacts on urban mobility. But our models, and this specific model for the road networks, is robust at predicting the future spread of flooding," he said. "In addition to flood prediction in urban networks, the findings of this study provide very important insights about the universality of the network spread processes across various social, natural, physical and engineered systems; this is significant for better modeling and managing cities, as complex systems."

The only limitation to this flood prediction model is that it cannot identify where the initial flooding will begin, but Mostafavi said there are other mechanisms in place such as sensors on flood gauges that can address this.

"As soon as flooding is reported in these areas, we can use our model, which is very simple compared to hydraulic and hydrologic models, to predict the flood propagation in future hours. The forecast of road inundations and mobility disruptions is critical to inform residents to avoid high-risk roadways and to enable emergency managers and responders to optimize relief and rescue in impacted areas based on predicted information about road access and mobility. This forecast could be the difference between life and death during crisis response," he said.

Civil engineering doctoral student and graduate research assistant Chao Fan led the analysis and modeling of the Hurricane Harvey data, along with Xiangqi (Alex) Jiang, a graduate student in computer science, who works in Mostafavi's UrbanResilience.AI Lab.

"By doing this research, I realize the power of mathematical models in addressing engineering problems and real-world challenges.

This research expands my research capabilities and will have a long-term impact on my career," Fan said. "In addition, I am also very excited that my research can contribute to reducing the negative impacts of natural disasters on infrastructure services."

Credit: 
Texas A&M University

Machines rival expert analysis of stored red blood cell quality

image: Figure A: Expert-supervised, deep learning-based automation of the conventional method for assessing RBC quality.

Figure B: Weakly-supervised, deep learning-based method in which neural networks learn without experts.

Image: 
Minh Doan, Joseph Sebastian, Tracey Turner, Jason Acker, Michael Kolios, Anne Carpenter

Each year, nearly 120 million units of donated blood flow from donor veins into storage bags at collection centres around the world. The fluid is packed, processed and reserved for later use. But once outside the body, stored red blood cells (RBCs) undergo continuous deterioration. By day 42 in most countries, the products are no longer usable.

For years, labs have used expert microscopic examinations to assess the quality of stored blood. How viable is a unit by day 24? How about day 37? Depending on what technicians' eyes perceive, answers may vary. This manual process is laborious, complex and subjective.

Now, after three years of research, a study published in the Proceedings of the National Academy of Sciences unveils two new strategies to automate the process and achieve objective RBC quality scoring -- with results that match and even surpass expert assessment.

The methodologies showcase the potential in combining artificial intelligence with state-of-the-art imaging to solve a longstanding biomedical problem. If standardized, it could ensure more consistent, accurate assessments, with increased efficiency and better patient outcomes.

Trained machines match expert human assessment

The interdisciplinary collaboration spanned five countries, twelve institutions and nineteen authors, including universities, research institutes, and blood collection centres in Canada, USA, Switzerland, Germany and the UK. The research was led by computational biologist Anne Carpenter of the Broad Institute of Harvard and MIT, physicist Michael Kolios of Ryerson University's Department of Physics, and Jason Acker of the Canadian Blood Services.

They first investigated whether a neural network could be taught to "see" in images of RBCs the same six categories of cell degradation as human experts could. To generate the vast quantity of images required, imaging flow cytometry played a crucial role. Joseph Sebastian, co-author and Ryerson undergraduate then working under Kolios, explains.

"With this technique, RBCs are suspended and flowed through the cytometer, an instrument that takes thousands of images of individual blood cells per second. We can then examine each RBC without handling or inadvertently damaging them, which sometimes happens during microscopic examinations."

The researchers used 40,900 cell images to train the neural networks on classifying RBCs into the six categories -- in a collection that is now the world's largest, freely available database of RBCs individually annotated with the various categories of deterioration.

When tested, the machine learning algorithm achieved 77% agreement with human experts. Although a 23% error rate might sound high, perfectly matching an expert's judgment in this test is impossible: even human experts agree with one another only 83% of the time. This fully-supervised machine learning model could therefore replace tedious visual examination by humans with little loss of accuracy.
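
As a rough picture of what such a fully supervised setup looks like in code, the sketch below trains a small convolutional network on labelled cell images. The architecture, image size and data are placeholders, not the networks or dataset used in the study.

    # Placeholder sketch of a fully supervised classifier for the six RBC categories.
    # Architecture, image size and data are illustrative, not the study's networks.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 6  # the six expert-defined degradation categories

    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, NUM_CLASSES),    # assumes 64x64 single-channel input images
    )

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One toy training step on random tensors standing in for annotated cell images.
    images = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print("training loss:", float(loss))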

Even so, the team wondered: could a different strategy push the upper limits of accuracy further?

Machines surpass human vision, detect cellular subtleties

In the study's second part, the researchers avoided human input altogether and devised an alternative, "weakly-supervised" deep learning model in which neural networks learned about RBC degradation on their own.

Instead of being taught the six visual categories used by experts, the machines learned solely by analyzing over one million images of RBCs, unclassified and ordered only by blood storage duration. Eventually, the machines correctly discerned features in single RBCs that correspond to the descent from healthy to unhealthy cells.
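
One minimal way to picture this weakly-supervised setup is a network trained to predict storage time directly from each cell image, with no expert categories involved; whatever features it learns along the way end up tracking the cells' gradual degradation. The code below is again a placeholder sketch under that assumption, not the published model.

    # Placeholder sketch of the weakly-supervised idea: predict storage duration
    # from the image alone, with storage time as the only label.
    import torch
    import torch.nn as nn

    regressor = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 32 * 32, 1),          # assumes 64x64 single-channel input images
    )

    images = torch.randn(8, 1, 64, 64)       # stand-ins for imaging flow cytometry frames
    storage_days = torch.rand(8, 1) * 42     # the only label: days in storage (0-42)

    loss = nn.MSELoss()(regressor(images), storage_days)
    loss.backward()
    print("regression loss:", float(loss))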

"Allowing the computer to teach itself the progression of stored red blood cells as they degrade is a really exciting development," says Carpenter, "particularly because it can capture more subtle changes in cells that humans don't recognize."

When benchmarked against other relevant measures, such as a biochemical assay, the weakly-supervised models predicted RBC quality better than the current six-category assessment method used by experts.

Deep learning strategies: Blood quality and beyond

Further training is still needed before the model is ready for clinical testing, but the outlook is promising. The fully-supervised machine learning model could soon automate and streamline the current manual method, minimizing sample handling, discrepancies and procedural errors in blood quality assessments.

The second, alternative weakly-supervised framework may further eliminate human subjectivity from the process. Objective, accurate blood quality predictions would allow doctors to better personalize blood products to patients. Beyond stored blood, the time-based deep learning strategy may be transferable to other applications involving chronological progression, such as the spread of cancer.

"People used to ask what the alternative is to the manual process," says Kolios. "Now, we've developed an approach that integrates cutting-edge developments from several disciplines, including computational biology, transfusion medicine, and medical physics. It's a testament to how technology and science are now interconnecting to solve today's biomedical problems."

Credit: 
Ryerson University - Faculty of Science

NASA tracking Tropical Storm Laura near Cuba

image: On Aug. 24 at 1:30 p.m. EDT, NASA's Terra satellite provided a visible image of Tropical Storm Laura centered north of the Cayman Islands.

Image: 
NASA Worldview

As Tropical Storm Laura continues to move through the Caribbean Sea, NASA satellites are providing forecasters with visible, infrared and microwave data. Laura is marching toward the Gulf of Mexico.

Warnings and Watches on August 24, 2020

NOAA's National Hurricane Center (NHC) issued many warnings and watches on Aug. 24. A Tropical Storm Warning is in effect for Little Cayman and Cayman Brac; the Cuban provinces of Camaguey, Las Tunas, Ciego De Avila, Sancti Spiritus, Villa Clara, Cienfuegos, Matanzas, Mayabeque, La Habana, Artemisa, Pinar del Rio, and the Isle of Youth; and for the Florida Keys from Craig Key to Key West and the Dry Tortugas.

Hurricane and Storm Surge Watches will likely be required for portions of the U.S. northwest Gulf coast area by evening on Aug. 24.

NASA's Infrared Data Reveals Heavy Rainmakers

Very powerful storms with heavy rainmaking capability reach high into the atmosphere and those have very cold cloud top temperatures. Infrared imagery from NASA's Terra satellite measured those temperatures and found powerful storms in Tropical Storm Laura drenching parts of Jamaica.

Tropical cyclones are made of up hundreds of thunderstorms, and infrared data can show where the strongest storms are located. That is because infrared data provides temperature information, and the strongest thunderstorms that reach highest into the atmosphere have the coldest cloud top temperatures.

On Aug. 23 at 11:45 p.m. EDT (Aug. 24 at 0345 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Terra satellite used infrared light to analyze the strength of storms within Laura. The most powerful thunderstorms were in a small area around Laura's center, just north of Jamaica, where cloud top temperatures were as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius). Strong storms with cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius) surrounded those areas and were affecting Jamaica. They were also dropping large amounts of rain.

Forecasters at NOAA's NHC use NASA's infrared data in their forecast. Laura is expected to produce rainfall total accumulations through today of 4 to 6 inches, with maximum amounts of 10 inches in Jamaica, Cuba and the Cayman Islands. Across the Greater Antilles this heavy rainfall could lead to life-threatening flash and urban flooding, and the potential for mudslides.

NASA's Visible Look at Laura

On Aug. 24 at 1:30 p.m. EDT, the MODIS instrument aboard NASA's Terra satellite provided a visible image of Tropical Storm Laura in the Caribbean Sea. At that time, the center of circulation appeared to be south of Cuba and north of the Cayman Islands. Laura had moved past Jamaica.

Forecasters at the National Hurricane Center looking at visible imagery noted, "Laura's satellite presentation has degraded somewhat since yesterday, however, there has been a recent increase in convection near the center, and a large band over the southern periphery of the circulation.  It appears that the combination of land interaction, moderate northerly [vertical wind] shear, and some dry air has caused the change in structure."

Laura's Status on August 24, 2020

At 11 a.m. EDT (1500 UTC), the center of Tropical Storm Laura was located near latitude 21.2 degrees north and longitude 80.6 degrees west. That is about 65 miles (105 km) east-southeast of Cayo Largo, Cuba.

Laura was moving toward the west-northwest near 20 mph (31 kph), and this general motion with some decrease in forward speed is expected over the next day or so. NOAA and Air Force reconnaissance aircraft indicate that the maximum sustained winds are near 60 mph (95 kph) with higher gusts. Little change in strength is forecast today, but strengthening is expected when the storm moves over the Gulf of Mexico, and Laura is forecast to become a hurricane on Tuesday, with additional strengthening forecast on Wednesday. The estimated minimum central pressure based on reconnaissance aircraft data is 1002 millibars.
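As a rough, unofficial sanity check of the advisory figures above (this is not NHC's method, and the Cayo Largo coordinates below are approximate values assumed for illustration), the great-circle distance and the wind-speed conversion can be reproduced with a short Python sketch:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        r = 6371.0  # mean Earth radius, km
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    laura_center = (21.2, -80.6)   # position from the 11 a.m. EDT advisory
    cayo_largo = (21.6, -81.5)     # approximate coordinates, assumed for illustration

    print(f"distance to Cayo Largo: {haversine_km(*laura_center, *cayo_largo):.0f} km")  # roughly 103 km
    print(f"60 mph = {60 * 1.609344:.0f} km/h")  # roughly 97 km/h; the advisory lists this as 95 kph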

Laura's Forecast from NHC

NHC Senior Hurricane Forecaster Dan Brown noted in the 11 a.m. EDT discussion, "Laura is forecast to pass over the very warm water of the extreme northwestern Caribbean Sea just south of the coast of Cuba today, and some modest strengthening is possible before the center moves over the western portion of Cuba this evening. Laura is then forecast to emerge over the southeastern Gulf of Mexico overnight where a combination of warm sea surface temperatures and a favorable upper-level environment are expected to allow for steady strengthening."

Credit: 
NASA/Goddard Space Flight Center

New approach to soft material flow may yield way to new materials, disaster prediction

image: A new study from engineers at the University of Illinois, Urbana-Champaign uses simple experiments to explain how a better understanding of the flow of soft materials will help in designing new materials and could help predict some natural disasters.

Image: 
Photo courtesy of the U.S. Geological Survey

CHAMPAIGN, Ill. -- How does toothpaste stay in its tube and not ooze out when we remove the cap? What causes seemingly solid ground to suddenly break free into a landslide? Defining exactly how soft materials flow and seize has eluded researchers for years, but a new study explains this complex motion using relatively simple experiments. The ability to define - and eventually predict - soft material flow will benefit people dealing with everything from spreadable cheese to avalanches.

The study, which was performed at the University of Illinois, Urbana-Champaign, is published in the Proceedings of the National Academy of Sciences.

"We are finding that soft material flow is more of a gradual transition rather than the abrupt change the current models suggest," said chemical and biomolecular engineering professor Simon Rogers, who led the study and is an affiliate of the Beckman Institute for Advanced Science and Technology at the U. of I.

The team developed a new testing protocol that allows researchers to measure the individual solidlike and liquidlike behaviors of these materials separately - something never done before, said Gavin Donley, a graduate student and lead author of the study.

In the lab, the team subjected a variety of different soft materials - a polymer microgel, xanthan gum, a glasslike material and a filled polymer solution - to shear stress and measured the individual solidlike and liquidlike strain responses using a device called a rheometer.

"Our experiments show us a much more detailed and nuanced view of soft material flow," Donley said. "We see a continuous transition between the solid and liquid states, which tells us that the traditional models that describe an abrupt change in behavior are oversimplified. Instead, we see two distinct behaviors that reflect energy dissipation via solid and fluid mechanisms."

The team's immediate goal is to turn this experimental observation into a theoretical model that predicts soft material motion, Rogers said.

"The existing models are insufficient to describe the phenomena that we have observed," he said. "Our new experiments are more time-consuming, but they give us remarkable clarity and understanding of the process. This will allows us to push soft materials research forward in a slightly different direction than before. It could help predict the behaviors of novel materials, of course, but also help with civil engineering challenges like mudslides, dam breaks and avalanches."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Storing information in antiferromagnetic materials

image: The microscopic magnetic moments in antiferromagnetic materials have alternating orientations, in contrast to those of ferromagnets.

Image: 
ill./©: Lorenzo Baldrati, JGU

Researchers at Mainz University were able to show that information can be stored in antiferromagnetic materials and to measure the efficiency of the writing operation

We all store more and more information, while our devices are supposed to become ever smaller. However, with continuous technological improvement, conventional silicon-based electronics is rapidly reaching its limits, for example physical limits such as the bit size or the number of electrons required to store information. Spintronics, and antiferromagnetic materials in particular, offers an alternative: information is carried not only by electrons themselves but also by their spin, which encodes magnetic information. In this way, twice as much information can be stored in the same space. Until now, however, it has been disputed whether information can be stored electrically in antiferromagnetic materials at all.

Physicists unveil the potential of antiferromagnetic materials

Researchers at Johannes Gutenberg University Mainz (JGU), in collaboration with Tohoku University in Sendai, Japan, have now been able to prove that it works: "We were not only able to show that information storage in antiferromagnetic materials is fundamentally possible, but also to measure how efficiently information can be written electrically in insulating antiferromagnetic materials," said Dr. Lorenzo Baldrati, Marie Skłodowska-Curie Fellow in Professor Mathias Kläui's group at JGU. For their measurements, the researchers used the antiferromagnetic insulator cobalt oxide (CoO), a model material that paves the way for applications. The result: currents are much more efficient than magnetic fields at manipulating antiferromagnetic materials. This discovery opens the way to applications ranging from smart cards that cannot be erased by external magnetic fields to ultrafast computers, thanks to the superior properties of antiferromagnets over ferromagnets. The research paper was recently published in Physical Review Letters. In further steps, the researchers at JGU want to investigate how quickly information can be saved and how small the written memory regions can be made.

Active German-Japanese exchange

"Our longstanding collaboration with the leading university in the field of spintronics, Tohoku University, has generated another exciting piece of work", emphasized Professor Mathias Kläui. "With the support of the German Exchange Service, the Graduate School of Excellence Materials Science in Mainz, and the German Research Foundation, we initiated a lively exchange between Mainz and Sendai, working with theory groups at the forefront of this topic. We have opportunities for first joint degrees between our universities, which is noticed by students. This is a next step in the formation of an international team of excellence in the burgeoning field of antiferromagnetic spintronics."

Credit: 
Johannes Gutenberg Universitaet Mainz