Tech

Caribbean coral reefs under siege from aggressive algae

image: Orange peyssonnelid algal crusts spreading over a lobe of Orbicella annularis at 14-meter depth on the Tektite reef on the southern shore of St. John, U.S. Virgin Islands.

Image: Courtesy of Peter Edmunds.

Baltimore, MD--Human activity endangers coral health around the world. A new algal threat is taking advantage of coral's already precarious situation in the Caribbean and making it even harder for reef ecosystems to grow.

Just-published research in Scientific Reports details how an aggressive, golden-brown, crust-like alga is rapidly overgrowing shallow reefs, taking the place of coral that was damaged by extreme storms and exacerbating the damage caused by ocean acidification, disease, pollution, and bleaching.

For the past four years, the University of Oxford's Bryan Wilson, Carnegie's Chen-Ming Fan, and California State University Northridge's Peter Edmunds have been studying the biology and ecology of peyssonnelid algal crusts, or PAC, in the U.S. Virgin Islands. These crusts are out-competing coral larvae for limited surface space and then growing over the existing reef architecture, greatly damaging these fragile ecosystems.

"This alga seems to be something of an ecological winner in our changing world," said lead author Wilson, noting that the various other threats to coral communities make them more susceptible to the algal crusts.

Edmunds first took note of the crusts' invasive growth in the wake of category 5 hurricanes Irma and Maria when they were rapidly taking over spaces that had been blasted clean by the storms.

Corals are marine invertebrates that build large exoskeletons from which reefs are constructed. To grow new reef structures, free-floating baby corals first have to successfully attach to a stable surface. They prefer to settle on the crusty surface created by a friendly type of algae that grows on the local rocks. These crustose coralline algae, or CCA, act as guideposts for coral larvae, producing biochemical signals, along with their associated microbial community, that entice the baby coral to affix itself.

What puzzled the researchers is that both the destructive PAC and the helpful CCA grow on rocks and form crusts, yet PAC excludes coral settlement while CCA entices it. What drives this difference?

The team set out to determine how the golden-brown PAC affects Caribbean coral reefs, and found that the PAC harbors a microbial community that is distinct from the one associated with CCA, which is known to attract corals.

"These PAC crusts have biochemical and structural defenses that they deploy to deter grazing from fish and other marine creatures," explained Fan. "It is possible that these same mechanisms, which make them successful at invading the marine bio-space, also deter corals."

More research is needed to understand why the algal crusts have been so successful at taking over Caribbean reef communities and to find ways to mitigate the risk that they pose.

"There is a new genomic and evolutionary frontier to explore to help us understand the complexity of organismal interactions on the reef, both mutualistic and antagonistic," added Fan.

Edmunds concluded: "The coral and their ecosystem are so fragile as it is. They are under assault by environmental pollution and global warming. We have made their lives so fragile, yet they are sticking in there. And now this gets thrown into the mix. We don't know if this is the straw that breaks the camel's back, but we need to find out."

Credit: 
Carnegie Institution for Science

Discoveries highlight new possibilities for magnesium batteries

image: Researchers from the University of Houston and the Toyota Research Institute of North America have reported a breakthrough in the development of magnesium batteries, allowing them to deliver a power density comparable to that of lithium-ion batteries.

Image: University of Houston

Magnesium batteries have long been considered a potentially safer and less expensive alternative to lithium-ion batteries, but previous versions have been severely limited in the power they delivered.

Researchers from the University of Houston and the Toyota Research Institute of North America (TRINA) report in Nature Energy that they have developed a new cathode and electrolyte - previously the limiting factors for a high-energy magnesium battery - to demonstrate a magnesium battery capable of operating at room temperature and delivering a power density comparable to that offered by lithium-ion batteries.

As the need for grid-scale energy storage and other applications becomes more pressing, researchers have sought less expensive and more readily available alternatives to lithium.

Magnesium ions hold twice the charge of lithium ions while having a similar ionic radius, so they interact far more strongly with the surrounding electrolyte and electrode materials. As a result, magnesium dissociation from electrolytes and its diffusion in the electrode, two essential processes that take place in classical intercalation cathodes, are sluggish at room temperature, leading to low power performance.
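A rough way to see why the doubled charge slows these processes is the Born solvation model, a textbook simplification offered here only as an illustration (the actual electrolyte and lattice chemistry is more involved). The free energy of solvating an ion of charge number z and radius r in a medium of relative permittivity ε_r scales with the square of the charge:

```latex
% Born model of ion solvation (illustrative simplification):
% solvation free energy grows with the square of the ion's charge number z
\[
\Delta G_{\mathrm{solv}} \approx
  -\frac{z^{2} e^{2}}{8\pi\varepsilon_{0} r}
  \left(1 - \frac{1}{\varepsilon_{r}}\right)
\]
```

With z = 2 for Mg²⁺ versus z = 1 for Li⁺ at a similar radius r, the binding to solvent molecules and to host-lattice sites is roughly four times stronger, which is why both dissociation from the electrolyte and solid-state diffusion face much higher barriers at room temperature.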

One approach to addressing these challenges is to run the chemical reactions at elevated temperatures; the other circumvents the difficulties by storing magnesium cations in their complex forms. Neither approach is practical.

Yan Yao, Cullen Professor of Electrical and Computer Engineering at the University of Houston and co-corresponding author for the paper, said the groundbreaking results came from combining both an organic quinone cathode and a new tailored boron cluster-based electrolyte solution.

"We demonstrated a heterogeneous enolization redox chemistry to create a cathode which is not hampered by the ionic dissociation and solid-state diffusion challenges that have prevented magnesium batteries from operating efficiently at room temperature," Yao said. "This new class of redox chemistry bypasses the need of solid-state intercalation while solely storing magnesium, instead of its complex forms, creating a new paradigm in magnesium battery electrode design."

Yao, who is also a principal investigator with the Texas Center for Superconductivity at UH (TcSUH), is a leader in the development of multivalent metal-ion batteries. His group recently published a review article in Nature Energy on the roadmap to better multivalent batteries.

TRINA researchers have made tremendous advancements in the magnesium battery field, including developing highly recognized, efficient electrolytes based on boron cluster anions. However, these electrolytes had limitations in supporting high battery cycling rates.

"We had hints that electrolytes based on these weakly coordinating anions in principle could have the potential to support very high cycling rates, so we worked on tweaking their properties," said Rana Mohtadi, a Principal Scientist in the materials research department at TRINA and co-corresponding author. "We tackled this by turning our attention to the solvent in order to reduce its binding to the magnesium ions and improve the bulk transport kinetics."

"We were fascinated that the magnesium plated from the modified electrolyte remained smooth even under ultrahigh cycling rates. We believe this unveils a new facet in magnesium battery electrochemistry."

The work is in part a continuation of earlier efforts described in 2018 in Joule and involved many of the same researchers. In addition to Yao and Mohtadi, coauthors include first authors Hui Dong, formerly a member of Yao's lab and now a post-doctoral researcher at the University of Texas at Austin, and Oscar Tutusaus of TRINA; Yanliang Liang and Ye Zhang of UH and TcSUH; and Zachary Lebens-Higgins and Wanli Yang of the Lawrence Berkeley National Laboratory. Lebens-Higgins also is affiliated with Binghamton University.

"The new battery delivers a power density nearly two orders of magnitude higher than that achieved by previous magnesium batteries," Dong said. "The battery was able to continue operating for over 200 cycles with around 82% capacity retention, showing high stability. We can further improve cycling stability by tailoring the properties of the membrane with enhanced intermediate trapping capability."

Tutusaus said the work suggests the next steps toward high-performance magnesium batteries.

"Our results set the direction for developing high-performance cathode materials and electrolyte solutions for magnesium batteries and unearth new possibilities for using energy-dense metals for fast energy storage," he said.

Credit: 
University of Houston

Hackensack University Medical Center cancer specialist demonstrates safety of novel agent

Hackensack, N.J., November 30, 2020 - A novel therapy designed to help stimulate the body's immune system response against cancer appears to be safe to use alone or in combination with immune checkpoint inhibitors, according to an early phase clinical trial led by Martin Gutierrez, M.D., chief medical oncologist of the thoracic division at Hackensack University Medical Center's John Theurer Cancer Center, a member of the Georgetown Lombardi Comprehensive Cancer Center Consortium.

The findings, published in the online edition of Clinical Cancer Research on November 4 (https://clincancerres.aacrjournals.org/content/early/2020/11/04/1078-0432.CCR-20-1830), suggest that BMS-986178, an investigational OX40 agonist, has an acceptable safety profile in patients with advanced solid tumors, whether used as monotherapy or in combination with the checkpoint inhibitors nivolumab (Opdivo®) and/or ipilimumab (Yervoy®). The rationale for using an OX40 agonist in this setting is based on its binding to the OX40 protein receptor found on memory T-cells, which can trigger a signal associated with production of additional T-cells. The newly published results appear to clear the way for the continued development of BMS-986178, starting with a Phase 2 breast cancer study.

"We were pleasantly surprised to find that T-cell stimulation with an OX40 agonist did not exacerbate the inflammatory response and other off-target effects of checkpoint inhibitors," said Dr. Gutierrez, the study's lead investigator and first author of the Clinical Cancer Research paper. "Indeed, the adverse events we observed in the monotherapy and combination cohorts were manageable, suggesting that BMS-986178 is safe to use in combination with checkpoint inhibitor therapy, and possibly also with a cancer vaccine."

Bristol-Myers Squibb is developing BMS-986178 as a potential treatment for patients with solid tumors. BMS-986178 binds with high affinity to the OX40 receptor, a member of the tumor necrosis factor receptor superfamily (TNFRSF), which includes several proteins with key roles in T-cell development and survival, immune activation, and anti-tumor immune responses.

"While nivolumab and ipilimumab have established a niche as viable treatments for several solid tumor types, many patients develop resistance to these checkpoint inhibitors, underscoring the need for novel immuno-oncology strategies," explained Dr. Gutierrez. "In theory, adding an OX40 agonist to checkpoint inhibitor therapy would modulate the immunosuppression that occurs in the tumor microenvironment while enhancing the T-cell response. We tested whether we could safely combine these two immunomodulatory approaches."

Dr. Gutierrez and colleagues conducted an open-label Phase 1/2a study of BMS-986178 (at doses ranging from 20-320 mg), both as monotherapy and in combination with nivolumab (240-480 mg) and/or ipilimumab (1-3 mg/kg of body weight), in patients with non-small cell lung cancer, renal cell carcinoma, bladder cancer, and other advanced solid tumors. Twenty patients were treated with BMS-986178 monotherapy, and 145 received various combination regimens.

After follow-up for as long as 103 weeks, the most common treatment-related adverse events (TRAEs) included fatigue, itching, rash, rise in body temperature, diarrhea, and infusion-related reactions. Overall, serious (Grade 3-4) TRAEs occurred in 1 of 20 patients (5%) receiving BMS-986178 monotherapy, 6 of 79 (8%) receiving BMS-986178 plus nivolumab, 0 of 2 receiving nivolumab monotherapy, 6 of 41 (15%) receiving BMS-986178 plus ipilimumab, and 3 of 23 (13%) receiving BMS-986178 in combination with both checkpoint inhibitors.

No deaths occurred in the study. There were no dose-limiting toxicities (side effects serious enough to prevent an increase in dose or level of treatment) observed with monotherapy. The maximum tolerated dose - the highest dose of a drug that does not cause unacceptable side effects - was not reached with escalating doses of either BMS-986178 monotherapy or any of the combination regimens.

The investigators did not observe objective tumor responses with BMS-986178 monotherapy. Objective response rates ranged from 0% to 13% among patients receiving combination therapy.

"We were somewhat surprised that the efficacy signals observed in preclinical trials did not translate to more robust efficacy in this first-in-human trial," commented Dr. Gutierrez. "But our findings underscore the complexity of the immune system, and the importance of maintaining a delicate balance between the T-cell compartment and antigen-presenting cells in the tumor microenvironment.

"Our findings suggest we may be able to use OX40 activation as part of a priming intervention early in the course of certain cancers," Dr. Gutierrez continued. "For example, we could use a cancer vaccine to prime T-cell stimulation, and then use an OX40 agonist to enhance T-cell activity." As a next step, Dr. Gutierrez and colleagues plan to pursue such an approach in a Phase 2 study involving patients with triple-negative breast cancer, a type of breast cancer in which the tumor cells lack estrogen receptors, progesterone receptors, and large amounts of HER2/neu protein on their surface, and which has a more aggressive course than other breast cancers.

"Dr. Gutierrez, leading our Experimental Drugs program through phase I, has been at the forefront of immunotherapy, particularly in the use of checkpoint inhibitors that unleash the immune system and are now approved across many cancer subtypes," said Andre Goy, M.D., M.S., chairman and director of JTCC and physician in chief for Oncology at Hackensack Meridian Health. "Our goal at JTCC is to explore additional combinations to re-engage the immune system to fight against cancer, and to increase the number of patients who can benefit from such potentially game-changing therapies, which are typically very durable, even after patients fail to respond to multiple lines of therapies," added Dr. Goy, who is a Principal Investigator on a Bristol Myers Squibb research study that is unrelated to this study.

Credit: 
Hackensack Meridian Health

Older adults with dementia exhibit financial 'symptoms' up to six years before diagnosis

A new study led by researchers at the Johns Hopkins Bloomberg School of Public Health and the Federal Reserve Board of Governors found that Medicare beneficiaries who go on to be diagnosed with dementia are more likely to miss payments on bills as early as six years before a clinical diagnosis.

The study also found that beneficiaries diagnosed with dementia who had lower educational attainment missed payments on bills beginning as early as seven years before a clinical diagnosis, compared with 2.5 years prior to diagnosis for beneficiaries with higher educational attainment.

The study, which included researchers from the University of Michigan Medical School, also found that these missed payments and other adverse financial outcomes lead to increased risk of developing subprime credit scores starting 2.5 years before a dementia diagnosis. Subprime credit scores fall in the fair and lower range.

The findings, published online November 30 in JAMA Internal Medicine, suggest that financial symptoms such as missing payments on routine bills could be used as early predictors of dementia and highlight the benefits of earlier detection.

"Currently there are no effective treatments to delay or reverse symptoms of dementia," says lead author Lauren Hersch Nicholas, PhD, associate professor in the Department of Health Policy and Management at the Bloomberg School. "However, earlier screening and detection, combined with information about the risk of irreversible financial events, like foreclosure and repossession, are important to protect the financial well-being of the patient and their families."

The analysis found that the elevated risk of payment delinquency associated with dementia accounted for 5.2 percent of delinquencies six years prior to diagnosis, reaching a maximum of 17.9 percent nine months after diagnosis. Rates of elevated payment delinquency and subprime credit risk persisted for up to 3.5 years after beneficiaries received dementia diagnoses, suggesting an ongoing need for assistance managing money.

Dementia, identified as diagnostic codes for Alzheimer's Disease and related dementias in the study, is a progressive brain disorder that slowly diminishes memory and cognitive skills and limits the ability to carry out basic daily activities, including managing personal finances. About 14.7 percent of American adults over the age of 70 are diagnosed with the disease. The onset of dementia can lead to costly financial errors, irregular bill payments, and increased susceptibility to financial fraud.

For their study, the researchers linked de-identified Medicare claims and credit report data. They analyzed information on 81,364 Medicare beneficiaries living in single-person households, with 54,062 never receiving a dementia diagnosis between 1999 and 2014 and 27,302 with a dementia diagnosis during the same period. The researchers compared financial outcomes spanning 1999 to 2018 of those with and without a clinical diagnosis of dementia for up to seven years prior to a diagnosis and four years following a diagnosis. The researchers focused on missing payments for one or more credit accounts that were at least 30 days past due, and subprime credit scores, indicative of an individual's risk of defaulting on loans based on credit history.

To determine whether the financial symptoms observed were unique to dementia, the researchers also compared financial outcomes of missed payments and subprime credit scores to other health outcomes including arthritis, glaucoma, heart attacks, and hip fractures. They found no association of increased missed payments or subprime credit scores prior to a diagnosis for arthritis, glaucoma, or a hip fracture. No long-term associations were found with heart attacks.

"We don't see the same pattern with other health conditions," says Nicholas. "Dementia was the only medical condition where we saw consistent financial symptoms, especially the long period of deteriorating outcomes before clinical recognition. Our study is the first to provide large-scale quantitative evidence of the medical adage that the first place to look for dementia is in the checkbook."

Credit: 
Johns Hopkins Bloomberg School of Public Health

The surprising grammar of touch

A new study demonstrates that grammar is evident and widespread in a system of communication based on reciprocal, tactile interaction, reinforcing the notion that if one linguistic channel, such as hearing or vision, is unavailable, language will find another way to create formal categories. There are thousands of people across the US and all over the world who are DeafBlind. Very little is known about the diverse ways they use and acquire language, and what effects those processes have on the structure of language itself. This research suggests a way forward in analyzing those articulatory and perceptual patterns--a project that will broaden scientific understanding of what is possible in human language.

This research focuses on language usage that has become conventional across a group of DeafBlind signers in the United States and shows that those who communicate via reciprocal, tactile channels--a practice known as "Protactile"--make regular use of tactile grammatical structures. The study, "Feeling Phonology: The Conventionalization of Phonology in Protactile Communities in the United States" by Terra Edwards (Saint Louis University) and Diane Brentari (University of Chicago), will be published in the December 2020 issue of the scholarly journal Language. A link to the article may be found at https://www.linguisticsociety.org/sites/default/files/04_96.4Edwards.pdf. A discussion about the research and its implications for DeafBlind communities with Protactile experts John Lee Clark and Jelica B. Nuccio can be accessed here (free registration required): https://dbinterpreting.wou.edu/login/index.php

The article focuses on the basic units used to produce and perceive protactile expressions as well as patterns in how those units are, and are not, combined. Over the past 60 years, there has been a slow, steady paradigm shift in the field of linguistics toward understanding this level of linguistic structure, or "phonology," as the abstract component of a grammar, which organizes basic units without specific reference to communication modality. This article contributes to that shift, calling into question the very definition of phonology. The authors ask: Can the tactile modality sustain phonological structure? The results of the study suggest that it can.

In order to uncover the emergence of new grammatical structure in protactile language, pairs of DeafBlind research participants were asked to describe three objects to one another: a lollipop, a jack (the kind children use to play the game 'jacks'), and a complex wooden toy with movable arms, magnets, and magnetized pieces. The research team video-recorded their descriptions and then transcribed and annotated the videos, looking for patterns. They found that the early stages of the conventionalization of protactile phonology involve assigning specific grammatical roles to the hands (and arms) of Signer 1 (the conveyer of information) and Signer 2 (the receiver of information). It is the clear and consistent articulatory forms used by each of the four hands that launch the grammar in this case and allow for the rapid exchange of information. Analyzing these patterns offers new insights into how the conventionalization of a phonological system can play out in the tactile modality.

Credit: 
Linguistic Society of America

Covid-19 shutdowns disproportionately affected low-income Black households

image: Princeton University researchers now report that low-income Black households experienced greater job loss, more food and medicine insecurity, and higher indebtedness in the early months of Covid-19 compared to white or Latinx low-income households.

Image: Egan Jimenez, Princeton University

PRINCETON, N.J.--The alarming rate at which Covid-19 has killed Black Americans has highlighted the deeply embedded racial disparities in the U.S. health care system.

Princeton researchers now report that low-income Black households also experienced greater job loss, more food and medicine insecurity, and higher indebtedness in the early months of the pandemic compared to white or Latinx low-income households.

Published in the journal Socius, the paper provides the first systematic, descriptive estimates of the early impacts of Covid-19 on low-income Americans. The findings paint a picture of a deepening crisis: between March and mid-June 2020, an increasing number of low-income families reported insecurity. Then they took on more debt to manage their expenses.

The paper draws its data from "Fresh EBT," a budgeting app for families who receive Supplemental Nutrition Assistance Program (SNAP) benefits.

"Media coverage has focused on the racially disparate effects of Covid-19 as a disease, but we were interested in the socioeconomic effects of the virus, and whether it tracked a similar pattern," said study co-author Adam Goldstein, assistant professor of sociology and public affairs at Princeton's School of Public and International Affairs.

"It became clear that while all low-income households struggled in the early months of the pandemic, Black households in America were disproportionately affected. Even among low-income populations, there is a marked racial disparity in people's vulnerability to this crisis," said study co-author Diana Enriquez, a doctoral candidate in Princeton's Department of Sociology.

Enriquez and Goldstein set out to determine the economic impacts of Covid-19 on Americans of lesser means and the racial disparities within that socioeconomic group. They investigated a set of factors related to families' ability to satisfy basic needs including job loss, debt, housing instability, and food and medicine insecurity.

The researchers directly surveyed people who utilize SNAP and Temporary Assistance for Needy Families (TANF) benefits. Study participants, who were already low-income and benefits-eligible before Covid-19, were surveyed between the end of March and mid-June. Goldstein and Enriquez chose this time period because shutdowns were already beginning to affect Americans' economic livelihoods, but their economic status had not yet been completely transformed.

People were queried about their current and perceived status related to employment, housing, food and medicine accessibility, and debt load. For example, respondents were asked if they had stable housing, and if they believed their housing would remain stable over the following 30 days.

They found that people who receive government assistance experienced pronounced effects in all areas except housing. Nearly 35% of all respondents reported losing their jobs by mid-June.

Financial strain and debt accrual also worsened significantly: 67% of people said they skipped paying a bill at the beginning of the shutdown. In each survey wave between the end of April and mid-June, 77% of households reported missing a bill or rent payment. And, despite being covered by SNAP, 54% of people said they skipped meals, relied on family or friends for food, or visited a food pantry due to the Covid-19 shutdown. By the end of the month, this figure rose to 64%.

When the researchers looked at the data by race, it became clear that low-income Black households fared worse than low-income White households on average. Low-income Latinx respondents fared worse than White households on some indicators, but not on others.

At the beginning of April 2020, 30% of Black respondents reported that they or someone in their household had lost work during the shutdown. By the end of the month, that number increased to 48%. Likewise, 80% of Black households also reported taking on more debt to cover their bills by the end of April. In mid-June, rates of new debt were similar for Black and Latinx households (more than 80%), while approximately 70% of White households reported new debt.

"The survey results really reinforce the extent to which the Covid-19 crisis has kneecapped those households who were already in a tenuous position near the poverty line. Research shows that these types of debts and unpaid bills -- even small ones -- can compound over time and trap low-income households in a cycle of financial distress," Goldstein said.

"Even in a miraculous scenario where the pandemic ends in a few months and low-wage workers are rehired, tens of millions of households will still find themselves stuck in a financial hole without additional infusions of economic relief," he said.

The authors outline the study's limitations and possible future research avenues. First, the researchers focused on the prevalence of these insecurities, not their severity. They did not measure how many meals were being skipped, for example, or the compounding effects of additional debt. This, as well as other forms of insecurity like access to healthcare or treatment for Covid-19, could be addressed in future work.

Credit: 
Princeton School of Public and International Affairs

Microfluidic system with cell-separating powers may unravel how novel pathogens attack

image: An image of the in-droplet cell separation microfluidic chip, showing the microfluidic channels and electrodes. Enlarged view shows a host cell and pathogenic bacteria cells being separated to top and bottom within a single water-in-oil microdroplet.

Image: Dr. Arum Han/Texas A&M University College of Engineering

To develop effective therapeutics against pathogens, scientists need to first uncover how they attack host cells. An efficient way to conduct these investigations on an extensive scale is through high-speed screening tests called assays.

Researchers at Texas A&M University have invented a high-throughput cell separation method that can be used in conjunction with droplet microfluidics, a technique whereby tiny drops of fluid containing biological or other cargo can be moved precisely and at high speeds. Specifically, the researchers successfully isolated pathogens attached to host cells from those that were unattached within a single fluid droplet using an electric field.

"Other than cell separation, most biochemical assays have been successfully converted into droplet microfluidic systems that allow high-throughput testing," said Arum Han, professor in the Department of Electrical and Computer Engineering and principal investigator of the project. "We have addressed that gap, and now cell separation can be done in a high-throughput manner within the droplet microfluidic platform. This new system certainly simplifies studying host-pathogen interactions, but it is also very useful for environmental microbiology or drug screening applications."

The researchers reported their findings in the August issue of the journal Lab on a Chip.

Microfluidic devices consist of networks of micron-sized channels or tubes that allow for controlled movements of fluids. Recently, microfluidics using water-in-oil droplets have gained popularity for a wide range of biotechnological applications. These droplets, which are picoliters (or a million times less than a microliter) in volume, can be used as platforms for carrying out biological reactions or transporting biological materials. Millions of droplets within a single chip facilitate high-throughput experiments, saving not just laboratory space but the cost of chemical reagents and manual labor.

Biological assays can involve different cell types within a single droplet, which eventually need to be separated for subsequent analyses. This task is extremely challenging in a droplet microfluidic system, Han said.

"Getting cell separation within a tiny droplet is extremely difficult because, if you think about it, first, it's a tiny 100-micron diameter droplet, and second, within this extremely tiny droplet, multiple cell types are all mixed together," he said.

To develop the technology needed for cell separation, Han and his team chose a host-pathogen model system consisting of the salmonella bacteria and the human macrophage, a type of immune cell. When both these cell types are introduced within a droplet, some of the bacteria adhere to the macrophage cells. The goal of their experiments was to separate the salmonella that attached to the macrophage from the ones that did not.

For cell separation, Han and his team constructed two pairs of electrodes that generated an oscillating electric field in close proximity to the droplet containing the two cell types. Since the bacteria and the host cells have different shapes, sizes and electrical properties, they found that the electric field produced a different force on each cell type. This force resulted in the movement of one cell type at a time, separating the cells into two different locations within the droplet. To split the mother droplet into two daughter droplets, each containing one cell type, the researchers also added a downstream Y-shaped splitting junction.
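The mechanism described above is consistent with dielectrophoresis, in which a non-uniform oscillating field exerts a cell-dependent force on polarizable particles. As an illustrative sketch only (the paper's exact device model is not reproduced here), the time-averaged dielectrophoretic force on a roughly spherical cell of radius r is commonly written as:

```latex
% Time-averaged dielectrophoretic (DEP) force on a spherical particle of
% radius r in a medium of permittivity eps_m, under a non-uniform rms field;
% K(omega) is the Clausius-Mossotti factor built from the complex
% permittivities of particle (p) and medium (m).
\[
\mathbf{F}_{\mathrm{DEP}} =
  2\pi\varepsilon_{m} r^{3}\,
  \mathrm{Re}\!\left[K(\omega)\right]\,
  \nabla\lvert\mathbf{E}_{\mathrm{rms}}\rvert^{2},
\qquad
K(\omega) =
  \frac{\varepsilon_{p}^{*}-\varepsilon_{m}^{*}}
       {\varepsilon_{p}^{*}+2\varepsilon_{m}^{*}}
\]
```

Because the force scales with the cube of the cell radius and with cell-specific electrical properties through K(ω), bacteria and macrophages in the same field experience forces that differ in magnitude, and potentially in direction, which is what allows one cell type to be moved at a time.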

Han said that although these experiments were carried out with a host and pathogen whose interaction is well established, their new microfluidic system equipped with in-drop separation is most useful when the pathogenicity of a bacterial species is unknown. He added that their technology enables quick, high-throughput screening in these situations and in other applications where cell separation is required.

"Liquid handling robotic hands can conduct millions of assays but are extremely costly. Droplet microfluidics can do the same in millions of droplets, much faster and much cheaper," Han said. "We have now integrated cell separation technology into droplet microfluidic systems, allowing the precise manipulation of cells in droplets in a high-throughput manner, which was not possible before."

Credit: 
Texas A&M University

Magnetic vortices come full circle

image: Reconstructed vortex rings inside a magnetic micropillar.

Image: 
Claire Donnelly

Magnets often harbour hidden beauty. Take a simple fridge magnet: Somewhat counterintuitively, it is 'sticky' on one side but not the other. The secret lies in the way the magnetisation is arranged in a well-defined pattern within the material. More intricate magnetisation textures are at the heart of many modern technologies, such as hard disk drives. Now, an international team of scientists at the Paul Scherrer Institute PSI, ETH Zurich, the University of Cambridge, the Donetsk Institute for Physics and Engineering and the Institute for Numerical Mathematics RAS in Moscow report the discovery of unexpected magnetic structures inside a tiny pillar made of the magnetic material gadolinium cobalt. As they write in a paper published today in the journal Nature Physics [1], the researchers observed sub-micrometre loop-shaped configurations, which they identified as magnetic vortex rings. Far beyond their aesthetic appeal, these textures might point the way to further complex three-dimensional structures arising in the bulk of magnets, and could one day form the basis for novel technological applications.

Mesmerising insights

Determining the magnetisation arrangement within a magnet is extraordinarily challenging, in particular for structures at the micro- and nanoscale, for which studies have been typically limited to looking at a shallow layer just below the surface. That changed in 2017 when researchers at PSI and ETH Zurich introduced a novel X-ray method for the nanotomography of bulk magnets, which they demonstrated in experiments at the Swiss Light Source SLS [2]. That advance opened up a unique window into the inner life of magnets, providing a tool for determining three-dimensional magnetic configurations at the nanoscale within micrometre-sized samples.

Building on these capabilities, members of the original team, together with international collaborators, have now ventured into new territory. The striking loop shapes they observed appear in the same gadolinium cobalt micropillar samples in which they had previously detected complex magnetic configurations consisting of vortices -- the sort of structures seen when water spirals down a drain -- and their topological counterparts, antivortices. That was a first, but the presence of these textures was not in itself surprising. Unexpectedly, however, the scientists also found loops that consist of pairs of vortices and antivortices. This observation was puzzling at first. With newly implemented, sophisticated data-analysis techniques, they eventually established that these structures are so-called vortex rings -- in essence, doughnut-shaped vortices.

A new twist on an old story

Vortex rings are familiar to anyone who has seen smoke rings being blown, or watched dolphins producing loop-shaped air bubbles, as much for their own amusement as for that of their audience. The newly discovered magnetic vortex rings are captivating in their own right. Their observation not only verifies predictions made some two decades ago, settling the question of whether such structures can exist; it also offered surprises. In particular, magnetic vortex rings had been predicted to be a transient phenomenon, yet in the experiments now reported these structures turned out to be remarkably stable.

The stability of magnetic vortex rings should have important practical implications. For one, the rings could potentially move through magnetic materials, as smoke rings move stably through air, or air-bubble rings through water. Learning how to control the rings within the volume of a magnet could open interesting prospects for energy-efficient 3D data storage and processing. There is interest in the physics of these new structures, too, as magnetic vortex rings can take forms not possible for their smoke and bubble counterparts. The team has already observed some unique configurations, and going forward, their further exploration promises to bring to light yet more magnetic beauty.

Credit: 
Paul Scherrer Institute

Study identifies countries and states with greatest age biases

image: Two studies from Michigan State University pinpoint where in the world you'll find the most age bias, and where the "golden years" are considered truly golden.

Image: 
Creative Commons via PxHere

Elders are more respected in Japan and China, and less so in more individualistic nations like the United States and Germany, say Michigan State University researchers, who conclude in a pair of studies that age bias varies among countries and even among U.S. states.

"Older adults are one of the only stigmatized groups that we all become part of some day. And that's always struck me as interesting -- that we would treat so poorly a group of people that we're destined to become someday," said William Chopik, assistant professor of psychology and author of the studies. "Making more equitable environments for older adults is even in younger people's self-interest."

While aging is inevitable and a part of everyone's life, it is viewed very differently around the world and in different environments - which can be detrimental to people's health and well-being.

For both studies, Chopik and colleagues gauged public sentiment and biases toward aging by administering the Implicit Association Test -- which measures the strength of a person's subconscious associations -- to over 800,000 participants per study, drawn from the Project Implicit database.

The first study examined which countries around the world showed the greatest implicit bias against older adults. Published in Personality and Social Psychology Bulletin, the study is the largest study of its kind and was co-authored by Lindsay Ackerman, a post-Baccalaureate researcher in MSU's psychology department.

"In some countries and cultures, older adults fare better, so a natural question we had was whether the people living in different countries might think about older adults and aging differently. And, maybe that explains why societies are so different in the structures put in place to support older adults," Chopik said.

Collectivistic countries like Japan, China, Korea, India and Brazil -- which tend to focus on group cohesion and harmony -- had much less bias toward older people than individualistic countries. Individualistic countries like the United States, Germany, Ireland, South Africa and Australia tend to stress independence and forging one's own identity. The findings also revealed that, in addition to having greater age biases, individualistic countries are more focused on maintaining active, youthful appearances.

"Countries that showed high bias also showed an interesting effect when you asked people how old they felt. In ageist cultures, people tended to report feeling particularly younger than their actual age," Chopik said. "We interpreted this as something called age-group dissociation -- or, feeling motivated to distance yourself from that group. People do this by identifying with younger age groups, lying about their age and even saying that they feel quantitatively younger than they actually are."

The second study homed in on individual states across the U.S. to see which demonstrated the most age bias, as well as how this bias was associated with health outcomes. The first and only study of its kind, it was published in the European Journal of Social Psychology and co-authored by Hannah L. Giasson, a postdoctoral scholar at Stanford University.

The states with the highest age bias were mostly in the Southern and Northeastern U.S. Additionally, many of the most-biased states tended to have the worst outcomes and life expectancies for older adults.

"We found a strange pattern in which some popular retirement destinations tended to be higher in age bias, like Florida and the Carolinas," Chopik said. "Possibly, this could be due to the friction that occurs when there are large influxes and migrations of older adults to regions that are not always best suited to welcome them."

Additionally, states with higher age bias also tended to have higher Medicare costs, lower community engagement and less access to care. Chopik explained that one reason for the added health expenditures is that older adults with more illnesses create higher demand for health resources. Another is that those states might be worse at managing and administering support and funds for older adults. States -- and how they treat older adults -- likely affect how easily people can acquire these funds and services, he said.

"Both of our studies demonstrate how local environments affect people's attitudes and the lives of older adults. We grow up in our environments and they shape us in pretty important ways and in ways we don't even realize," Chopik said. "Being exposed to policies and attitudes at a country level can shape how you interact with older adults. At the state level in the United States, how you treat older adults has important implications for them -- for example, their health and how long older people live -- and even the economy, like how much money we spend on older adults' health care."

Credit: 
Michigan State University

Stanford engineers combine light and sound to see underwater

image: An artist's rendition of the photoacoustic airborne sonar system operating from a drone to sense and image underwater objects.

Image: 
Kindea Labs

Stanford University engineers have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a similar speed and level of detail as Earth's landscapes. Their "Photoacoustic Airborne Sonar System" is detailed in a recent study published in the journal IEEE Access.

"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," said study leader Amin Arbabian, an associate professor of electrical engineering in Stanford's School of Engineering. "Our goal is to develop a more robust system which can image even through murky water."

Subhead: Energy loss

Oceans cover about 70 percent of the Earth's surface, yet only a small fraction of their depths have been subjected to high-resolution imaging and mapping.

The main barrier has to do with physics: Sound waves, for example, cannot pass from air into water, or vice versa, without losing most of their energy - more than 99.9 percent - through reflection at the interface between the two media. A system that tries to see underwater using sound waves traveling from air into water and back into air is subjected to this energy loss twice - resulting in a 99.9999 percent energy reduction.
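The compounding works out exactly as the article's numbers suggest:

```python
# Reflection-loss arithmetic from the text: if more than 99.9 percent of a
# sound wave's energy is lost at each air-water crossing, a round trip
# through two crossings keeps less than a millionth of the original energy.
loss_per_crossing = 0.999
transmitted = 1.0 - loss_per_crossing     # fraction surviving one crossing

round_trip = transmitted ** 2             # air -> water, then water -> air
print(f"Surviving one crossing:  {transmitted:.4%}")
print(f"Surviving a round trip: {round_trip:.7%}")
# A round-trip survival of ~0.0001% corresponds to the 99.9999 percent
# energy reduction quoted in the article.
```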

Similarly, electromagnetic radiation - an umbrella term that includes light, microwave and radar signals - also loses energy when passing from one physical medium into another, although the mechanism is different from that for sound. "Light also loses some energy from reflection, but the bulk of the energy loss is due to absorption by the water," explained study first author Aidan Fitzpatrick, a Stanford graduate student in electrical engineering. Incidentally, this absorption is also the reason why sunlight can't penetrate to the ocean depths and why your smartphone - which relies on cellular signals, a form of electromagnetic radiation - can't receive calls underwater.

The upshot of all of this is that the oceans can't be mapped from the air or from space the way land can. To date, most underwater mapping has been achieved by attaching sonar systems to ships that trawl a given region of interest. But this technique is slow and costly, and inefficient for covering large areas.

Subhead: An invisible jigsaw puzzle

Enter the Photoacoustic Airborne Sonar System (PASS), which combines light and sound to break through the air-water interface. The idea for it stemmed from another project that used microwaves to perform "non-contact" imaging and characterization of underground plant roots. Some of PASS's instruments were initially designed for that purpose in collaboration with the lab of Stanford electrical engineering professor Butrus Khuri-Yakub.

At its heart, PASS plays to the individual strengths of light and sound. "If we can use light in the air, where light travels well, and sound in the water, where sound travels well, we can get the best of both worlds," Fitzpatrick said.

To do this, the system first fires a laser from the air that gets absorbed at the water surface. When the laser is absorbed, it generates ultrasound waves that propagate down through the water column and reflect off underwater objects before racing back toward the surface.

The returning sound waves are still sapped of most of their energy when they breach the water surface, but by generating the sound waves underwater with lasers, the researchers can prevent the energy loss from happening twice.

"We have developed a system that is sensitive enough to compensate for a loss of this magnitude and still allow for signal detection and imaging," Arbabian said.

The reflected ultrasound waves are recorded by instruments called transducers. Software is then used to piece the acoustic signals back together like an invisible jigsaw puzzle and reconstruct a three-dimensional image of the submerged feature or object.

"Similar to how light refracts or 'bends' when it passes through water or any medium denser than air, ultrasound also refracts," Arbabian explained. "Our image reconstruction algorithms correct for this bending that occurs when the ultrasound waves pass from the water into the air."

Subhead: Drone ocean surveys

Conventional sonar systems can penetrate to depths of hundreds to thousands of meters, and the researchers expect their system will eventually be able to reach similar depths.

To date, PASS has only been tested in the lab in a container the size of a large fish tank. "Current experiments use static water but we are currently working toward dealing with water waves," Fitzpatrick said. "This is a challenging but we think feasible problem."

The next step, the researchers say, will be to conduct tests in a larger setting and, eventually, an open-water environment.

"Our vision for this technology is on-board a helicopter or drone," Fitzpatrick said. "We expect the system to be able to fly at tens of meters above the water."

See video: https://youtu.be/2YyAnxQkeuk

Credit: 
Stanford University School of Engineering

Nonlinear beam cleaning in spatiotemporally mode-locked lasers

image: Conceptual outline of dispersion-managed cavity design.

Image: 
Teğin et al., doi 10.1117/1.AP.2.5.056005

In the last few decades, only temporal modes have been considered for mode-locked fiber lasers using single-mode fibers. Mode-locked single-mode fiber lasers offer advantages due to their high-gain doping, intrinsically single-spatial mode, and compact setups. However, in terms of power levels, mode-locked fiber lasers suffer from high nonlinearity, which is introduced by the small core size of the single-mode fibers. Researchers from École Polytechnique Fédérale de Lausanne, Switzerland (EPFL) recently developed a new approach for generating high-energy, ultrashort pulses with single-mode beam quality: nonlinear beam cleaning in a multimode laser cavity.

Spatiotemporal mode-locking

The traditional approach to overcoming the power-level problem is to first generate low-power ultrashort pulses in a laser oscillator, then pass them through a cascade of amplifiers to increase the power. But external amplification adds cost and complexity.

Recently, multimode fibers, particularly graded-index multimode fibers, have attracted attention due to their low modal dispersion and periodic self-focusing of the light inside. Spatial beam cleaning, wavelength conversion, and spatiotemporal mode-locking have been demonstrated with graded-index multimode fibers.

Spatiotemporal mode-locking is a newer approach to generating ultrashort pulses. It creates a balance between spatial and temporal effects within a multimode laser cavity, which supports multiple paths to guide light. The large multimode core diameter of the fiber decreases the nonlinearity of the cavity and allows the system to reach high pulse energies without external amplification. However, due to their multimode nature, high-power spatiotemporally mode-locked lasers suffer from a low-quality output beam.
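The core-size argument can be quantified with the standard fiber nonlinear parameter, gamma = 2*pi*n2 / (lambda * A_eff), which is inversely proportional to the effective mode area. The values below are typical textbook numbers for silica fiber at a ytterbium-laser wavelength, not parameters from the EPFL cavity:

```python
import math

N2_SILICA = 2.6e-20      # m^2/W, nonlinear refractive index of silica (typical value)
WAVELENGTH = 1.03e-6     # m, typical ytterbium fiber-laser wavelength

def nonlinear_parameter(mode_field_diameter_m):
    """gamma = 2*pi*n2 / (lambda * A_eff), with A_eff taken as the
    circular area defined by the mode-field diameter."""
    a_eff = math.pi * (mode_field_diameter_m / 2) ** 2
    return 2 * math.pi * N2_SILICA / (WAVELENGTH * a_eff)

gamma_single_mode = nonlinear_parameter(6e-6)    # typical single-mode fiber
gamma_multimode = nonlinear_parameter(30e-6)     # larger effective area in a multimode core

# Nonlinearity scales as 1/d^2, so a 5x larger mode gives a 25x reduction
print(f"Nonlinearity reduced by a factor of ~{gamma_single_mode / gamma_multimode:.0f}")
```

Lower gamma means the pulse can carry more energy before nonlinear distortion sets in, which is the multimode cavity's advantage.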

Single-mode beam quality via nonlinear beam cleaning

EPFL researchers demonstrated nonlinear beam cleaning in a multimode laser cavity--a first-ever demonstration--which enables the generation of high-energy, ultrashort pulses with single-mode beam quality. Their report, published in the peer-reviewed, open-access journal Advanced Photonics, shows that engineered intracavity temporal pulse properties provide a route to generating a high-quality beam once mode-locking is achieved.

Their design allows the generation of sub-100-femtosecond pulses with high pulse energy (>20 nJ) and excellent beam quality (M2 value less than 1.13), all in a compact, low-cost form without external amplification. The team investigated the complex cavity dynamics with mode-resolved simulations and confirmed nonlinear beam cleaning both numerically and experimentally.

Lead author Ugur Tegin notes that his team's work presents a new way to harness and control spatiotemporal nonlinear dynamics for ultrashort pulse generation. The results show that a fiber laser delivering good beam quality, high pulse energy, and sub-100 fs pulse duration can be built from standard, commercially available components. The reported method can be extended to fibers with a larger core size for further power scaling while preserving the beam quality of sub-100 fs pulses.

Credit: 
SPIE--International Society for Optics and Photonics

Forest fires, cars, power plants join list of risk factors for Alzheimer's disease

A new study led by researchers at UC San Francisco has found that among older Americans with cognitive impairment, the greater the air pollution in their neighborhood, the higher the likelihood of amyloid plaques - a hallmark of Alzheimer's disease. The study adds to a body of evidence indicating that pollution from cars, factories, power plants and forest fires joins established dementia risk factors like smoking and diabetes.

In the study, which appears in JAMA Neurology on Nov. 30, 2020, the researchers looked at the PET scans of more than 18,000 seniors whose average age was 75. The participants had dementia or mild cognitive impairment and lived in zip codes dotted throughout the nation. The researchers found that those in the most polluted areas had a 10 percent increased probability of a PET scan showing amyloid plaques, compared to those in the least polluted areas.

When applied to the U.S. population, with an estimated 5.8 million people over 65 with Alzheimer's disease, high exposure to microscopic airborne particles may be implicated in tens of thousands of cases.

"This study provides additional evidence to a growing and convergent literature, ranging from animal models to epidemiological studies, that suggests air pollution is a significant risk factor for Alzheimer's disease and dementia," said senior author Gil Rabinovici, MD, of the UCSF Memory and Aging Center, Department of Neurology and the Weill Institute for Neurosciences.

Amyloid Plaques Not Indicative of All Dementias

The 18,178 participants had been recruited for the IDEAS study (Imaging Dementia - Evidence for Amyloid Scanning), which had enrolled Medicare beneficiaries whose mild cognitive impairment or dementia had been diagnosed following comprehensive evaluation. Not all of the participants were later found to have positive PET scans - 40 percent showed no evidence of plaques on the scan, suggesting non-Alzheimer's diagnoses like frontotemporal or vascular dementias, which are not associated with the telltale amyloid plaques.

Air pollution in the neighborhood of each participant was estimated with Environmental Protection Agency data that measured ground-level ozone and PM2.5, atmospheric particulate matter that has a diameter of less than 2.5 micrometers. The researchers also divided locations into quartiles according to the concentration of PM2.5. They found that the probability of a positive PET scan rose progressively as concentrations of pollutants increased, and predicted a difference of 10 percent probability between the least and most polluted areas.
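The quartile analysis described above can be sketched with pandas on synthetic data. The PM2.5 values, baseline probability, and 10-point spread below are stand-ins chosen to mimic the reported pattern; the actual IDEAS/EPA data are not reproduced here:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Synthetic stand-in data: PM2.5 annual means and amyloid-PET outcomes,
# with scan positivity rising ~10 points from the cleanest to dirtiest areas.
pm25 = rng.uniform(4, 16, size=n)                  # ug/m^3, illustrative range
p_positive = 0.55 + 0.10 * (pm25 - 4) / 12
df = pd.DataFrame({
    "pm25": pm25,
    "amyloid_positive": rng.random(n) < p_positive,
})

# Divide locations into quartiles by PM2.5 concentration, as in the study,
# and compare the rate of amyloid-positive scans across quartiles
df["pm25_quartile"] = pd.qcut(df["pm25"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
rates = df.groupby("pm25_quartile", observed=True)["amyloid_positive"].mean()
print(rates)
```

On data like this, the positivity rate climbs progressively from Q1 to Q4, mirroring the dose-response relationship the researchers report.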

"Exposure in our daily lives to PM2.5, even at levels that would be considered normal, could contribute to induce a chronic inflammatory response," said first author Leonardo Iaccarino, PhD, also of the UCSF Memory and Aging Center, Department of Neurology and the Weill Institute of Neurosciences. "Over time, this could impact brain health in a number of ways, including contributing to an accumulation of amyloid plaques."

Notably, the overall PM2.5 concentrations involved would not be considered very high - they amounted to typical annual averages in San Francisco during the study period - yet they still showed a significant association with amyloid plaques, Rabinovici added.

"I think it's very appropriate that air pollution has been added to the modifiable risk factors highlighted by the Lancet Commission on dementia," he said, referring to the journal's decision this year to add air pollution, together with excessive alcohol intake and traumatic brain injury, to its list of risk factors.

The study complements previous large-scale studies that tie air pollution to dementia and Parkinson's disease, and adds novel findings by including a cohort with mild cognitive impairment - a frequent precursor to dementia - and using amyloid plaques as a biomarker of disease. Other studies have linked air pollution to adverse effects on cognitive, behavioral and psychomotor development in children, including a UCSF-University of Washington study that looked at its impact on the IQ of the offspring of pregnant women.

Credit: 
University of California - San Francisco

Why spending a long time on your phone isn't bad for mental health

image: Surprisingly, the amount of time spent on the smartphone was not related to poor mental health

Image: 
Lancaster University

General smartphone usage is a poor predictor of anxiety, depression or stress, say researchers, who advise caution when it comes to digital detoxes.

The study published in Technology, Mind, and Behavior was led by Heather Shaw and Kristoffer Geyer from Lancaster University with Dr David Ellis and Dr Brittany Davidson from the University of Bath and Dr Fenja Ziegler and Alice Smith from the University of Lincoln.

They measured the time spent on smartphones by 199 iPhone users and 46 Android users for one week. Participants were also asked about their mental and physical health, completing clinical scales that measure anxiety and depression symptoms. They also completed a scale which measured how problematic they perceived their smartphone usage to be.

Surprisingly, the amount of time spent on the smartphone was not related to poor mental health.

Lead author Heather Shaw of Lancaster University's Department of Psychology said: "A person's daily smartphone pickups or screen time did not predict anxiety, depression, or stress symptoms. Additionally, those who exceeded clinical 'cut off points' for both general anxiety and major depressive disorder did not use their phone more than those who scored below this threshold."

Instead, the study found that mental health was associated with concerns and worries felt by participants about their own smartphone usage.

This was measured through their scores on a problematic usage scale where they were asked to rate statements such as "Using my smartphone longer than I had intended", and "Having tried time and again to shorten my smartphone use time but failing all the time".

Heather Shaw said: "It is important to consider actual device use separately from people's concerns and worries about technology. This is because the former doesn't show noteworthy relationships with mental health, whereas the latter does."

Previous studies have focussed on the potentially detrimental impact of 'screen time', but the study shows that people's attitudes or worries are likely to drive these findings.

Dr David Ellis, from the University of Bath's School of Management, said: "Mobile technologies have become even more essential for work and day-to-day life during the COVID-19 pandemic. Our results add to a growing body of research that suggests reducing general screen time will not make people happier. Instead of pushing the benefits of digital detox, our research suggests people would benefit from measures to address the worries and fears that have grown up around time spent using phones."


Credit: 
Lancaster University

Lower current leads to highly efficient memory

image: Field-like torque tries to align the magnetization with the plane of the material, but to work as a memory device the magnetization needs to be perpendicular to this.

Image: 
© 2020 Ohya et al.

Researchers are a step closer to realizing a new kind of memory that works according to the principles of spintronics which is analogous to, but different from, electronics. Their unique gallium arsenide-based ferromagnetic semiconductor can act as memory by quickly switching its magnetic state in the presence of an induced current at low power. Previously, such current-induced magnetization switching was unstable and drew a lot of power, but this new material both suppresses the instability and lowers the power consumption too.

The field of quantum computing often gets covered in the technical press; however, another emerging field along similar lines tends to get overlooked, and that is spintronics. In a nutshell, spintronic devices could replace some electronic devices and offer greater performance at far lower power levels. Electronic devices use the motion of electrons for power and communication, whereas spintronic devices use a transferable property of stationary electrons: their angular momentum, or spin. It's a bit like having a line of people pass a message from one to the next rather than having the person at one end run to the other. Spintronics reduces the effort needed to perform computational or memory functions.

Spintronic-based memory devices are likely to become common as they have a useful feature: they are nonvolatile, meaning that once they are in a certain state, they maintain that state even without power. Conventional computer memory, such as DRAM and SRAM made of ordinary semiconductors, loses its state when powered off. At the core of experimental spintronic memory devices are magnetic materials that can be magnetized in opposite directions to represent the familiar binary states of 1 or 0, and this switching of states can occur very, very quickly. However, there has been a long and arduous search for the best materials for this job, as magnetizing spintronic materials is no simple matter.

"Magnetizing a material is analogous to rotating a mechanical device," said Associate Professor Shinobu Ohya from the Center for Spintronics Research Network at the University of Tokyo. "There are rotational forces at play in rotating systems called torques; similarly, there are torques, called spin-orbit torques, in spintronic systems, though these are quantum-mechanical rather than classical. Among spin-orbit torques, 'anti-damping torque' assists the magnetization switching, whereas 'field-like torque' can resist it, raising the level of the current required to perform the switch. We wished to suppress this."

Ohya and his team experimented with different materials and various forms of those materials. At small scales, anti-damping torque and field-like torque can act very differently depending on physical parameters such as current direction and thickness. The researchers found that with thin films of a gallium arsenide-based ferromagnetic semiconductor just 15 nanometers thick, about one-seven-thousandth the thickness of a dollar bill, the undesirable field-like torque became suppressed. This means the magnetization switching occurred with the lowest current ever recorded for this kind of process.
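As a quick check on that comparison (the ~0.11 mm bill thickness is a commonly cited approximate figure, not a number from the paper):

```python
# Sanity check on the thickness comparison in the text. The ~0.11 mm bill
# thickness is a commonly cited approximate figure, not from the paper.
bill_thickness_nm = 0.11e-3 / 1e-9     # 0.11 mm expressed in nanometers
film_thickness_nm = 15                 # the ferromagnetic semiconductor film

ratio = bill_thickness_nm / film_thickness_nm
print(f"A dollar bill is roughly {ratio:,.0f} times thicker than the film")
```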

Credit: 
University of Tokyo

Cortex over reflex: Study traces circuits where executive control overcomes instinct

image: Researchers traced circuits from the ACC to the visual cortex (VC). VC neurons labeled green trace back to the caudal ACC, while neurons labeled red trace to the rostral ACC.

Image: 
Sur Lab/MIT Picower Institute

When riding your bike to the store you might have two very different reasons to steer: plain old reflex when you see something dart into your path, or executive control when you see street signs that indicate the correct route. A new study by MIT neuroscientists shows how the brain is wired for both by tracking the specific circuits involved and their effect on visually cued actions.

The research, published in Nature Communications, demonstrates in mice that neurons in the anterior cingulate cortex (ACC) area of the prefrontal cortex, a region at the front of the brain associated with understanding rules and implementing plans, project connections into an evolutionarily older region called the superior colliculus (SC). The SC carries out basic commands for reactive, reflexive movement. A key finding of the study is that the purpose of the ACC's connections to the SC is to override the SC when executive control is necessary.

"The ACC provides inhibitory control of this ancient structure," said senior author Mriganka Sur, Newton Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. "This inhibitory control is a dynamic entity depending on the task and its rules. This is how a reflex is modulated by cortical control."

Lead author Rafiq Huda, an assistant professor of cell biology and neuroscience at Rutgers University and a former postdoc in Sur's lab, added that by looking at specific circuits between the ACC and both the SC and the visual cortex (VC), the researchers could resolve uncertainty about how the cortex regulates more basic brain regions during decision-making.

"There has been an ongoing debate about what exactly is the role of the cortex in sensorimotor decisions," Huda said. "We were able to provide some answers by looking at the level of different ACC projection pathways, which would not have been possible by looking at all of ACC at once. Our work provides evidence for the possibility that inhibitory control of subcortical structures like the SC is a unifying principle for how the ACC, and the prefrontal cortex generally, modulates decision making behavior."

Sense and spin

To make their findings, the team first traced circuits going into and out of the ACC from both the VC and the SC, confirming that the ACC was in a prime position to integrate and process information about what the mice saw and what to do about it. Throughout the study they chose to focus on these structures on the left side of the brain.

After tracing these left side ACC-SC and ACC-VC circuits, the team then trained mice to play a video game that required both sensation (seeing a cue on one side of the screen or the other) and action (spinning a trackball to move the cue). One group of mice had to move the cue inward toward the screen's center. The other group had to move the cue outward toward the screen's edge. In this way, cues could appear on either side visually, and different groups of mice had to move them according to different rules.

As mice worked, the scientists observed the activity of neurons in the various regions to learn how they responded during each task. Then the researchers manipulated the neurons' activity using optogenetics, a technique in which cells are genetically engineered to become controllable by flashes of light. These manipulations allowed the scientists to see how inhibiting neural activity within and between the regions would change behavior.

Under natural conditions the SC would reflexively direct movement of the mouse's head, for instance swiveling toward a stimulus to center it in view. But the scientists needed to keep the head still to make their observations, so they devised a way for mice to steer the stimulus on the screen with their paws on a trackball. In the paper they show that these two actions are equivalent ways for mice to move a cue within their field of view.

Optogenetically inactivating the circuits between the ACC and VC on the brain's left side proved that the ACC-VC connection was essential for the mice to process cues on the right side of their field of view. This was equally true for both groups, regardless of which way they were supposed to move a cue when they saw it.

The manipulations involving the SC proved especially intriguing.

When the scientists inactivated neurons within the left SC in the group of mice trained to see a stimulus on the right and move the cue inward to the screen's middle, the mice struggled compared to unmanipulated mice. In other words, under normal conditions, the left SC helped to move a stimulus on the right side into the middle of the field of view.

When the scientists instead inactivated input from the ACC to the SC, mice did the task correctly more often than unmanipulated mice. When the same mice saw a stimulus on the left and had to move it inwards, they did the task wrong more often. The job of ACC inputs, it seemed, was to override the SC's inclination. When that override was disabled, the SC's preference for moving a righthand cue into the middle was unchecked. But the ability of the mouse to move a lefthand stimulus to the middle was undermined.

"Those results suggest that the SC and the ACC-SC pathway facilitate opposite actions," the authors wrote. "Importantly these findings also suggest that the ACC-SC pathway does so by modulating the innate response bias of the SC."

The scientists also tested the effect of ACC-SC inactivation in the second group of mice, whose job was to move the cue outward. There they saw that inactivation increased incorrect responses on right cue trials. This result makes sense in the context of rules overriding reflex. If the reflex ingrained in the left-brain SC is to bring a righthand cue into the middle of the field of view (by swiveling the head right), then only a functioning ACC-SC override could compel it to successfully move the cue further to the right, and therefore further to the periphery of the field of view, when the task rule required it.
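The push-pull logic described above can be summarized in a small toy model (this sketch is purely illustrative, not code or a model from the paper): the left SC carries an innate bias to pull a righthand cue toward the center, and the ACC-to-SC projection acts as an inhibitory override that is needed only when the task rule conflicts with that reflex.

```python
def sc_action(cue_side, rule, acc_intact=True):
    """Toy model of the action the left-hemisphere SC drives for a righthand cue.

    cue_side: 'right' (the left SC responds to the right visual field)
    rule: 'inward' (move cue to center) or 'outward' (move cue to edge)
    acc_intact: whether the ACC->SC override pathway is functioning
    """
    assert cue_side == "right"
    innate = "inward"  # SC reflex: center the righthand stimulus in view
    if rule == innate:
        return innate  # reflex and rule agree; no override is needed
    # The rule conflicts with the reflex: only an intact ACC override
    # can suppress the SC's innate bias and produce the correct action.
    return rule if acc_intact else innate
```

With the override disabled (`acc_intact=False`), the model still succeeds when the rule matches the reflex (inward) but fails when the rule demands the opposite (outward), mirroring the pattern of errors seen in the second group of mice.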

Sur said the findings accentuate the importance of the prefrontal cortex (in this case, specifically the ACC) in endowing mammals with the intelligence to follow rules rather than reflexes, when needed. The study also suggests that developmental deficits or injury in the ACC could contribute to psychiatric disorders.

"Understanding the role of the prefrontal cortex, or even a segment, is crucial to understanding how executive control can be developed, or may fail to develop, under conditions of dysfunction," Sur said.

Credit: 
Picower Institute at MIT