Researchers for the first time identify neurons in the human visual cortex that respond to faces

Image: 
Dr. Vadim Axelrod. Credits for the individual photos are given at the bottom of the image.

Imagine a world where everyone has the same face. That would be a very different world than the one we know. In our world, where faces differ, they convey essential information. For example, most of us can recognize a celebrity's face even if it appears for only a fraction of a second, or the face of an old college friend even after decades apart. Many of us can sense the mood of a significant other from facial expression alone. Often, we can judge whether a person seems trustworthy just by looking at his or her face. Despite intensive research, how the brain accomplishes all of this remains a great mystery.

A new study published in Neurology, the medical journal of the American Academy of Neurology (issue of January 22, 2019), identifies for the first time the neurons in the human visual cortex that selectively respond to faces. The study was carried out by Dr. Vadim Axelrod, head of the Consciousness and Cognition Laboratory at the Gonda (Goldschmied) Multidisciplinary Brain Research Center at Bar-Ilan University, in collaboration with a team from Institut du Cerveau et de la Moelle Épinière and Pitié-Salpêtrière Hospital (team leader: Prof. Lionel Naccache).

The researchers showed that the neurons in the visual cortex (in the vicinity of the Fusiform Face Area) responded much more strongly to faces than to city landscapes or objects (see examples: https://youtu.be/QYJCB60FhHE). A high response was found both for faces of famous people (e.g., Charles Aznavour, Nicolas Sarkozy, Catherine Deneuve, Louis De Funes) and for faces unfamiliar to the participant in the experiment. In an additional experiment, the neurons exhibited face-selectivity to human and animal faces that appeared within a movie (a clip from Charlie Chaplin's The Circus).

"In the early 1970s Prof. Charles Gross and colleagues discovered the neurons in the visual cortex of macaque monkeys that responded to faces. In humans, face-selective activity has been extensively investigated, mainly using non-invasive tools such as functional magnetic resonance imaging (fMRI) and electrophysiology (EEG)," explains the paper's lead author, Dr. Axelrod. "Strikingly, face-neurons in posterior temporal visual cortex have never been identified before in humans. In our study, we had a very rare opportunity to record neural activity in a single patient while micro-electrodes were implanted in the vicinity of the Fusiform Face Area -- the largest and likely the most important face-selective region of the human brain."

Probably the best-known neurons that respond to faces are the so-called "Jennifer Aniston cells" -- the neurons in the medial temporal lobe that respond to different images of a specific person (e.g., Jennifer Aniston in the original study published in Nature by Quiroga and colleagues in 2005). "But the neurons in the visual cortex that we reported here are very different from the neurons in the medial temporal lobe," emphasizes Dr. Axelrod. "First, the neurons in the visual cortex respond vigorously to any type of face, regardless of the person's identity. Second, they respond much earlier. Specifically, while in our case a strong response could be observed within 150 milliseconds of showing the image, the 'Jennifer Aniston cells' usually take 300 milliseconds or more to respond."

The present results provide unique insights into human brain functioning at the cellular level during face processing. These findings also help bridge the understanding of face mechanisms across species (i.e., monkeys and humans). "It is really exciting," Dr. Axelrod says, "that after almost half a century since the discovery of face-neurons in macaque monkeys, it is now possible to demonstrate similar neurons in humans."

Credit: 
Bar-Ilan University

Asking patients about sexual orientation, gender identity

Patients are open to being asked about their sexual orientation and gender identity in primary care, which can help make health care more welcoming, although the stage should be set for these questions and they should include a range of options, found a study published in CMAJ (Canadian Medical Association Journal).

"Understanding the social determinants of health, including gender identity and sexual orientation, is important for providing better health care as these are linked with outcomes," says Dr. Andrew Pinto, The Upstream Lab, St. Michael's Hospital, Toronto, Ontario. "Many transgender and gender-diverse people have negative experiences in health care that affect their health."

During the study, conducted at the St. Michael's Hospital Academic Family Health Team between Dec. 1, 2013, and Mar. 31, 2016, researchers asked questions on sexual orientation and gender identity. They offered a survey to 15 221 patients; of the 14 247 who responded, 90% answered the questions about sexual orientation and gender identity. The researchers also interviewed 27 patients of diverse age, gender identity, education level, language and immigration status to fully understand their reactions to these questions. Several themes emerged:

Patients appreciated the variety of options on the surveys to indicate gender identity and sexual orientation.

Some LGBTQ2S (lesbian, gay, bisexual, queer, two-spirited) patients were uncomfortable answering the questions, as they recalled previous negative experiences related to gender or sexual orientation. Some cisgender and heterosexual patients were also uncomfortable.

Despite a variety of responses provided on the survey, some patients did not see their identities reflected and suggested additional terms to include in future surveys.

"Our findings can inform health care organizations that wish to characterize their patients through routine collection of sociodemographic data," says Dr. Pinto. "Questions on gender identity and sexual orientation should include a range of flexible response options and definitions for clarity."

Credit: 
Canadian Medical Association Journal

Fossilized slime of 100-million-year-old hagfish shakes up vertebrate family tree

image: Tethymyxine tapirostrum is a 100-million-year-old, 12-inch-long fish embedded in a slab of Cretaceous-period limestone from Lebanon, believed to be the first detailed fossil of a hagfish.

Image: 
Tetsuto Miyashita, University of Chicago.

Paleontologists at the University of Chicago have discovered the first detailed fossil of a hagfish, the slimy, eel-like carrion feeders of the ocean. The 100-million-year-old fossil helps answer questions about when these ancient, jawless fish branched off the evolutionary tree from the lineage that gave rise to modern-day jawed vertebrates, including bony fish and humans.

The fossil, named Tethymyxine tapirostrum, is a 12-inch-long fish embedded in a slab of Cretaceous-period limestone from Lebanon. It fills a 100-million-year gap in the fossil record and shows that hagfish are more closely related to the blood-sucking lamprey than to other fishes. This means that both hagfish and lampreys evolved their eel-like body shape and strange feeding systems after they branched off from the rest of the vertebrate line of ancestry about 500 million years ago.

"This is a major reorganization of the family tree of all fish and their descendants. This allows us to put an evolutionary date on unique traits that set hagfish apart from all other animals," said Tetsuto Miyashita, PhD, a Chicago Fellow in the Department of Organismal Biology and Anatomy at UChicago who led the research. The findings are published this week in the Proceedings of the National Academy of Sciences.

The slimy dead giveaway

Modern-day hagfish are known for their bizarre, nightmarish appearance and unique defense mechanism. They don't have eyes, or jaws or teeth to bite with, but instead use a spiky tongue-like apparatus to rasp flesh off dead fish and whales at the bottom of the ocean. When harassed, they can instantly turn the water around them into a cloud of slime, clogging the gills of would-be predators.

This ability to produce slime is what gave away the Tethymyxine fossil. Miyashita used an imaging technology called synchrotron scanning at Stanford University to identify chemical traces of soft tissue that were left behind in the limestone when the hagfish fossilized. These soft tissues are rarely preserved, which is why there are so few examples of ancient hagfish relatives to study.

The scanning picked up a signal for keratin, the same material that makes up fingernails in humans. Keratin, as it turns out, is a crucial part of what makes the hagfish slime defense so effective. Hagfish have a series of glands along their bodies that produce tiny packets of tightly-coiled keratin fibers, lubricated by mucus-y goo. When these packets hit seawater, the fibers explode and trap the water within, turning everything into shark-choking slop. The fibers are so strong that when dried out they resemble silk threads; they're even being studied as possible biosynthetic fibers to make clothes and other materials.

Miyashita and his colleagues found more than a hundred concentrations of keratin along the body of the fossil, suggesting that the ancient hagfish had already evolved its slime defense by a time when the seas teemed with fearsome predators, such as plesiosaurs and ichthyosaurs, that we no longer see today.

"We now have a fossil that can push back the origin of the hagfish-like body plan by hundreds of millions of years," Miyashita said. "Now, the next question is how this changes our view of the relationships between all these early fish lineages."

Shaking up the vertebrate family tree

Features of the new fossil help place hagfish and their relatives on the vertebrate family tree. In the past, scientists have disagreed about where they belonged, depending on how they tackled the question. Those who rely on fossil evidence alone tend to conclude that hagfish are so primitive that they are not even vertebrates. This implies that all fishes and their vertebrate descendants had a common ancestor that -- more or less -- looked like a hagfish.

But those who work with genetic data argue that hagfish and lampreys are more closely related to each other. This suggests that modern hagfish and lampreys are the odd ones out in the family tree of vertebrates. In that case, the primitive appearance of hagfish and lampreys is deceptive, and the common ancestor of all vertebrates was probably something more conventionally fish-like.

Miyashita's work reconciles these two approaches, using physical evidence of the animal's anatomy from the fossil to come to the same conclusion as the geneticists: that the hagfish and lampreys should be grouped separately from the rest of fishes.

"In a sense, this resets the agenda of how we understand these animals," said Michael Coates, PhD, professor of organismal biology and anatomy at UChicago and a co-author of the new study. "Now we have this important corroboration that they are a group apart. Although they're still part of vertebrate biodiversity, we now have to look at hagfish and lampreys more carefully, and recognize their apparent primitiveness as a specialized condition."

Paleontologists have increasingly used sophisticated imaging techniques in the past few years, but Miyashita's research is one of a handful so far to use synchrotron scanning to identify chemical elements in a fossil. While it was crucial to detect anatomical structures in the hagfish fossil, he believes it can also be a useful tool to help scientists detect paint or glue used to embellish a fossil or even outright forge a specimen. Any attempt to spice up a fossil specimen leaves chemical fingerprints that light up like holiday decorations in a synchrotron scan.

"I'm impressed with what Tetsuto has marshaled here," Coates said. "He's maxed out all the different techniques and approaches that can be applied to this fossil to extract information from it, to understand it and to check it thoroughly."

Credit: 
University of Chicago Medical Center

Greenland ice melting four times faster than in 2003, study finds

COLUMBUS, Ohio - Greenland is melting faster than scientists previously thought--which will likely lead to faster sea level rise--thanks to the continued, accelerating warming of the Earth's atmosphere, a new study has found.

Scientists concerned about sea level rise have long focused on Greenland's southeast and northwest regions, where large glaciers stream iceberg-sized chunks of ice into the Atlantic Ocean. Those chunks float away, eventually melting. But a new study published Jan. 21 in the Proceedings of the National Academy of Sciences found that the largest sustained ice loss from early 2003 to mid-2013 came from Greenland's southwest region, which is mostly devoid of large glaciers.

"Whatever this was, it couldn't be explained by glaciers, because there aren't many there," said Michael Bevis, lead author of the paper, Ohio Eminent Scholar and a professor of geodynamics at The Ohio State University. "It had to be the surface mass--the ice was melting inland from the coastline."

That melting, which Bevis and his co-authors believe is largely caused by global warming, means that in the southwestern part of Greenland, growing rivers of water are streaming into the ocean during summer. The key finding from their study: Southwest Greenland, which previously had not been considered a serious threat, will likely become a major future contributor to sea level rise.

"We knew we had one big problem with increasing rates of ice discharge by some large outlet glaciers," he said. "But now we recognize a second serious problem: Increasingly, large amounts of ice mass are going to leave as meltwater, as rivers that flow into the sea."

The findings could have serious implications for coastal U.S. cities, including New York and Miami, as well as island nations that are particularly vulnerable to rising sea levels.

And there is no turning back, Bevis said.

"The only thing we can do is adapt and mitigate further global warming--it's too late for there to be no effect," he said. "This is going to cause additional sea level rise. We are watching the ice sheet hit a tipping point."

Climate scientists and glaciologists have been monitoring the Greenland ice sheet as a whole since 2002, when NASA and Germany joined forces to launch GRACE. GRACE stands for Gravity Recovery and Climate Experiment, and involves twin satellites that measure ice loss across Greenland. Data from these satellites showed that between 2002 and 2016, Greenland lost approximately 280 gigatons of ice per year, equivalent to 0.03 inches of sea level rise each year. But the rate of ice loss across the island was far from steady.
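The quoted conversion can be sanity-checked with back-of-the-envelope arithmetic. The ocean surface area and meltwater density below are standard textbook approximations, not figures taken from the study:

```python
# Rough check: ~280 gigatons of ice lost per year should correspond to
# roughly 0.03 inches of sea level rise per year, assuming the meltwater
# spreads evenly over the global ocean surface.
ICE_LOSS_GT_PER_YEAR = 280       # gigatons per year; 1 Gt = 1e12 kg
OCEAN_AREA_M2 = 3.61e14          # approximate global ocean surface area
WATER_DENSITY_KG_M3 = 1000.0     # approximate density of meltwater

mass_kg = ICE_LOSS_GT_PER_YEAR * 1e12
volume_m3 = mass_kg / WATER_DENSITY_KG_M3
rise_m = volume_m3 / OCEAN_AREA_M2
rise_inches = rise_m * 39.37     # metres to inches

print(f"{rise_inches:.3f} inches/year")  # ~0.031 inches/year
```

The result (about 0.031 inches, or 0.78 mm, per year) matches the figure reported from the GRACE data to within rounding.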

Bevis' team used data from GRACE and from GPS stations scattered around Greenland's coast to identify changes in ice mass. The patterns they found show an alarming trend--by 2012, ice was being lost at nearly four times the rate that prevailed in 2003. The biggest surprise: This acceleration was focused in southwest Greenland, a part of the island that previously hadn't been known to be losing ice that rapidly.

Bevis said a natural weather phenomenon--the North Atlantic Oscillation, which brings warmer air to West Greenland, as well as clearer skies and more solar radiation--was building on man-made climate change to cause unprecedented levels of melting and runoff. Global atmospheric warming enhances summertime melting, especially in the southwest. The North Atlantic Oscillation is a natural--if erratic--cycle that causes ice to melt under normal circumstances. When combined with man-made global warming, though, the effects are supercharged.

"These oscillations have been happening forever," Bevis said. "So why only now are they causing this massive melt? It's because the atmosphere is, at its baseline, warmer. The transient warming driven by the North Atlantic Oscillation was riding on top of more sustained, global warming."

Bevis likened the melting of Greenland's ice to coral bleaching: Once the ocean's water hits a certain temperature, coral in that region begins to bleach. There have been three global coral bleaching events. The first was caused by the 1997-98 El Niño, and the other two events by the two subsequent El Niños. But El Niño cycles have been happening for thousands of years--so why have they caused global coral bleaching only since 1997?

"What's happening is sea surface temperature in the tropics is going up; shallow water gets warmer and the air gets warmer," Bevis said. "The water temperature fluctuations driven by an El Niño are riding this global ocean warming. Because of climate change, the base temperature is already close to the critical temperature at which coral bleaches, so an El Niño pushes the temperature over the critical threshold value. And in the case of Greenland, global warming has brought summertime temperatures in a significant portion of Greenland close to the melting point, and the North Atlantic Oscillation has provided the extra push that caused large areas of ice to melt."

Before this study, scientists understood Greenland to be one of the Earth's major contributors to sea-level rise--mostly because of its glaciers. But these new findings, Bevis said, show that scientists need to be watching the island's snowpack and ice fields more closely, especially in and near southwest Greenland.

GPS systems now monitor most of the perimeter of Greenland's ice sheet margin, but the network is very sparse in the southwest, so, given these new findings, it needs to be densified there.

"We're going to see faster and faster sea level rise for the foreseeable future," Bevis said. "Once you hit that tipping point, the only question is: How severe does it get?"

Credit: 
Ohio State University

Youthful cognitive ability strongly predicts mental capacity later in life

Early adult general cognitive ability (GCA) -- the diverse set of skills involved in thinking, such as reasoning, memory and perception -- is a stronger predictor of cognitive function and reserve later in life than other factors, such as higher education, occupational complexity or engaging in late-life intellectual activities, report researchers in a new study publishing January 21 in PNAS.

Higher education and late-life intellectual activities, such as doing puzzles, reading or socializing, have all been associated with reduced risk of dementia and sustained or improved cognitive reserve. Cognitive reserve is the brain's ability to improvise and find alternate ways of getting a job done and may help people compensate for other changes associated with aging.

An international team of scientists, led by scientists at University of California San Diego School of Medicine, sought to address a "chicken or egg" conundrum posed by these associations. Does being in a more complex job help maintain cognitive abilities, for example, or do people with greater cognitive abilities tend to be in more complex occupations?

The researchers evaluated more than 1,000 men participating in the Vietnam Era Twin Study of Aging. Although all were veterans, nearly 80 percent of the participants reported no combat experience. All of the men, now in their mid-50s to mid-60s, took the Armed Forces Qualification Test at an average age of 20; the test is a measure of GCA. As part of the study, researchers assessed participants' performance in late midlife, using the same GCA measure, plus assessments in seven cognitive domains, such as memory, abstract reasoning and verbal fluency.

They found that GCA at age 20 accounted for 40 percent of the variance in the same measure at age 62, and approximately 10 percent of the variance in each of the seven cognitive domains. After accounting for GCA at age 20, the authors concluded, other factors had little effect. For example, lifetime education, complexity of job and engagement in intellectual activities each accounted for less than 1 percent of variance at average age 62.
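The notion of "variance accounted for" can be illustrated with a simple simulation: under a linear model, the share of variance in late-life scores explained by an early score equals the squared correlation between the two. The numbers below are synthetic, chosen only so that the squared correlation comes out near the study's reported 40 percent; they are not the study's data.

```python
# Synthetic illustration of "40 percent of variance accounted for":
# generate a late-life score as 0.63 * early score + noise, so that
# the squared correlation (R^2) is about 0.63^2 ~= 0.40.
import random

random.seed(0)
n = 1000
gca_20 = [random.gauss(0, 1) for _ in range(n)]
gca_62 = [0.63 * x + random.gauss(0, 0.775) for x in gca_20]

def r_squared(x, y):
    """Squared Pearson correlation: the fraction of variance explained."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

print(f"R^2 = {r_squared(gca_20, gca_62):.2f}")  # close to 0.40 by construction
```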

"The findings suggest that the impact of education, occupational complexity and engagement in cognitive activities on later life cognitive function likely reflects reverse causation," said first author William S. Kremen, PhD, professor in the Department of Psychiatry at UC San Diego School of Medicine. "In other words, they are largely downstream effects of young adult intellectual capacity."

In support of that idea, researchers found that age 20 GCA, but not education, correlated with the surface area of the cerebral cortex at age 62. The cerebral cortex is the thin, outer region of the brain (gray matter) responsible for thinking, perceiving, producing and understanding language.

The authors emphasized that education is clearly of great value and can enhance a person's overall cognitive ability and life outcomes. Comparing their findings with other research, they speculated that the role of education in increasing GCA takes place primarily during childhood and adolescence when there is still substantial brain development.

However, they said that by early adulthood, education's effect on GCA appears to level off, though it continues to produce other beneficial effects, such as broadening knowledge and expertise.

Kremen said remaining cognitively active in later life is beneficial, but "our findings suggest we should look at this from a lifespan perspective. Enhancing cognitive reserve and reducing later life cognitive decline may really need to begin with more access to quality childhood and adolescent education."

The researchers said additional investigations would be needed to fully confirm their inferences, such as a single study with cognitive testing at different times throughout childhood and adolescence.

Credit: 
University of California - San Diego

Managing gender dysphoria in adolescents: A practical guide for family physicians

As a growing number of adolescents identify as transgender, a review aims to help primary care physicians care for this vulnerable group and its unique needs. The review, published in CMAJ (Canadian Medical Association Journal), looks at emerging evidence for managing gender dysphoria, including social and medical approaches for youth who are transitioning.

"[T]he hallmark of care will remain a thoughtful, affirming, well-reasoned individualized approach that attempts to maximize support for this vulnerable population, as youth and their caregivers make complex and difficult decisions," writes Dr. Joseph Bonifacio, Department of Pediatrics, St. Michael's Hospital, Toronto, Ontario, with coauthors.

Gender dysphoria is "the distress experienced by an individual when their gender identity and their gender assigned at birth are discordant." Although precise numbers are unknown, studies from other countries indicate that 1.2% to 4.1% of adolescents identify with a gender different from the one assigned at birth, a rate higher than in the adult population.

A recent Canadian study found that less than half of transgender youth are comfortable discussing their health care needs with their family doctor.

"Ideally, the approach to youth with gender dysphoria revolves around collaborative decision-making among the youth, family or guardians, and care providers," writes Dr. Bonifacio, with coauthors. "The youth's voice is always paramount."

The review follows the Endocrine Society's guideline recommendation that medication to suppress puberty, which allows youth to explore their changing gender identity, should not be used before puberty.

"Some youth find that their dysphoria abates as puberty starts, making it important to allow initial pubertal changes to occur," writes Dr. Bonifacio. "On the other hand, some youth may find their gender dysphoria increases with puberty, corroborating their need for further care."

As this is a relatively new field, there are gaps in the research base, such as data on the number of nonbinary youth who identify outside the male-female binary and on adolescents requesting surgery. The authors also note that ethnocultural diversity is underrepresented in study populations and in their clinics in the large city of Toronto, and needs to be better understood.

"[A]ccessing optimal individualized care may be difficult for certain populations, making it important that generalists are supported to increase their capacity to care for youth with gender dysphoria and to liaise with other professions to support families," write the authors.

The review also includes quick-reference boxes of definitions, criteria to diagnose gender dysphoria and resources for children, caregivers and clinicians.

Credit: 
Canadian Medical Association Journal

Mice pass on brain benefits of enriched upbringing to offspring

image: Figure 1. Ocular dominance (OD)-plasticity is preserved in primary visual cortex (V1) of old enriched environment (EE)-mice

Image: 
Kalogeraki, Yusifov, and Löwel, <em>eNeuro</em> (2019)

Mice growing up in a basic cage maintain lifelong visual cortex plasticity if their parents were raised in an environment that promoted social interaction and physical and mental stimulation, according to a multigenerational study published in eNeuro. The research suggests life experience may be transmitted from one generation to the next through a combination of changes in gene expression and parental caretaking behavior.

Blocking visual input to one eye of adult mice leads to a rewiring of the visual cortex to prioritize input from the open eye. Siegrid Löwel and colleagues first confirmed that this plasticity declines over time in mice housed in standard cages while it is preserved throughout life in mice raised in an enriched environment -- in this case a large, two-story cage with separate living and eating areas connected by a ladder, regularly changed mazes, and a slide.

The researchers then bred the mice to create three experimental groups of offspring, all of which were raised in standard cages. Despite being raised in the same impoverished environment, mice whose parents -- particularly mothers -- were raised in the enriched environment maintained lifelong plasticity in the visual cortex. These findings emphasize the importance of documenting rearing conditions of experimental animals across generations.

Credit: 
Society for Neuroscience

Implantable device curbs seizures and improves cognition in epileptic rats

image: GDNF-releasing cells reduce epilepsy-induced cell death. In a normal hippocampus (A) no overt sign of cell death can be observed, whereas many dying cells (stained in green) are observed in the epileptic hippocampus (B). GDNF-releasing cells effectively attenuate cell death (C).

Image: 
Giovanna Paolone

A protein-secreting device implanted into the hippocampus of epileptic rats reduces seizures by 93 percent in three months, finds preclinical research published in JNeurosci. These results support ongoing development of this technology and its potential translation into a new treatment for epilepsy.

Motivated by an unmet need for effective and well-tolerated epilepsy therapies, Giovanna Paolone and colleagues of the University of Ferrara, Italy and of Gloriana Therapeutics, Inc. (Providence, RI) investigated the effects of the Gloriana targeted cellular delivery system for glial cell line-derived neurotrophic factor (GDNF) -- a protein recent research suggests may help suppress epileptic activity.

In addition to quickly and progressively reducing seizures in male rats -- by 75 percent within two weeks -- the researchers found their device improved rats' anxiety-like symptoms and their performance on an object recognition task, indicating improvement in cognition.

The treatment also corrected abnormalities in brain anatomy associated with epilepsy. These effects persisted even after the device was removed, indicating this approach may modify the disease progression.

Credit: 
Society for Neuroscience

Brain training app improves users' concentration, study shows

image: Decoder brain training game on Peak.

Image: 
Peak

A new 'brain training' game designed by researchers at the University of Cambridge improves users' concentration, according to new research published today. The scientists behind the venture say this could provide a welcome antidote to the daily distractions that we face in a busy world.

In their book, The Distracted Mind: Ancient Brains in a High-Tech World, Adam Gazzaley and Larry D. Rosen point out that with the emergence of new technologies requiring rapid responses to emails and texts and working on multiple projects simultaneously, young people, including students, are having more problems sustaining attention and frequently become distracted. This difficulty in focussing attention and concentrating is made worse by stress from a global environment that never sleeps, and by frequent travel leading to jetlag and poor-quality sleep.

"We've all experienced coming home from work feeling that we've been busy all day, but unsure what we actually did," says Professor Barbara Sahakian from the Department of Psychiatry. "Most of us spend our time answering emails, looking at text messages, searching social media, trying to multitask. But instead of getting a lot done, we sometimes struggle to complete even a single task and fail to achieve our goal for the day. Then we go home, and even there we find it difficult to 'switch off' and read a book or watch TV without picking up our smartphones. For complex tasks we need to get in the 'flow' and stay focused."

In recent years, as smartphones have become ubiquitous, there has been a growth in the number of so-called 'brain training' apps that claim to improve cognitive skills such as memory, numerical skills and concentration.

Now, a team from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge, has developed and tested 'Decoder', a new game that is aimed at helping users improve their attention and concentration. The game is based on the team's own research and has been evaluated scientifically.

In a study published today in the journal Frontiers in Behavioural Neuroscience, Professor Sahakian and colleague Dr George Savulich have demonstrated that playing Decoder on an iPad for eight hours over one month improves attention and concentration. This form of attention activates a fronto-parietal network in the brain.

In their study, the researchers divided 75 healthy young adults into three groups: one group received Decoder, one control group played Bingo for the same amount of time and a second control group received no game. Participants in the first two groups were invited to attend eight one-hour sessions over the course of a month during which they played either Decoder or Bingo under supervision.

All 75 participants were tested at the start of the trial and then after four weeks using the CANTAB Rapid Visual Information Processing test (RVP). CANTAB RVP has been demonstrated in previously published studies to be a highly sensitive test of attention/concentration.

During the test, participants are asked to detect sequences of digits (e.g. 2-4-6, 3-5-7, 4-6-8). A white box appears in the middle of the screen, in which digits from 2 to 9 appear in pseudo-random order at a rate of 100 digits per minute. Participants are instructed to press a button every time they detect a sequence. The test lasts approximately five minutes.
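The task described above amounts to scanning a digit stream for target triplets and scoring button presses against them. The following is a minimal illustrative sketch of that logic; the function names and the simple hit/miss scoring are our assumptions, not the actual CANTAB implementation.

```python
import random

# Target sequences from the press release (ascending odd or even triplets)
TARGETS = {(2, 4, 6), (3, 5, 7), (4, 6, 8)}

def present_stream(n_digits=500, seed=0):
    """Generate a pseudo-random stream of digits 2-9, as in the RVP test."""
    rng = random.Random(seed)
    return [rng.randint(2, 9) for _ in range(n_digits)]

def score_responses(stream, presses):
    """Score one run: a target occurs whenever the last three digits form
    one of TARGETS; it counts as a hit if the participant pressed at that
    position (index in `presses`), and a miss otherwise."""
    hits = misses = 0
    for i in range(2, len(stream)):
        if tuple(stream[i - 2:i + 1]) in TARGETS:
            if i in presses:
                hits += 1
            else:
                misses += 1
    return hits, misses
```

At 100 digits per minute, a five-minute run corresponds to roughly 500 digits, which is why `n_digits` defaults to 500 in this sketch.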

Results from the study showed a significant difference in attention as measured by the RVP: those who played Decoder performed better than those who played Bingo and those who played no game. The difference in performance was significant and meaningful, comparable to the effects seen with stimulants such as methylphenidate, or with nicotine. The former, also known as Ritalin, is a common treatment for Attention Deficit Hyperactivity Disorder (ADHD).

To ensure that Decoder improved focussed attention and concentration without impairing the ability to shift attention, the researchers also tested participants on the Trail Making Test, a commonly used neuropsychological test of attentional shifting in which participants first attend to numbers, then shift their attention to letters, and then shift back to numbers. Performance on this test also improved after training with Decoder. Additionally, participants enjoyed playing the game, and motivation remained high throughout the eight hours of gameplay.

Professor Sahakian commented: "Many people tell me that they have trouble focussing their attention. Decoder should help them improve their ability to do this. In addition to healthy people, we hope that the game will be beneficial for patients who have impairments in attention, including those with ADHD or traumatic brain injury. We plan to start a study with traumatic brain injury patients this year."

Dr Savulich added: "Many brain training apps on the market are not supported by rigorous scientific evidence. Our evidence-based game is developed interactively and the games developer, Tom Piercy, ensures that it is engaging and fun to play. The level of difficulty is matched to the individual player and participants enjoy the challenge of the cognitive training."

The game has now been licensed through Cambridge Enterprise, the technology transfer arm of the University of Cambridge, to app developer Peak, who specialise in evidence-based 'brain training' apps. This will allow Decoder to become accessible to the public. Peak has developed a version for Apple devices and is releasing the game today as part of the Peak Brain Training app. Peak Brain Training is available from the App Store for free and Decoder will be available to both free and pro users as part of their daily workout. The company plans to make a version available for Android devices later this year.

"Peak's version of Decoder is even more challenging than our original test game, so it will allow players to continue to gain even larger benefits in performance over time," says Professor Sahakian. "By licensing our game, we hope it can reach a wide audience who are able to benefit by improving their attention."

Xavier Louis, CEO of Peak, adds: "At Peak we believe in an evidenced-based approach to brain training. This is our second collaboration with Professor Sahakian and her work over the years shows that playing games can bring significant benefits to brains. We are pleased to be able to bring Decoder to the Peak community, to help people overcome their attention problems."

Credit: 
University of Cambridge

Enhanced NMR reveals chemical structures in a fraction of the time

MIT researchers have developed a way to dramatically enhance the sensitivity of nuclear magnetic resonance spectroscopy (NMR), a technique used to study the structure and composition of many kinds of molecules, including proteins linked to Alzheimer's and other diseases.

Using this new method, scientists should be able to analyze in mere minutes structures that would previously have taken years to decipher, says Robert Griffin, the Arthur Amos Noyes Professor of Chemistry. The new approach, which relies on short pulses of microwave power, could allow researchers to determine structures for many complex proteins that have been difficult to study until now.

"This technique should open extensive new areas of chemical, biological, materials, and medical science which are presently inaccessible," says Griffin, the senior author of the study.

MIT postdoc Kong Ooi Tan is the lead author of the paper, which appears in Science Advances on Jan. 18. Former MIT postdocs Chen Yang and Guinevere Mathies, and Ralph Weber of Bruker BioSpin Corporation, are also authors of the paper.

Enhanced sensitivity

Traditional NMR uses the magnetic properties of atomic nuclei to reveal the structures of the molecules containing those nuclei. By using a strong magnetic field that interacts with the nuclear spins of hydrogen and other isotopically labelled atoms such as carbon or nitrogen, NMR measures a trait known as chemical shift for these nuclei. Those shifts are unique for each atom and thus serve as fingerprints, which can be further exploited to reveal how those atoms are connected.

The sensitivity of NMR depends on the atoms' polarization -- a measure of the difference between the populations of "up" and "down" nuclear spins in each spin ensemble. The greater the polarization, the greater the sensitivity that can be achieved. Typically, researchers try to increase the polarization of their samples by applying a stronger magnetic field, up to 35 tesla.
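For spin-1/2 nuclei, this population difference at thermal equilibrium follows from Boltzmann statistics, which makes concrete why even very strong magnets yield tiny polarizations. A back-of-the-envelope sketch (the tanh expression is the standard spin-1/2 Boltzmann result; the function name is ours):

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
KB = 1.380649e-23         # Boltzmann constant, J/K
GAMMA_H = 2.6752218744e8  # proton gyromagnetic ratio, rad/s/T

def thermal_polarization(b_field, temp):
    """Equilibrium polarization P = (N_up - N_down)/(N_up + N_down)
    for spin-1/2 nuclei: P = tanh(gamma * hbar * B / (2 * k * T))."""
    return math.tanh(GAMMA_H * HBAR * b_field / (2 * KB * temp))

# Even at 35 T and room temperature, proton polarization is only
# about one part in ten thousand, which is why a ~100-fold DNP
# enhancement makes such a difference.
print(f"{thermal_polarization(35.0, 298.0):.2e}")
```

Cooling the sample also raises polarization, which is one reason DNP experiments are typically run at cryogenic temperatures.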

Another approach, which Griffin and Richard Temkin of MIT's Plasma Science and Fusion Center have been developing over the past 25 years, further enhances the polarization using a technique called dynamic nuclear polarization (DNP). This technique involves transferring polarization from the unpaired electrons of free radicals to hydrogen, carbon, nitrogen, or phosphorus nuclei in the sample being studied. This increases the polarization and makes it easier to discover the molecule's structural features.

DNP is usually performed by continuously irradiating the sample with high-frequency microwaves, using an instrument called a gyrotron. This improves NMR sensitivity by about 100-fold. However, this method requires a great deal of power and doesn't work well at higher magnetic fields that could offer even greater resolution improvements.

To overcome that problem, the MIT team came up with a way to deliver short pulses of microwave radiation, instead of continuous microwave exposure. By delivering these pulses at a specific frequency, they were able to enhance polarization by a factor of up to 200. This is similar to the improvement achieved with traditional DNP, but it requires only 7 percent of the power, and unlike traditional DNP, it can be implemented at higher magnetic fields.

"We can transfer the polarization in a very efficient way, through efficient use of microwave irradiation," Tan says. "With continuous-wave irradiation, you just blast microwave power, and you have no control over phases or pulse length."

Saving time

With this improvement in sensitivity, samples that would previously have taken nearly 110 years to analyze could be studied in a single day, the researchers say. In the Science Advances paper, they demonstrated the technique by using it to analyze standard test molecules such as a glycerol-water mixture, but they now plan to use it on more complex molecules.

One major area of interest is the amyloid beta protein that accumulates in the brains of Alzheimer's patients. The researchers also plan to study a variety of membrane-bound proteins, such as ion channels and rhodopsins, which are light-sensitive proteins found in bacterial membranes as well as the human retina. Because the sensitivity is so great, this method can yield useful data from a much smaller sample size, which could make it easier to study proteins that are difficult to obtain in large quantities.

Credit: 
Massachusetts Institute of Technology

Targeting 'hidden pocket' for treatment of stroke and seizure

image: A 93-series chemical compound joins with a neuron's NMDA receptor. Compounds like this have a high affinity for the receptor due to a unique motif that is drawn into a hidden pocket (illustrated by the dotted line) when in an acidic environment.

Image: 
Furukawa Lab/CSHL

Cold Spring Harbor, NY -- The ideal drug is one that only affects the exact cells and neurons it is designed to treat, without unwanted side effects. This concept is especially important when treating the delicate and complex human brain. Now, scientists at Cold Spring Harbor Laboratory have revealed a mechanism that could lead to this kind of long-sought specificity for treatments of strokes and seizures.

According to Professor Hiro Furukawa, the senior scientist who oversaw this work, "it really comes down to chemistry."

When the human brain is injured, such as during a stroke, parts of the brain begin to acidify. This acidification leads to the rampant release of glutamate.

"We suddenly get more glutamate all over the place that hits the NMDA receptor and that causes the NMDA receptor to start firing quite a lot," explains Furukawa.

In a healthy brain, the NMDA (N-methyl-D-aspartate) receptor is responsible for controlling the flow of electrically charged atoms, or ions, in and out of a neuron. The "firing" of these signals is crucial for learning and memory formation. However, overactive neurons can lead to disastrous consequences. Abnormal NMDA receptor activities have been observed in various neurological diseases and disorders, such as stroke, seizure, depression, and Alzheimer's disease, and in individuals born with genetic mutations.

Furukawa's team, in collaboration with scientists at Emory University, looked for a way to prevent over-firing NMDA receptors without affecting normal regions of the brain.

Previous work had identified promising compounds, called the 93-series, suited to this purpose. Eager to join with the NMDA receptor in an acidic environment, these compounds downregulate the receptor activity, even in the presence of glutamate, thereby preventing excessive neuronal firing.

However, the 93-series compounds sometimes cause the unwanted consequence of inhibiting the NMDA receptors in healthy parts of the brain. That's why Furukawa and his colleagues set out to determine how they could improve upon the unique features of the 93-series.

Their latest results are detailed in Nature Communications.

Using a method known as X-ray crystallography, the researchers were able to see that a motif on the 93-series compound slots into a tiny, never-before-noticed pocket within the NMDA receptor. Experimentation showed that this pocket is particularly sensitive to the pH around it.

"Now that we see the pH-sensitive pocket within NMDA receptors, we can suggest a different scaffold," Furukawa explained. "We can redesign the 93-series chemical compound--let's call it 94-series--in such a way that it can more effectively fit to that pocket and a higher pH sensitivity can be obtained. So, we're basically just starting our effort to do that."

Credit: 
Cold Spring Harbor Laboratory

Genetic variants implicated in development of schizophrenia

Genetic variants which prevent a neurotransmitter receptor from working properly have been implicated in the development of schizophrenia, according to research by the UCL Genetics Institute.

The N-methyl-D-aspartate receptor (NMDAR) is a protein which normally carries signals between brain cells in response to a neurotransmitter called glutamate. Previous research has shown that symptoms of schizophrenia can be caused by drugs which block NMDAR or by antibodies which attack it.

Genetic studies have also suggested that molecules associated with NMDAR might be involved in the development of schizophrenia.

"These results, and others which are emerging, really focus attention on abnormalities in NMDAR functioning as a risk factor for schizophrenia. Given all the pre-existing evidence it seems tempting to conclude that genetic variants which by one means or another reduce NMDAR activity could increase the risk of schizophrenia," said Professor David Curtis (UCL Genetics, Evolution & Environment), the psychiatrist who carries out the research.

For the current study, published today in Psychiatric Genetics, the DNA sequences of over 4,000 people with schizophrenia and 5,000 controls were used to study variants in the three genes which code for NMDAR (GRIN1, GRIN2A and GRIN2B) and a fourth (FYN), for a protein called Fyn which controls NMDAR functioning.

By comparing variants to the normal DNA sequence, it was possible to predict the specific rare variants which would either prevent each gene from being read or which would produce a change in the sequence of amino acids it coded for such that the protein product would not function correctly.

The investigation revealed an excess of such disruptive and damaging variants in FYN, GRIN1 and GRIN2B among the people with schizophrenia.

While the numbers of variants involved are too small for firm conclusions to be drawn, the results are consistent with previous evidence that impaired NMDAR functioning can produce symptoms of schizophrenia. They also support the hypothesis that rare genetic variants which lead to abnormal NMDAR function could increase the risk of developing schizophrenia in 0.5% of cases.

"For many years we've been aware that drugs such as phencyclidine, which blocks the receptor, can cause symptoms just like those which occur in schizophrenia. More recently it's been recognised that sometimes people produce antibodies which attack this receptor and again they have similar symptoms," said Professor Curtis.

Large genetic studies have increasingly accumulated evidence suggesting that there is an association between schizophrenia and genes associated with NMDAR but these typically involve very large numbers of genes in a rather non-specific way.

The UCL researchers focused closely on just four genes and used computer programs to predict the effects of rare variants in these genes. When they did this, they found that more of the variants predicted to impair functioning are found in the people with schizophrenia than people without schizophrenia.

For example, variants in the gene for Fyn were seen in 14 schizophrenia cases and three controls. When the team looked at the predicted effect on the protein, they saw that all three of the variants in controls affected a region with no known function whereas 10 of the variants in schizophrenia cases occurred in functional domains of the protein.
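A case-control comparison of rare-variant carrier counts like the one above is commonly assessed with a one-sided Fisher's exact test on a 2x2 table. The following stdlib-only sketch uses the FYN carrier counts quoted in the text together with assumed cohort sizes of 4,000 cases and 5,000 controls (the study's exact denominators may differ); it is an illustration of the statistical idea, not the paper's analysis code.

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    probability of observing >= a carriers among cases, computed from
    the hypergeometric distribution with fixed margins."""
    n = a + b + c + d
    p = 0.0
    for k in range(a, min(a + b, a + c) + 1):
        p += comb(a + c, k) * comb(b + d, a + b - k) / comb(n, a + b)
    return p

# Hypothetical table: 14 carriers among 4,000 cases vs 3 among 5,000
# controls (cohort sizes are assumptions for illustration).
cases_with, cases_without = 14, 4000 - 14
ctrls_with, ctrls_without = 3, 5000 - 3
p = fisher_exact_greater(cases_with, cases_without, ctrls_with, ctrls_without)
# p comes out well below 0.05, consistent with an excess of carriers
# among cases under these assumed denominators.
```

In practice a library routine such as `scipy.stats.fisher_exact` would be used; the hand-rolled version here just keeps the example dependency-free.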

As the variants are rare, the researchers plan on following up by studying a larger sample set. Professor Curtis is part of a collaboration which will look at DNA sequence data from over 30,000 subjects with schizophrenia. They also plan on studying the effects of these specific variants in model systems such as cultures of nerve cells to precisely characterise their effects on the cell function.

Professor Curtis concluded, "Currently available medications for schizophrenia are not directed at NMDAR. However if we can conclusively demonstrate ways in which its function is abnormal then this should further stimulate attempts to develop new drugs which target this system, hopefully leading to safer and more effective treatments."

Credit: 
University College London

Bioethicists call for oversight of consumer 'neurotechnologies' with unproven benefits

PHILADELPHIA -The marketing of direct-to-consumer "neurotechnologies" can be enticing: apps that diagnose a mental state, and brain devices that improve cognition or "read" one's emotional state. However, many of these increasingly popular products aren't fully supported by science and have little to no regulatory oversight, which poses potential health risks to the public. In a new piece published in the journal Science this week, two bioethicists from Penn Medicine and the University of British Columbia suggest the creation of a working group that would further study, monitor, and provide guidance for this growing industry - which is expected to top $3 billion by 2020.

"There's a real thirst for knowledge about the efficacy of these products from the public, which remains unclear because of this lack of oversight and gap in knowledge," said lead author Anna Wexler, PhD, an instructor in the department of Medical Ethics and Health Policy at the Perelman School of Medicine at the University of Pennsylvania. "We believe a diverse, dedicated group would help back up or refute claims made by companies, determine what's safe, better understand their use among consumers, and address possible ethical concerns."

The group, made up of researchers, ethicists, funders, and industry experts, among others, would serve, the authors wrote, as a clearinghouse for regulatory agencies such as the U.S. Food and Drug Administration (FDA) and the Federal Trade Commission (FTC); for third-party organizations that monitor advertising claims; for industry; for social and medical scientists; for funding agencies; and for the public at large.

While some of these techniques are used in clinical and research laboratory settings - for example, electroencephalography (EEG) devices are used to diagnose and treat epilepsy -- many consumer-grade versions of neurotechnology devices are only loosely based in science. It is unclear whether the laboratory data collected to test them is applicable to consumer-grade products, leading many in the scientific world to question the efficacy of, and advocate for increased regulation of these readily available techniques and products.

For example, some consumer neurostimulation devices may pose dangers, such as skin burns. There are also potential psychological harms from many consumer EEG devices that purport to "read" one's emotional state.

"If a consumer EEG device erroneously shows that an individual is in a stressed state, this may cause him or her to become stressed or to enact this stressed state, resulting in unwarranted psychological harm," the authors wrote. Also, a smartphone wellness app that diagnoses symptoms of depression does so without medical support structures, such as a psychologist or mental health counselor.

The devices have thrived in part because of minimal regulatory oversight. Many fall outside of FDA jurisdiction because they are categorized as "low-risk" wellness products, paving an easier path to the market. Also, investors interested in financing these devices have publicly stated that it would be difficult to invest in them if they required an FDA approval, the authors said, which would mean rigorous testing and time.

Currently, most of the regulatory burden for consumer neurotechnology falls to the FTC, which has the authority to act on claims of false advertising. However, with thousands of health and wellness apps and devices, that oversight is ill-suited to monitor and regulate the industry effectively, they said.

The authors' proposal is two-fold. First, create an independent working group that would survey the main domains of direct-to-consumer neurotechnologies and provide succinct appraisals of potential harms and probable efficacy. Rather than evaluating every product or providing overarching framing questions, the proposed working group's appraisals would outline the evidence base and potential risks, and identify gaps in current knowledge.

Second, this working group would be responsible for disseminating those appraisals to the public and partnering with organizations well positioned to communicate with key consumer groups.

"Given that government agencies and private enterprises are actively funding research into new methods of modulating brain function," the authors wrote, "the present generation of [direct-to-consumer] neurotechnologies may be only the tip of the iceberg--making it all the more imperative to create an independent body to monitor developments in this domain."

Credit: 
University of Pennsylvania School of Medicine

Salad, soda and socioeconomic status: Mapping a social determinant of health in Seattle

image: Seattle map with statistical overlays

Image: 
UW School of Public Health

Seattle residents who live in waterfront neighborhoods tend to have healthier diets compared to those who live along Interstate-5 and Aurora Avenue, according to new research on social disparities from the University of Washington School of Public Health. The study used local data to model food consumption patterns by city block. Weekly servings of salad and soda served as proxies for diet quality.

The dramatic geographic disparities between salad eaters and soda drinkers were driven by house prices, according to the study. The lowest property values were associated with less salad and more soda; the opposite was true of the highest property values, after adjusting for demographics.

This is the first study to model eating patterns and diet quality at the census-block level, the smallest geographic unit used by the U.S. Census Bureau. The paper, published Jan. 9 in the journal Social Science and Medicine - Population Health, provides a new area-based tool to identify communities most in need of interventions to increase fruit and vegetable consumption.

"Our dietary choices and health are determined to a very large extent by where we live," said the study's lead author, Adam Drewnowski, professor of epidemiology and director of the Nutritional Sciences Program and Center for Public Health Nutrition at the School. "In turn, where we live can be determined by education, incomes and access to both material and social resources. We need a closer look at the socioeconomic determinants of health."

Researchers geo-localized dietary data of nearly 1,100 adult participants of the Seattle Obesity Study based on their home addresses, and linked them to residential property values obtained from the King County tax assessor. Information on age, gender and race/ethnicity as well as education and annual household income were gathered via telephone surveys. Participants were also asked how often they ate salad and/or drank soda. Healthy Eating Index scores, a measure of diet quality, were calculated for each participant. Scores range from 0 to 100 with higher scores indicating better diet quality.
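The area-based linkage described above amounts to stratifying diet-quality scores by residential property value. A hypothetical sketch of that stratification (the records, values, and function name are invented for illustration, not the study's data or code):

```python
# Hypothetical records: (residential property value in $,
#                        Healthy Eating Index score, 0-100)
participants = [
    (180_000, 48), (210_000, 52), (250_000, 55),
    (400_000, 61), (520_000, 66), (700_000, 72),
]

def hei_by_value_tertile(records):
    """Mean Healthy Eating Index score within property-value tertiles,
    ordered from the lowest-value tertile to the highest."""
    ranked = sorted(records)  # sort by property value
    n = len(ranked)
    tertiles = [ranked[:n // 3], ranked[n // 3:2 * n // 3], ranked[2 * n // 3:]]
    return [sum(score for _, score in t) / len(t) for t in tertiles]

print(hei_by_value_tertile(participants))
```

With the invented records above, mean diet quality rises monotonically across the tertiles, mirroring the gradient the study reports at the census-block level.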

People who ate more salad tended to have higher Healthy Eating Index scores associated with more healthy eating behaviors. People who drank more soda tended to have lower scores.

While the disparities of soda consumption by neighborhood were clear, there was no significant difference by age, income or education. However, researchers did find that Black and Hispanic residents reported more frequent soda consumption than White residents. Women tended to eat more salad than men, as did adults age 55 and older. Adults with some college education or more consumed salad more often every week than those with only a high school education or less. Also, people earning $50,000 or more ate more salad per week than those earning less than $50,000 annually. There was no significant difference in salad consumption by race or ethnicity.

"Salad and soda are the two hallmarks of a healthy versus an unhealthy diet," Drewnowski said. "We now show that they tend to be consumed by different people with different education and incomes, living in different neighborhoods in Seattle."

Researchers selected salad and soda because they have been used as proxies for diet quality in the Behavioral Risk Factor Surveillance System. They are also frequent targets of obesity-prevention policy interventions. A good example is Seattle's so-called soda tax, which took effect in January 2018.

"Interventions towards promoting healthier diets tend to focus on taxing soda, which is perceived as too cheap, and reducing the price of fresh produce, which is perceived as too expensive," Drewnowski said. "Initiatives to replace soda with salad come across the issue of socioeconomic status and income purchasing power, and those are very complex issues."

As more states and municipalities seek to develop targeted interventions for better health, they will need place-based tools to identify high-risk or high-need communities, according to the study.

The Seattle Obesity Study was a population-based study of 2,001 male and female residents of King County, Washington. Its aim was to examine the role of access to food in influencing dietary choices and, thereby, contributing to disparities in obesity. The study recently expanded to include Pierce and Yakima counties, and Drewnowski and his team plan to conduct similar research in those regions.

"We look forward to seeing those results," Drewnowski said. "Yakima has a large population of Hispanics and the closest supermarket is 20 miles away; not to mention obesity looks very different in Yakima than it does in Seattle."

Credit: 
University of Washington

Dynamic coevolutionary relationship between birds and their feather mites

image: Species of mite rouessartia bifurcata.

Image: 
Heather Proctor (University of Alberta)

A genetic study reveals that birds maintain a dynamic coevolutionary relationship with their feather mites. The study, which involved the Estación Biológica de Doñana of the CSIC, has just been published in the journal Molecular Ecology.

"This study shows that host-switching - or colonization - of symbionts (in this case, bird feather mites) onto other host species is a frequent phenomenon, even for highly host-specific symbionts, suggesting a dynamic coevolutionary and codiversification scenario," explains Roger Jovani, CSIC researcher at the Estación Biológica de Doñana.

In this work, the researchers carried out a massive genetic study of the associations between birds and their feather mites (approximately 25,500 mites analysed from some 1,100 birds), with the aim of identifying mites on unexpected host species. "The results show that, surprisingly, 7.4% of the hosts and 4.8% of the mites were involved in unexpected associations," says Jovani.

The researcher adds: "Furthermore, we found non-random patterns behind the unexpected associations, which demonstrates the relevance of ecological factors in regulating these dynamics: a higher frequency within the modules of the ecological mite-bird network, and similar body sizes among host species that shared unexpected mite species."

This work revises the traditional view that the relationship of symbionts with their hosts is highly stable at an ecological scale. "For example, in the case of host-specific symbionts (those associated with very few host species), colonization or host-switching was thought to be a very rare phenomenon, and therefore coevolutionary and cospeciation processes were assumed to be mainly responsible for their coevolutionary dynamics," says Jorge Doña, lead author of the study, currently head of R+D at AllGenetics and a Postdoctoral Research Affiliate at the University of Illinois (USA).

Bird feather mites are permanent symbionts of birds, living out their entire life cycle on their host. They are highly host-specific: each species of these mites inhabits one or a few bird species, and it was therefore assumed that they maintained a highly stable relationship with their hosts at both ecological and evolutionary scales.

"However, recent studies at an evolutionary scale have inferred that speciation resulting from host-switching (speciation upon reaching a new host species), rather than cospeciation (symbiont speciation following its host's speciation), is the main process behind the diversification of these symbionts," adds Jovani.

The relevance of the symbionts

Symbionts, which maintain a close parasitic, commensal or mutualistic relationship with their hosts, constitute the largest and most diverse group of organisms on the planet and are crucial for ecosystem stability. Studying the ecological and evolutionary aspects of symbionts is therefore vital to understanding processes such as the emergence of transmissible diseases, crop pests, and the effects of climate change on biodiversity.

Credit: 
Spanish National Research Council (CSIC)