Using advanced brain imaging techniques, researchers at Georgetown University Medical Center have watched how humans use both lower and higher brain processes to learn novel tasks, an advance they say may help speed up the teaching of new skills as well as offer strategies to retrain people with perceptual deficits due to autism.
In the March 15 issue of Neuron, the research team provides the first human evidence for a two-stage model of how a person learns to place objects into categories: discerning, for example, that a green apple, and not a green tennis ball, belongs to "food." They describe it as a complex interplay between neurons that process stimulus shape ("bottom-up") and more sophisticated brain areas that discriminate between these shapes to categorize and "label" that information ("top-down").
A human can't function without the ability to sort objects into categories and organize them in fluid ways, said the study's lead author, Maximilian Riesenhuber, Ph.D., principal investigator of the Laboratory for Computational Cognitive Neuroscience. "We make sense of the world by learning to recognize objects as members of categories such as 'food,' 'friend,' or 'foe,' but it has not been clear how the human brain does this," he said.
The researchers theorized that a very simple yet efficient way of doing this kind of learning would be for the brain to first learn how objects vary in shape, and then, in a second stage, to learn which shapes go with which labels, allowing the brain to sort an object into different labeled "bins" when necessary. For example, a green apple and a green tennis ball are both green and round, but only the apple can be eaten and only the tennis ball belongs to a sport.
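The two-stage scheme can be caricatured in a few lines of code. The sketch below is purely illustrative, not the study's actual model: the feature names, their values, and the nearest-prototype labeling rule are all assumptions made for this example.

```python
# Hypothetical sketch of two-stage category learning; every feature name,
# value, and category here is invented for illustration.

# Stage 1 ("bottom-up"): a shape-tuned stage that turns an object into a
# feature vector, with no reference to any category label.
def shape_features(obj):
    # Toy shape features on a 0-1 scale.
    return (obj["roundness"], obj["greenness"], obj["fuzz"])

# Stage 2 ("top-down"): a separate stage that learns which regions of
# shape-feature space map onto which label, here via category prototypes
# (the mean feature vector of each labeled "bin").
def learn_label_map(labeled_examples):
    sums, counts = {}, {}
    for obj, label in labeled_examples:
        feats = shape_features(obj)
        total = sums.setdefault(label, [0.0] * len(feats))
        for i, value in enumerate(feats):
            total[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def categorize(obj, prototypes):
    # Assign the label whose prototype is nearest in shape-feature space.
    feats = shape_features(obj)
    return min(prototypes,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(feats, prototypes[lab])))

# A green apple and a green tennis ball share most shape features, but the
# labeled examples still pull them into different "bins".
train = [
    ({"roundness": 0.9, "greenness": 0.8, "fuzz": 0.0}, "food"),
    ({"roundness": 0.9, "greenness": 0.8, "fuzz": 0.9}, "sport"),
]
prototypes = learn_label_map(train)
print(categorize({"roundness": 0.85, "greenness": 0.7, "fuzz": 0.8}, prototypes))
```

The point of the sketch is the division of labor: stage 1 represents shape without labels, and only stage 2 ties those representations to categories, so the same shape representation can later be reused for new labelings.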
In this study, the research team asked human volunteers to undertake a series of tasks presented to them on a computer screen. All of them involved cars that were generated with a computer graphics morphing system, allowing the researchers to generate thousands of cars with subtle shape differences. "In the beginning, all the cars looked very similar to the participants because they did not have any experience with them," said Riesenhuber. "It's like if a person had never seen faces before, they would all look similar at first."
In the first experiment, the participants looked at a series of cars presented at different parts of the screen and performed simple position judgments on the images, while their brain activity was measured using an advanced functional magnetic resonance imaging (fMRI) technique that made it possible to probe neuronal tuning more directly than in previous studies. Investigators found that cars activated a particular region in participants' brains, the lateral occipital cortex, which had also been found by other studies to be important for object recognition.
Then the volunteers were given several hours of training using images of the cars. In these sessions, participants had to learn how to group the cars into two distinct categories. This was easy at first, Riesenhuber said, because the cars were obviously not alike, but then the researchers began to "tighten the screws" by making the two categories increasingly similar.
"Over the course of the training, the participants got better at finer and finer category discriminations," Riesenhuber said. "This represents a crucial step in category learning, where small differences in shape can have a big impact on category labels, as in the tennis ball and apple example, and where big differences in shape, such as between an apple and a banana, can have no impact on the label, such as when categorizing both as 'fruit'."
Now that the volunteers had learned how to categorize small shape changes, they were shown the cars from the first experiment while again being scanned, allowing the researchers to compare how training had enhanced the brain's ability to process car shapes. They found again that cars selectively activated an area in lateral occipital cortex, but that now neurons in that area appeared to be finely tuned to small car shape differences.
In a third scan, the investigators asked subjects to categorize the same car images shown in the other scans. This time, two areas of the brain, the now-familiar area in the lateral occipital cortex as well as an area in the lateral prefrontal cortex, were found to be active when processing the images. "The lateral prefrontal cortex is known to be the center of cognitive control," Riesenhuber said. "That is where the brain connects physical input to an action or response, deciding what task to do and how to respond to a stimulus."
In essence, fMRI was showing that both the higher and lower brain regions had worked together to learn a task, he said.
These findings might be helpful in understanding disorders that involve differences in the interaction of bottom-up and top-down information in the brain, such as autism or schizophrenia, Riesenhuber said. They also suggest how the learning of visual skills could be enhanced by directly monitoring neuronal activity. "This could be useful, for instance, to speed up learning to detect targets in unfamiliar imaging modalities, such as baggage X-rays or radar images," he said.
Co-authors include, from Georgetown University, first author Xiong Jiang, Ph.D., Evan Bradley, B.S., Regina Rini, B.A., and John VanMeter, Ph.D.; and Thomas Zeffiro, M.D., Ph.D., from Massachusetts General Hospital.
Written from a news release by Georgetown University Medical Center.