Tech

Small changes, big gains: Low-cost techniques dramatically boost learning in STEM classes

image: Low-cost, active teaching techniques -- particularly group work and worksheets -- substantially improve learning in university science classes, according to a new study involving 3,700 University of British Columbia (UBC) biology students.

Image: 
Paul H. Joseph, University of British Columbia

Low-cost, active teaching techniques--particularly group work and worksheets--substantially improve learning in university science classes, according to a new study involving 3,700 University of British Columbia (UBC) biology students.

"Many university STEM classes continue to rely on conventional lectures, despite substantial research that suggests active teaching techniques like peer instruction and group discussion are more effective," said UBC researcher Patricia Schulte, senior author of the study, published this week in PLOS ONE.

"But this confirms that group work significantly enhances how well students grasp and retain concepts. And strikingly, having students go through worksheets in groups--an easily implemented, low cost classroom technique--resulted in particularly strong improvements in scores."

Increasing the class time dedicated to group work by just 10 per cent (five minutes in a 50-minute class) correlated with roughly a three per cent improvement in student performance. That equates to almost one letter grade, depending on the institution. Using in-class worksheets--structured handouts that contain a few questions or tasks related to a concept--resulted in even larger increases in student scores.

In general, classes had to spend half or more of their time on group work to see a significant boost in learning.

The study is the first large-scale, direct observation of classes across a curriculum that examines the impact of different active learning approaches. The researchers observed classroom practices across 31 lecture sections in the biology program at UBC, classifying classes by the degree of group work conducted in each. They administered tests to more than 3,700 students at the beginning and end of term to assess the extent of their learning, independent of regular course exams. Most students were in first and second year.

Most of the previous work on active learning is based on instructor self-reports, qualitative surveys, or indirect observations of active learning, rather than direct assessments of teaching practices and independent assessments of learning.

Credit: 
University of British Columbia

Estimate of the national burden of HPV-positive oropharyngeal head and neck cancers

Over the last two decades, there has been a rise in head and neck cancers in the oropharynx, a region in the back of the throat that includes the tonsils and the base of the tongue. The rise of this type of cancer has been linked to the human papillomavirus (HPV), a very common sexually transmitted disease. Investigators from the Dana-Farber/Brigham and Women's Cancer Center (DFBWCC) have conducted the largest and most comprehensive study to date on the incidence of HPV-positive oropharyngeal head and neck squamous cell carcinoma (OPSCC) in the United States population, finding that 75 percent of oropharynx cancers are related to HPV. They report that the U.S. incidence of HPV-related throat cancer is 4.6 per 100,000 people, peaking in those aged 60-64. Their results are published in Cancer Epidemiology, Biomarkers and Prevention.

"Patients with HPV-related oropharynx cancers are now one of the most common patients we see in the DFBWCC Head and Neck Oncology Program," said corresponding author Danielle Margalit, MD, MPH, a radiation oncologist at the Brigham/Dana-Farber. "Through our study, we now have a clearer picture of the extent to which oropharynx cancer affects Americans. This reinforces just how important it is to educate patients about the HPV vaccine, the importance of quitting smoking as well as safe sex practices -- particularly oral sex, which is how we believe oral HPV is mostly contracted."

HPV-positive OPSCC is distinct from HPV-negative OPSCC in its risk factors and in its prognosis. While risk factors for HPV-negative OPSCC include drinking and smoking, the largest risk factor for HPV-positive OPSCC is having multiple sexual partners, although anyone who is sexually active is at risk of being exposed to HPV. HPV-positive OPSCC tends to be more treatable than HPV-negative OPSCC, and those who have it tend to live longer.

The team, led by first author Brandon Mahal, MD, looked at people within the Surveillance, Epidemiology, and End Results (SEER) Head and Neck with HPV Status database, created by the National Cancer Institute (NCI). This database provides a representative sample of the U.S. population by age and region. The investigators analyzed those with known HPV-positive and known HPV-negative OPSCC, as well as people with OPSCC who had unknown HPV status. They then estimated the incidence of HPV-positive OPSCC per 100,000 people, using software provided by the SEER database. The team also looked at which demographic groups had the highest incidence of HPV-positive OPSCC.

The study found the incidence of HPV-positive OPSCC to be 4.61 cases per 100,000 people. It was most common in white men under the age of 65, for whom it represents the sixth most common type of cancer. While previous studies have estimated that 70 percent of OPSCC cases were caused by HPV, this study found it to be closer to 75 percent.

The group of OPSCC patients with unknown HPV status in the database represents one limitation of the study, according to the researchers. Also, the data were taken from the 2013-2014 cycle, so the incidence rates might increase or decrease once the database is updated.

"From a public health perspective, the best way to address the rise in HPV-positive OPSCC is through preventative measures," said Margalit. "The HPV vaccine targets the type of HPV that causes the majority of OPSCC and is expected to decrease the cases of HPV-positive OPSCC in the future. The vaccine is recommended for children through age 26 and for some older adults as well. OPSCC requires intensive treatment once it develops, and, unfortunately, we're not seeing everyone get the vaccines that could prevent it."

Credit: 
Brigham and Women's Hospital

New MRI computing technique can spot scarred heart muscle without damaging the kidneys

image: Displacement comparison at the end-systolic frame and final frame. The three patients (V6, V10, V16) with different left-ventricle walls are shown. Point-to-surface distance is a measure to estimate the distance of a point from the reference surface.

Image: 
WMG University of Warwick

Traditional MRI scans use the metal gadolinium, which reacts to the scanner's magnetic field to highlight areas of the heart muscle that are not functioning efficiently; however, gadolinium can affect kidney function.

The new 3D MRI computing technique calculates strain in the heart muscle, showing which regions are not functioning properly, without damaging other organs, researchers at WMG, University of Warwick, have found.

The new technique is less stressful for the patient

3D MRI computing can measure strain in the heart using an image registration method. The traditional method involves giving the patient a dose of gadolinium, which can affect the kidneys, researchers at WMG, University of Warwick, have found.

MRIs are used to diagnose cardiac conditions such as cardiomyopathy, heart attacks, irregular heartbeats and other heart diseases.

Traditionally, when a patient goes for an MRI scan they are given a dose of gadolinium, which reacts to the magnetic field of the scanner to produce an image of the protons in the metal realigning with the field. The faster the protons realign, the brighter the image features, which can show where the dead muscle is in the heart and inform the diagnosis.

The dose of gadolinium can have detrimental effects on other parts of the body, in particular a risk of kidney failure.

A new 3D MRI computing technique developed by scientists at WMG, University of Warwick, published today, 28th August, in the journal Scientific Reports under the title 'Hierarchical Template Matching for 3D Myocardial Tracking and Cardiac Strain Estimation', centres on a Hierarchical Template Matching (HTM) technique, which involves:

A numerically stable technique of LV myocardial tracking

A 3D extension of local weighted mean function to transform MRI pixels

A 3D extension of Hierarchical Template Matching model for myocardial tracking problems

This means there is no need for gadolinium, reducing the risk of damage to other organs.
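As a rough illustration of the tracking step at the heart of such methods (a sketch under assumptions, not the authors' published HTM algorithm), the Python fragment below estimates how a small patch of heart muscle moves between two MRI volumes by searching for the best-matching patch in the next frame; the frame arrays, patch size and search radius are hypothetical placeholders. Strain can then be estimated from how such displacements vary across neighbouring points.

# Illustrative sketch only (not the published HTM implementation): estimate the
# displacement of one myocardial point between two MRI volumes by template
# matching with normalized cross-correlation. All inputs are hypothetical.
import numpy as np

def match_patch_3d(frame_a, frame_b, point, half_size=3, search=4):
    """Return the voxel displacement of the patch around `point`."""
    z, y, x = point
    s = half_size
    template = frame_a[z-s:z+s+1, y-s:y+s+1, x-s:x+s+1]
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                patch = frame_b[z+dz-s:z+dz+s+1, y+dy-s:y+dy+s+1, x+dx-s:x+dx+s+1]
                p = (patch - patch.mean()) / (patch.std() + 1e-9)
                score = np.mean(t * p)  # normalized cross-correlation
                if score > best_score:
                    best_score, best_offset = score, (dz, dy, dx)
    return np.array(best_offset)

A hierarchical variant would typically repeat such matching from coarse to fine image scales, which is the general sense in which template matching is made "hierarchical".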

Professor Mark Williams, from WMG at the University of Warwick comments:

"Using 3D MRI computing technique we can see in more depth what is happening to the heart, more precisely to each heart muscles, and diagnose any issues such as remodelling of heart that causes heart failure. The new method avoids the risk of damaging the kidney opposite to what traditional methods do by using gadolinium."

Jayendra Bhalodiya, who conducted the research from WMG, University of Warwick adds:

"This new MRI technique also takes away stress from the patient, as during an MRI the patient must be very still in a very enclosed environment meaning some people suffer from claustrophobia and have to stop the scan, often when they do this they have to administer another dose of the damaging gadolinium and start again. This technique doesn't require a dosage of anything, as it tracks the heart naturally."

Credit: 
University of Warwick

Nuclear winter would threaten nearly everyone on Earth

If the United States and Russia waged an all-out nuclear war, much of the land in the Northern Hemisphere would be below freezing in the summertime, with the growing season slashed by nearly 90 percent in some areas, according to a Rutgers-led study.

Indeed, death by famine would threaten nearly all of the Earth's 7.7 billion people, said co-author Alan Robock, a Distinguished Professor in the Department of Environmental Sciences at Rutgers University-New Brunswick.

The study in the Journal of Geophysical Research-Atmospheres provides more evidence to support The Treaty on the Prohibition of Nuclear Weapons passed by the United Nations two years ago, Robock said. Twenty-five nations have ratified the treaty so far, not including the United States, and it would take effect when the number hits 50.

Lead author Joshua Coupe, a Rutgers doctoral student, and other scientists used a modern climate model to simulate the climatic effects of an all-out nuclear war between the United States and Russia. Such a war could send 150 million tons of black smoke from fires in cities and industrial areas into the lower and upper atmosphere, where it could linger for months to years and block sunlight. The scientists used a new climate model from the National Center for Atmospheric Research with higher resolution and improved simulations compared with a NASA model used by a Robock-led team 12 years ago.

The new model represents the Earth at many more locations and includes simulations of the growth of the smoke particles and ozone destruction from the heating of the atmosphere. Still, the climate response to a nuclear war from the new model was nearly identical to that from the NASA model.

"This means that we have much more confidence in the climate response to a large-scale nuclear war," Coupe said. "There really would be a nuclear winter with catastrophic consequences."

In both the new and old models, a nuclear winter occurs as soot (black carbon) in the upper atmosphere blocks sunlight and causes global average surface temperatures to plummet by more than 15 degrees Fahrenheit.

Because a major nuclear war could erupt by accident or as a result of hacking, computer failure or an unstable world leader, the only safe action that the world can take is to eliminate nuclear weapons, said Robock, who works in the School of Environmental and Biological Sciences.

Credit: 
Rutgers University

How your brain remembers motor sequences

image: Hierarchical organization of motor sequence

Image: 
National Institute of Information and Communications Technology

Abstract

Ever wondered what was going on in the brain of John Coltrane when he played the famous solo on his album Giant Steps? Researchers at the National Institute of Information and Communications Technology (NICT), Japan, and Western University, Canada, have succeeded in visualizing how information is represented in a widespread area in the human cerebral cortex during a performance of skilled finger movement sequences.

Contrary to the common assumption, the researchers found that overlapping regions in the premotor and parietal cortices represent the sequences at multiple levels of the motor hierarchy (e.g., chunks of a few finger movements, or chunks of a few chunks), whereas the individual finger movements (i.e., the lowest level in the hierarchy) were uniquely represented in the primary motor cortex. These results uncovered the first detailed map of cortical sequence representation in the human brain. The results may also provide clues for locating new candidate brain areas as signal sources for motor BCI applications, or for developing more sophisticated algorithms to reconstruct complex motor behavior.

The results were published online as Yokoi and Diedrichsen "Neural Organization of Hierarchical Motor Sequence Representations in the Human Neocortex" in Neuron on July 22, 2019.

Achievements

The best way to remember and produce long and complex motor sequences is to divide them into several smaller pieces recursively. For example, a musical piece may be remembered as a sequence of smaller chunks, with each chunk representing a group of often co-occurring notes. Such hierarchical organization has long been thought to underlie our control of motor sequences, from highly skillful actions, like playing music, to daily behavior, like making a cup of tea. Yet very little is known about how these hierarchies are implemented in our brains.

In a new study published in the journal Neuron, Atsushi Yokoi, Center for Information and Neural Networks (CiNet), NICT, and Jörn Diedrichsen, Brain and Mind Institute, Western Univ., provide the first direct evidence of how hierarchically organized sequences are represented in the population activity across the human cerebral cortex.

The researchers measured fine-grained fMRI activity patterns while human participants produced 8 different remembered sequences of 11 finger presses. "Remembering 8 different sequences of 11 finger presses is a tough task, so you will definitely need to organize them hierarchically," says Diedrichsen, the study's senior author and Western Research Chair for Motor Control and Computational Neuroscience at Western University, Canada. "To study a hierarchy, you would really need the sequences to have this much complexity. And currently it's very hard to train animals to learn such sequences," added Yokoi, the study's lead author, a former postdoctoral researcher in Diedrichsen's group dating back to when both were at the Institute of Cognitive Neuroscience, University College London, UK, and now a researcher at CiNet, NICT, Japan.

Through a series of careful behavioural analyses, the researchers could show that participants encoded the sequences in terms of a three-level hierarchy: (1) individual finger presses, (2) chunks consisting of two or three finger presses, and (3) the entire sequence consisting of four chunks. They could then characterize the fMRI activity patterns with respect to this hierarchy using machine learning techniques.
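To make the three levels concrete, here is a minimal sketch of how an 11-press sequence could be stored as four chunks of two or three presses each; the digits and chunk boundaries are invented for illustration and are not taken from the study.

# Hypothetical illustration of the three-level hierarchy described above:
# fingers are numbered 1-5; an 11-press sequence is stored as four chunks.
chunks = {
    "A": [1, 3, 2],   # a chunk groups two or three individual finger presses
    "B": [5, 4],
    "C": [2, 2, 4],
    "D": [3, 1, 5],
}
sequence = ["A", "B", "C", "D"]  # the entire sequence is an ordered list of chunks

# Expanding the hierarchy back down to the lowest level (individual presses):
presses = [finger for chunk in sequence for finger in chunks[chunk]]
assert len(presses) == 11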

As expected, the patterns in the primary motor cortex, the area that controls finger movements, seemed to depend only on the individual finger being moved, independent of its position in the sequence. Activity in higher-order motor areas, such as the premotor and parietal cortices, could clearly be shown to encode the sequential context at the level of chunks or the entire sequence. Thus, in contrast to the primary motor cortex, these areas "know" what was played before and what comes after the ongoing finger press.

For the first time, the study allowed insights into the organization of these higher-order representations. Surprisingly, the different levels of sequence information overlapped greatly. An unsupervised clustering approach further subdivided these areas into distinct clusters, each with a different mixing ratio of the representations, much like how one's iPhone storage is used. These results uncovered the first detailed map of cortical sequence representation in the human brain.

Study's impact

One common assumption in cognitive neuroscience has been that each level in the functional hierarchy would mirror the anatomical hierarchy, from the higher association cortices (e.g., premotor or parietal cortices) down to the primary sensorimotor cortices. The mysterious coexistence of a clear anatomical separation (i.e., individual finger vs. other representations) and an overlap (i.e., chunk and sequence representations) sheds new light on the classical question of the correspondence between functional and anatomical hierarchies. "It can be said the brain represents motor sequences in partly hierarchical, yet partly flat, ways."

"Although its functional role is still unclear, the anatomical overlap between chunk and sequence representations may suggest these representations in upper movement hierarchy may influence with each other to support flexible sequence production. This needs to be tested in the future study," Yokoi concluded.

Future prospects

The study also suggests possible loci from which brain signals could be recorded to control neural prosthetics that produce fluent movement sequences in potential BCI applications. The researchers also hope that it could contribute to developing new decoding algorithms that effectively combine information across the different hierarchical levels to reproduce movements.

Credit: 
National Institute of Information and Communications Technology (NICT)

How blood sugar levels affect risks in type 1 diabetes

image: Marcus Lind, professor of diabetology, Sahlgrenska Academy, University of Gothenburg.

Image: 
Photo by Anette Juhlin

A major new study on the association between blood glucose levels and risks of organ impairment in people with type 1 diabetes can make a vital contribution to diabetes care, in the researchers' view.

The Swedish study now published in BMJ (British Medical Journal) covers more than 10,000 adults and children with type 1 diabetes. Using the Swedish Diabetes Register, the researchers have been able to monitor the study participants for 8-20 years.

The researchers analyzed existing risks at various long-term blood glucose (sugar) levels, averaged over a two- to three-month period. The results of the study are particularly interesting given that there is no international consensus on the optimal blood glucose level to aim for.

For many years, a biomarker known as HbA1c has been used to measure mean blood glucose levels. In Sweden, the target HbA1c value in people with type 1 diabetes is 52 mmol/mol or below, and 47 or lower in children. Elsewhere in the world, the guidelines range from 48 to 58 mmol/mol, and are often higher in children than in adults.

The study makes it clear that a value above 52 mmol/mol is associated with an elevated risk of mild changes to the eyes and kidneys. Vision-threatening eye damage occurs mainly at substantially higher values. Staying at 52 or below thus reduces the risk of organs being affected, but a value below 48 showed no further risk reduction.

"We were unable to see that fewer instances of organ damage occurred at these lower levels. As for loss of consciousness and cramp, which are unusual, low blood glucose caused a 30 percent rise in risk. Patients with low HbA1c need to make sure they don't have excessively low glucose levels, fluctuations or efforts in managing their diabetes," says Marcus Lind, professor of diabetology and first author.

In the study, Marcus Lind -- a professor at Sahlgrenska Academy, University of Gothenburg, and senior consultant at the NU Hospital Group in Uddevalla, Sweden -- shared primary responsibility with Johnny Ludvigsson, a senior professor at Linköping University, whose specialty is childhood diabetes.

"Knowing more about the association between blood glucose level and risk is extremely important since the health care services, the community, patients and their parents make heavy use of resources in attaining a particular blood glucose level," Ludvigsson says.

"Attaining a low HbA1c value may, in some cases, require children to be woken up several times a night, plus extra glucose monitoring and strict attention to diet and physical activity day after day, which can be extremely burdensome."

Credit: 
University of Gothenburg

Smarter experiments for faster materials discovery

image: This animation shows a comparison between a traditional grid measurement (left) of a sample and a measurement steered by the newly developed decision-making algorithm (right). The comparison shows that the algorithm can identify the edges and inner part of the sample and focuses the measurement in these regions to gain more knowledge about the sample.

Image: 
Brookhaven National Laboratory

UPTON, NY-- A team of scientists from the U.S. Department of Energy's Brookhaven National Laboratory and Lawrence Berkeley National Laboratory designed, created, and successfully tested a new algorithm to make smarter scientific measurement decisions. The algorithm, a form of artificial intelligence (AI), can make autonomous decisions to define and perform the next step of an experiment. The team described the capabilities and flexibility of their new measurement tool in a paper published on August 14, 2019 in Scientific Reports.

From Galileo and Newton to the recent discovery of gravitational waves, performing scientific experiments to understand the world around us has been the driving force of our technological advancement for hundreds of years. Improving the way researchers do their experiments can have tremendous impact on how quickly those experiments yield applicable results for new technologies.

Over the last decades, researchers have sped up their experiments through automation and an ever-growing assortment of fast measurement tools. However, some of the most interesting and important scientific challenges--such as creating improved battery materials for energy storage or new quantum materials for new types of computers--still require very demanding and time-consuming experiments.

By creating a new decision-making algorithm as part of a fully automated experimental setup, the interdisciplinary team from two of Brookhaven's DOE Office of Science user facilities--the Center for Functional Nanomaterials (CFN) and the National Synchrotron Light Source II (NSLS-II)--and Berkeley Lab's Center for Advanced Mathematics for Energy Research Applications (CAMERA) offers the possibility to study these challenges in a more efficient fashion.

The challenge of complexity

The goal of many experiments is to gain knowledge about the material that is studied, and scientists have a well-tested way to do this: They take a sample of the material and measure how it reacts to changes in its environment.

A standard approach for scientists at user facilities like NSLS-II and CFN is to manually scan through the measurements from a given experiment to determine the next area where they might want to run an experiment. But access to these facilities' high-end materials-characterization tools is limited, so measurement time is precious. A research team might only have a few days to measure their materials, so they need to make the most out of each measurement.

"The key to achieving a minimum number of measurements and maximum quality of the resulting model is to go where uncertainties are large," said Marcus Noack, a postdoctoral scholar at CAMERA and lead author of the study. "Performing measurements there will most effectively reduce the overall model uncertainty."

As Kevin Yager, a co-author and CFN scientist, pointed out, "The final goal is not only to take data faster but also to improve the quality of the data we collect. I think of it as experimentalists switching from micromanaging their experiment to managing at a higher level. Instead of having to decide where to measure next on the sample, the scientists can instead think about the big picture, which is ultimately what we as scientists are trying to do."

"This new approach is an applied example of artificial intelligence," said co-author Masafumi Fukuto, a scientist at NSLS-II. "The decision-making algorithm is replacing the intuition of the human experimenter and can scan through the data and make smart decisions about how the experiment should proceed."

More information for less?

In practice, before starting an experiment, the scientists define a set of goals they want to get out of the measurement. With these goals set, the algorithm looks at the previously measured data while the experiment is ongoing to determine the next measurement. On its search for the best next measurement, the algorithm creates a surrogate model of the data, which is an educated guess as to how the material will behave in the next possible steps, and calculates the uncertainty--basically how confident it is in its guess--for each possible next step. Based on this, it then selects the most uncertain option to measure next. The trick is that, by picking the most uncertain step to measure next, the algorithm maximizes the amount of knowledge it gains by making that measurement. The algorithm not only maximizes the information gain during the measurement, it also defines when to end the experiment by figuring out the moment when any additional measurements would not result in more knowledge.
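The release does not spell out the algorithm's internals, but the loop it describes (fit a surrogate model to the data collected so far, estimate the uncertainty at each candidate point, measure where uncertainty is largest, and stop when no measurement would add much knowledge) can be sketched roughly with a Gaussian-process surrogate. The library choice, kernel, candidate grid and stopping threshold below are assumptions for illustration, not details from the paper.

# Rough sketch of uncertainty-driven autonomous measurement in the spirit of the
# loop described above. measure(), the candidate grid, the kernel and the
# stopping threshold are hypothetical stand-ins, not the published algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def autonomous_scan(measure, candidates, n_init=5, max_steps=100, tol=1e-3):
    rng = np.random.default_rng(0)
    X = list(candidates[rng.choice(len(candidates), size=n_init, replace=False)])
    y = [measure(x) for x in X]
    for _ in range(max_steps):
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        surrogate.fit(np.array(X), np.array(y))
        _, sigma = surrogate.predict(candidates, return_std=True)
        if sigma.max() < tol:                  # nothing left to learn: stop
            break
        x_next = candidates[np.argmax(sigma)]  # most uncertain candidate
        X.append(x_next)
        y.append(measure(x_next))
    return np.array(X), np.array(y)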

"The basic idea is, given a bunch of experiments, how can you automatically pick the next best one?" said James Sethian, director of CAMERA and a co-author of the study. "Marcus has built a world which builds an approximate surrogate model on the basis of your previous experiments and suggests the best or most appropriate experiment to try next."

How we got here

To make autonomous experiments a reality, the team had to tackle three important pieces: the automation of the data collection, real-time analysis, and, of course, the decision-making algorithm.

"This is an exciting part of this collaboration," said Fukuto. "We all provided an essential piece for it: The CAMERA team worked on the decision-making algorithm, Kevin from CFN developed the real-time data analysis, and we at NSLS-II provided the automation for the measurements."

The team first implemented their decision-making algorithm at the Complex Materials Scattering (CMS) beamline at NSLS-II, which the CFN and NSLS-II operate in partnership. This instrument offers ultrabright x-rays to study the nanostructure of various materials. As the lead beamline scientist of this instrument, Fukuto had already designed the beamline with automation in mind. The beamline offers a sample-exchanging robot, automatic sample movement in various directions, and many other helpful tools to ensure fast measurements. Together with Yager's real-time data analysis, the beamline was--by design--the perfect fit for the first "smart" experiment.

The first "smart" experiment

The first fully autonomous experiment the team performed was to map the perimeter of a droplet where nanoparticles segregate using a technique called small-angle x-ray scattering at the CMS beamline. During small-angle x-ray scattering, the scientists shine bright x-rays at the sample and, depending on the atomic to nanoscale structure of the sample, the x-rays bounce off in different directions. The scientists then use a large detector to capture the scattered x-rays and calculate the properties of the sample at the illuminated spot. In this first experiment, the scientists compared the standard approach of measuring the sample with measurements taken when the new decision-making algorithm was calling the shots. The algorithm was able to identify the area of the droplet and focused on its edges and inner parts instead of the background.

"After our own initial success, we wanted to apply the algorithm more, so we reached out to a few users and proposed to test our new algorithm on their scientific problems," said Yager. "They said yes, and since then we have measured various samples. One of the most interesting ones was a study on a sample that was fabricated to contain a spectrum of different material types. So instead of making and measuring an enormous number of samples and maybe missing an interesting combination, the user made one single sample that included all possible combinations. Our algorithm was then able to explore this enormous diversity of combinations efficiently," he said.

What's next?

After the first successful experiments, the scientists plan to further improve the algorithm and therefore its value to the scientific community. One of their ideas is to make the algorithm "physics-aware"--taking advantage of anything already known about the material under study--so the method can be even more effective. Another development in progress is to use the algorithm during synthesis and processing of new materials, for example to understand and optimize processes relevant to advanced manufacturing as these materials are incorporated into real-world devices. The team is also thinking about the larger picture and wants to transfer the autonomous method to other experimental setups.

"I think users view the beamlines of NSLS-II or microscopes of CFN just as powerful characterization tools. We are trying to change these capabilities into a powerful material discovery facility," Fukuto said.

Credit: 
DOE/Brookhaven National Laboratory

Novel therapy studied for inherited breast cancer

SAN ANTONIO (Aug. 27, 2019) -- UT Health San Antonio researchers have discovered a novel way to kill cancers that are caused by an inherited mutation in BRCA1, the type of cancer for which actress Angelina Jolie had preventive double mastectomy and reconstructive surgery in 2013.

"This represents a new treatment for inherited breast and ovarian cancer, which are higher in our region," said Robert A. Hromas, M.D., FACP, professor and dean of the Joe R. and Teresa Lozano Long School of Medicine at UT Health San Antonio. Dr. Hromas is senior investigator on the research, published in the journal Proceedings of the National Academy of Sciences. (Reference: "MiR223-3p promotes synthetic lethality in BRCA1-deficient cancers," Aug. 8, 2019.)

A tiny molecule called microRNA (miR) 223-3p prevents normal cells from making mistakes while repairing their DNA. However, cancers with BRCA1 mutations repress miR223-3p to permit their cells to divide. Adding back miR223-3p forces the BRCA1-mutant cancer cells to die, said study co-author Patrick Sung, D. Phil. Dr. Sung, who joined UT Health San Antonio in 2019 from Yale, is a BRCA1 cancer expert who occupies the Robert A. Welch Distinguished Chair in Biochemistry.

Exploiting the cancer's Achilles' heel

MiR223-3p acts like a light switch, turning off proteins that BRCA1-mutant cancers need to divide properly. Without these key cell division proteins, BRCA1-mutant tumors commit suicide, Dr. Hromas said.

"It's kind of a cool way of thinking about treatment," Dr. Hromas said. "We are using the very nature of these BRCA1-deficient cancer cells against them. We are attacking the very mechanism by which they became a cancer in the first place."

There is evidence that restoring miR223-3p before cells convert to cancer can even prevent BRCA1-related disease, he said.

BRCA gene mutations affect 1 in every 400 people in the United States -- an estimated 825,000. After Ashkenazi Jews, Hispanics have the second-highest prevalence of BRCA1 disease-causing mutations. The disease's burden in San Antonio and South Texas is therefore among the highest in the country.

Credit: 
University of Texas Health Science Center at San Antonio

Enhancing materials for hi-res patterning to advance microelectronics

image: (Left to right): Ashwanth Subramanian, Ming Lu, Kim Kisslinger, Chang-Yong Nam, and Nikhil Tiwale in the Electron Microscopy Facility at Brookhaven Lab's Center for Functional Nanomaterials. The scientists used scanning electron microscopes to image high-resolution, high-aspect-ratio silicon nanostructures they etched using a "hybrid" organic-inorganic resist.

Image: 
Brookhaven National Laboratory

UPTON, NY--To increase the processing speed and reduce the power consumption of electronic devices, the microelectronics industry continues to push for smaller and smaller feature sizes. Transistors in today's cell phones are typically 10 nanometers (nm) across--equivalent to about 50 silicon atoms wide--or smaller. Scaling transistors down below these dimensions with higher accuracy requires advanced materials for lithography--the primary technique for printing electrical circuit elements on silicon wafers to manufacture electronic chips. One challenge is developing robust "resists," or materials that are used as templates for transferring circuit patterns into device-useful substrates such as silicon.

Now, scientists from the Center for Functional Nanomaterials (CFN)--a U.S. Department of Energy (DOE) Office of Science User Facility at Brookhaven National Laboratory--have used the recently developed technique of infiltration synthesis to create resists that combine the organic polymer poly(methyl methacrylate), or PMMA, with inorganic aluminum oxide. Owing to its low cost and high resolution, PMMA is the most widely used resist in electron-beam lithography (EBL), a kind of lithography in which electrons are used to create the pattern template. However, at the resist thicknesses that are necessary to generate the ultrasmall feature sizes, the patterns typically start to degrade when they are etched into silicon, failing to produce the required high aspect ratio (height to width).

As reported in a paper published online on July 8 in the Journal of Materials Chemistry C, these "hybrid" organic-inorganic resists exhibit a high lithographic contrast and enable the patterning of high-resolution silicon nanostructures with a high aspect ratio. By changing the amount of aluminum oxide (or a different inorganic element) infiltrated into PMMA, the scientists can tune these parameters for particular applications. For example, next-generation memory devices such as flash drives will be based on a three-dimensional stacking structure to increase memory density, so an extremely high aspect ratio is desirable; on the other hand, a very high resolution is the most important characteristic for future processor chips.

"Instead of taking an entirely new synthesis route, we used an existing resist, an inexpensive metal oxide, and common equipment found in almost every nanofabrication facility," said first author Nikhil Tiwale, a postdoctoral research associate in the CFN Electronic Nanomaterials Group.

Though other hybrid resists have been proposed, most of them require high electron doses (intensities), involve complex chemical synthesis methods, or have expensive proprietary compositions. Thus, these resists are not optimal for the high-rate, high-volume manufacture of next-generation electronics.

Advanced nanolithography for high-volume manufacturing

Conventionally, the microelectronics industry has relied upon optical lithography, whose resolution is limited by the wavelength of light that the resist gets exposed to. However, EBL and other nanolithography techniques such as extreme ultraviolet lithography (EUVL) can push this limit because of the very small wavelength of electrons and high-energy ultraviolet light. The main difference between the two techniques is the exposure process.

"In EBL, you need to write all of the area you need to expose line by line, kind of like making a sketch with a pencil," said Tiwale. "By contrast, in EUVL, you can expose the whole area in one shot, akin to taking a photograph. From this point of view, EBL is great for research purposes, and EUVL is better suited for high-volume manufacturing. We believe that the approach we demonstrated for EBL can be directly applied to EUVL, which companies including Samsung have recently started using to develop manufacturing processes for their 7 nm technology node."

In this study, the scientists used an atomic layer deposition (ALD) system--a standard piece of nanofabrication equipment for depositing ultrathin films on surfaces--to combine PMMA and aluminum oxide. After placing a substrate coated with a thin film of PMMA into the ALD reaction chamber, they introduced a vapor of an aluminum precursor that diffused through tiny molecular pores inside the PMMA matrix to bind with the chemical species inside the polymer chains. Then, they introduced another precursor (such as water) that reacted with the first precursor to form aluminum oxide inside the PMMA matrix. These steps together constitute one processing cycle.

The team then performed EBL with hybrid resists that had up to eight processing cycles. To characterize the contrast of the resists under different electron doses, the scientists measured the change in resist thickness within the exposed areas. Surface height maps generated with an atomic force microscope (a microscope with an atomically sharp tip for tracking the topography of a surface) and optical measurements obtained through ellipsometry (a technique for determining film thickness based on the change in the polarization of light reflected from a surface) revealed that the thickness changes gradually with a low number of processing cycles but rapidly with additional cycles--i.e., a higher aluminum oxide content.

"The contrast refers to how fast the resist changes after being exposed to the electron beam," explained Chang-Yong Nam, a materials scientist in the CFN Electronic Nanomaterials Group, who supervised the project and conceived the idea in collaboration with Jiyoung Kim, a professor in the Department of Materials Science and Engineering at the University of Texas at Dallas. "The abrupt change in the height of the exposed regions suggests an increase in the resist contrast for higher numbers of infiltration cycles--almost six times higher than that of the original PMMA resist."

The scientists also used the hybrid resists to pattern periodic straight lines and "elbows" (intersecting lines) in silicon substrates, and compared the etch rate of the resists with that of the silicon substrates.

"You want silicon to be etched faster than the resist; otherwise the resist starts to degrade," said Nam. "We found that the etch selectivity of our hybrid resist is higher than that of costly proprietary resists (e.g., ZEP) and techniques that use an intermediate "hard" mask layer such as silicon dioxide to prevent pattern degradation, but which require additional processing steps."

Going forward, the team will study how the hybrid resists respond to EUV exposure. They have already started using soft x-rays (energy range corresponding to the wavelength of EUV light) at Brookhaven's National Synchrotron Light Source II (NSLS-II), and hope to use a dedicated EUV beamline operated by the Center for X-ray Optics at Lawrence Berkeley National Lab's Advanced Light Source (ALS) in collaboration with industry partners.

"The energy absorption by the organic layer of EUVL resists is very weak," said Nam. "Adding inorganic elements, such as tin or zirconium, can make them more sensitive to EUV light. We look forward to exploring how our approach can address the resist performance requirements of EUVL."

Credit: 
DOE/Brookhaven National Laboratory

Water harvester makes it easy to quench your thirst in the desert

image: The latest version of UC Berkeley's water harvester blows ambient air over a cartridge filled with MOF, which is visible inside the plexiglass box. The MOF pulls water from arid air, which is then removed from the MOF by mild heating. The concentrated water vapor is blown out through the tube at right to a condenser. This process produces water for drinking using only solar panels and a battery, even in areas as dry as the Mojave Desert.

Image: 
Mathieu Prévot, UC Berkeley

With water scarcity a growing problem worldwide, University of California, Berkeley, researchers are close to producing a microwave-sized water harvester that will allow you to pull all the water you need directly from the air -- even in the hot, dry desert.

In a paper appearing this week in ACS Central Science, a journal of the American Chemical Society, UC Berkeley's Omar Yaghi and his colleagues describe the latest version of their water harvester, which can pull more than five cups of water (1.3 liters) from low-humidity air per day for each kilogram (2.2 pounds) of water-absorbing material, a very porous substance called a metal-organic framework, or MOF. That is more than the minimum required to stay alive.

During field tests over three days in California's arid Mojave Desert, the harvester reliably produced 0.7 liters per kilogram of absorber per day -- nearly three cups of clean, pure H2O. That's 10 times better than the previous version of the harvester. The harvester cycles 24/7, powered by solar panels and a battery.

Even on the driest day in the desert, with an extremely low relative humidity of 7% and temperatures over 80 degrees Fahrenheit, the harvester produced six ounces (0.2 liters) of water per kilogram of MOF per day.

"It is well known that in order to condense water from air at a low humidity -- less than 40 percent relative humidity -- you need to cool down the air to below freezing, to zero degrees Celsius, which is impractical. With our harvester, we are doing this at very low humidity without such cooling; there is no other material that can do that," said Yaghi, a UC Berkeley professor of chemistry and co-director of the Kavli Energy NanoSciences Institute. "This is not like a dehumidifier, which operates at high relative humidity. Some people say that 0.7 liters is not a lot of water. But it is a lot of water, if you don't have water."

Yaghi's startup, Water Harvester Inc., is now testing and will soon market a device the size of a microwave oven that can supply 7 to 10 liters of water per day: enough drinking and cooking water for two to three adults per day, based on recommendations from the National Academy of Sciences that men should consume 3.7 liters and women 2.7 liters of fluid per day.

An even larger version of the harvester, one the size of a small refrigerator, will provide 200 to 250 liters of water per day, enough for a household to drink, cook and shower. And in a couple of years, the company hopes to have a village-scale harvester that will produce 20,000 liters per day. All would run on power from solar panels and a battery or off the electrical grid.

"We are making ultra-pure water, which potentially can be made widely available without connection to the water grid," said Yaghi, the James and Neeltje Tretter Chair in the College of Chemistry. "This water mobility is not only critical to those suffering from water stress, but also makes possible the larger objective -- that water should be a human right."

The key: highly porous MOFs

The harvester's secret ingredient is a type of MOF invented by Yaghi and his UC Berkeley colleagues that easily and quickly takes up water from the air and just as readily disgorges it so the water can be collected. MOFs, which Yaghi has been developing since the mid-1990s, are so porous that a gram has a surface area equivalent to that of a football field. Other types of MOFs capture carbon dioxide from flue gases, catalyze chemical reactions or separate petrochemicals in processing plants.

The researchers came up with their first water-absorbing MOF, called MOF-801, in 2014. Water molecules in ambient air stick to the internal surface -- a process called adsorption -- and increase the humidity inside the MOF to a point where the water condenses even at room temperature, just as water condenses on cooler surfaces when the humidity is high. When the MOF is heated slightly, the water comes back out and can be condensed and collected.

The first harvester employing MOF-801 premiered in 2017 and was totally passive and solar powered: It sat and adsorbed water at night and gave it up the next day in the heat of the sun, with the water vapor condensing on the inside surface of the container.

By 2018, Yaghi's Berkeley team had turned that proof-of-concept device into a second-generation harvester that collected 0.07 liters -- a little over 2 ounces -- of water per day per kilogram of MOF during one day-night cycle in the Arizona desert, again using heat from the sun to drive the water out of the MOF.

"Although the amount of water was low, the experiment showed how water from desert air can be concentrated into the pores of the MOF, removed by mild heating with sunlight and then condensed at ambient conditions," Yaghi said.

The 2019 model is no longer passive: It uses solar panels to power fans blowing ambient air over MOF contained within a cartridge, so that more of the MOF is exposed to air. The MOF-filled cartridge, about 10 inches square and 5 inches thick, is intersected by two sets of channels: one set for adsorbing water, the other for expelling it to the condenser, allowing continuous cycling throughout the day. The solar panels, attached to batteries so that the harvester can run at night, also power small heaters that drive the water out of the MOF.

The productivity of this new water harvester is 10 times the amount harvested by the previous device and 100 times higher than the early proof-of-concept device. No traces of metal or organics have been found in the water.

The improved productivity and shorter cycling time of the new device comes from a newly designed MOF, MOF-303, that is based on aluminum, as opposed to MOF-801, which is based on zirconium. MOF-303 can hold 30% more water than MOF-801 and can adsorb and desorb water in a mere 20 minutes under ideal conditions -- something Yaghi's startup is close to achieving.

"MOF-303 does two things very well: It takes up much more water than the zirconium MOF we reported on before, and it does it much faster," Yaghi said. "This allows water to go in and out much faster; you can pump air in and harvest the water over many cycles per day."

Yaghi gets inquiries about his harvester nearly every day from people, agencies and countries around the world, many in arid regions of the Middle East, Africa, South America, Mexico, Australia and around the Mediterranean. The bulk of the funding for improvements to the harvester comes from Saudi Arabia's King Abdulaziz City for Science and Technology, as part of a joint KACST-UC Berkeley collaboration called the Center of Excellence for Nanomaterials and Clean Energy Applications. Desert kingdoms chronically short on water appreciate the harvester's potential, said Yaghi, who comes from another arid country, Jordan.

"The atmosphere has almost as much water at any one time as all the rivers and lakes," he said. "Harvesting this water could help turn dry deserts into oases."

Credit: 
University of California - Berkeley

NASA finds Tropical Depression battling wind shear off the Carolina coast

image: On August 27 at 2:40 a.m. EDT (0640 UTC) the MODIS instrument that flies aboard NASA's Aqua satellite showed a very small area of strongest storms (yellow) in Tropical Depression 6's center where cloud top temperatures were as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius).

Image: 
NASA/NRL

Tropical Depression 6 may have only just formed in the Atlantic Ocean, but it did so under adverse atmospheric conditions. The depression is battling wind shear, and the effect is apparent in imagery from NASA's Aqua satellite.

Wind shear is a measure of how the speed and direction of winds change with altitude. When outside winds batter a tropical cyclone, it affects its circulation. A less circular storm tends to slow down in its spin and weaken.

Tropical Depression 6 or TD6 formed around 5 p.m. EDT on August 26 and has since been moving slowly while remaining a few hundred miles off the coast of the Carolinas.

On August 27 at 2:40 a.m. EDT (0640 UTC), the Moderate Imaging Spectroradiometer or MODIS instrument that flies aboard NASA's Aqua satellite used infrared light to analyze the strength of storms within the depression. Tropical cyclones are made up of hundreds of thunderstorms, and infrared data can show where the strongest storms are located. They can do that because infrared data provides temperature information, and the strongest thunderstorms that reach highest into the atmosphere have the coldest cloud top temperatures.

MODIS found those strongest storms had cloud top temperatures as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius). NASA research has found that cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall. Those strongest storms were in a small area around the center of circulation. Less strong storms were pushed to the south by northerly wind shear.

The NHC or National Hurricane Center noted in their Discussion on Aug. 27 at 11 a.m. EDT "The depression, however, continues to be sheared with the low-level center to the north of the convection. The northwesterly shear currently affecting the depression is expected to continue, and only a small increase in intensity is anticipated in the next couple of days."

NHC noted at 11 a.m. EDT (1500 UTC), the center of Tropical Depression Six was located near latitude 31.2 degrees north and longitude 71.2 degrees west. That's about 385 miles (615 km) west of Bermuda and about 370 miles (600 km) southeast of Cape Hatteras, North Carolina.

The depression has been drifting northward near 2 mph (4 kph), and little motion is anticipated today. Maximum sustained winds are near 35 mph (55 km/h) with higher gusts. Some strengthening is expected, and the cyclone is forecast to become a tropical storm later tonight or on Wednesday.

NHC forecasters expect TD6 should begin to move generally northward and then northeastward on Wednesday over the open Atlantic.

For updated forecasts, visit: https://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

NASA analyzes Tropical Storm Dorian day and night

image: NASA-NOAA's Suomi NPP satellite passed over the western North Atlantic Ocean and captured a visible image of Tropical Storm Dorian approaching the Leeward Islands on Aug. 26, 2019.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS).

Tropical Storm Dorian was approaching the Leeward Islands when NASA-NOAA's Suomi NPP satellite passed overhead from space and snapped a visible image of the storm. When Suomi NPP came by again the satellite provided a night-time image from early morning on Aug. 27.

The Leeward Islands are a group of islands in the eastern Caribbean Sea, where the sea meets the western Atlantic Ocean. The Leewards extend from the Virgin Islands east of Puerto Rico, southeast to Guadeloupe and its dependencies.

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of the storm on August 26, 2019. The VIIRS image showed the storm had taken on somewhat of a comma shape. That shape and the storm's thunderstorm pattern changed over the course of the day because dry air moved into mid-level areas of the storm, suppressing thunderstorm development. A tropical cyclone is made up of hundreds of thunderstorms, so when development is inhibited, it affects the storm's strength and sometimes the shape of the storm, depending on the direction from which the dry air comes.

By 5 p.m. EDT on Aug. 26, the National Hurricane Center said, "Although the inner-core convection has decreased recently, a recent burst of strong convection (rising air that forms thunderstorms) with cloud tops colder than minus 80 degrees Celsius [indicating powerful thunderstorms with the potential to generate heavy rainfall] has redeveloped just north of the low-level center."

At 1:06 a.m. EDT (0506 UTC) on Aug. 27, the Suomi NPP satellite passed over Dorian again, and viewed the storm at night and obtained high-resolution infrared-band imagery. The nighttime image showed that Dorian appeared to be relatively compact in size with a few overshooting cloud tops and some tropospheric convective gravity waves.

The nighttime image was created by William Straka III, a researcher at the University of Wisconsin-Madison's Space Science and Engineering Center (SSEC), Cooperative Institute for Meteorological Satellite Studies (CIMSS). "The Waning Crescent moon, with 12 percent illumination, didn't show many features," Straka said. "Even with airglow present, though, one could see the city lights peeking through the clouds and a lone ship on the southern edge of the storm."

By August 27, Dorian had moved across St. Lucia and into the eastern Caribbean Sea, bringing tropical-storm-force winds with it. NHC said the Caribbean composite radar data show that Dorian remains a very compact system and that it still lacks a well-defined inner core.

The National Hurricane Center or NHC posted many watches and warnings on Aug. 27. A Hurricane Watch is in effect for Puerto Rico, the Dominican Republic from Isla Saona to Samana. A Tropical Storm Warning is in effect for Martinique, St. Vincent and the Grenadines and Puerto Rico. A Tropical Storm Watch is in effect for Dominica, Grenada and its dependencies, Saba and St. Eustatius, Dominican Republic from Isla Saona to Punta Palenque, and the Dominican Republic from Samana to Puerto Plata.

At 8 a.m. EDT (1200 UTC) on Tuesday, Aug. 27, NOAA's National Hurricane Center or NHC noted the center of Tropical Storm Dorian was located by surface observations and Martinique radar data near latitude 14.0 degrees North and longitude 61.2 degrees West. That puts the storm's center just 15 miles (25 km) west-northwest of St. Lucia.

Dorian is moving toward the west-northwest near 13 mph (20 kph), and this motion is expected to continue through tonight, followed by a turn toward the northwest on Wednesday.

Maximum sustained winds are near 50 mph (85 kph) with higher gusts. Slow strengthening is forecast during the next 48 hours, and Dorian is forecast to be near hurricane strength when it moves close to Puerto Rico and eastern Hispaniola.

NHC said, "On the forecast track, the center of Dorian will move across the eastern and northeastern Caribbean Sea during the next few days, passing near or south of Puerto Rico on Wednesday [Aug. 28], move near or over eastern Hispaniola Wednesday night, and move north of Hispaniola on Thursday [Aug. 29]."

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

NASA-NOAA satellite tracks tropical depression Podul across Philippines

image: On August 27, 2019, NASA-NOAA's Suomi NPP satellite passed over the South China Sea and captured a visible image of Tropical Storm Podul.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS).

Tropical Depression 13W, now named Podul, was crossing the Philippines from east to west as NASA-NOAA's Suomi NPP satellite provided a visible image of the storm.

Podul's trek across the country triggered many Philippines warnings on August 27, 2019. Tropical cyclone wind signal #2 is in effect over the following Luzon provinces: Isabela, Aurora, Quirino, Nueva Vizcaya, Ifugao, Mountain Province, Benguet, Ilocos Sur, La Union and Pangasinan. Tropical cyclone wind signal #1 is in effect over the following Luzon provinces: Cagayan, Apayao, Abra, Kalinga, Ilocos Norte, Nueva Ecija, Tarlac, Zambales, Bataan, Pampanga, Bulacan, Metro Manila, Rizal, northern portion of Quezon including Polillo Island and Alabat Island, Cavite, Laguna, Camarines Norte, northeastern portion of Camarines Sur and Catanduanes.

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of the storm on August 27, 2019. The VIIRS image showed the storm blanketed the country from north to south. The bulk of the clouds were located over the northern and central Philippines.

At 11 a.m. EDT (1500 UTC), Podul, known locally in the Philippines as "Jenny," was located near 16.0 degrees north latitude and 123.2 degrees east longitude. That puts the center of circulation about 153 nautical miles northeast of Manila, Philippines. It was moving to the west-northwest and had maximum sustained winds near 30 knots (34.5 mph/55.5 kph).

The Joint Typhoon Warning Center said that Podul will move west across the Philippines, before turning northwest towards Hainan Island, China. The system is expected to make final landfall in Vietnam after five days.

Credit: 
NASA/Goddard Space Flight Center

Using Wi-Fi like sonar to measure speed and distance of indoor movement

Researchers from North Carolina State University have developed a technique for measuring speed and distance in indoor environments, which could be used to improve navigation technologies for robots, drones - or pedestrians trying to find their way around an airport. The technique uses a novel combination of Wi-Fi signals and accelerometer technology to track devices in near-real time.

"We call our approach Wi-Fi-assisted Inertial Odometry (WIO)," says Raghav Venkatnarayan, co-corresponding author of a paper on the work and a Ph.D. student at NC State. "WIO uses Wi-Fi as a velocity sensor to accurately track how far something has moved. Think of it as sonar, but using radio waves, rather than sound waves."

Many devices, such as smartphones, incorporate technology called inertial measurement units (IMUs) to calculate how far a device has moved. However, IMUs suffer from large drift errors, meaning that even minor inaccuracies can quickly become exaggerated.

In outdoor environments, many devices use GPS to correct their IMUs. But this doesn't work in indoor areas, where GPS signals are unreliable or nonexistent.

"We created WIO to work in conjunction with a device's IMU, correcting any errors and improving the accuracy of speed and distance calculations," says Muhammad Shahzad, co-corresponding author of the paper and an assistant professor of computer science at NC State. "This improvement in accuracy should also improve the calculations regarding a device's precise location in any indoor environment where there is a Wi-Fi signal."

The researchers wanted to test the WIO software but ran into a problem: they could not access the Wi-Fi network interface cards in off-the-shelf devices such as smartphones or drones. To address the problem, the researchers created a prototype device that could be used in conjunction with other devices.

The researchers found that using WIO improved a device's speed and distance calculations dramatically. For example, devices using WIO calculated distance with a margin of error ranging from 5.9% to 10.5%. Without WIO, the devices calculated distance with a margin of error from 40% to 49%.

"We envision WIO as having applications in everything from indoor navigational tools to fitness tracking to interactive gaming," Venkatnarayan says.

"We are currently working with Sony to further improve WIO's accuracy, with an eye toward incorporating the software into off-the-shelf technologies," says Shahzad.

Credit: 
North Carolina State University

How texture deceives the moving finger

The perceived speed of a surface moving across the skin depends on texture, with some textures fooling us into thinking that an object is moving faster than it is, according to a study published August 27 in the open-access journal PLOS Biology by Sliman Bensmaia of the University of Chicago, and colleagues. The researchers explore the basis for this tactile illusion and show that texture-dependent speed perception is determined by the responses of a specific type of nerve fiber in the skin.

When we manipulate objects, we acquire information about their properties, including shape and texture, through the sense of touch. But motion is an essential component of everyday tactile experience, and most manual interactions involve relative movement between the skin and objects. However, very little is known about how motion is represented in the responses of tactile nerve fibers; much of the research on the neural basis of tactile motion perception has focused on how direction is encoded, but less is known about how speed is. And while perceived speed has been shown to be dependent on surface texture, previous studies used only coarse textures, which are easier to discern and track than are fine ones.

To fill this gap, Bensmaia and colleagues measured the ability of blindfolded human observers to report the speed of natural textures (including corduroy, stretch denim, nylon and faux crocodile skin), which span the range of tactile experience, scanned across the skin. In parallel experiments, they recorded from skin nerve fibers and neurons in a brain region known as somatosensory cortex of monkeys exposed to the same textures scanned at different speeds.

The authors found that the perception of speed is heavily influenced by texture: The more vibration a texture produces in the skin, the faster the surface is felt to be moving. The strength of the texture vibrations also predicts how easy it is to discern changes in speed. The responses of a particular class of nerve fibers, Pacinian corpuscle-associated (PC) nerve fibers, which are exquisitely sensitive to skin vibrations, account for both texture-dependent effects on speed perception (systematic bias and sensitivity). In the cerebral cortex, approximately half of the neurons exhibit speed-dependent responses, and this subpopulation receives strong input from PC fibers. According to the authors, the findings shed new light on the influence of texture on perceived tactile speed and the underlying neural mechanisms.

In an accompanying Primer, Mathew Diamond, who was not involved in the study, points out that there may be method in the apparent madness of this potentially deceptive system for encoding tactile speed: "Though uncertainty seems at first glance to reflect non-optimality in sensory processing, it is actually a consequence of efficient coding mechanisms that exploit prior knowledge about objects that are touched." The skin must detect many more feature "dimensions" of the things we touch (roughness, temperature, softness, stickiness, wetness, etc.) than the channels through which this information is encoded; some interference between features, such as that studied here between speed and texture, is an inevitability and an opportunity.

Credit: 
PLOS