
Inputs to the motor cortex make dexterous movements possible in mice

video: Using high-speed video cameras, researchers tracked mice's arm motions as the animals reached out and grasped a food pellet. Then, they tested how switching off different parts of the brain affected this dexterous movement.

Image: 
Sauerbrei et al./Nature 2019

In a sleepy haze, reaching out and grabbing the coffee cup in front of you seems to happen on autopilot. But your caffeine-deprived brain is working hard. It's collecting sensory information and other kinds of feedback - clues about where your arm is in space relative to the mug - and sending it to your motor cortex. Then, the motor cortex plans the upcoming movement and tells your muscles to make it happen.

New research in mice is examining the role of those feedback signals entering the motor cortex, untangling how and when they're necessary to guide dexterous movements like grasping. That's been a big open question, says study coauthor Britton Sauerbrei, an associate at the Howard Hughes Medical Institute's Janelia Research Campus. Some neural circuits can generate rhythmic, patterned output without sustained input. Just as a single nudge from a rider can send a horse into a trot, these "central pattern generators" can help animals walk, swim, and fly without ongoing stimulation. But not the motor cortex, it turns out.

"What we show is the motor cortex is fundamentally different from that," says Sauerbrei. "You can't just give the cortex a little kick and have it take off and generate that pattern on its own." Instead, the motor cortex needs to receive feedback throughout the entire movement, Sauerbrei and his colleagues report December 25, 2019, in Nature.

He and his colleagues trained mice to reach for and grasp a food pellet, a behavior that depends on the motor cortex. In some animals, they turned off the thalamus, a switchboard in the brain that directs sensory information and other kinds of feedback to and from the cortex.

When the researchers blocked the signals coming into the motor cortex before the mice began to reach, the animals didn't initiate movement. And when incoming signals were blocked mid-reach, mice stopped moving their paw closer to the pellet.

The rhythm of those signals also matters, the researchers showed. In another experiment, they stimulated the neurons that carry signals from the thalamus to the cortex using different temporal patterns. The frequency of the stimulation affected the motor cortex's output, with fast pulses disrupting the mice's grasping skills.

The signals entering the motor cortex via the thalamus come from all over, and it's not yet clear which ones are most important for directing movement, says Adam Hantman, a group leader at Janelia and the paper's senior author. Inputs to the thalamus include sensory information about the position of the arm, visual information, motor commands from other brain regions, and predictions about the upcoming movement. Using tools developed by the Janelia project team Thalamoseq, Hantman's lab plans to switch specific regions of the thalamus on and off to test which inputs are really driving the behavior.

For Hantman, the complexity of understanding these kinds of motor skills is what makes studying them so exciting. "If you want to understand a behavior, and you think you're going to study one region, you might be in a tough position," he says. "You need to understand the whole central nervous system."

Credit: 
Howard Hughes Medical Institute

Chimpanzees more likely to share tools, teach skills when task is complex

image: An adult female chimpanzee fishes for termites with her offspring at Gombe, Tanzania. The image was provided to accompany media coverage of "Teaching varies with task complexity in wild chimpanzees," published in PNAS the week of Dec. 23, 2019.

Image: 
Provided courtesy of Kara Walker.

Teach a chimpanzee to fish for insects to eat, and you feed her for a lifetime. Teach her a better way to use tools in gathering prey, and you may change the course of evolution.

For most wild chimpanzees, tool use is an important part of life -- but learning these skills is no simple feat. Wild chimpanzees transfer tools to each other, and this behavior has previously been shown to serve as a form of teaching.

A new study led by researchers at Washington University in St. Louis, the University of Miami and Franklin & Marshall College finds that chimpanzees that use a multi-step process and complex tools to gather termites are more likely to share tools with novices. The research was conducted in partnership with the Wildlife Conservation Society, Lincoln Park Zoo and the Jane Goodall Institute. The study helps illuminate chimpanzees' capacity for prosocial -- or helping -- behavior, a quality that has been recognized for its potential role in the evolution of human cultural abilities.

"Non-human primates are often thought to learn tool skills by watching others and practicing on their own, with little direct help from mothers or other expert tool users," said Stephanie Musgrave, assistant professor of anthropology at the University of Miami, and first author of the study published the week of Dec. 23 in the Proceedings of the National Academy of Sciences.

"In contrast, the results from this research indicate that social learning may vary in relation to how challenging the task is: during tasks that are more difficult, mothers can in fact play a more active role, including behaviors that function to teach."

Beginning with Jane Goodall in the 1960s, researchers have been studying chimpanzee tool use for decades at the Gombe Stream Research Center in Tanzania. The Gombe chimpanzee study is one of the longest running studies of animal behavior in the wild. This year marks the 20-year anniversary of the study of chimpanzees in the Goualougo Triangle, Republic of Congo, where researchers have documented some of the most complex tool behaviors of chimpanzees.

The study is distinctive because it applies standardized methods to directly compare how processes of cultural transmission may differ between two populations of wild chimpanzees. In both populations, the chimpanzees use tools to target the same resource -- but the task varies in complexity.

The findings of the current study are important on a number of levels, Musgrave said. "First, chimpanzee populations may vary not only in the complexity of their tool behaviors but in the social mechanisms that support these behaviors," she said. "Second, the capacity for helping in chimpanzees may be both more robust and more flexible than previously appreciated."

Maintaining chimpanzee cultures

Among animals, chimpanzees are exceptional tool users. Different groups of chimpanzees use different types of tools -- and likewise, researchers have suggested that the teaching process might be customized to facilitate these local skills.

In this study, researchers examined the transfer of tools between chimpanzees during termite gathering, and compared the population in the Goualougo Triangle, Republic of Congo, with the population in Gombe, Tanzania.

Termites and other insects are a valuable source of fat and protein in the diet of wild chimpanzees and also contribute important vitamins and minerals. Termites build complex nest structures that encompass a network of below-ground chambers, sometimes topped with a towering, freestanding mound reaching several meters high.

Chimpanzees in both locations use fishing-probe style tools to harvest termites, but Goualougo chimpanzees use multiple, different types of tools sequentially. They also make tools from specific plant species and customize fishing probes to improve their efficiency.

The researchers found differences in the rate, probability and types of tool transfer during termite gathering between these two populations.

At Goualougo, where the fishing tasks were more complex, the rate of tool transfer was three times higher than at Gombe, and Goualougo mothers were more likely to transfer a tool in response to a request. Further, mothers at Goualougo most often responded to tool requests by actively giving a tool to offspring. Such active transfers were never observed at Gombe, where mothers most often responded by refusing to transfer tools. Given that offspring in both populations made comparable requests for tools, these differences suggest that mothers at Goualougo were in fact more willing to provide tools.

"We have previously documented that tool transfers at Goualougo function as a form of teaching," said Crickette Sanz, associate professor of biological anthropology in Arts & Sciences at Washington University. "The population differences we observed in the present study suggest that teaching may be related specifically to the demands of learning to manufacture tools at Goualougo, where chimpanzees use multiple tool types, make tools from select plant species, and perform modifications that increase tool efficiency."

"An increased role for this type of social learning may thus be an important component of the transmission of complex tool traditions over generations," she said.

"While Gombe and Goualougo chimpanzees both fish for termites, we suspected that there might be differences in how this skill is acquired," said Elizabeth Lonsdorf, associate professor of psychology at Franklin & Marshall College. "But only after many years of accumulating these data were we able to rigorously quantify these differences."

"To date, prosocial helping in chimpanzees has been principally examined in captivity or using differing methods in the wild," said Stephen Ross, director of the Lester E. Fisher Center for the Study and Conservation of Apes at Lincoln Park Zoo. "This study provides novel evidence for helping behavior in wild chimpanzees and demonstrates that chimpanzees can help flexibly depending on context."

A shared capacity

Understanding how chimpanzee tool traditions are passed on over generations can provide insights into the evolutionary origins of complex cultural abilities in humans.

"Human evolution is characterized by the emergence and elaboration of complex technologies, which is often attributed to our species' aptitude for passing skills onto one another through mechanisms such as teaching and imitation. However, the evolutionary origins of these capacities remain unclear," Musgrave said.

"Our research shows that the human propensity to assist others in acquiring complex skills may build at least in part upon capacities that we share with our closest living relatives."

Conservation efforts are fundamental to this research and future studies.

"Chimpanzees and their cultures are endangered," said Emma Stokes, director of the Central Africa Program at the Wildlife Conservation Society.

"Recent research shows that human activity imperils the survival of chimpanzee cultures. Studying our closest living relatives offers a unique opportunity to gain insights into the evolutionary origins of cultural behavior -- but this privilege depends on long-term efforts to conserve these apes and their habitats."

Credit: 
Washington University in St. Louis

New treatment strategy may thwart deadly brain tumors

BOSTON - Immune checkpoint inhibitors are important medications that boost the immune system's response against certain cancers; however, they tend to be ineffective against glioblastoma, the most deadly primary brain tumor in adults. New research in mice led by investigators at Massachusetts General Hospital (MGH) and the University of Florida reveals a promising strategy that makes glioblastoma susceptible to these medications. The findings, which are published in the Proceedings of the National Academy of Sciences, indicate that such combination therapy should be tested in clinical trials of patients with glioblastoma, for whom there is no known cure.

Part of the reason glioblastoma does not respond well to immune checkpoint inhibitors and other immunotherapies is because cells called myeloid-derived suppressor cells (MDSCs) infiltrate the region surrounding glioblastoma tumors, where they contribute to immunosuppression, tumor progression, and treatment resistance. Thus, targeting these cells may augment immunotherapy and improve responses to treatment in affected patients.

A collaborative effort co-led by Jeffrey K. Harrison, PhD, of the Department of Pharmacology and Therapeutics at the University of Florida, and Rakesh K. Jain, PhD, of the Department of Radiation Oncology at MGH and Harvard Medical School, set out to test this strategy. Using two mouse models of glioblastoma, the team targeted receptors--called chemokine receptors--that are important for allowing MDSCs to infiltrate into the region surrounding glioblastoma tumors. In mice that were bred to lack chemokine receptor 2 (CCR2) and to develop glioblastoma, MDSCs could not carry out such infiltration. Treating these mice with an immune checkpoint inhibitor stimulated a strong anti-cancer immune response and prolonged the animals' survival. In mice with normal CCR2, treatment with a molecule that blocks CCR2 had similar effects.

"The CCR2 antagonist used in this study--called CCX872-- has passed phase Ib safety trials in patients with pancreatic tumors, and clinical trials are ongoing to investigate the use of CCR2 inhibitors in several cancers," said Jain. "Thus, the results of this study support targeting CCR2-expressing MDSCs as a means to enhance immunotherapies, and warrant investigation of this combination therapy in clinical trials for patients with glioblastoma."

Credit: 
Massachusetts General Hospital

Researchers identify immune-suppressing target in glioblastoma

HOUSTON -- Researchers at The University of Texas MD Anderson Cancer Center have identified a tenacious subset of immune macrophages that thwart treatment of glioblastoma with anti-PD-1 checkpoint blockade, elevating a new potential target for treating the almost uniformly lethal brain tumor.

Their findings, reported in Nature Medicine, identify macrophages that express high levels of CD73, a surface enzyme that's a vital piece of an immunosuppressive molecular pathway. The strong presence of the CD73 macrophages was unique to glioblastoma among five tumor types analyzed by the researchers.

"By studying the immune microenvironments across tumor types, we've identified a rational combination therapy for glioblastoma," says first author Sangeeta Goswami, M.D., Ph.D., assistant professor of Genitourinary Medical Oncology.

Glioblastoma immunotherapy clinical trial planned

After establishing the cells' presence in human tumors and correlating them with decreased survival, the researchers took their hypothesis to a mouse model of glioblastoma. They found combining anti-PD-1 and anti-CTLA-4 immunotherapies in CD73 knockout mice stifled tumor growth and increased survival.

"We're working with pharmaceutical companies that are developing agents to target CD73 to move forward with a glioblastoma clinical trial in combination with anti-PD-1 and anti-CTLA-4 checkpoint inhibitors," says Padmanee Sharma, M.D., Ph.D., professor of Genitourinary Medical Oncology and Immunology.

Sharma and colleagues take an approach they call reverse translation. Instead of developing hypotheses through cell line and animal model experiments that are then translated to human clinical trials, the team starts by analyzing human tumors to generate hypotheses for testing in the lab in hopes of then taking findings to human clinical trials.

To more effectively extend immunotherapy to more cancers, Sharma says, researchers need to realize immune microenvironments differ from cancer to cancer. "Understanding what's different in immune niches across cancers provides clues and targets for treating tumors," Sharma says. "That's why we did this study."

The team tracked down the population of CD73-positive macrophages through a project to characterize immune cells found in five tumor types using CyTOF mass cytometry and single-cell RNA sequencing. They analyzed 94 human tumors across glioblastoma, non-small cell lung cancer and kidney, prostate and colorectal cancers to characterize clusters of immune cells.

CD73 cells associated with shorter survival

The most surprising finding was a metacluster of immune cells found predominantly among the 13 glioblastoma tumors. Cells in the cluster expressed CD68, a marker for macrophages, immune system cells that either aid or suppress immune response. The CD68 metacluster also expressed high levels of CD73 as well as other immune-inhibiting molecules. The team confirmed these findings in nine additional glioblastomas.

Single-cell RNA sequencing identified an immunosuppressive gene expression signature associated with the high-CD73-expressing macrophages. A refined gene signature for the cells was evaluated against 525 glioblastoma samples from The Cancer Genome Atlas and was correlated with decreased survival.

The team conducted CyTOF mass cytometry cluster analysis on five glioblastoma tumors treated with the PD-1 checkpoint inhibitor pembrolizumab and seven untreated tumors. They identified three CD73-expressing macrophage clusters that persisted despite pembrolizumab treatment.

Sharma and colleagues note that the prevalence of CD73-expressing macrophages likely contributed to the lack of tumor-killing T cell responses and to poor clinical outcomes.

Combination extends survival in mice

A mouse model of glioblastoma showed that knocking out CD73 alone slowed tumor growth and increased survival.

The team treated mice with either PD-1 inhibitors or a combination of PD-1 and CTLA-4 immune checkpoint inhibitors. Mice with intact CD73 treated with the combination had increased survival over untreated mice, while mice with CD73 knocked out lived even longer after combination therapy. There was no survival benefit from anti-PD-1 alone.

"Based on our data and earlier studies, we propose a combination therapy strategy to target CD73 plus dual blockade of PD-1 and CTLA-4," the team concludes in the paper, noting that anti-CD73 antibodies have yielded promising results in early studies.

Credit: 
University of Texas M. D. Anderson Cancer Center

Scientists develop gentle, microscopic hands to study tiny, soft materials

image: University of Illinois researchers have honed a technique called the Stokes trap, which can handle and test the physical limits of tiny, soft particles using only fluid flow. From left, undergraduate student Channing Richter, professor Charles Schroeder and graduate student Dinesh Kumar.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. — Handling very soft, delicate items without damaging them is hard enough with human hands, let alone doing it at the microscopic scale with laboratory instruments. Three new studies show how scientists have honed a technique for handling tiny, soft particles using precisely controlled fluid flows that act as gentle microscopic hands. The technique allows researchers to test the physical limits of these soft particles and the things made from them – ranging from biological tissues to fabric softeners.

The three studies, led by the University of Illinois’ Charles Schroeder, the Ray and Beverly Mentzer Faculty Scholar of chemical and biomolecular engineering, detail the technology and application of the Stokes trap – a method for manipulating small particles using only fluid flow. In the newest study, published in the journal Soft Matter, the team used the Stokes trap to study the dynamics of vesicles – squishy fluid-filled particles that are stripped-down versions of cells and have direct relevance to biological systems, the researchers said. This follows up on two recent studies in the journals Physical Review Fluids and Physical Review Applied that expanded the power of the trapping method.

“There are several other techniques available for manipulating small particles, such as the widely used and Nobel Prize-winning optical trap method that uses carefully aligned lasers to capture particles,” said Dinesh Kumar, a chemical and biomolecular engineering graduate student and lead author of two of the studies. “The Stokes trap offers several advantages over other methods, including the ease of scaling up to study multiple particles and the ability to control the orientation and trajectories of differently shaped particles, such as rods or spheres.”

Armed with the improved Stokes trap technology, the team set out to understand the dynamics of lipid vesicles when they are far from their normal equilibrium state.

“We wanted to understand what happens to these particles when they are pulled on in a strong flow,” Schroeder said. “In real-world applications, these materials are stretched when they interact with each other; they are processed, injected and constantly undergoing stresses that lead to deformation. How they act when they deform has important implications on their use, long-term stability and processability.”

“We found that when vesicles are deformed in a strong flow, they stretch into one of three distinct shapes – symmetric dumbbell, asymmetric dumbbell or ellipsoid,” Kumar said. “We observed that these shape transitions are independent of the viscosity difference between the fluid inside the vesicle and the fluid outside it. This demonstrates that the Stokes trap is an effective way to measure the stretching dynamics of soft materials in solution, far from equilibrium.”

With their new data, the team was able to produce a phase diagram that can be used by researchers to determine how certain types of fluid flow will influence deformation and, ultimately, the physical properties of soft particles when pulled on from different flow directions.

“For example, products like fabric softeners – which are composed of vesicle suspensions – do not work correctly when they clump together,” Kumar said. “Using the Stokes trap, we can figure out what types of particle interactions cause the vesicles to aggregate and then design a better-performing material.”

The technique is currently limited by the size of particles that the Stokes trap can catch and handle, the researchers said. They are working with particles that generally are larger than 100 nanometers in diameter, but in order for this technology to apply more directly to biological systems, they will need to be able to grab particles that are 10 to 20 nanometers in diameter – or even down to a single protein.  

The team is currently working to capture smaller particles and collaborating with colleagues at Stanford University to apply the Stokes trap to study membrane proteins.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

For CRISPR, tweaking DNA fragments before inserting yields highest efficiency rates yet

CHAMPAIGN, Ill. -- University of Illinois researchers achieved the highest reported rates of inserting genes into human cells with the CRISPR-Cas9 gene-editing system, a necessary step for harnessing CRISPR for clinical gene-therapy applications.

By chemically tweaking the ends of the DNA to be inserted, the new technique is up to five times more efficient than current approaches. The researchers saw improvements at various genetic locations tested in a human kidney cell line, even seeing 65% insertion at one site where the previous high had been 15%.

Led by chemical and biomolecular engineering professor Huimin Zhao, the researchers published their work in the journal Nature Chemical Biology.

Researchers have found CRISPR to be an efficient tool to turn off, or "knock out," a gene. However, in human cells, it has not been a very efficient way to insert or "knock in" a gene.

"A good knock-in method is important for both gene-therapy applications and for basic biological research to study gene function," said Zhao, who leads the biosystems design theme at the Carl R. Woese Institute for Genomic Biology at Illinois. "With a knock-in method, we can add a label to any gene, study its function and see how gene expression is affected by cancer or changes in chromosome structure. Or for gene-therapy applications, if someone has a disease caused by a missing gene, we want to be able to insert it."

Searching for a way to increase efficiency, Zhao's group looked at 13 different ways to modify the inserted DNA. They found that small changes to the very end of the DNA increased both the speed and efficiency of insertion.

Then, the researchers tested inserting end-modified DNA fragments of varying sizes at multiple points in the genome, using CRISPR-Cas9 to precisely target specific sites for insertion. They found efficiency improved two to five times, even when inserting larger DNA fragments - the most difficult insertion to make.

"We speculate that the efficiency improved so much because the chemical modification to the end stabilizes the DNA we are inserting," Zhao said. "Normally, when you try to transfer DNA into the cell, it gets degraded by enzymes that eat away at it from the ends. We think our chemical addition protects the ends. More DNA is getting into the nucleus, and that DNA is more stable, so that's why I think it has a higher chance to be integrated into the chromosome."

Zhao's group already is using the method to tag essential genes in gene function studies. They purposely used off-the-shelf chemicals to modify the DNA fragments so that other research teams could use the same method for their own genetic studies.

"We've developed quite a few knock-in methods in the past, but we never thought about just using chemicals to increase the stability of the DNA we want to insert," Zhao said. "It's a simple strategy, but it works."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Gazing into crystal balls to advance understanding of crystal formation

Tokyo, Japan--Crystallization is the transformation of disordered molecules in a liquid or gas into a highly ordered solid crystal, and it proceeds through two stages: nucleation and growth. Because crystallization occurs in a wide range of materials, including metals, organic compounds, and biological molecules, it is important throughout the materials and natural sciences, and a comprehensive understanding of the process is highly desirable.

Colloids consisting of hard spheres suspended in a liquid are often used as a model system to study crystallization. For many years, a large discrepancy of up to ten orders of magnitude has been observed between the computationally simulated and experimentally measured nucleation rates of hard-sphere colloids. This discrepancy has typically been explained by the simulations not taking hydrodynamic interactions--interactions between particles mediated by the flow of the surrounding solvent--into account. Researchers at The University of Tokyo Institute of Industrial Science, the University of Oxford, and Sapienza University of Rome recently teamed up to further explore this explanation for the discrepancy between actual and calculated nucleation rates.

The collaboration first developed a hard-sphere colloidal model that could reliably simulate the experimental thermodynamic behavior of real hard-sphere systems. Next, they simulated crystallization in the model system both with and without hydrodynamic interactions to clarify the effect of these interactions on crystallization behavior.

"We initially designed a simulation model that accurately reproduced the real thermodynamics of hard-sphere systems," says study lead author Michio Tateno. "This confirmed the reliability and suitability of the model for use in further simulations."

The simulations run with and without hydrodynamic interactions revealed that these interactions did not affect the nucleation rate, contrary to the prevailing consensus. Plots of nucleation rate against the proportion of hard spheres in the system were the same for calculations both with and without hydrodynamic interactions, and they also agreed with results reported by another research group.

"We performed calculations using the developed model with and without considering hydrodynamic interactions," explains senior author Hajime Tanaka. "The calculated rates of crystal nucleation were similar in both cases, which led us to conclude that hydrodynamic interactions do not explain the hugely different nucleation rates obtained experimentally and theoretically."

The research team's findings clearly illustrated that hydrodynamic interactions are not the origin of the large discrepancy between experimental and simulated nucleation rates. Their results further our understanding of crystallization behavior but leave the origin of this large discrepancy unexplained.

The article "Influence of hydrodynamic interactions on colloidal crystallization" was published in Physical Review Letters.

Credit: 
Institute of Industrial Science, The University of Tokyo

Space-time metasurface makes light reflect only in one direction

image: An illustration showing the concept of a space-time phase modulated metasurface consisting of resonating dielectric nanoantennas operating in reflection mode. A travelling phase modulation in sinusoidal form is superposed on the designed phase gradient along the horizontal direction. Light impinging on the metasurface with frequency ω is converted to a reflecting beam with frequency ω-Δω due to the parametric process arising from dynamic phase modulation, while the backpropagating beam with frequency ω-Δω is converted to ω-2Δω instead of ω, resulting in a nonreciprocal effect.

Image: 
by Xuexue Guo, Yimin Ding, Yao Duan, and Xingjie Ni

Light propagation is usually reciprocal, meaning that light travelling in one direction follows the same trajectory as light travelling in the opposite direction. Breaking reciprocity allows light to propagate in only one direction. Optical components that support such a unidirectional flow of light, such as isolators and circulators, are indispensable building blocks in many modern laser and communication systems. They are currently almost exclusively based on the magneto-optic effect, which makes the devices bulky and difficult to integrate. A magnet-free route to nonreciprocal light propagation is therefore in great demand for many optical applications.

Recently, scientists developed a new type of optical metasurface with which phase modulation in both space and time is imposed on the reflected light, leading to different paths for the forward and backward light propagation. For the first time, nonreciprocal light propagation in free space was realized experimentally at optical frequencies with such an ultrathin component.

"This is the first optical metasurface with controllable ultrafast time-varying properties that is capable of breaking optical reciprocity without a bulky magnet," said Xingjie Ni, the Charles H. Fetter Assistant Professor in Department of Electrical Engineering at the Pennsylvania State University. The results were published this week in Light: Science and Applications.

The ultrathin metasurface consists of a silver back-reflector plate supporting block-shaped silicon nanoantennas with a large nonlinear Kerr index at near-infrared wavelengths around 860 nm. Heterodyne interference between two laser lines that are closely spaced in frequency was used to create an efficient travelling-wave refractive index modulation on the nanoantennas, which leads to ultrafast space-time phase modulation with an unprecedentedly large temporal modulation frequency of about 2.8 THz. This dynamic modulation technique offers great flexibility in tuning both the spatial and temporal modulation frequencies. Completely asymmetric reflections for forward and backward light propagation were achieved experimentally over a wide bandwidth of about 5.77 THz within a sub-wavelength interaction length of 150 nm.

Light reflected by the space-time metasurface acquires a momentum shift induced by the spatial phase gradient as well as a frequency shift arising from the temporal modulation. It exhibits asymmetric photonic conversions between forward and backward reflections. In addition, by exploiting the unidirectional momentum transfer provided by the metasurface geometry, selective photonic conversions can be freely controlled by designing the undesired output state to lie in the forbidden (non-propagating) region.
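The bookkeeping behind this asymmetry can be written out compactly. The relations below simply restate the frequency conversion described in the figure caption, with Δω the temporal modulation frequency; they are an illustration of why the round trip fails to return light to its original state, not a full account of the parametric process.

```latex
% Frequency conversion on reflection from the space-time metasurface
% (restating the figure caption; \Delta\omega is the temporal modulation frequency):
\text{forward:}\quad \omega \;\longrightarrow\; \omega - \Delta\omega, \qquad
\text{backward:}\quad \omega - \Delta\omega \;\longrightarrow\; \omega - 2\Delta\omega \;\neq\; \omega .
```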

This approach exhibits excellent flexibility in controlling light in both momentum and energy space. It provides a new platform for exploring interesting physics arising from time-dependent material properties and opens a new paradigm in the development of scalable, easily integrated, magnet-free nonreciprocal devices.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Using a chip to find better cancer fighting drugs

image: The device allows researchers to mimic the environment inside the body to screen for better cancer-fighting drugs.

Image: 
Kyoto University/Yokokawa Lab

Japan -- Kyoto University researchers have developed a new 'tumor-on-a-chip' device that can better mimic the environment inside the body, paving the way for improved screening of potential cancer fighting drugs.

The path to drug discovery is never easy. Scientists and clinicians can spend years working through tens of thousands of potential compounds to find a handful of viable candidates, only for them to fail at the clinical level.

"Potential compounds are tested using animal models and cells cultured in a dish. However, those results frequently do not transfer over to human biology," explains first author Yuji Nashimoto formally of the Graduate School of Engineering, and now at Tohoku University. "Furthermore, cells on a dish lack the three-dimensional structure and blood vessels, or vasculature, that keep it alive. So, we came up with a plan to construct a device that solves these issues."

The device, reported in the journal Biomaterials, is the size of a coin with a 1 mm well at the center. The well is flanked by a series of 100 μm 'microposts'. The idea is that a three-dimensional culture of tumor cells is placed in the central well, and cells that construct blood vessels are placed along the microposts. Over a few days, the vessels grow and attach to the culture.

"This 'perfusable vasculature' allows us to administer nutrients and drugs into the system to mimic the environment in the body," continues Nashimoto. "This allows us to have a clearer picture of the effectiveness of cancer treating compounds."

This perfusion kept the tumor cells significantly healthier, sustaining cell proliferation and reducing cell death. The team then performed a drug assay, administering an anti-tumor drug at low doses. Interestingly, the drug was more effective under static conditions than when nutrients were flowing through the tumor cells.

In contrast, the drug's effects became more potent when the flow was turned on and the dosage was increased. Ryuji Yokokawa, who led the team, explains that the unexpected results show the need to consider the balance between the proliferation of tumor cells and the efficacy of the drug under flow conditions.

"We hypothesize that at low doses the benefit of the nutrient flow outweighs the effect of the anti-tumor drug. It proves the importance of blood flow in the vasculature when screening for drugs."

He concludes, "Due to its size and utility, we hope the new device can expedite the tests on the countless number of potential new drugs. While many questions remain, we are happy to have developed this device and have shown that three-dimensional perfused cell culture is vital for the next step in drug discovery."

Credit: 
Kyoto University

Researchers produce first laser ultrasound images of humans

For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient's skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.

Conventional ultrasound doesn't expose patients to harmful radiation as X-ray and CT scanners do, and it's generally noninvasive. But it does require contact with a patient's body, and as such, may be limiting in situations where clinicians might want to image patients who don't tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.

Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn't require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient's skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.

In a paper published today in the Nature journal Light: Science and Applications, the team reports generating the first laser ultrasound images in humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimeters below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.

"We're at the beginning of what we could do with laser ultrasound," says Brian W. Anthony, a principal research scientist in MIT's Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. "Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient."

Anthony's co-authors on the paper are lead author and MIT postdoc Xiang (Shawn) Zhang, recent doctoral graduate Jonathan Fincke, along with Charles Wynn, Matthew Johnson, and Robert Haupt of MIT's Lincoln Laboratory.

Yelling into a canyon -- with a flashlight

In recent years, researchers have explored laser-based methods for ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned to a particular wavelength, that penetrates the skin and is absorbed by blood vessels.

The blood vessels rapidly expand and relax -- instantly heated by a laser pulse then rapidly cooled by the body back to their original size -- only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.

While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What's more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.

Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam's light into sound waves at the surface of the skin, in order to image deeper in the body.

Based on their research, the team selected 1,550-nanometer lasers, a wavelength which is highly absorbed by water (and is eye- and skin-safe with a large safety margin). As skin is essentially composed of water, the team reasoned that it should efficiently absorb this light, and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.

The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves. This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser's frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.
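To get a feel for how surface motion translates into a measurable frequency change, the textbook laser Doppler relation is a useful reference point. This is an illustrative simplification rather than the paper's full detection model: a reflecting surface moving with velocity v along the beam shifts light of wavelength λ by

```latex
% Illustrative laser-Doppler shift for wavelength \lambda and surface velocity v(t):
\Delta f(t) = \frac{2\,v(t)}{\lambda}
% e.g. v = 1\ \text{mm/s} at \lambda = 1550\ \text{nm} gives \Delta f \approx 1.3\ \text{kHz}.
```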

"It's like we're constantly yelling into the Grand Canyon while walking along the wall and listening at different locations," Anthony says. "That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against -- and the yelling is done with a flashlight."

In-home imaging

The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin's water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue -- in this case, pig skin -- where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.

Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully noncontact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.

The researchers plan to improve their technique, and they are looking for ways to boost the system's performance to resolve fine features in the tissue. They are also looking to hone the detection laser's capabilities. Further down the road, they hope to miniaturize the laser setup, so that laser ultrasound might one day be deployed as a portable device.

"I can imagine a scenario where you're able to do this in the home," Anthony says. "When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state."

This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command's Military Operational Medicine Research Program.

Credit: 
Massachusetts Institute of Technology

NIST study suggests universal method for measuring light power

image: Optical power comparisons

Image: 
N. Hanacek/NIST

Always on the lookout for better ways to measure all kinds of things, researchers at the National Institute of Standards and Technology (NIST) have published a detailed study suggesting an "elegant" improved definition for the standard unit of light power, the optical watt.

But it's more than just an opportunity to nerd out over an international measurement unit. The proposed definition promises a more precise, less expensive and more portable method for measuring this important quantity for science, technology, manufacturing, commerce and national defense.

Instead of the current definition based on comparisons to electrical heating, the NIST study suggests the optical watt could be determined from light's radiation force and its speed, a fundamental constant. Fundamental constants are numbers that stay the same all over the world, making measurements based on them universal. Primary standards that define measurement units based on fundamental constants are considered ideal.

"Johannes Kepler made the first observations of radiation pressure in the early 17th century," NIST project leader John Lehman said. "A few years ago, we decided to create a primary measurement standard on this basis." Maybe someday we will all redefine optical power this way."

The proposed definition is based on James Clerk Maxwell's work in 1862 showing that the force exerted by light is proportional to its power divided by the speed of light. Now, practical measurement standards can be created on this basis thanks to recent NIST technical advances.

The NIST proposal is in sync with the recent redefinition of the International System of Units (SI), which offers more reliable measurements based on unchanging properties of nature. The watt is the SI unit of power, which, as Maxwell found, can be used to calculate force, or vice versa. The NIST suggestion might also help resolve debates over how best to define a related optical quantity known as luminous intensity. This quantity is expressed in terms of the SI base unit known as the candela, which depends in part on the properties of human vision.

The new study was done in the context of NIST's laser power measurement and calibration services, first offered in 1974. In recent years NIST researchers have expanded the range of light power that can be measured, from very low light intensity of a few photons (light particles) per second to 100-kilowatt lasers emitting 100,000,000,000,000,000,000,000 photons per second.

To measure optical power the conventional way, researchers aim a laser at a coated detector, measure the detector's temperature change, and then determine the electrical power needed to generate an equivalent amount of heat. As is required for measurement standards, this method can be traced to the SI, but indirectly through the volt and the ohm, which in turn are "derived units" based on equations using multiple SI units. NIST's equipment for measuring the optical watt this way is large and not portable.

In the newer approach, laser power can be measured by comparing it to the force of gravity on a reference mass (typically weighed on an "analytical balance" or scale) or an equivalent force. NIST has recently developed a technique that measures a laser's force, or the push exerted on a mirror by a stream of photons. The result, measured in either milligrams (mass) or micronewtons (force), is traceable to the SI base unit the kilogram and can be used to calculate optical power. The approach is especially helpful for high-power lasers used in manufacturing and the military because it is simpler, faster, less costly and more portable than conventional methods.

The new NIST study established the validity of the mass/force approach by finding strong agreement between its results and those of the conventional method. But the NIST proposal also offers several significant advantages: a portable primary standard in the form of a reference mass, and the potential for improved precision.

Calculations based on mass require knowing exactly how strongly gravity accelerates objects at a particular altitude and location. Force calculations do not, which makes force the simpler basis for a primary standard.

The new definition could be:

One watt of optical power is that which, upon normal reflection from a perfect mirror, produces a force whose magnitude (in newtons) is equal to 2 divided by the speed of light; or

One newton of force is that which is produced when an optical power (in watts) of a magnitude equal to the speed of light divided by 2 reflects normally from a perfectly reflecting mirror.
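Both statements restate the Maxwell relation cited above for normal reflection from a perfect mirror. Written as an equation, with a worked example for scale:

```latex
% Radiation force on a perfect mirror at normal incidence:
F = \frac{2P}{c}
% P = 1\ \text{W} \Rightarrow F = 2/c \approx 6.7\ \text{nN};
% a 1\ \text{kW} laser exerts about 6.7\ \mu\text{N},
% equivalent to the weight of roughly 0.68\ \text{mg}.
```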

The NIST team plans to present its results to date and proposal at an upcoming conference and continue the line of research.

"We are pursuing research to enable the radiation pressure approach to make the most accurate measurement of laser power in the world," NIST physicist Paul Williams said.

Credit: 
National Institute of Standards and Technology (NIST)

Counting photons is now routine enough to need standards

image: As part of a research project to help establish standards for photon-counting detectors, NIST physicist Thomas Gerrits adjusts the laser beam hitting a detector. The squiggly overhead light helps researchers see the lab setup without disturbing the detectors, which are insensitive to blue light.

Image: 
J. Burrus/NIST

Since the National Institute of Standards and Technology (NIST) built its first superconducting devices for counting photons (the smallest units of light) in the 1990s, these once-rare detectors have become popular research tools all over the world. Now, NIST has taken a step toward enabling universal standards for these devices, which are becoming increasingly important in science and industry.

Single-photon detectors (SPDs) are now key to research areas ranging from optical communications and astrophysics to cutting-edge information technologies based on quantum physics, such as quantum cryptography and quantum teleportation.

To ensure their accuracy and reliability, SPDs need to be evaluated and compared to some benchmark, ideally a formal standard. NIST researchers are developing methods to do that and have already started to perform custom calibrations for the handful of companies that make SPDs.

The NIST team has just published methods for measuring the efficiency of five SPDs, including one made at NIST, as a prelude to offering an official calibration service.

"This is a first step towards implementation of a quantum standard -- we produced a tool to verify a future single-photon detection standard," NIST physicist Thomas Gerrits said. "There is no standard right now, but many national metrology institutes, including NIST, are working on this."

"There have been journal papers on this topic before, but we did in-depth uncertainty analyses and described in great detail how we did the tests," Gerrits said. "The aim is to serve as a reference for our planned calibration service."

NIST is uniquely qualified to develop these evaluation methods because the institute makes the most efficient SPDs in the world and is constantly improving their performance. NIST specializes in two superconducting designs -- one based on nanowires or nanostrips, evaluated in the new study, and transition-edge sensors, to be studied in the near future. Future work may also address standards for detectors that measure very low light levels but can't count the number of photons.

In the modern metric system, known as the SI, the basic unit of measurement that's most closely related to photon detection is the candela, which is relevant to light detected by the human eye. Future SI redefinitions might include photon-counting standards, which could offer a more accurate way of measuring light in terms of the candela. Single-photon light levels are less than one-billionth of the amounts in current standards.

The new paper details NIST's use of conventional technologies to measure SPD detection efficiency, defined as the probability of detecting a photon hitting the detector and producing a measurable outcome. The NIST team ensured the measurements are traceable to a primary standard for optical power meters (NIST's Laser Optimized Cryogenic Radiometer). The meters maintain accuracy as measurements are scaled down to low light levels, with the overall measurement uncertainty mostly due to the power meter calibration.
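As a rough sketch of the arithmetic behind such a measurement (not NIST's actual procedure, and with purely hypothetical numbers), the detection efficiency compares the detector's count rate with the photon rate implied by a calibrated, heavily attenuated laser power:

```python
# Rough sketch of a detection-efficiency estimate (illustrative only):
# a calibrated optical power P at wavelength lam implies a photon arrival
# rate R = P * lam / (h * c); efficiency is measured counts divided by R.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def detection_efficiency(power_w, wavelength_m, count_rate_hz, dark_rate_hz=0.0):
    """Estimate single-photon detection efficiency from a calibrated power."""
    photon_rate = power_w * wavelength_m / (H * C)   # photons per second
    return (count_rate_hz - dark_rate_hz) / photon_rate

# Hypothetical example: 100 fW at 1533.6 nm is about 7.7e5 photons/s;
# 6.2e5 background-corrected counts/s would imply roughly 80% efficiency.
eff = detection_efficiency(100e-15, 1533.6e-9, 6.2e5, dark_rate_hz=100.0)
print(f"estimated efficiency: {eff:.1%}")
```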

The researchers measured the efficiencies of five detectors, including three silicon photon-counting photodiodes and NIST's nanowire detector. Photons were sent by optical fiber for some measurements and through the air in other cases. Measurements were made for two different wavelengths of light commonly used in fiber optics and communications. Uncertainties ranged from a low of 0.70% for measurements in fiber at a wavelength of 1533.6 nanometers (nm) to 1.78% for over-the-air readings at 851.7 nm.

Credit: 
National Institute of Standards and Technology (NIST)

One-off genetic score can detect stroke risk from birth

A group of investigators from Australia, Germany, and the UK have shown that genetic data obtained from a single blood draw or saliva sample can be used to identify individuals at a 3-fold increased risk of developing ischaemic stroke, a devastating condition and one of the leading causes of disability and death world-wide. The scientists developed a genetic risk score that is similarly or more predictive than commonly known risk factors for stroke. Their work further suggests that individuals with high genetic risk may require more intensive preventive measures to mitigate stroke risk than is recommended by current guidelines.

"Genomic risk prediction, based on an individual's unique DNA sequence, has distinct advantages over established risk factors, as it could be used to infer risk of disease from birth. It may thus allow initiation of preventive strategies before individuals develop conventional risk factors for stroke such as hypertension or hyperlipidemia," said Martin Dichgans, Professor of Neurology and Director at the Institute for Stroke and Dementia Research (ISD), University Hospital, Ludwig-Maximilians-University (LMU) Munich, and one of the leaders of the current study.

The results of this study were published online in the journal Nature Communications. The study utilised large-scale genetic data from research groups worldwide and applied the results to data on 420,000 individuals from the UK Biobank.

The study was led by investigators from the Baker Heart and Diabetes Institute (Australia), University of Cambridge (UK), and Ludwig-Maximilians-University, Munich (Germany).

"The sequencing of the human genome has revealed many insights. For common diseases, such as stroke, it is clear that genetics is not destiny; however, each person does have their own innate risk for any particular disease. The challenge is now how we best incorporate this risk information into clinical practice so that the public can live healthier and longer." said Dr Michael Inouye, of the Baker Heart and Diabetes Institute and University of Cambridge, and another leader of the current study.

Stroke is the second most common cause of both death and disability-adjusted life-years worldwide. About 80% of stroke cases are caused by occlusion of a brain supplying artery (so-called 'ischaemic stroke'). The risk of ischaemic stroke is determined by genetic and environmental factors, which act through modifiable risk factors such as hypertension and diabetes.

In the study, the researchers employed a machine learning approach to integrate stroke-related genetic data from various sources into a single genetic risk score for each individual. They then assessed the performance of this new genetic risk score in the UK Biobank and found that it both outperformed previous genetic scores and had similar predictive performance as other well-known risk factors for stroke, such as smoking status or body mass index.
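The core output of such an approach is a single per-person score. Below is a minimal sketch of how a genetic (polygenic) risk score is typically applied, assuming a table of pre-trained per-variant weights (the part that the study's machine-learning integration would produce); the weights and dosages shown are hypothetical, and this is not the study's pipeline.

```python
# Minimal polygenic-risk-score sketch (illustrative; not the study's pipeline).
# Assumes pre-trained per-variant weights and per-person allele dosages (0-2).

import numpy as np

def polygenic_risk_score(dosages, weights):
    """Weighted sum of risk-allele dosages for one person."""
    return float(np.dot(dosages, weights))

# Hypothetical example: 5 variants, weights from a trained model.
weights = np.array([0.12, -0.05, 0.30, 0.08, 0.21])   # per-allele weights
person  = np.array([2, 1, 0, 1, 2])                    # allele counts at each variant

score = polygenic_risk_score(person, weights)

# Scores are usually standardized across a cohort; people in the upper tail
# (e.g. the roughly 1 in 400 flagged in the study) get the high-genetic-risk label.
print(f"raw score: {score:.2f}")
```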

Importantly, the new genetic risk score was significantly better than family history at predicting future ischaemic stroke, to the extent that it could detect the roughly 1 in 400 individuals at 3-fold increased risk.

Individuals at high genetic risk of ischaemic stroke are not without options, however. The researchers further showed that these individuals may still substantially reduce their stroke risk by minimizing their conventional risk factors, including lowering blood pressure and body mass index and ceasing smoking.

The study's analyses show that current clinical guidelines may be insufficient for individuals at high genetic risk of stroke, and that these individuals may need more intensive interventions.

With non-invasive, affordable DNA genotyping array technology and a new genetic risk score for ischaemic stroke, the future looks bright for genomic medicine to enable effective early interventions for those at high risk of stroke and, indeed, other cardiovascular diseases.

Credit: 
Baker Heart and Diabetes Institute

Bilingual children are strong, creative storytellers, study shows

image: Elena Nicoladis, professor in the Department of Psychology, is lead author on a new study exploring storytelling ability in bilingual children.

Image: 
John Ulan

Bilingual children use as many words as monolingual children when telling a story, and demonstrate high levels of cognitive flexibility, according to new research by University of Alberta scientists.

"We found that the number of words that bilingual children use in their stories is highly correlated with their cognitive flexibility--the ability to switch between thinking about different concepts," said Elena Nicoladis, lead author and professor in the Department of Psychology in the Faculty of Science. "This suggests that bilinguals are adept at using the medium of storytelling."

Vocabulary is a strong predictor of school achievement, and so is storytelling. "These results suggest that parents of bilingual children do not need to be concerned about long-term school achievement," said Nicoladis. "In a storytelling context, bilingual kids are able to use this flexibility to convey stories in creative ways."

The research examined a group of French-English bilingual children who have been taught two languages since birth, rather than learning a second language later in life. Results show that bilingual children used just as many words to tell a story in English as monolingual children. Participants also used just as many words in French as they did in English when telling a story.

Previous research has shown that bilingual children score lower than monolingual children on traditional vocabulary tests, meaning these results are changing our understanding of multiple languages and cognition in children.

"The past research is not surprising," added Nicoladis. "Learning a word is related to how much time you spend in each language. For bilingual children, time is split between languages. So, unsurprisingly, they tend to have lower vocabularies in each of their languages. However, this research shows that as a function of storytelling, bilingual children are equally strong as monolingual children."

This research used a new, highly sensitive measure of cognitive flexibility that assesses a participant's ability to switch between games with different rules while accounting for both accuracy and reaction time. The study builds on previous research examining vocabulary in bilingual children who learned English as a second language.

Credit: 
University of Alberta

Model beats Wall Street analysts in forecasting business financials

Knowing a company's true sales can help determine its value. Investors, for instance, often employ financial analysts to predict a company's upcoming earnings using various public data, computational tools, and their own intuition. Now MIT researchers have developed an automated model that significantly outperforms humans in predicting business sales using very limited, "noisy" data.

In finance, there's growing interest in using imprecise but frequently generated consumer data -- called "alternative data" -- to help predict a company's earnings for trading and investment purposes. Alternative data can comprise credit card purchases, location data from smartphones, or even satellite images showing how many cars are parked in a retailer's lot. Combining alternative data with more traditional but infrequent ground-truth financial data -- such as quarterly earnings, press releases, and stock prices -- can paint a clearer picture of a company's financial health on even a daily or weekly basis.

But, so far, it's been very difficult to get accurate, frequent estimates using alternative data. In a paper published this week in the Proceedings of the ACM SIGMETRICS Conference, the researchers describe a model for forecasting financials that uses only anonymized weekly credit card transactions and three-month earnings reports.

Tasked with predicting quarterly earnings of more than 30 companies, the model outperformed the combined estimates of expert Wall Street analysts on 57 percent of predictions. Notably, the analysts had access to any available private or public data and other machine-learning models, while the researchers' model used a very small dataset of the two data types.

"Alternative data are these weird, proxy signals to help track the underlying financials of a company," says first author Michael Fleder, a postdoc in the Laboratory for Information and Decision Systems (LIDS). "We asked, 'Can you combine these noisy signals with quarterly numbers to estimate the true financials of a company at high frequencies?' Turns out the answer is yes."

The model could give an edge to investors, traders, or companies looking to frequently compare their sales with competitors. Beyond finance, the model could help social and political scientists, for example, to study aggregated, anonymous data on public behavior. "It'll be useful for anyone who wants to figure out what people are doing," Fleder says.

Joining Fleder on the paper is EECS Professor Devavrat Shah, who is the director of MIT's Statistics and Data Science Center, a member of the Laboratory for Information and Decision Systems, a principal investigator for the MIT Institute for Foundations of Data Science, and an adjunct professor at the Tata Institute of Fundamental Research.

Tackling the "small data" problem

For better or worse, a lot of consumer data is up for sale. Retailers, for instance, can buy credit card transactions or location data to see how many people are shopping at a competitor. Advertisers can use the data to see how their advertisements are impacting sales. But getting those answers still primarily relies on humans. No machine-learning model has been able to adequately crunch the numbers.

Counterintuitively, the problem is actually a lack of data. Each financial input, such as a quarterly report or weekly credit card total, is only one number. Quarterly reports over two years total only eight data points. Credit card data for, say, every week over the same period add only roughly another 100 "noisy" data points, meaning they contain potentially uninterpretable information.

"We have a 'small data' problem," Fleder says. "You only get a tiny slice of what people are spending and you have to extrapolate and infer what's really going on from that fraction of data."

For their work, the researchers obtained consumer credit card transactions -- typically at weekly and biweekly intervals -- and quarterly reports for 34 retailers from 2015 to 2018 from a hedge fund. Across all companies, they gathered 306 quarters' worth of data in total.

Computing daily sales is fairly simple in concept. The model assumes a company's daily sales remain similar, only slightly decreasing or increasing from one day to the next. Mathematically, that means each day's sales equal the previous day's sales multiplied by some constant, plus a statistical noise value -- which captures some of the inherent randomness in a company's sales. Tomorrow's sales, for instance, equal today's sales multiplied by, say, 0.998 or 1.01, plus the estimated noise value.
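As a rough illustration of that day-to-day dynamic (not the authors' code), the model can be written as tomorrow's sales = constant x today's sales + noise and simulated directly; the constant, noise scale, and starting value below are made-up numbers.

# Sketch of the assumed daily-sales dynamic: tomorrow's sales are today's
# sales times a constant close to 1, plus random noise.
# All parameter values here are illustrative, not estimated from real data.
import numpy as np

rng = np.random.default_rng(seed=0)
a = 0.999            # daily multiplicative constant (assumed)
noise_std = 50.0     # standard deviation of the daily noise (assumed)

sales = [10_000.0]   # starting daily sales in dollars (assumed)
for _ in range(89):  # simulate the remaining days of a ~90-day quarter
    sales.append(a * sales[-1] + rng.normal(0.0, noise_std))

quarterly_total = sum(sales)
print(f"Simulated quarterly sales: {quarterly_total:,.0f}")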

If given accurate model parameters for the daily constant and noise level, a standard inference algorithm can calculate that equation to output an accurate forecast of daily sales. But the trick is calculating those parameters.

Untangling the numbers

That's where quarterly reports and probability techniques come in handy. In a simple world, a quarterly report could be divided by, say, 90 days to calculate the daily sales (implying sales are roughly constant day-to-day). In reality, sales vary from day to day. Also, including alternative data to help understand how sales vary over a quarter complicates matters: Apart from being noisy, purchased credit card data always consist of some indeterminate fraction of the total sales. All that makes it very difficult to know how exactly the credit card totals factor into the overall sales estimate.

"That requires a bit of untangling the numbers," Fleder says. "If we observe 1 percent of a company's weekly sales through credit card transactions, how do we know it's 1 percent? And, if the credit card data is noisy, how do you know how noisy it is? We don't have access to the ground truth for daily or weekly sales totals. But the quarterly aggregates help us reason about those totals."

To do so, the researchers use a variation of the standard inference algorithm, called Kalman filtering or belief propagation, which has been used in technologies ranging from space shuttles to smartphone GPS. Kalman filtering uses data measurements observed over time, which contain noise and inaccuracies, to generate a probability distribution for unknown variables over a designated timeframe. In the researchers' work, that means estimating the possible sales of a single day.

To train the model, the technique first breaks down quarterly sales into a set number of measured days, say 90 -- allowing sales to vary day-to-day. Then, it matches the observed, noisy credit card data to unknown daily sales. Using the quarterly numbers and some extrapolation, it estimates the fraction of total sales the credit card data likely represents. Then, it calculates each day's fraction of observed sales, noise level, and an error estimate for how well it made its predictions.
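A bare-bones version of that filtering step might look like the sketch below: a scalar Kalman filter that tracks latent daily sales from credit card observations assumed to capture a small, noisy fraction of true sales. The parameters and data are placeholders, and the published model also enforces consistency with the quarterly totals, which this toy version omits.

# Toy scalar Kalman filter: latent state x_t = true daily sales,
# observation z_t = credit card total, assumed to equal a fraction f of sales
# plus observation noise. All parameters below are illustrative.
import numpy as np

def kalman_daily_sales(obs, a=0.999, f=0.01, q=50.0**2, r=20.0**2,
                       x0=10_000.0, p0=1_000.0**2):
    """obs: observed credit card totals per day (np.nan where unobserved)."""
    x, p = x0, p0              # state estimate and its variance
    estimates = []
    for z in obs:
        # Predict: sales drift by the daily constant, uncertainty grows.
        x, p = a * x, a * a * p + q
        if not np.isnan(z):
            # Update: observation model z = f * x + noise.
            s = f * f * p + r  # innovation variance
            k = p * f / s      # Kalman gain
            x = x + k * (z - f * x)
            p = (1 - k * f) * p
        estimates.append(x)
    return np.array(estimates)

# Hypothetical usage: credit card totals observed once a week over a 90-day quarter.
days = 90
obs = np.full(days, np.nan)
obs[::7] = 100.0 + np.random.default_rng(1).normal(0.0, 20.0, size=len(obs[::7]))
daily_sales = kalman_daily_sales(obs)
print("Estimated quarterly sales:", daily_sales.sum())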

The inference algorithm plugs all those values into the formula to predict daily sales totals. Then, it can sum those totals to get weekly, monthly, or quarterly numbers. Across all 34 companies, the model beat a consensus benchmark -- which combines estimates of Wall Street analysts -- on 57.2 percent of 306 quarterly predictions.

Next, the researchers are designing the model to analyze a combination of credit card transactions and other alternative data, such as location information. "This isn't all we can do. This is just a natural starting point," Fleder says.

Credit: 
Massachusetts Institute of Technology