
For many, 'flexible work boundaries' become 'work without boundaries'

image: Liuba Belkin is an associate professor of management at Lehigh University.

Image: Lehigh University

Personal relationships and home life suffer for those tied to their work emails round-the-clock, according to a new study. The study is the first to test the relationship between organizational expectations to monitor work-related electronic communication during non-work hours and the health and relationship satisfaction of employees and their significant others.

In "Killing Me Softly: Electronic Communications Monitoring and Employee and Spouse Well-Being," researchers report that such expectations are "an insidious stressor that not only increases employee anxiety, decreases their relationship satisfaction and has detrimental effects on employee health, but also that it negatively affects partner (significant other) health and marital satisfaction perceptions," said Liuba Belkin, associate professor of management at Lehigh University. Belkin co-authored the article with William Becker of Virginia Tech, Samantha A. Conroy of Colorado State University, and Sarah Tuskey, a Virginia Tech doctoral student.

The research will be presented at the Academy of Management annual meeting, held Aug. 10-14, 2018, in Chicago, Ill. and is published in the Academy of Management Best Paper Proceedings.

Regardless of how much time individuals actually spent monitoring and answering work emails outside of work hours, the mere presence of organizational expectations to monitor email outside of work led to employee anxiety and negative effects on well-being, which also affected their partners (spouses/significant others), Belkin said. "Thus, we demonstrated that these normative expectations for work email monitoring during non-work hours are a significant stressor above and beyond actual workload and time spent on handling it during non-work hours," she said. In addition, the strong negative impact of such organizational expectations was found to not only affect employees, but also their significant others' well-being, demonstrating a "spillover effect," Belkin said.

The research builds upon earlier work by the researchers that examined organizational expectations to monitor email and its effects on employees' ability to detach from work, emotional exhaustion and work-family balance perceptions. That study, "Exhausted, But Unable to Disconnect: The Impact of Email-Related Organizational Expectations on Work-Family Balance," was the first to identify email-related expectations as a job stressor along with already established factors such as high workload, interpersonal conflicts, physical environment or time pressure.

What Can Employers and Individuals Do?

For individuals, mindfulness training has been shown to be an effective approach to reducing anxiety and work-related negative affect and could possibly help with the long-term health and relationship satisfaction effects of electronic communication demands on employees and their partners, the study reports. "Mindfulness is a practice within the control of the employee even if email expectations are not (i.e., those are enforced by their organization or their manager)," Belkin said.

For organizations, policies that reduce expectations to monitor electronic communication outside of work would be ideal. "This may not always be an option due to various industry/job demands," Belkin said. "Nevertheless, organizations could set off-hour email windows and limit use of electronic communications outside of those windows or set up email schedules when various employees are available to respond." The idea would be to create clear boundaries for employees that indicate the times when work role identity enactment is likely to be needed and the times when employees can focus solely on their family role identities. For example, research indicates that when employees are allowed to engage in part-time telecommuting practices, they experience less emotional exhaustion and decreased work-family conflict, Belkin said.

Additionally, organizational expectations should be communicated clearly. If the nature of a job requires email availability, such expectations should be stated formally as a part of job responsibilities. Putting these expectations upfront may not only reduce anxiety and negative emotions, but also increase understanding from significant others by "reframing" work and family boundaries and surrounding expectations around employee work-family time, the study reports.

For the study, researchers recruited 142 sets of full-time employees and their significant others. "Our findings extend literature on work-related electronic communication at the interface of work and non-work and deepen our understanding of the impact of organizational expectations on employees and their families," the study concluded.

Credit: Lehigh University

Study defines spending trends among dual-eligible beneficiaries

While there has been much effort to control spending for individuals eligible for both Medicaid and Medicare in the United States, a team of Vanderbilt University health policy researchers has, for the first time, analyzed spending trends for this population over a multiyear period in order to gain a much clearer understanding of exactly how much is being spent and by whom.

"We measured how much Medicare spends per dual-eligible beneficiary and how much that changed between 2007 to 2015, and we compared those trends to other beneficiaries who don't have Medicaid," said lead author Laura Keohane, PhD, assistant professor of Health Policy at Vanderbilt University School of Medicine. "After adjusting for increases in Medicare payment rates, over this time period we found that dual-eligible beneficiaries over age 65 on average had very similar spending growth compared to other Medicare beneficiaries. In the most recent years of our study, dual-eligible beneficiaries had lower average annual spending growth."

For younger Medicare beneficiaries under age 65, the team found that dual-eligible beneficiaries actually had slightly lower average spending growth over the entire time period and especially lower spending growth in the latest years of the study period.

The results of their study, which was funded by The Commonwealth Fund, were published in the August edition of Health Affairs, a peer-reviewed journal published by Project Hope.

"It is important for policy makers to know that during this period of slow growth in Medicare spending per beneficiary, spending growth for dual-eligible beneficiaries was even slower," said one of the authors, Melinda Buntin, PhD, professor and chair of the Department of Health Policy at Vanderbilt, one of the study's co-authors.

In 2016, there were 11.7 million individuals simultaneously enrolled in Medicare and Medicaid, according to a recent report from the Centers for Medicare and Medicaid Services (CMS). These dual-eligible beneficiaries typically experience high rates of chronic illness, with many having multiple chronic conditions and long-term care needs. Additionally, 41 percent of dual-eligible beneficiaries have at least one mental health diagnosis. About half of dual-eligible beneficiaries rely upon some form of long-term supports and services, including institutional as well as home and community-based supports (HCBS).

Dual-eligible beneficiaries are challenged by the need to navigate two separate programs: Medicare for coverage of most health care services and prescription medications, and Medicaid for coverage of long-term care, certain behavioral health services, and for help with Medicare premiums and cost-sharing.

"It is very true that dual-eligible beneficiaries have higher spending levels than other Medicare beneficiaries, but the implication of our work is that the gap in spending levels between dual eligible beneficiaries and other beneficiaries is not increasing over time," Keohane said. "If anything, in the last couple of years that gap has decreased a little bit.

"So, yes, dual-eligible beneficiaries are a very high cost population, but in terms of understanding the sustainability of future spending for this population, it is at least somewhat reassuring that their spending growth rates are similar or even lower than other Medicare beneficiaries."

Another finding of the study is that when looking at spending growth across all Medicare beneficiaries, one group with the highest average annual spending growth was individuals with long-term nursing home use. This population had average annual spending growth rates ranging from 1.7 to 4.1 percent depending on age group and Medicaid participation.

The next step for this line of research is discovering why dual-eligible beneficiaries have had lower spending growth in recent years and whether that decrease is related to some of the measures that have been put in place to contain costs, such as the shift to value-based payments and efforts to better coordinate care for dual-eligible beneficiaries, Keohane said.

"To be able to benchmark dual-eligible beneficiaries' spending growths and how spending varies across different sectors and for individuals with different diseases and different demographic characteristics is helpful for being able to better identify areas where we might be able to do more for the dual-eligible population," Keohane said. "There is better data available than ever about who is participating in Medicaid; lack of data had historically been one of the major challenges trying to research this population.

"It's so exciting to see more and more researchers making use of that data. It can be challenging because there are a lot of variations across states in how Medicaid programs operate and important distinctions between types of Medicaid benefits, but considering the health needs of this population, we need more research attention in this area."

Credit: Vanderbilt University Medical Center

Hijacking hormones for plant growth

Hormones designed in the lab through a technique combining chemistry, biology, and engineering might be used to manipulate plant growth in numerous ways, according to a New Phytologist study.

Scientists harnessed the power of synthetic chemistry to design compounds similar to auxin, a small chemical hormone that controls nearly all aspects of plant growth, development, and behavior.

These compounds might be used for various agricultural purposes, for example for manipulating the ripening of fruit crops or for preventing the undesirable spread of transgenes (genes that have been transferred from one organism to another) in the field.

"It is truly gratifying as a plant biologist that collaboration with synthetic chemists could yield such a game-changing tool. With a new version of auxin and its engineered receptor, we could possibly pinpoint the desired auxin action in target plants or tissues of interest without disrupting the physiology of other plant parts or neighbors," said lead author Dr. Keiko Torii, of the University of Washington, in Seattle.

Credit: Wiley

Families with college kids more likely to lose their home during recessions

In times of economic difficulty, having to pay a child's way through college could be a major reason for a family to lose their home. This is according to two US researchers, Jacob Faber of New York University and Peter Rich of Cornell University, in a study published in Springer's journal Demography. Their investigations show that during the Great Recession of the 2000s, banks often foreclosed on the homes of families who were supporting their children's further education. Faber and Rich therefore recommend that policymakers look for other ways to alleviate families' financial burdens in addition to curbing risky mortgages.

Between 2005 and 2011, 39.9 per cent of Americans between the ages of 18 and 24 attended a two- or four-year college (11.1 per cent more than in 1985). Tuition fees also nearly doubled over the same period. Need-based grants and sliding-scale tuition adjustments have made college more accessible to many people, but families still must make a financial contribution. Parents often draw from savings, earnings, and loans to cover this, and some financial advisors are known to recommend that people borrow against their homes.

Faber and Rich evaluated annual college data and foreclosures from 2005 to 2011 among people living in 305 commuting zones in the US. Their sample covered 84.8 per cent of the total US population and included information from rural and urban counties. They analysed data about foreclosures and federal taxes in these zones, and took note of unemployment rates, refinance mortgage debt, home prices, and the number of 19-year-olds living in these areas.

Their findings show that a higher rate of families sending their children to college predicted a higher rate of foreclosures in the subsequent year. They also verified these findings by analyzing three independent datasets tracking individual households over time, each of which shows a greater likelihood of foreclosure among households sending children to college. The results expose a previously unexplored role that higher education costs played in household financial risk during the 2000s.
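
A minimal sketch of the kind of commuting-zone panel regression implied here, with foreclosure rates regressed on the previous year's college-attendance rate plus controls (the toy data, variable names, and plain OLS-with-year-dummies setup are illustrative assumptions, not the authors' actual specification):

    import numpy as np

    # Toy panel: foreclosure rate regressed on the prior year's college-attendance
    # rate plus controls, with year dummies. Purely illustrative synthetic data.
    rng = np.random.default_rng(0)
    n_zones, years = 305, list(range(2005, 2012))
    rows = []
    for z in range(n_zones):
        for t in years[1:]:                      # need a lagged value, so start in 2006
            college_lag = rng.uniform(0.2, 0.6)  # share attending college in year t-1
            unemp = rng.uniform(0.03, 0.12)      # unemployment rate
            refi = rng.uniform(0.0, 0.3)         # refinance mortgage debt proxy
            home_price = rng.uniform(-0.1, 0.1)  # home-price change
            foreclosure = 0.01 + 0.02 * college_lag + 0.05 * unemp + rng.normal(0, 0.002)
            rows.append((foreclosure, college_lag, unemp, refi, home_price, t))

    data = np.array(rows)
    y = data[:, 0]
    X = data[:, 1:5]
    year_dummies = (data[:, 5:6] == np.array(years[1:])).astype(float)   # one column per year
    design = np.column_stack([np.ones(len(y)), X, year_dummies[:, 1:]])  # drop one dummy

    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    print("estimated coefficient on lagged college attendance:", beta[1])

A positive coefficient on the lagged college-attendance rate, after the controls and year effects, is the pattern the finding describes; the real analysis also drew on household-level datasets, which this sketch does not attempt to reproduce.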

"This may help explain why some families with children were more likely to experience foreclosure during this period than childless households - as shown in previous studies. Our findings do not suggest that households' decisions to send children to college were as consequential as housing or labor market dynamics in shaping the Great Recession, but it is important to understand all contributing factors, especially because the penalties of foreclosure can be substantial and lasting," says Faber.

The researchers found that the connection between college attendance and foreclosures persisted for families at all points in the income distribution, suggesting that both poor and nonpoor families have had difficulty supporting their children through college. The authors believe that financial aid for college should therefore be more transparent, flexible and comprehensive, to allow parents to see upfront what they should budget for when their child starts studying, even with the help of financial aid. Moreover, they argue, their findings show that college prices have soared beyond what many families can reasonably afford even with tuition offsets, supporting calls to further rein in the high cost of college access.

Rich explained: "Our study warrants policy attention not only to risky home lending, but also to other determinants of financial hazard--such as the cost of college attendance--that can overextend families and render us all vulnerable to future economic crises."

Credit: Springer

Lessons from flies: genetic diversity impacts disease severity

image: Natural variations in the genetic background of 200 strains of fruit flies contribute to differences in severity of a disease model for retinitis pigmentosa.

Image: University of Utah Health

New research offers clues as to why some diseases are highly variable between individuals. The phenomenon is apparent in people with retinitis pigmentosa, a condition that causes the light-sensing cells in the eye to degenerate. While some only develop night blindness, others completely lose their sight, even when their condition is caused by the same genetic mutation.

By analyzing thousands of flies, scientists at University of Utah Health found that variation in a background gene, called Baldspot, can make a difference in severity of the disease.

"We're seeing that each individual's genetics is a little different and this can have profound impacts on disease outcomes," says Clement Chow, Ph.D., assistant professor of Human Genetics at U of U Health, who carried out the work with first author and postdoctoral researcher Rebecca Palu, Ph.D. Palu and Chow show that Baldspot is independent of the primary disease-causing mutation. Rather, it modifies disease severity by helping cells in the body withstand stressful conditions.

Targeting genes, such as the human equivalent of Baldspot, could be an effective avenue for developing new treatments against some disorders. The research appears online in PLOS Genetics on August 6.

Indirect Influence

Chow gained a new appreciation for the power of genetic diversity three years ago. In collaboration with his advisors at the time, Andrew Clark and Mariana Wolfner, he introduced the same retinitis pigmentosa-causing mutation into 200 strains of fruit flies. While flies within each strain were essentially genetic clones, flies between strains were as genetically variable as distant cousins.

What they saw next was telling. Each population manifested the disease characteristics differently, essentially producing 200 versions of the disorder.

Some fly populations were hardly affected while the eyes of others had degenerated severely, reminiscent of the broad disease variability seen in people. All flies were raised in identical, controlled laboratory conditions, largely ruling out environmental exposures as cause for the heterogeneity. Instead, differences stemmed from genetic variations that occurred naturally.

"This goes to show that studies utilizing many different genetic backgrounds are incredibly informative. Without noting them we risk missing the nuances and that could make treating patients challenging," says Chow.

Right on Target

The findings are particularly relevant to today's precision medicine push to tailor care for each individual, says Chow. Designing therapeutics against genetic modifiers could be a tool for personalizing treatments. Step one is to figure out what they are and how they work.

By comparing the DNA sequences of the strains, Chow and Palu traced disease variation to differences in more than 100 background genes, one of which was Baldspot. Eliminating Baldspot from the eyes of healthy flies had no apparent effect, but doing the same in flies with the retinitis pigmentosa-causing mutation altered disease severity.

"Ordinarily you can't tell which flies have variations in Baldspot," says Palu. "Disease conditions reveal the effect of this otherwise silent genetic variation." The variations work by protecting the cell from disease-causing conditions.

Fly eyes showed signs that the gene impacted a type of stress pathway, called the ER stress response. Using a molecular tag that glows fluorescent green when the ER stress response is active, the Utah scientists saw that the retinitis pigmentosa-causing mutation triggers the pathway. Removing Baldspot from flies with the mutation lowers the stress response resulting in decreased cell death, ultimately improving the condition of the eye.

"It is gratifying to make predictions based on what we know about the gene and show that this is the important factor," says Palu.

The ER stress response helps cells withstand an accumulation of misfolded proteins that occurs with some diseases, including retinitis pigmentosa. If the human version of Baldspot similarly impacts retinitis pigmentosa, blocking the gene could be a promising target for future treatment options.

The gene's role in eye disease could be the tip of the iceberg. Palu demonstrated that Baldspot is active in a variety of tissues and has the potential to influence a number of ER-stress related disorders.

"Our work highlights that in order to think about more personalized therapies and personalized drugs, we need to understand how the genetic makeup of each individual affects how that disease is going to show up," says Chow. "The larger goal is to bring this knowledge back to the clinic."

Credit: University of Utah Health

Old mining techniques make a new way to recycle lithium batteries

image: Using 100-year-old minerals processing methods, chemical engineering students have found a solution to a looming 21st-century problem: how to economically recycle lithium ion batteries.

Image: Lei Pan, Michigan Tech

Lei Pan's team of chemical engineering students had worked long and hard on their research project, and they were happy just to be showing their results at the People, Prosperity and the Planet (P3) competition last April in Washington, DC. What they didn't expect was to be mobbed by enthusiastic onlookers.

"We got a lot of 'oh wow!' responses, from eight-year-olds wanting to know how it worked to EPA officials wondering why no one had done this before," says senior Zachary Oldenburg. "My response to the EPA was, 'Because no one else had a project leader who's a mining engineer.'"

Pan, an assistant professor of chemical engineering at Michigan Technological University, earned his graduate degrees in mining engineering. It was his idea to adapt 20th century mining technology to recycle lithium ion batteries, from the small ones in cell phones to the multi-kilowatt models that power electric cars. Pan figured the same technologies used to separate metal from ore could be applied to spent batteries. So he gave his students a crash course in basic minerals processing methods and set them loose in the lab.

"My mind goes back to the beginning, when nothing was working," says Trevyn Payne, a chemical engineering senior. "A lot of times it was, honestly, 'Let's just try this.' Sometimes when things worked out, it was kind of an accident."

Oldenburg provides an example. "We were trying all kinds of solvents to liberate chemicals, and after hours and hours, we found out that plain water worked the best."

But eventually, everything came together. "You can see your results improve experiment by experiment," explains doctoral student Ruiting Zhan. "That's pretty good. It gives you a sense of achievement."

The team used mining industry technologies to separate everything in the battery: the casing, metal foils and coatings for the anode and cathode, which includes lithium metal oxide, the most valuable part. The components can be returned to the manufacturer and re-made into new batteries.

"The biggest advantage of our process is that it's inexpensive and energy efficient." Ruitang Zhan

"For the purpose of remanufacturing, our recycled materials are as good as virgin materials, and they are cheaper," Oldenburg adds.

The fact that their process is tried and true is perhaps its most attractive quality to industry, Pan notes. "We saw the opportunity to use an existing technology to address emerging challenges," he says. "We use standard gravity separations to separate copper from aluminum, and we use froth flotation to recover critical materials, including graphite, lithium and cobalt. These mining technologies are the cheapest available, and the infrastructure to implement them already exists."

Passers-by weren't the only ones at the P3 competition impressed by the students' effort. AIChE's (the American Institute of Chemical Engineers) Youth Council on Sustainable Science and Technology (YCOSST) has announced it will be presenting the team its YCOSST P3 Award, which recognizes the project "that best employs sustainable practices, interdisciplinary collaborations, engineering principles and youth involvement, and whose design is simple enough to have a sustainable impact without requiring significant technical expertise of its users."

Credit: Michigan Technological University

'New physics' charmingly escapes us

image: Baryons containing a charm quark can decay directly into a proton and two muons. Using data from the LHCb experiment, scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences in Cracow have shown that in these extremely rare processes there are still no signs of the 'new physics'. The signal of the nonresonant decay is visible at the top, the signal of the resonant decay into a proton and omega meson is presented below.

Image: IFJ PAN, CERN, The LHCb Collaboration

In the world of elementary particles, traces of a potential "new physics" may be concealed in processes related to the decay of baryons. Analysis of data from the LHCb experiment at the Large Hadron Collider, performed by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences in Cracow, has, however, shown that one of the rarest decays of baryons containing the charm quark so far shows no anomalies.

Baryons, which are composite particles made of three quarks, can decay into lighter particles. These decays usually occur indirectly, via an intermediate state (resonant decays). Sometimes, however, the decay proceeds directly, in one step (nonresonant decays). The Standard Model, the best tool of modern physics, formulated half a century ago to describe phenomena occurring among elementary particles, predicts that some nonresonant baryon decays are extremely rare: depending on the type of baryon, they should occur once per billion cases or even less frequently.

"If the frequency of some nonresonant decays were to be different than predicted by the Standard Model, it could indicate the existence of processes and particles not known yet, that indicate existence of 'new physics'. This is why nonresonant decays have attracted our attention for so long," explains Prof. Mariusz Witek from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow.

Prof. Witek led a five-member group of physicists from Cracow searching for nonresonant decays of charmed baryon Lambda c in data collected in 2011 and 2012 by the international LHCb experiment at the Large Hadron Collider in Geneva.

Why was the attention of the researchers drawn this time to Lambda c baryons, i.e. particles made of down (d), up (u) and charm (c) quarks? The most massive quark, the top (t), decays so fast that it does not combine with other quarks at all, so it does not create baryons whose decays could be observed. The decays of particles containing the second most massive quark, the beauty (b) quark, had already been analyzed, because those decays were slightly easier to detect. The Cracow group was involved in that work and contributed to the observation of an interesting deviation from theoretical predictions. In this situation, only the decays of charmed baryons remained largely unexplored.

"The Standard Model predicts that nonresonant decays of Lambda c baryons into three particles: a proton and two muons, should occur more or less once in hundreds of billions of decays. This is a much rarer phenomenon than the decays of baryons containing the beauty quark, which we were analysed earlier," emphasizes Dr. Marcin Chrzaszcz (IFJ PAN) and adds, "Measurements and analyses are now much more difficult, we have to look into a much larger group of events registered in the LHCb experiment. However, it is worth doing, because as a reward you can come across a trail of much more subtle processes. If we manage to observe any inconsistencies with predictions, this would most likely be a signal of a 'new physics'."

With such rare phenomena, distinguishing nonresonant decays of Lambda c baryons from the background proved to be a hard and time-consuming task. Nonetheless, the Cracow-based physicists have managed to improve the upper limit on the frequency of nonresonant decays by a factor of up to 100. It was estimated to be less than one in hundreds of millions.
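
For orientation, the "frequency" of a decay discussed here is a branching fraction, and a non-observation is reported as an upper limit on it. A schematic form of that limit, in generic notation rather than the collaboration's exact expression, is

    \mathcal{B}(\Lambda_c^+ \to p\,\mu^+\mu^-) = \frac{\Gamma(\Lambda_c^+ \to p\,\mu^+\mu^-)}{\Gamma_{\mathrm{total}}},
    \qquad
    \mathcal{B} < \frac{N_{\mathrm{UL}}}{\varepsilon\, N(\Lambda_c^+)},

where N_UL is the largest signal yield still compatible with the observed data, \varepsilon is the detection efficiency, and N(\Lambda_c^+) is the number of Lambda c baryons produced.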

"The taking into account of additional data, including the second run of the LHC accelerator, should soon improve our result by a factor of 10. So we would be very close to the predictions of the Standard Model. If some sort of 'new physics' is manifesting itself in the decays of Lambda c baryons, this will be the last chance for it to reveal itself. At present, there is not the slightest trace of it," sums up Prof. Witek.

During the analyses, the Cracow-based researchers also observed resonant decays, in which the Lambda c baryon decayed into a proton and omega meson. The lack of signals indicating yet another path of resonant decay - into a proton and a rho meson - was somewhat surprising. However, this result turned out to be in line with theoretical predictions.

The Henryk Niewodniczanski Institute of Nuclear Physics (IFJ PAN) is currently the largest research institute of the Polish Academy of Sciences. The broad range of studies and activities of IFJ PAN includes basic and applied research, ranging from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of methods of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly output of IFJ PAN encompasses more than 600 scientific papers in journals listed in the Journal Citation Reports published by Thomson Reuters. Part of the Institute is the Cyclotron Centre Bronowice (CCB), an infrastructure unique in Central Europe that serves as a clinical and research centre in the area of medical and nuclear physics. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium "Matter-Energy-Future," which possesses the status of a Leading National Research Centre (KNOW) in physics for the years 2012-2017. The Institute holds A+ Category status (the leading level in Poland) in the field of sciences and engineering.

Credit: The Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences

Discovery gives cystic fibrosis researchers new direction

When scientists discovered the gene that causes cystic fibrosis in 1989, they were optimistic that a cure was on the horizon. As the years rolled by, hundreds of mutations were identified in the gene, but new treatments were slow to emerge and a cure has yet to materialize. One major reason for the delay is that scientists have had trouble figuring out precisely where the gene is active. Now they know.

A multi-disciplinary team of researchers at the Novartis Institutes for BioMedical Research (NIBR) and Harvard Medical School (HMS) started out trying to catalogue all the different cells in the airway and the paths they take to become those cells. In the process, they discovered a completely new type of cell, which they named the pulmonary ionocyte. When the cystic fibrosis gene discovered in 1989, dubbed CFTR, is active, it is usually in the pulmonary ionocyte, a cell type that makes up just one to two percent of the airway. The team reported this discovery online in Nature on August 1. A similar finding was reported by scientists at the Broad Institute and Massachusetts General Hospital in the same journal on the same day.

"We can use this information to be a bit more clever when we devise therapeutic approaches to cystic fibrosis," says Aron Jaffe, co-corresponding author and a co-leader of respiratory disease research at NIBR.

"As people work toward cures, knowing you are looking at one percent of the cell population seems essential for any type of trouble shooting to improve a therapy or develop new therapies," adds Allon Klein, co-corresponding author and assistant professor of systems biology at HMS.

Noting that many of the CFTR mutations stop only part of the gene's function, Jaffe suggests that for those mutations, one approach might be to increase the number of pulmonary ionocytes and thereby the amount of CFTR activity. The discovery also could make the task of teams trying to use gene therapy to correct the CFTR mutations a lot easier.

"I was surprised to spot potential paths to new therapies so quickly after doing the initial experiments," says Lindsey Plasschaert, co-first author and postdoctoral researcher at NIBR.

This type of research on the fundamental nature of different types of cells rarely yields insights with direct implications for drug discovery.

Technology painted the path

Finding the new cell was no easy task. The team used a new technology called single-cell RNA sequencing to determine which genes were active in each individual cell in samples of airway tissue. Since gene activity defines a cell's function--and thus its identity--they could use the technology to sort and catalogue the many cells that allow us to make use of the air we breathe.

The secret--and the hard part--is being able to look at the gene activity of just one cell at a time. The NIBR scientists began working with the HMS scientists in 2015, about the time the HMS scientists published a seminal paper on their single-cell RNA sequencing platform, which is called InDrops. Up to that point researchers had been able to perform single-cell analysis of genetic activity on a few cells at a time, not the thousands of cells needed for a catalogue like the one Jaffe envisioned for the airway.

InDrops allows scientists to capture individual cells in water droplets and then wrap those in oil so that the contents of the cells don't mix. In order to look at only active genes, it starts with the cell's messenger RNA. Messenger RNA delivers information encoded in the DNA to the machinery that produces proteins, and it is only made when a gene is active.

Ideally, scientists would sequence the messenger RNA to identify the source genes. But with no good method for sequencing RNA directly, it must be converted back into DNA. Doing this on many thousands of isolated cells would be a time-consuming and expensive task, so Klein's lab figured out how to rapidly label the contents of each cell with tiny pieces of DNA that act as bar codes, by isolating each cell in a tiny water droplet in oil.

"We are introducing the DNA into the droplets using squishy gels," says Klein. "Each has about a billion little tags on it, which are then used to decorate all the material from the cell, and then we can break the droplets open and the contents of each cell are now tagged with a different bar code. And we can read out the bar codes later and figure out which genetic material came from which cell."

InDrops is now commercially available, but the Novartis researchers were able to access the technology for the airway project when it had just been invented. And Klein's lab is on the cutting edge of analyzing the data that it produces. Graduate student Rapolas Zilionis, co-first author on the paper, handled this for the team along with Virginia Savova from the Klein lab.

"They analyzed the data and then fed it into an interactive viewer that allowed me to go through and look for differences in gene activity between the different cell types," Plasschaert explains.

New cell pops out

Looking at that activity was essential to the catalogue of cell types envisioned at the start of the project. The analysis revealed a couple of clusters of cells that did not appear to be like anything in the scientific literature, which sent Plasschaert on a laborious journey to further characterize those cells.

In essence, she was looking at a bowl of jelly beans, and she could tell some were colors not seen before, but she had no idea what flavor they were, or for her cells, their function. The experiments she designed showed that one set resembled the ionocytes--or cells that transport ions--found in some other tissues, most commonly in fish and frogs.

In fish, ionocytes transport ions in order to maintain an equilibrium with the water that surrounds them. In a bit of an evolutionary parallel, the cells discovered by the Novartis-Harvard team--which the researchers call pulmonary ionocytes--move ions at the interface between our tissues and the air around us. After considering various ways the ionocytes could do this, Plasschaert settled on CFTR activity.

The CFTR protein was known to transport ions. And Plasschaert conducted several experiments to show that the newly discovered pulmonary ionocytes have much more CFTR activity than another cell type, ciliated cells, which were previously thought to be the main site of CFTR activity in the airway.

"The key finding turned out to be the high level of CFTR activity in the ionocytes and not in the ciliated cells people expected," Plasschaert says. "This will help us target the functionally relevant cells for cystic fibrosis therapies going forward."

Graduate student Zilionis says that from the beginning he was excited by the project's main goal of cataloguing the types of cells that make up our airway and was surprised that it produced a result with therapeutic potential so quickly.

"We can now design ways to isolate pulmonary ionocytes," he says. "We can get those cells in a dish and start doing things with them, looking for how they impact disease."

Credit: Novartis Institutes for BioMedical Research

Better way found to determine the integrity of metals

Researchers at the University of Waterloo have found a better way to identify atomic structures, an essential step in improving materials selection in the aviation, construction and automotive industries.

The findings of the study could result in greater confidence when determining the integrity of metals.

Devinder Kumar, a PhD candidate in systems design engineering at Waterloo, collaborated with the Fritz Haber Institute (FHI) in Berlin, to develop a powerful AI model that can accurately detect different atomic structures in metallic materials. The system can find imperfections in the metal that were previously undetectable.

"Anywhere you have metals you want to know the consistency, and that can't be done in current practical scenarios because current methods fail to identify the symmetry in imperfect conditions," said Kumar, who is a member of the Vision and Image Processing Research Group under the supervision of Alexander Wong, a professor at Waterloo and Canada Research Chair in the area of artificial intelligence.

"So, this new method of evaluating metallic material will lead to better material design overall and has the potential to affect all the industries where you need material designing properties."

FHI came up with a new approach that can artificially create data that relates to the real world. Kumar, along with his collaborators, was able to use this to generate about 80,000 images of different kinds of defects and displacements, producing a very effective AI model that identifies various types of crystal structures in practical scenarios. This data has been released to the public so that people can train their own algorithms.
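
As a rough illustration of what training such a classifier involves, here is a generic convolutional network in PyTorch; the image size, number of symmetry classes, and layer sizes are arbitrary assumptions, not the architecture from the Nature Communications paper:

    import torch
    import torch.nn as nn

    NUM_CLASSES = 8  # a handful of crystal-symmetry classes (illustrative)

    class CrystalNet(nn.Module):
        """Small CNN mapping a 64x64 single-channel image to a symmetry label."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, NUM_CLASSES)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # One training step on random stand-in data (real inputs would be the ~80,000
    # simulated defect images mentioned above, each labelled with its symmetry).
    model = CrystalNet()
    images = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    print(float(loss))

Because the network learns from images that deliberately include defects and displacements, it can keep assigning the correct symmetry class even when the lattice is far from ideal, which is the behavior described in the quotes above.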

"In theory, all metallic materials have perfect symmetry, and all the items are in the correct place, but in practice because of various reasons such as cheap manufacturing there are defects," Kumar said. "All these current methods fail when they try to match actual ideal structures, most of them fail when there is even one per cent defect."

"We have made an AI-based algorithm or model that can classify these kinds of symmetries even up to 40 per cent of defect."

The study, Insightful classification of crystal structures using deep learning, was published recently in the journal Nature Communications.

Credit: University of Waterloo

New model reveals rips in Earth's mantle layer below southern Tibet

image: Illinois geology professor Xiaodong Song is a co-author of a new study that suggests rips in the upper mantle layer of the Indian tectonic plate are responsible for the locations of earthquakes and the surface deformation seen in southern Tibet.

Image: Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Seismic waves are helping researchers uncover the mysterious subsurface history of the Tibetan Plateau, possibly lending insight to future earthquake activity in the region.

The specifics of the deep geologic processes that occurred roughly 50 million years ago, when the Indian and Asian tectonic plates collided, have remained elusive. By collecting high-resolution earthquake data, geologists have generated a model that provides the clearest picture so far of the geology below the surface of the Tibetan Plateau. They report their findings in the Proceedings of the National Academy of Sciences.

"The continental collision between the Indian and Asian tectonic plates shaped the landscape of East Asia, producing some of the deadliest earthquakes in the world," said Xiaodong Song, a geology professor at the University of Illinois and co-author of the new study. "However, the vast high plateau is largely inaccessible to geological and geophysical studies."

Song and his colleagues reveal that the upper mantle layer of the Indian tectonic plate appears to be torn into four pieces that dive under Asia - each at a different angle and distance from the origin of the tear.

The team gathered geophysical data from various sources to generate seismic wave tomographic images of Tibet that extend roughly 160 kilometers deep. They found that these newly modeled images match well with historic earthquake activity and with geological and geochemical observations.

"The presence of these tears helps give a unified explanation as to why mantle-deep earthquakes occur in some parts of southern and central Tibet and not others," Song said.

The intact regions of crust between the tears are strong enough to accumulate strain to generate earthquakes. The crustal areas above the torn regions are exposed to more of the heat from the mantle and are therefore more ductile, the researchers said. That ductile flexibility makes warmer crust less susceptible to earthquakes.

"What were previously thought of as unusual locations for some of the intercontinental earthquakes in the southern Tibetan Plateau seem to make more sense now after looking at this model," said graduate student and co-author Jiangtao Li. "There is a striking correlation with the location of the earthquakes and the orientation of the fragmented Indian upper mantle."

The model also explains some of the deformation patterns seen at the surface, including a series of unusual north-south rifts. Together, the earthquake locations and deformation patterns are evidence of a strongly coupled crust and upper mantle in southern Tibet, the researchers said.

Armed with this new information, geoscientists now have a clearer picture of what role the Indian upper mantle plays in shaping the Tibetan Plateau and why earthquakes happen where they do in this region. This could help assess earthquake risk, the researchers said.

"Overall, our new research suggests that we need to take a deeper view to understand the Himalayan-Tibetan continental deformation and evolution," Song said.

Credit: University of Illinois at Urbana-Champaign, News Bureau

Engineers use Tiki torches in study of soot, diesel filters

image: These are sample cores from particulate filters used in testing.

Image: University of Notre Dame

Chemical engineers testing methods to improve efficiency of diesel engines while maintaining performance are getting help from a summer staple: Tiki torches.

A team of engineers at the University of Notre Dame is using the backyard torches as part of an effort to mimic the soot oxidation process in a diesel engine -- when soot in diesel exhaust collects in the walls of a particulate filter and has to be burned off -- according to a study recently published in Catalysts.

"This study is part of an effort over many years in which we have discovered and developed low-cost catalysts for soot oxidation that are based on silica glass," said Paul McGinn, a co-lead author of the study and professor in the Department of Chemical and Biomolecular Engineering at Notre Dame.

McGinn and co-principal investigator Changsheng Su at Cummins Inc. developed a method to coat diesel particulate filters with a silica glass, which slowly releases potassium ions. The potassium acts as a catalyst, reducing temperatures required to initiate filter regeneration -- or soot oxidation -- for improved efficiency.

What they needed was a simple way to simulate real-world driving conditions, including the continuous flow of soot as it passes through a diesel particulate filter.

"We could do it continuously using the Tiki torch, using the Tiki soot as a surrogate for real engine soot," McGinn said. "Depending on the setting, you really get a lot of soot coming off of it, which is what we want." The team constructed a sophisticated reactor equipped with soot generator and backpressure sensors, which allows them to control conditions including oxygen rates, air-to-fuel ratios and soot production per hour.

New methods of lowering the temperature needed for soot oxidation are of particular interest to manufacturers of diesel engines. Diesel exhaust contains, among other things, soot particles and nitrogen oxides (NOx), with soot being a major contributor to global warming and a cause of breathing problems. The Environmental Protection Agency (EPA) has been working to reduce emissions from vehicles, industrial vehicles, locomotives and ships for more than a decade.

For diesel engine vehicles, the EPA requires both soot and NOx to be kept below certain levels, but challenges remain in reducing those emissions economically without sacrificing performance. When the engine's operating conditions are adjusted to emit low levels of NOx, soot levels increase, and vice versa.

A standard diesel particulate filter is a ceramic cylinder with a honeycomb-style structure and porous walls. Every other channel -- or opening -- of the filter is closed off. As exhaust enters the filter, soot collects along the interior walls as cleaned exhaust passes through.

To burn off soot buildup along the filter walls, exhaust temperatures need to reach 600 degrees Celsius (1,112 F). "When you're in an urban environment where you're stopping and starting your engine, the exhaust temperature doesn't get that hot," McGinn said.

In some cases, fuel is used to heat up the filter and burn off the soot -- a process called active regeneration -- which delivers a hit to the vehicle's fuel mileage and requires substantial use of noble metals such as platinum.

"Everyone is looking for a low-cost way to get the temperature down," McGinn said. "In our case, we've developed an inexpensive glass coating that's one to two microns thick and apply it to the diesel particulate filters. The glass delivers a potassium catalyst slowly over 150,000 miles of driving and allows for what's called passive regeneration. So when you're out on the highway at high speed, the exhaust temperature gets high enough to burn off soot buildup continuously."

With Tiki torches providing the soot buildup needed for testing, McGinn said his team will look at how to further tailor the glass composition to also reduce NOx.

Credit: University of Notre Dame

Combined approach offers hope to lung cancer patients who become resistant to drugs

image: Response to treatment in cancer cells: The abundance of the EGFR (top row) and HER2 (bottom row) receptors is reduced when the cells are exposed to triple therapy -- Tagrisso, Erbitux and Herceptin (right column) -- and to the two antibodies, Erbitux and Herceptin (second from right column), but not when they are exposed to Tagrisso alone (second column from left) or to no therapy at all (left column).

Image: Weizmann Institute of Science

New-generation lung cancer drugs have been effective in a large number of patients, but within about a year, the patients tend to develop resistance to the therapy. Researchers at the Weizmann Institute of Science, in collaboration with physicians, have conducted a study in mice, in which they used existing drugs in a new combination to help crush potential resistance to the treatment. Their findings were published recently in the journal Clinical Cancer Research.

Lung cancer is the most common cause of death from malignancy, accounting for about one-fifth of cancer deaths worldwide according to World Health Organization estimates. New drugs treat certain subtypes of this cancer by targeting the genetic mutations characteristic of each subtype.

In about 12%, on average, of lung cancer patients - most of them non-smokers - the malignancy is due to a mutation in a gene called EGFR. This gene encodes a receptor that is embedded in the cell membrane, protruding in both directions: Its "head," the outer portion on the cell surface, binds a growth factor that transmits a growth signal to the cell; the "legs," the inner portion inside the cell, work as an enzyme that further conveys the signal to the cellular nucleus. EGFR's growth message prompts the cell to divide, which normally serves a good purpose - for example, helping tissues to heal - but a mutation on the inner part of the receptor can cause the cell to divide uncontrollably, leading to cancer.

Patients with the EGFR mutation can be helped by small molecules known as kinase inhibitors, which block the mutation, preventing EGFR from generating a signal for uncontrolled division. These drugs work much better than chemotherapy: They are more effective and cause fewer side effects, and they can be taken as a pill rather than by injection. The problem is that within 10 to 14 months many of the patients develop a secondary mutation in the EGFR. This causes their tumors to relapse because it enables EGFR to get around the kinase inhibitor.

In 2015, a new kinase inhibitor known by the trade name Tagrisso, which blocks this second mutation, was approved for clinical use when the lung tumor starts growing again. Tagrisso helps, but usually not for long. Again, within 10 to 14 months a third mutation or other alterations emerge in the EGFR gene, causing another relapse.

"This of course is a nightmare for the patients, their families and the doctors," says Prof. Yosef Yarden of the Biological Regulation Department. "We've now developed a new approach that works in mice and may help relieve this frustrating situation if our method proves to work in humans."

In collaboration with physicians from the Chaim Sheba Medical Center in Tel Hashomer, Israel, Yarden's team tried out a combination therapy. Mice implanted with human lung cancer cells were given Tagrisso and a drug that blocks the EGFR on the cell surface. This drug was Erbitux, an antibody that binds to the protruding outer portion of the EGFR, preventing the cell from receiving the growth message. The Tagrisso they were given works inside the cell, preventing the inner portion of EGFR, the growth-promoting kinase, from relaying the growth signal.

This original attempt at a combination therapy proved unsuccessful, probably because when EGFR is blocked on the cell surface, the cell calls upon a close "relative," a receptor called HER2, to pop up on the cell membrane. So in the new study, the researchers gave mice a triple combination therapy, which apart from Tagrisso included two antibodies instead of one: Erbitux and a drug called Herceptin, which blocks HER2.

This time the approach worked. Tumors shrank substantially and did not regrow as long as the mice received the triple combination treatment. The use of this approach in human patients should be facilitated by the fact that both antibodies are drugs already approved for use against other cancers: Erbitux is used against colorectal cancer and Herceptin against breast cancer.

"If confirmed in humans, the new combination therapy may help extend the lives of many thousands of lung cancer patients who currently develop resistance to kinase inhibitors," Yarden says.

Credit: Weizmann Institute of Science

Reading rivers

Think of it like a geological mystery story: For decades, scientists have known that some 25,000 years ago, a massive ice sheet stretched to cover most of Canada and a large section of the northeastern United States, but what's been trickier to pin down is how - and especially how quickly - it reached its ultimate size.

One clue to finding the answer to that mystery, Tamara Pico said, may be the Hudson River.

A graduate student working in the group of Jerry Mitrovica, the Frank B. Baird, Jr. Professor of Science, Pico is the lead author of a study that estimates how glaciers moved by examining how the weight of the ice sheet altered topography and led to changes in the course of the river. The study is described in a July 2018 paper published in Geology.

"The Hudson River has changed course multiple times over the last million years," Pico said. "The last time was about 30,000 years ago, just before the last glacial maximum, when it moved to the east.

"That ancestral channel has been dated and mapped...and the way the ice sheet connects to this is, as it is growing, it's loading the crust it's sitting on. The Earth is like bread dough on these time scales, so as it gets depressed under the ice sheet, the region around it bulges upward - in fact, we call it the peripheral bulge. The Hudson is sitting on this bulge, and as it's lifted up and tilted, the river can be forced to change directions"

To develop a system that could connect the growth of the ice sheet with changes in the Hudson's direction, Pico began with a model for how the Earth deforms in response to various loads.

"So we can say, if there's an ice sheet over Canada, I can predict the land in New York City to be uplifted by X many meters," she said. "What we did was create a number of different ice histories that show how the ice sheet might have grown, each of which predicts a certain pattern of uplift and then we can model how the river might have evolved in response to that upwelling."

The end result, Pico said, is a model that - for the first time - may be able to use the changes in natural features in the landscape to measure the growth of ice sheets.

"This is the first time a study has used the change in a river's direction to understand which ice history is most likely," she said. "There's very little data about how the ice sheet grew, because as it grows it acts like a bulldozer and scrapes everything away to the edges. We have plenty of information about how the ice retreats, because it deposits debris as it melts back, but we don't get that type of record as the ice is advancing."

What little data scientists do have about how the ice sheet grew, Pico said, comes from data about sea level during the period, and suggests that the ice sheet over Canada, particularly in the eastern part of the country, remained relatively small for a long period of time, then suddenly began to grow quickly.

"In a way, this study is motivated by that, because it's asking can we use evidence for a change in river direction ...to test whether the ice sheet grew quickly or slowly," she said. "We can only ask that question because these areas were never covered by ice, so this record is preserved. We can use evidence in the landscape and the rivers to say something about the ice sheet, even though this area was never covered by ice."

While the study offers strong suggestive evidence that the technique works, Pico said there is still a great deal of work to be done to confirm that the findings are solid.

"This is the first time this has been done, so we need to do more work to explore how the river responds to this type of uplift and understand what we should be looking for in the landscape," she said. "But I think it's extremely exciting because we are so limited in what we know about ice sheets before the last glacial maximum. We don't know how fast they grew. If we don't know that, we don't know how stable they are.

Going forward, Pico said she is working to apply the technique to several other rivers along the eastern seaboard, including the Delaware, Potomac and Susquehanna rivers, all of which show signs of rapid change during the same period.

"There is some evidence that rivers experienced very unusual changes that are no doubt related to this process," she said. "The Delaware may have actually reversed slope, and the Potomac and Susquehanna both show a large increase in erosion in some areas, suggesting the water was moving much faster."

In the long run, Pico said, the study may help researchers rewrite their understanding of how quickly the landscape can change and how rivers and other natural features respond.

"For me, this work is about trying to connect the evidence on land to the history of glaciation to show the community that this process - what we call glacial isostatic adjustment - can really impact rivers," Pico said. "People most often think of rivers as stable features of the landscape that remain fixed over very long, million-year, time scales, but we can show that these ice age effects can alter the landscape on millennial time scales - the ice sheet grows, the Earth deforms, and rivers respond."

Credit: 
Harvard University

Theorists publish highest-precision prediction of muon magnetic anomaly

image: The Muon g-2 storage ring installed and ready to take data at Fermi National Accelerator Laboratory.

Image: 
Fermilab

UPTON, NY--Theoretical physicists at the U.S. Department of Energy's (DOE's) Brookhaven National Laboratory and their collaborators have just released the most precise prediction of how subatomic particles called muons--heavy cousins of electrons--"wobble" off their path in a powerful magnetic field. The calculations take into account how muons interact with all other known particles through three of nature's four fundamental forces (the strong nuclear force, the weak nuclear force, and electromagnetism) while reducing the greatest source of uncertainty in the prediction. The results, published in Physical Review Letters as an Editors' Suggestion, come just in time for the start of a new experiment measuring the wobble now underway at DOE's Fermi National Accelerator Laboratory (Fermilab).

A version of this experiment, known as "Muon g-2," ran at Brookhaven Lab in the late 1990s and early 2000s, producing a series of results indicating a discrepancy between the measurement and the prediction. Though not quite significant enough to declare a discovery, those results hinted that new, yet-to-be discovered particles might be affecting the muons' behavior. The new experiment at Fermilab, combined with the higher-precision calculations, will provide a more stringent test of the Standard Model, the reigning theory of particle physics. If the discrepancy between experiment and theory still stands, it could point to the existence of new particles.

"If there's another particle that pops into existence and interacts with the muon before it interacts with the magnetic field, that could explain the difference between the experimental measurement and our theoretical prediction," said Christoph Lehner, one of the Brookhaven Lab theorists involved in the latest calculations. "That could be a particle we've never seen before, one not included in the Standard Model."

Finding new particles beyond those already cataloged by the Standard Model has long been a quest for particle physicists. Spotting signs of a new particle affecting the behavior of muons could guide the design of experiments to search for direct evidence of such particles, said Taku Izubuchi, another leader of Brookhaven's theoretical physics team.

"It would be a strong hint and would give us some information about what this unknown particle might be--something about what the new physics is, how this particle affects the muon, and what to look for," Izubuchi said.

The muon anomaly

The Muon g-2 experiment measures what happens as muons circulate through a 50-foot-diameter electromagnet storage ring. The muons, which have intrinsic magnetism and spin (sort of like spinning toy tops), start off with their spins aligned with their direction of motion. But as the particles go 'round and 'round the magnet racetrack, they interact with the storage ring's magnetic field and also with a zoo of virtual particles that pop in and out of existence within the vacuum. This all happens in accordance with the rules of the Standard Model, which describes all the known particles and their interactions, so the mathematical calculations based on that theory can precisely predict how the muons' alignment should precess, or "wobble" away from their spin-aligned path. Sensors surrounding the magnet measure the precession with extreme precision so the physicists can test whether the theory-generated prediction is correct.
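In textbook terms (a standard relation, not a quotation from the paper), the quantity being tested is the muon anomaly and its associated "wobble" rate:

\[
a_\mu \equiv \frac{g-2}{2}, \qquad
\omega_a = \omega_s - \omega_c \approx a_\mu \, \frac{eB}{m_\mu},
\]

where $\omega_s$ is the muon's spin-precession frequency, $\omega_c$ its cyclotron frequency, $B$ the storage-ring magnetic field, $e$ the muon's charge, and $m_\mu$ its mass. The approximation holds when electric-field and beam-pitch corrections are small, which the experiments arrange by storing muons at a "magic" momentum near 3.1 GeV/c.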

Both the experiments measuring this quantity and the theoretical predictions have become more and more precise, tracing a journey across the country with input from many famous physicists.

A race and collaboration for precision

"There is a race of sorts between experiment and theory," Lehner said. "Getting a more precise experimental measurement allows you to test more and more details of the theory. And then you also need to control the theory calculation at higher and higher levels to match the precision of the experiment."

With lingering hints of a new discovery from the Brookhaven experiment--but also the possibility that the discrepancy would disappear with higher precision measurements--physicists pushed for the opportunity to continue the search using a higher-intensity muon beam at Fermilab. In the summer of 2013, the two labs teamed up to transport Brookhaven's storage ring via an epic land-and-sea journey from Long Island to Illinois. After tuning up the magnet and making a slew of other adjustments, the team at Fermilab recently started taking new data.

Meanwhile, the theorists have been refining their calculations to match the precision of the new experiment.

"There have been many heroic physicists who have spent a huge part of their lives on this problem," Izubuchi said. "What we are measuring is a tiny deviation from the expected behavior of these particles--like measuring a half a millimeter deviation in the flight distance between New York and Los Angeles! But everything about the fate of the laws of physics depends on that difference. So, it sounds small, but it's really important. You have to understand everything to explain this deviation," he said.

The path to reduced uncertainty

By "everything" he means how all the known particles of the Standard Model affect muons via nature's four fundamental forces--gravity, electromagnetism, the strong nuclear force, and the electroweak force. Fortunately, the electroweak contributions are well understood, and gravity is thought to play a currently negligible role in the muon's wobble. So the latest effort--led by the Brookhaven team with contributions from the RBC Collaboration (made up of physicists from the RIKEN BNL Research Center, Brookhaven Lab, and Columbia University) and the UKQCD collaboration--focuses specifically on the combined effects of the strong force (described by a theory called quantum chromodynamics, or QCD) and electromagnetism.

"This has been the least understood part of the theory, and therefore the greatest source of uncertainty in the overall prediction. Our paper is the most successful attempt to reduce those uncertainties, the last piece at the so-called 'precision frontier'--the one that improves the overall theory calculation," Lehner said.

The mathematical calculations are extremely complex--from laying out all the possible particle interactions and understanding their individual contributions to calculating their combined effects. To tackle the challenge, the physicists used a method known as Lattice QCD, originally developed at Brookhaven Lab, together with powerful supercomputers. The largest of these was hosted at the Leadership Computing Facility at Argonne National Laboratory, a DOE Office of Science user facility, while smaller supercomputers hosted by Brookhaven's Computational Sciences Initiative (CSI)--including one machine purchased with funds from RIKEN, CSI, and Lehner's DOE Early Career Research Award--were also essential to the final result.

"One of the reasons for our increased precision was our new methodology, which combined the most precise data from supercomputer simulations with related experimental measurements," Lehner noted.

Other groups have also been working on this problem, he said, and the entire community of about 100 theoretical physicists will be discussing all of the results in a series of workshops over the next several months to come to agreement on the value they will use to compare with the Fermilab measurements.

"We're really looking forward to Fermilab's results," Izubuchi said, echoing the anticipation of all the physicists who have come before him in this quest to understand the secrets of the universe.

Credit: 
DOE/Brookhaven National Laboratory

Sticking with the wrong choice

MINNEAPOLIS, MN - July 13, 2018 - The behavior of people who remain committed to a choice, even when it is clear that an alternative would be a better option, has been a perplexing phenomenon for psychologists and economists. For example, people will continue to wait in the slow line at a grocery store, stick out an unhealthy relationship, or refuse to abandon an expensive, wasteful project - all because such individuals have already invested time, effort, or money. This well-known cognitive phenomenon, termed the "sunk cost fallacy," has long been considered a problem unique to humans. New research has discovered that humans are not the only species prone to this economically irrational flaw.

New research from the University of Minnesota published in the journal Science discovered that mice, rats, and humans all commit the sunk cost fallacy.

"The key to this research was that all three species learned to play the same economic game," says Brian Sweis, the paper's lead author, an MD/PhD student at the University of Minnesota. Mice and rats spent time from a limited budget foraging for flavored food pieces while humans similarly spent a limited time budget foraging for what humans these days seek - entertaining videos on the web.

Rats and mice ran around a maze that contained four food-delivery locations ("restaurants"). On entry into each restaurant, an auditory tone told the animal how long it would have to wait before food was delivered, with delays lasting anywhere from 1 to 30 seconds. The animals had one hour to gather food, so each entry posed a question like, "Am I willing to spend 20 seconds from my time budget waiting for my cherry-flavored food pellet?"

Similarly, humans saw a series of web galleries and were informed of the delay by a download bar. This meant humans had to answer an equivalent question: "Am I willing to spend 20 seconds from my time budget waiting for my kitten video?" In this way, each subject from each species revealed their own subjective preferences for individual food flavors or video galleries.

In this task, every entry required two decisions: a first decision when the delay was revealed but had not yet started counting down, and then, if the offer was accepted, a second decision during the countdown, when subjects could still quit and change their minds. Remarkably, the authors found that all three species became more reluctant to quit the longer they had waited - demonstrating the sunk cost fallacy.
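A minimal sketch of that two-stage structure (hypothetical decision rules and parameters, not the study's code or data) shows how a sunk-cost-sensitive forager would behave in such a task: the more time already invested, the less likely it becomes to quit the countdown.

```python
# Illustrative sketch of the two-stage "restaurant" task described above, with made-up
# parameters. Stage 1: accept or skip when the delay is announced. Stage 2: during the
# countdown the forager may still quit, but a sunk-cost-sensitive rule makes quitting
# less likely the longer it has already waited.

def offer_zone_accepts(delay_s: float, threshold_s: float) -> bool:
    """Initial decision: take the offer only if the announced delay seems worth it."""
    return delay_s <= threshold_s

def quit_probability(delay_s: float, threshold_s: float, elapsed_s: float,
                     base_rate: float = 0.3, sunk_cost_weight: float = 0.15) -> float:
    """Chance of abandoning the countdown at a given moment.

    The first term grows with how bad the offer is relative to the threshold; the
    second term shrinks it as time already invested accumulates (the sunk-cost effect).
    """
    regret = max(0.0, (delay_s - threshold_s) / delay_s)
    invested = elapsed_s / delay_s
    return max(0.0, base_rate * regret - sunk_cost_weight * invested)

# A marginal offer: 25 s of waiting against a 20 s flavor/gallery threshold.
delay, threshold = 25, 20
print("accept in offer zone?", offer_zone_accepts(delay, threshold))  # False: would normally skip
for elapsed in (0, 5, 10, 20):                                         # but once in the wait zone...
    print(f"after {elapsed:2d} s invested, quit probability = {quit_probability(delay, threshold, elapsed):.2f}")
```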

Strikingly, subjects hesitated before accepting or rejecting offers during the initial decision before the countdown. "It's as if they knew they didn't want to get in line until they were sure," says Sweis. Even more surprising, neither mice, rats, nor humans took into account the sunk costs spent while deliberating. This suggests that the process of deliberation and the process of changing one's mind after an initial commitment depend on different economic factors, and that these factors are conserved across species.

"This project depended on the collaborative nature of science today," says senior author David Redish, a professor in the University of Minnesota Medical School's Neuroscience Department. "This was a collaboration between three laboratories and required working back and forth to ensure that we could ask similar questions across different species on these parallel tasks."

As such, this project builds on a number of breakthrough discoveries recently published by these laboratories, which find that mice, rats, and humans use similar neural systems to make these different types of decisions, that mice and rats also show regret after making mistakes, and that even mice can learn to avoid those mistakes by deliberating first, as revealed in a recent paper by these authors in PLOS Biology.

"These tasks reveal complex decision processes underlying the conflict between really wanting something on the one hand versus knowing better on the other," says Sweis.

"This is a conflict between different neural decision systems, and that means we can separately manipulate those systems," says Redish.

In other publications recently appearing in Nature Communications and the Proceedings of the National Academy of Sciences, these authors have found that different drugs (cocaine, morphine) and different changes to neural circuits each affect these two systems differently, which suggests that different forms of addiction would likely benefit from individualized treatments tailored to dysfunctions in distinct brain circuits.

"Decisions depend on neural circuits, which means that manipulating those circuits changes the decision process," says Mark Thomas, another of the study's senior authors and a professor in the Medical School's Neuroscience Department.

"There was a day when we asked ourselves, 'Rats forage for food, what do undergrads forage for?'" remembers author Samantha Abram, now a postdoctoral psychology fellow at the San Francisco VA Medical Center, who led the human component as a graduate student in the University of Minnesota Clinical Science and Psychopathology Research Program with her advisor Angus MacDonald, a professor in the Psychology Department of the University of Minnesota College of Liberal Arts.

By having all three species play the same economic game, these authors have revealed new insight into how different parts of the brain make different types of decisions, and shown that the flaws that make us human have an evolutionary history.

Credit: 
University of Minnesota Medical School