Culture

Gender roles highlight gender bias in judicial decisions

Judges may be just as biased or even more biased than the general public in deciding court cases where traditional gender roles are challenged, according to a new study published in the journal Social Psychological and Personality Science.

The study examined gender bias in judges' legal decisions and found that sex discrimination worked both ways, sometimes against women and sometimes against men.

"These results show that judges' ideology and life experiences might influence their court decisions," said Andrea Miller, PhD, a visiting assistant professor of psychology at the University of Illinois Urbana-Champaign. "Many judges are not able to factor out their personal beliefs while they are considering court cases, even when they have the best possible intentions."

More than 500 judges from a state court system (68 percent men, 30 percent women, and 2 percent unidentified) participated in the study in an effort by that court system to address gender bias. The court system wasn't identified for confidentiality reasons.

"The judges who participated in the study did so at great personal and professional risk because they care deeply about confronting the possibility that there might be social group disparities in case outcomes," said Miller. "This state court system has become a leader in the search for evidence-based solutions to the problem of implicit bias."

More than 500 lay people (59 percent men, 41 percent women) also were recruited online to take part in the study.

The judges and lay people analyzed two mock court cases: a child custody case and a sex discrimination lawsuit in which the plaintiff was presented as either a man or a woman. The participants also completed surveys about their beliefs in traditional gender roles, such as the stereotypes that women are more interested in raising children than in their careers and that children are better off if their fathers are the primary breadwinners for the family.

In the sex discrimination lawsuit, the plaintiff alleged that he or she was denied a promotion after taking six weeks of paid parental leave to care for an adopted baby. The plaintiff also wanted to introduce expert evidence from a psychologist about research on sex discrimination. Judges who supported traditional gender roles were more likely than lay people with similar gender ideologies to dismiss the case or rule against a female plaintiff.

In the divorce case, the father and mother both sought primary custody of their two children. Both spouses worked full-time jobs and sometimes had conflicts with caring for their children. Judges and lay people who supported traditional gender roles allocated more custody time to the mother than to the equally qualified father, but the judges favored the mother even more strongly than the lay people did. Only three percent of the judges in the sample gave the father more custody time than the mother.

"In both of these cases, support for traditional gender roles was associated with decisions that encouraged women to engage in more family caregiving at the expense of their careers and discouraged men from participating in family caregiving at all," Miller said.

"Cultural ideas about gender may shape judges' decision-making just as they shape the rest of us," Miller said. "The significant expertise that judges possess doesn't inoculate them against decision-making biases, and we can't expect much change until we see policy reforms that address decision-making procedures in the courtroom."

Credit: 
Society for Personality and Social Psychology

Census data can level the playing field for small businesses

Local governments and small businesses could save thousands of dollars a year in consulting and research fees if they just used information that's already publicly available, according to research from the University of Waterloo.

A new Waterloo study found that information commonly paid for by small and medium enterprises (SMEs) and local governments, such as consumer spending data that can help businesses decide where to locate and determine market demand, can actually be obtained for free.

"Data such as the census information from Statistics Canada could easily be used in conjunction with a company's own information to aid in decision making," said Derek Robinson, a professor of geography at Waterloo. "The spatial pattern of store and marketing opportunities can be mapped out using this information, which can help small businesses avoid costly decisions that may not be easy for them to recover from."

As part of the study, Robinson and Andrei Balulescu, a former Waterloo graduate student, used publicly available information as well as data from private industry in the home improvement retail sector in Ontario. They found that by combining this information, they could easily identify the spatial pattern of consumer spending on home improvement products and could identify geographic hot spots and cold spots related to people's spending habits.

The study divided the province into close to 20,000 geographic units and generated estimates for 23 individual product categories related to home improvement, including how spending in these categories varied by household income. The estimates were verified against proprietary sales data provided by a big-box industry partner in the home improvement retail sector and shown to be accurate.
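
The estimation strategy described above combines publicly available census counts with per-category spending figures. A minimal sketch of that combination, in which the income bands, household counts, and per-household spending rates are all invented for illustration (the study's actual figures and methods are more detailed):

```python
# Estimate consumer spending in one small geographic unit by combining
# census household counts (by income band) with average per-household
# spending rates for a product category. All numbers are hypothetical.
households = {            # census: households per income band in this unit
    "low": 420, "middle": 610, "high": 180,
}
paint_spend_per_hh = {    # avg annual spend on paint, by income band
    "low": 55.0, "middle": 90.0, "high": 140.0,
}

# Weighted sum over income bands gives the unit-level spending estimate.
estimate = sum(households[band] * paint_spend_per_hh[band] for band in households)
print(f"Estimated annual paint spending in this unit: ${estimate:,.0f}")
```

Repeating this calculation across every unit in a province yields the kind of spatial "hot and cold spot" map the researchers describe.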

"SMEs account for 99.7 per cent of all firms in Canada and employ 54.8 per cent of all payroll persons," said Balulescu. "This method can be applied to any sector, and could help businesses pick the right location, based on market demand for new stores, and help to prevent the roughly 7,000 businesses that fail each year from having to close their doors."

"This tool can help to put smaller businesses on an equal footing with large retailers, who have more capital to spend on gathering business intelligence. Furthermore, it could help local economic development departments prove to companies there is both market and opportunity in their region."

The study, titled 'Comparison of methods for quantifying consumer spending on retail using publicly available data' and including detailed maps of "hot and cold" spots of home-improvement spending, can be found in the International Journal of Geographical Information Science.

Credit: 
University of Waterloo

Debt matters: Women use credit to bridge income gaps, while men are less cautious

When it comes to attitudes about money matters, gender often makes a difference. Take high-risk investments -- research shows women tend to be more cautious than their male counterparts.

With debt, however, little is known about how gender plays a role. A new study on attitudes about debt shows that men have greater tolerance for using debt to buy luxury items, while women are more accepting of debt used in appropriate ways, including to bridge income gaps. The study is published in The Journal of Consumer Affairs.

"We found that gender absolutely influences attitudes about debt," said Mary Eschelbach Hansen, a study author and American University economics professor. "When women observe others facing financial troubles or unemployment, or when women themselves have these experiences, they come to view debt as a tool to help smooth consumption. And, in general, they are less tempted than men to use debt to buy luxuries."

Using consumer survey data from 2004-2013, the researchers examined whether women and men had differing tolerances for debt and if economic events, both recent and in the past, affected their feelings about taking on debt. The researchers chose to focus on analyzing the survey data of women and men who had never been married.

Hansen and her colleagues Erin E. George, assistant professor of economics at Hood College, and Julie Lyn Routzahn, associate professor of economics and business administration at McDaniel College, measured the difference in men's and women's responses to questions about their attitudes towards borrowing money for luxury purchases and towards covering living expenses when income is cut. They also considered whether people taking the survey had recently been unemployed or had difficulty making debt payments. The researchers used changes between the annual surveys to measure how living through the Great Recession affected women and men.

The Great Recession and women

During the Great Recession, a period of global economic decline that began in 2007, women in the U.S. lost jobs early on. Declining tax revenues led to austerity measures that disproportionately affected women working in the public sector and those who received public benefits. The Great Recession is the only recession since 1973 during which women experienced substantial job loss. The subprime mortgage crisis was also gendered, as women were more likely to be targeted by lenders to receive subprime loans.

"As women observed the negative effects of the mortgage crisis and the Great Recession on other women, it reinforced their beliefs to use credit to bridge gaps in income. But perhaps more importantly, the experience of the Great Recession made women more cautious about taking on debt for non-essentials. This attitude of caution is a central reason why their financial position improved relative to the position of men," Erin George said.

For example, the researchers note that in the 2010 survey, the monthly debt burden of the typical never-married man was higher than that of the typical never-married woman.

Study implications

The findings are good news for women, the researchers contend, because if women mainly use debt to smooth consumption, they can safeguard their well-being. If personal difficulties reduce women's willingness to borrow for luxuries, then there may be improvement in their financial stability. They have greater potential to acquire assets, thus reducing financial insecurity in old age. As the main findings concern women who have never married, improved financial stability increases the bargaining power of women entering marriage, thus reducing domestic abuse and divorce, while improving outcomes for children.

The findings also suggest that education about debt management should be spread across adulthood and that gender-specific education may be more effective than a gender-neutral curriculum.

"Lifetime financial education is important," Julie Routzahn said. "People's attitudes change over time as things happen to them and in the wider world. Women, in particular, tend to have lower wages and assets. Focusing on financial education for women at critical junctures in their lives, when they may benefit from such education, should be considered. Critical junctures could include, for example, when women are applying for unemployment insurance. That would be a good juncture because we know women are strongly affected by those experiences."

Credit: 
American University

Genetic test may improve post-stent treatment, outcome

DALLAS, April 3, 2018 - Using genetic testing to inform which blood thinner to use following a procedure to open narrowed blood vessels resulted in significantly fewer complications among patients, according to new research in Circulation: Genomic and Precision Medicine, an American Heart Association journal.

In the United States, heart disease is the leading cause of death, and stroke is the fifth-leading cause. A major contributor to these cardiovascular diseases is clogged blood vessels (atherosclerosis), which result from the buildup of fatty deposits or plaque.

Treatment for clogged blood vessels often includes angioplasty. In this procedure, the doctor inserts a small, medical balloon into the damaged blood vessels, and then inflates and removes it. Small tubes, or stents, also may be used to hold open the blood vessels. To prevent further damage from occurring, patients often take multiple blood thinners, such as clopidogrel and aspirin, after stent placement.

Previous research has shown that clopidogrel is less effective in patients with mutations on a specific gene, called CYP2C19, than in patients without the mutations. Whether genetic testing can help guide treatment in clinical practice, however, has remained unclear.

In this study, results showed that genetic testing for CYP2C19 mutations could be used to guide blood-thinner treatment after stent placement. Furthermore, patients with the mutations who received one of two alternatives to clopidogrel were more than three times less likely than those who received clopidogrel to die or have a heart attack, stroke or other major complication within 12 months of treatment. Specifically, major complications occurred among 27 percent of clopidogrel patients with the genetic mutations, compared to 8 percent of patients with the mutations who received the alternative medications.
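
The "more than three times" comparison follows directly from the two event rates reported (27 percent versus 8 percent). As a quick sanity check of that arithmetic (this is not code from the study):

```python
# Relative risk of major complications at 12 months among mutation carriers,
# using the two event rates reported in the study.
clopidogrel_rate = 0.27   # complications on clopidogrel
alternative_rate = 0.08   # complications on the alternative blood thinners

relative_risk = clopidogrel_rate / alternative_rate
print(f"Relative risk: {relative_risk:.2f}x")  # ~3.38, i.e. "more than three times"
```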

These findings are similar to those of an earlier, multicenter study that found the risk of a major cardiovascular event more than doubled in patients with the genetic mutations who took clopidogrel.

"Using an algorithm based on genetic testing to guide treatment is sustainable and associated with better clinical outcomes in a real-world clinical practice, although it is difficult to consistently maintain," said Craig R. Lee, Pharm.D., Ph.D., F.A.H.A., associate professor of pharmacy at the University of North Carolina at Chapel Hill Eshelman School of Pharmacy. "Clinicians need to be aware of the increased risk of major adverse cardiovascular events associated with use of clopidogrel in patients receiving stents who carry either one or two copies of the mutation."

Study participants included 1,193 patients at the University of North Carolina Cardiac Catheterization Laboratory who received stent placement between July 1, 2012, and June 30, 2014. Their average age was 63 years, and more than two-thirds were male. Most were white, 21 percent were black, and 1 percent were Asian. Patients identified as high risk, due to decreased blood flow to the heart, received the genetic testing. Follow-up lasted 12 months.

The study has several limitations. For one, the investigators collected information after treatment, so they could not definitively say whether blood-thinner choice and the results of genetic testing caused the better patient outcomes. In addition, the study took place at a single hospital, so the findings may not apply in other settings.

"We are using CYP2C19 genetic testing on a daily basis at our institution to help decide in a timely manner which drug to prescribe," said George "Rick" Stouffer, III, M.D., F.A.H.A., chief of cardiology and co-director of the McAllister Heart Institute at UNC.

Credit: 
American Heart Association

Proper data analysis might be among Hurricane Maria's casualties

The ability to use statistics to guide decision-making may be collateral damage of Hurricane Maria's devastating blow to Puerto Rico, according to a Penn State demographer.

In an article published today (April 2) in Health Affairs, Alexis Raúl Santos, the director of the graduate program in applied demography, said that a failure to properly account for all the deaths related to the 2017 storm and the possible dismantling of the territory's data collection services might affect the island's current chance of recovery, as well as its ability to respond to future emergencies.

"There are a lot of things that can go wrong if you aren't carefully gathering and analyzing data, particularly in your ability to convey the devastation of, in this case, an environmental disaster," said Santos. "One of the main concerns I have is that if you minimize the impact that this disaster has had in Puerto Rico, you are going to lose the attention of people who are in decision-making roles about the allocation of resources."

Those resources include money, but also other forms of help, such as the allocation of first responders, electrical technicians and food aid, he said. This inability to properly allocate resources may also be behind Puerto Rico's sluggish recovery from the disaster.

"It's been more than six months after the hurricane and there are still people without energy on the island," said Santos. "And that's unheard of for any jurisdiction. Usually after a disaster, things are fixed in two or three months, at the latest."

He added that Puerto Rico's status as a territory, not a state, makes the ability of statistics to draw attention to concerns even more important.

"Statistics are the only real voice Puerto Ricans have," said Santos, who is also an assistant teaching professor in sociology and criminology. "They don't have votes. They can't vote for a member of Congress, or the president of the United States. Their political power is diminished, so the only way you can create an effective strategy is to use data as your main tool for discussion."

He added that he is particularly concerned with the increased scrutiny of the Puerto Rico Institute of Statistics and what that might mean for future data gathering and analysis on the island.

"If we do not accept what our data are telling us, we will not be able to address the problems," he said. "Any local government that wants to address the needs of its people should listen to people who are doing data analysis and to allow the data to speak for itself."

Santos said that in a recent study, he and his colleagues estimated the death toll to be around 1,085, far higher than the 64 lives lost initially listed in government statistics. The government has since re-adjusted its tally to about 1,000, according to Santos.

The researchers arrived at their numbers by analyzing excess deaths on the island to more accurately quantify fatalities caused -- directly and indirectly -- by Hurricane Maria. They found, for example, that the death rate in September and October of 2017 was 27 percent higher than in previous years.
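
Excess-death estimation of the kind described compares observed deaths in a period against a baseline expected from the same period in prior years. A minimal sketch of that logic, using made-up counts chosen only to mirror the reported 27 percent figure (the study's actual monthly data are not reproduced here):

```python
# Excess-death estimation: observed deaths in a period minus the average of
# the same period across prior baseline years.
# All counts below are illustrative, not the study's data.
baseline_years = [4980, 5010, 5010]   # Sep-Oct deaths in prior years (hypothetical)
observed_2017 = 6350                  # Sep-Oct deaths in 2017 (hypothetical)

expected = sum(baseline_years) / len(baseline_years)
excess = observed_2017 - expected
pct_increase = 100 * excess / expected
print(f"Excess deaths: {excess:.0f} ({pct_increase:.0f}% above baseline)")
```

The approach attributes the gap between observed and expected deaths to the disaster, capturing indirect fatalities (for example, deaths from interrupted medical care) that a direct storm-death count misses.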

Most of the excess deaths were concentrated among older age groups, according to the researchers. Deaths in nursing homes were 45 percent higher in 2017 than in 2016, and the researchers also found a 41 percent increase in deaths at emergency departments.

Credit: 
Penn State

Spear points prove early inhabitants liked to travel

Careful examination of numerous fluted spear points found in Alaska and western Canada suggests that the Ice Age peopling of the Americas was much more complex than previously believed, according to a study by two Texas A&M University researchers.

Heather Smith and Ted Goebel both were involved with the study that was associated with the Center for the Study of the First Americans, part of the Department of Anthropology at Texas A&M. Smith is now an assistant professor at Eastern New Mexico University.

Their work has been published in the current issue of PNAS (Proceedings of the National Academy of Sciences).

Smith, who worked on the study as part of her Ph.D. at Texas A&M, and Goebel, professor of anthropology at Texas A&M, believe the findings could change how we view the traveling patterns and routes of early humans from 14,000 to 12,000 years ago as they settled in numerous parts of North America.

Using new digital methods of analyses utilized for the first time in such a study of these artifacts, the researchers found that early settlers in the emerging ice-free corridor of interior western Canada "were travelling north to Alaska, not south from Alaska, as previously interpreted," says Goebel.

"Although during the late Ice Age there were two possible routes for the first Americans to follow on their migration from the Bering Land Bridge area southward to temperate North America, it now looks like only the Pacific coastal route was used, while the interior Canadian route may not have been fully explored until millennia later, and when it was, primarily from the south.

"The findings of these fluted spear points provide archaeological evidence supporting new genetic models explaining how humans colonized the New World."

Traditional interpretations of the peopling of the Americas have predicted that early inhabitants migrated from Siberia through Alaska, and then followed the ice-free corridor that gradually opened in western Canada to reach the Great Plains of the western U.S. But newer genetic studies of ancient Siberians, Alaskans, and Americans, as well as the discovery of new sites south of the Canadian ice sheets predating the opening of the ice-free corridor, suggest instead that the first Americans passed along the Pacific coast.

"The key is that the projectile points are related in their technology and morphology, and the way in which some of these characteristics vary forms the pattern of an ancestral-descendant relationship. This suggests that the people who carried the artifacts to these locations were related as well," adds Smith.

"It shows that these early people in western Canada and Alaska were descendants of Clovis (the first settlers of North America) and that they used the same type of weapons to hunt for food, especially bison. These makers of fluted points were not just all over mid-continent North America but were also migrating northward back to the Arctic."

These artifacts can be used to document migration patterns of prehistoric peoples, she says.

"The spear points prove that the peopling of the Americas was much more complex than we had believed and that these early settlers went in a lot of different directions, not just south. We now have a better picture of what weapons they used to hunt and where their travels took them."

"This is tangible evidence of a connection between people in the Arctic and the Mid-continent 12,000 years ago, a connection which may be either genetic or social, but ultimately, speaks volumes of the capability and adaptability of early cultures in North America," she notes.

Credit: 
Texas A&M University

People use emotion to persuade, even when it could backfire

We intuitively use more emotional language to enhance our powers of persuasion, according to research published in Psychological Science, a journal of the Association for Psychological Science. The research shows that people tend toward appeals that aren't simply more positive or negative but are infused with emotionality, even when they're trying to sway an audience that may not be receptive to such language.

"Beyond simply becoming more positive or negative, people spontaneously shift toward using more emotional language when trying to persuade," says researcher Matthew D. Rocklage of The Kellogg School of Management at Northwestern University.

We might imagine that people would use very positive words such as "excellent" or "outstanding" to bring others around to their point of view, but the findings showed that people specifically used terms that convey a greater degree of emotion, such as "exciting" and "thrilling."

Understanding the components that make for a persuasive message is a critical focus of fields ranging from advertising to politics and even public health. Rocklage and colleagues wanted to look at the question from a different angle, exploring how we communicate with others when we are the ones trying to persuade.

"It's possible that to be seen as rational and reasonable, people might remove emotion from their language when attempting to persuade," says Rocklage. Drawing from attitudes theory and social-function theories of emotion, however, Rocklage and colleagues Derek D. Rucker and Loran F. Nordgren hypothesized that people would go the other way, tapping into emotional language as a means of social influence.

In one online study, the researchers showed 1,285 participants a photo and some relevant details for a particular product available from Amazon.com. They asked some participants to write a five-star review that would persuade readers to purchase that product, while they asked others to write a five-star review that simply described the product's positive features.

Using an established tool for quantitative linguistic analysis, the Evaluative Lexicon, the researchers then quantified how emotional, positive or negative, and extreme the reviews were.

Although the reviews were equally positive in their language, the data showed that reviewers used more emotional language when they were trying to persuade readers to buy a product compared with when they were writing a five-star review without intending to persuade. Participants' persuasive reviews also had more emotional language compared with actual five-star reviews for the same products published on Amazon.com.
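
Lexicon-based scoring of the kind the Evaluative Lexicon performs assigns each evaluative word a pre-rated score and averages over the words a text contains. The following is a toy illustration of that idea; the word list and scores are hypothetical, not the real lexicon's values:

```python
# Toy lexicon-based emotionality scoring: each evaluative word carries a
# pre-rated emotionality score; a text's score is the mean over the
# evaluative words it contains. All scores here are hypothetical.
EMOTIONALITY = {
    "excellent": 3.0, "outstanding": 3.2,   # positive but less emotional
    "exciting": 6.5, "thrilling": 7.1,      # positive and highly emotional
}

def emotionality(text: str) -> float:
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [EMOTIONALITY[w] for w in words if w in EMOTIONALITY]
    return sum(scores) / len(scores) if scores else 0.0

plain = emotionality("An excellent, outstanding blender.")
persuasive = emotionality("A thrilling, exciting blender!")
print(plain, persuasive)  # the persuasive review scores higher
```

Both example reviews are equally positive, but only the second uses emotionally charged words, which is the distinction the study's analysis captures.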

Importantly, the shift toward more emotional language appeared to be automatic rather than deliberative. Participants still used more emotional descriptors in persuasive reviews when they were simultaneously trying to remember an 8-digit number, a competing task that made strategizing very difficult.

The tendency to use more emotional language emerged even when participants were attempting to persuade a group of "rational" thinkers.

"Past research indicates that emotional appeals can backfire when an audience prefers unemotional appeals," says Rocklage. "Our findings indicate that there is a strong enough connection between persuasion and emotion in people's minds that they continue to use emotion even in the face of an audience where that approach can backfire."

Indeed, additional evidence indicated a connection between emotion and persuasion in memory. The researchers found that the more emotional a word was, the more likely participants were to associate it with persuasion and the quicker they did so.

An interesting avenue for future research, says Rocklage, is to investigate whether the association transfers across various contexts.

"For instance, would people use less emotion if they were in a boardroom meeting or if they were writing a formal letter of recommendation?" he wonders.

Credit: 
Association for Psychological Science

UMD researcher uncovers protein used to outsmart the human immune system

image: Ixodes scapularis ticks transmit the pathogens of Lyme disease, resulting in a multisystem illness in a variety of animals and humans. The image shows the underside of a live Ixodes tick as seen under a confocal immunofluorescence microscope.

Image: 
Dr. Utpal Pal, University of Maryland

A University of Maryland (UMD) researcher has uncovered a mechanism by which the bacteria that causes Lyme disease persists in the body and fights your early, innate immune responses. Dr. Utpal Pal, Professor in Veterinary Medicine, has been studying the Borrelia burgdorferi bacteria throughout his twelve years with UMD, and his work has already produced the protein marker used to identify this bacterial infection in the body. Now, Dr. Pal has isolated a protein produced by the bacteria that disables one of the body's first immune responses, giving insight into mechanisms that are largely not understood. He has also observed a never-before-seen phenomenon demonstrating that even without this protein, and with the immune system responding perfectly, the bacteria can spring back in the body weeks later. Understanding this bacteria, which is amongst only a few pathogens that can actually persist in the body for long periods of time, has major implications for the treatment of tick-borne diseases like Lyme disease, an increasingly prevalent and often chronic public health issue.

"Most people don't realize that they actually are walking around with more bacterial cells in their bodies than their own cells, so we are really bags of bacteria," explains Pal. "Most are good, but the second your body detects something that is a pathogen and can cause disease, your immune system starts to work." The body sends a first, nonspecific wave of attack to kill the bacteria detected that doesn't belong. This happens within a few hours to days. If this doesn't work, it takes seven to ten days to learn about the enemy and send a large second wave of reinforcements to kill what is left. "Lyme disease is actually caused by your immune system," explains Pal. "This bacteria wins the first battle, and your body overreacts so much that it causes intense inflammation in all the joints and areas that the bacteria spreads by sending so many reinforcements to kill it. Borrelia is then killed, but the inflammation remains and causes many of your symptoms for Lyme disease. That is why killing Borrelia in the first wave of immunity is so important."

The Centers for Disease Control and Prevention (CDC) estimate about 300,000 cases of Lyme disease annually in the United States. However, these cases are largely underestimated and underreported, partly because of the attention given to mosquito-transmitted diseases like malaria. "The majority of all vector-borne diseases in the US are actually tick-borne, and 6 of the 15 distinct tick diseases are transmitted by the Ixodes tick we study in our lab," says Pal. "The symptoms of these diseases present similarly to many other illnesses and are hard to pin down, so they are vastly underreported and an even bigger public health concern locally and globally than people realize." Now, chronic Lyme disease is a growing concern. Six to twelve months after traditional antibiotic therapy, many people experience returning, non-specific symptoms of varying intensity with no current treatment strategy, a condition known as Post-Treatment Lyme Disease Syndrome.

Dr. Pal's research has shed some light on this issue and paved the way for future research and treatment options by discovering that even without the protein used to beat the first wave of immune defense, infection can reoccur in the body weeks later. "This means there is a second line of defense for Borrelia just like for our body's immune system. This had never been observed before and gives us insight into what could be causing these chronic Lyme disease cases," explains Pal.

Dr. Pal is frequently consulted for his expertise and has written books on this highly versatile bacteria. With the passage of the 21st Century Cures Act, the federal government has recently put more emphasis on tick-borne disease research, recognizing it as a major public health issue. As part of this, Dr. Pal was asked to serve on a Tick-Borne Disease Working Group Subcommittee for the U.S. Department of Health & Human Services (DHHS) focused on vaccines and therapeutics for tick-borne diseases, driving future research in the field. Dr. Pal currently holds two concurrent multi-million dollar R01 grants from the National Institutes of Health (NIH) for this work, awarded only for highly important and influential research. "I am fascinated by Borrelia, and this discovery will open the door for much more work to treat and control important diseases like Lyme disease," says Pal.

Dr. Pal's paper, "Plasticity in early immune evasion strategies of a bacterial pathogen," is published in the Proceedings of the National Academy of Sciences.

Credit: 
University of Maryland

First age-map of the heart of the Milky Way

image: This is an artist's impression showing the peanut shaped structure in the central bulge.

Image: 
ESO/NASA/JPL-Caltech/M. Kornmesser/R. Hurt

The first large-scale age-map of the Milky Way shows that a period of star formation lasting around 4 billion years created the complex structure at the heart of our galaxy. The results will be presented by Marina Rejkuba at the European Week of Astronomy and Space Science (EWASS) in Liverpool on Tuesday, 3rd April.

The Milky Way is a spiral galaxy with a bulge at the centre, thousands of light years in diameter, that contains about a quarter of the total mass of stars. Previous studies have shown that the bulge hosts two components: a population of metal-poor stars that have a spherical distribution, and a population of metal-rich stars that form an elongated bar with a "waist", like an X or a bi-lobed peanut. However, analyses of the ages of the stars to date have produced conflicting results. Now, an international team led by astronomers from the European Southern Observatory (ESO) has analysed the colour, brightness and spectral information on the chemistry of individual stars to produce the age-map of the Milky Way.

The team have used simulated and observed data for millions of stars from the VISTA Variables in the Via Lactea (VVV) infrared survey of the inner Milky Way, and compared them with measurements of the metal content of around 6,000 stars across the inner bulge from a spectroscopic survey (GIBS) carried out with the GIRAFFE/FLAMES spectrograph on the ESO Very Large Telescope.

Rejkuba says: "We analysed the colour and brightness of stars to find those that have just reached the point of exhausting their hydrogen fuel-burning in the core, which is a sensitive age indicator. Our findings were not consistent with a purely old Milky Way bulge, but require star formation lasting around 4 billion years and starting around 11 billion years ago. The youngest stars that we see are at least 7 billion years old, which is older than some previous studies had suggested."
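The timeline in the quote is internally consistent, as a quick arithmetic check shows (a minimal sketch using the approximate figures quoted above):

```python
# Back-of-envelope check of the bulge star-formation timeline described above.
start_gyr_ago = 11.0    # star formation began roughly 11 billion years ago
duration_gyr = 4.0      # and lasted around 4 billion years

# Stars formed at the very end of that period are the youngest in the bulge.
youngest_age_gyr = start_gyr_ago - duration_gyr
print(youngest_age_gyr)  # 7.0 -- matching "at least 7 billion years old"
```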

The results presented are based on the analysis of three areas of the VVV infrared map, which, combined, make up the largest area studied so far in the Milky Way bulge. In all three areas, the findings on the age range of the stars are consistent.

Francisco Surot Madrid, the co-lead author of the study, says: "Previous studies have told us that the metal-rich stars in the bar are likely to be the youngest stars. Whilst we can't disentangle which star belongs to the bar/peanut or the spheroid component in the data we are using, our results tell us that the bar was already formed about 7 billion years ago and there were no large amounts of gas inflowing and forming stars along the bar after that."

The ultimate goal of this project is the construction of a map of the star formation history of the entire Milky Way bulge.

Co-lead author, Elena Valenti, says: "The final map will show us the star formation rate as a function of both age and metallicity for the stars across the bulge. This will be an important ingredient in telling the complete story of the formation of the Milky Way bulge."

Credit: 
Royal Astronomical Society

When drugs are wrong, skipped or make you sick: The cost of non-optimized medications

image: Jonathan Watanabe, PharmD, Ph.D., Skaggs School of Pharmacy and Pharmaceutical Sciences at UC San Diego, describes non-optimized medication therapy as "prescriptions that may not be exactly appropriate for your indication -- not quite the right medication or dose -- or you just don't take the medication for whatever reason, don't take them as directed, or the medication causes an adverse event or a new health problem."

Image: 
UC San Diego Health

Rising drug prices have gotten a lot of attention lately, but the actual cost of prescription medications is more than just the dollars and cents on the bill. Researchers at Skaggs School of Pharmacy and Pharmaceutical Sciences at University of California San Diego estimate that illness and death resulting from non-optimized medication therapy costs $528.4 billion annually, equivalent to 16 percent of total U.S. health care expenditures in 2016.

The analysis is published in the March 26 online issue of the Annals of Pharmacotherapy.

"Ideally, when you're sick, a health care professional prescribes you a medication, you take it as directed and you get better," said Jonathan Watanabe, PharmD, PhD, associate professor of clinical pharmacy in the Skaggs School of Pharmacy. "But what happens a lot of the time is the medication regimen is not optimized. In other words, the prescription may not be exactly appropriate for your indication -- not quite the right medication or dose -- or you just don't take the medication for whatever reason, don't take them as directed, or the medication causes an adverse event or a new health problem."

Watanabe led the study with Jan Hirsch, PhD, professor of clinical pharmacy and chair of the Division of Clinical Pharmacy at Skaggs School of Pharmacy, and Terry McInnis, MD, of Laboratory Corporation of America and the Get the Medications Right Institute.

Here's an example of non-optimized medication therapy: You come down with the flu and visit the local hospital's emergency department. A doctor prescribes Tamiflu, but you don't fill the prescription. It's too expensive or you don't have time or energy. Your symptoms worsen and you end up back at the hospital, and eventually in the Intensive Care Unit (ICU) -- all at great cost to yourself as the patient, as well as to the hospital and insurance company.

But the problem isn't just nonadherence (not taking the medication at all, or not taking it as directed), Watanabe said. Non-optimized medication therapy also includes instances in which a medication contributes to a new health problem. For example, the ACE inhibitor you're taking to lower your blood pressure causes you to cough, so you take an over-the-counter cough-and-cold medicine that also includes an ingredient that increases blood pressure and raises the risk of sleepiness and falls.

"In that case, the drug treatment is functioning like a new disease," Hirsch said.

For this study, Watanabe, Hirsch and McInnis created decision analytic models of the many health outcomes that could ensue due to a treatment failure or new treatment-caused medical problem, including emergency department visits, hospitalization, long-term care, medical appointments and additional medications. The data came from a variety of validated sources, including the federal government and the National Nursing Home Survey.

The researchers considered the current cost of each possible consequence and estimated the total annual cost of illnesses and deaths that result from non-optimized medication therapy to be $528.4 billion, with a plausible range of $495.3 billion to $672.7 billion. They estimated the average cost for an individual experiencing treatment failure, a new medical problem or both after initial prescription use to be approximately $2,500. The estimates did not include non-medical costs such as transportation or caregiving, or indirect costs related to lost productivity.
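The study's actual decision-analytic models are not reproduced in this article, but their general structure is an expected-cost calculation over possible downstream outcomes. The sketch below illustrates that structure only; the outcome categories, probabilities and unit costs are hypothetical placeholders, not figures from the paper:

```python
# Illustrative expected-cost calculation in the spirit of a decision-analytic
# model of non-optimized medication therapy. All numbers are hypothetical.
outcomes = {
    # outcome given a treatment failure: (probability, unit cost in dollars)
    "medication_change_only": (0.50, 300),
    "extra_medical_visit":    (0.25, 500),
    "emergency_dept_visit":   (0.15, 1500),
    "hospitalization":        (0.10, 12000),
}

# Expected cost per person experiencing a treatment failure.
expected_cost = sum(p * cost for p, cost in outcomes.values())
print(f"Expected cost per treatment failure: ${expected_cost:,.0f}")
```

Multiplying a per-person expected cost of this kind by the estimated number of affected patients is what yields an aggregate national figure like the one reported.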

This is the first time these data have been updated since 2008, when non-optimized medication therapy was estimated to cost $290 billion annually, or about 13 percent of U.S. health care spending at that time. Watanabe said a lot has happened since.

"We've experienced increased medical costs and we now have the Affordable Care Act, which gave 20 million more people access to prescription drugs and, as a result, more chances for nonadherence and medication-related health issues.

"Our study also clarifies that the cost of $528.4 billion is due to much more than simply nonadherence, which has been a misinterpretation of prior estimates, but also includes any situation when the medication regimen is not optimized to correctly and safely treat something treatable."

While the estimate is the best researchers can make based upon available data, they acknowledge uncertainty in the probabilities of the predicted outcomes. Watanabe said better coding and tracking systems, now being rolled out in many health systems, will improve monitoring of medical outcomes related to medication therapy -- and help prevent problems.

"Non-optimized medication therapy is a massive avoidable cost. If medications were prescribed, monitored and taken properly, we wouldn't face this cost, and patients would be healthier," he said.

To improve outcomes and lower costs, Watanabe and team propose expanding the presence of direct patient care models by clinical pharmacists in collaboration with prescribing clinicians, a process known as comprehensive medication management.

There are many evolving models of pharmacists providing enhanced medication management services. Across the U.S., pharmacists review medications as part of the federal Medicare-mandated Medication Therapy Management program. Pharmacists in some states, including California, also have provider status, meaning they can initiate, change or end a patient's prescription in a collaborative agreement with prescribers. In other models, pharmacists work with prescribers and other members of a patient's health care team to review medications and recommend medication regimen changes under a prescriber's supervision.

To improve medication-related care, Watanabe and co-authors wrote that they would like to see a systematic and coordinated effort to break down prescriber-pharmacist silos and expand comprehensive medication management programs, in which clinical pharmacists have access to complete medical records, improved dialogue with other members of a patient's health care team and input as a medication is prescribed -- similar to what is now taking place at many U.S. Veterans Affairs clinics.

Meanwhile, their study findings are already being used to support several national initiatives to improve medication management.

"Pharmacists and pharmacies are the most readily available health care access point for most people, and their role will likely expand as the health care landscape shifts to emphasize more community-based and ambulatory care," Watanabe said. "Simply put, pharmacists can help optimize medication regimens to produce the best outcomes at the lowest cost."

Credit: 
University of California - San Diego

Links between eating red meat and distal colon cancer in women

A new study suggests that a diet free from red meat significantly reduces the risk of a type of colon cancer in women living in the United Kingdom.

University of Leeds researchers were part of an international team that assessed whether red meat, poultry, fish or vegetarian diets are associated with risk of colon and rectal cancer.

When comparing the effects of these diets on cancer development in specific subsites of the colon, they found that those regularly eating red meat, compared with those on a red meat-free diet, had higher rates of distal colon cancer -- cancer found on the descending section of the colon, where faeces are stored.

Lead author Dr Diego Rada Fernandez de Jauregui is part of the Nutritional Epidemiology Group (NEG) at Leeds, and the University of the Basque Country in Spain. He said: "The impact of different types of red meat and dietary patterns on cancer locations is one of the biggest challenges in the study of diet and colorectal cancer.

"Our research is one of the few studies looking at this relationship and while further analysis in a larger study is needed, it could provide valuable information for those with family history of colorectal cancer and those working on prevention."

More than 2.2 million new cases of colorectal cancer, also known as bowel cancer, are expected worldwide by 2030. It is the third most commonly diagnosed cancer in UK women. Previous studies have suggested that eating lots of red and processed meat increases the risk of colorectal cancer and it is estimated that around 1 in 5 bowel cancers in the UK are linked to eating these meats. However, there is limited available information about specific dietary patterns and the site of cancer occurrence in the bowel.

The study used data from the United Kingdom Women's Cohort Study. This cohort included a total of 32,147 women from England, Wales and Scotland. They were recruited and surveyed by the World Cancer Research Fund between 1995 and 1998 and were tracked for an average of 17 years.

In addition to the participants' reported dietary habits, a total of 462 colorectal cancer cases were documented; of the 335 colon cancers, 119 were distal colon cancers. The study analysis, published today in the International Journal of Cancer, explored the relationship between the four dietary patterns and colorectal cancer, and a further exploratory analysis examined the correlation between diet and colon subsites.

Co-author Janet Cade is head of the NEG and Professor of Nutritional Epidemiology and Public Health at the School of Food Science and Nutrition at Leeds. She said: "Our study not only helps shed light on how meat consumption may affect the sections of the colorectum differently, it emphasises the importance of reliable dietary reporting from large groups of people.

"With access to the United Kingdom Women's Cohort Study we are able to uncover trends in public health and analyse how diet can influence the prevention of cancer. Accurate dietary reporting provides researchers with the information they need to link the two together."

Credit: 
University of Leeds

New study uncovers major differences in billing complexity among US health insurers

One frequently proclaimed advantage of single-payer health care is its potential to reduce administrative costs, but new Vancouver School of Economics research calls that assumption into question.

The study, published this week in Health Affairs, analyzed a novel dataset to develop measures of how complex the billing process is for physicians interacting with insurers in the United States health care system.

"These results are dramatic and striking," said Joshua Gottlieb, one of the study's authors and associate professor at the VSE. "Conventional wisdom held that it should be more challenging for doctors to bill private insurers. Yet, when it comes to Medicaid, our results show the opposite."

Gottlieb and his coauthors were able to measure the complexity of billing Medicare, Medicaid, and private insurers while controlling for differences in physicians' billing ability and the complexity of the patient.

They found Medicaid was two to three times as difficult as Medicare or private insurance for doctors to bill. Medicaid managed care programs -- those where states pay fixed annual fees to private insurers who cover Medicaid patients -- had slightly lower billing complexity than state-run fee-for-service Medicaid.

The study evaluated 44 million claims from 68,000 physicians, worth a total of $8.4 billion.

In total, the authors estimate that the disputed bills amount to $54 billion annually across all insurers, and that $11 billion could be saved if billing efficiency across all insurers were improved to the best level observed in the data.

The study also found billing complexity is decreasing over time, especially for Medicaid. The researchers said this is good news, as it could encourage more physicians to treat Medicaid patients.

"For a health care system that spends so much money on administrative costs, from 15 to 30 per cent according to previous studies, this decline is an important cause for optimism," said Gottlieb. "Further work needs to be done to explore the reasons for this change."

Background

The study, "The Complexity of Billing And Paying for Physician Care", was published yesterday in Health Affairs. Gottlieb's coauthors are Adam Hale Shapiro, research advisor at the Federal Reserve Bank of San Francisco, and Abe Dunn, assistant chief economist in the Office of the Chief Economist, Bureau of Economic Analysis, Department of Commerce in Washington, D.C.

The study measured complexity using multiple metrics: the rates of claim denial and non-payment, and the number of interactions required for the physician and insurer to resolve the claim. The study also measured the amount of money disputed between the physician and insurer.
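The metrics described above can be illustrated with a toy computation over a handful of claims. This is only a sketch of how such measures are typically defined; the field layout and values are hypothetical, not drawn from the study's dataset:

```python
# Toy claims data: (billed $, paid $, interactions to resolve, denied?)
claims = [
    (200, 200, 1, False),
    (150,   0, 3, True),
    (500, 450, 2, False),
    (300, 300, 1, False),
]

# Share of claims denied outright.
denial_rate = sum(denied for *_, denied in claims) / len(claims)

# Total dollars in dispute: the gap between what was billed and what was paid.
disputed = sum(billed - paid for billed, paid, _, _ in claims)

# Average number of physician-insurer interactions needed per claim.
avg_interactions = sum(i for _, _, i, _ in claims) / len(claims)

print(denial_rate, disputed, avg_interactions)
```

Computed per insurer over millions of real claims, measures like these are what allow the billing burden of Medicaid, Medicare and private insurers to be compared directly.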

Credit: 
University of British Columbia

Heart defects in infant may predict heart problems in birth mother later in life

DALLAS, April 2, 2018 -- Women who give birth to infants with congenital heart defects may have an increased risk of cardiovascular hospitalizations later in life, according to new research in the American Heart Association's journal Circulation.

The study of more than one million women is the first to show congenital heart defects in newborns may be a marker for an increased risk of their mothers developing heart problems, including heart attack and heart failure, years after pregnancy.

Researchers analyzed data on women who delivered infants between 1989 and 2013 in Quebec, Canada, who had critical, noncritical or no heart defects. They tracked the women up to 25 years after pregnancy for hospitalizations related to cardiovascular disease including heart attack, heart failure, atherosclerotic disorders and heart transplants.

Compared to mothers of infants without congenital heart defects, researchers found:

43 percent higher risk of any cardiovascular hospitalization in women whose offspring had critical heart defects; and

24 percent higher risk of any cardiovascular hospitalization in women whose infants had noncritical defects.

How heart defects in infants relate to post-pregnancy cardiovascular disease in their mothers is unclear, the study notes, and a genetic component cannot be excluded. In addition, because 85 percent of infants with heart defects now survive past adolescence, the psychosocial impact of congenital heart disease on caregivers may have a cumulative effect over the long term.

"Caring for infants with critical heart defects is associated with psychosocial and financial stress, which may increase the mothers' long-term risk for cardiovascular disease," said Nathalie Auger, M.D., the study's lead author and an epidemiologist at the University of Montreal Hospital Research Centre in Montreal, Quebec, Canada.

Researchers believe the study provides an opportunity for these mothers to benefit from early prevention strategies and counseling to reduce their risk of cardiovascular disease -- the leading cause of death in women.

Healthcare providers, like obstetricians, who treat and follow mothers in the early stages of dealing with children who have heart defects can help women understand and minimize their risk, Auger said.

"Those physicians are very well-positioned to inform women about this possibility, the greater risk of heart disease, and to provide recommendations for targeting other risk factors like smoking, obesity and physical activity," she said.

Some limitations of the research include the fact that women were young at the start of study, so for many, the 25-year follow-up did not extend past menopause, which excluded the highest risk period for cardiovascular disease. And, because researchers used existing medical data, they didn't have detailed risk factor information on the women, such as body weight and smoking status. These are important points that should be considered in future studies, researchers noted.

Credit: 
American Heart Association

Researchers develop injectable bandage

A penetrating injury from shrapnel is a serious battlefield wound that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need for materials that can be quickly self-administered to prevent fatality due to excessive blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article "Nanoengineered Injectable Hydrogels for Wound Healing Application" published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

"Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches," said Gaharwar. "An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis."

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are 3-D, water-swollen polymer networks, similar to Jell-O, that simulate the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, an injectable gel is obtained. The charged characteristics of the clay-based nanoparticles give the hydrogels their hemostatic ability. Specifically, plasma proteins and platelets from blood adsorb onto the gel surface and trigger a blood clotting cascade.

"Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound," said Giriraj Lokhande, a graduate student in Gaharwar's lab and first author of the paper. "The negative surface charge of the nanoparticles enabled electrostatic interactions with the therapeutics, resulting in their slow release."

Credit: 
Texas A&M University

Payment reform fix?

An experiment in Maryland designed to save health care dollars by shifting services away from expensive hospital-based care and toward less costly primary, preventive and outpatient services has yielded disappointing results.

These are the findings of two separate studies led by investigators from Harvard Medical School and the University of Pittsburgh. One study will be published in the April issue of Health Affairs, the other one appears in the February issue of JAMA Internal Medicine.

Maryland's program was rooted in the idea that paying hospitals a fixed global budget--rather than for each patient admission--would deter unnecessary admissions and provide better care outside of the hospital. Under this program, hospitals that saved money by reducing admissions would keep the savings. If hospitals exceeded their budgets, they would absorb the resulting costs.

"There is widespread interest in moving to alternative payment models that contain health care spending while still ensuring robust health outcomes," said study lead author, Eric Roberts, an assistant professor at the University of Pittsburgh Graduate School of Public Health. "Unfortunately, with the Maryland experiment we didn't find meaningful changes in care that policymakers had hoped this program would achieve."

Maryland's program was implemented in two waves: the first among rural hospitals, and the second among the remaining hospitals in the state. The two studies examine each phase in turn.

In the Health Affairs study, the researchers compared patterns of use and spending among Medicare beneficiaries served by rural Maryland hospitals with global budgets to patterns among patients served by a group of nonparticipating hospitals. After three years, the researchers found no reductions in hospital use or spending that could be linked to the global budget program.

In their JAMA Internal Medicine study, the researchers focused on the first two years of Maryland's state-wide rollout of the program, which included larger hospitals serving urban and suburban areas of the state. This study found no evidence that the global budget program was associated with reductions in hospital use or increases in primary care visits.

"When we drilled down into our data, we did not find any evidence that patterns of hospital and primary care use changed in Maryland relative to similar areas unaffected by the program," said Ateev Mehrotra, associate professor of health care policy and medicine at Harvard Medical School and a hospitalist at Beth Israel Deaconess Medical Center.

So what went wrong? One possibility is that the disappointing results may stem from complicated administrative logistics, the authors said. Also, they said, the policy changed only the way hospitals are paid, while payments to individual physicians were kept out of the budget and remained on a fee-for-service basis.

"The latter could be a big deal," Mehrotra said. "If the physician is not on board, the hospital doesn't have much control about the day-to-day decisions about how care is delivered."

Identifying alternatives to the current payment model is important to help alleviate the nation's health care spending problem, the team added.

"We believe that payment reform is critical, but we need to carefully evaluate which payment models are most effective," Mehrotra added.

The researchers emphasize that Maryland's new payment strategy still might have saved money for the state and Medicare. However, any savings did not come from changing the care that Maryland residents received, but rather from changes to hospital prices.

Maryland is now tweaking its program to include physicians. In the interim, Pennsylvania is building off Maryland's program by implementing a global budget model for rural hospitals. "Evaluating these different programs in different contexts will help policymakers better understand which payment systems work best," said Mehrotra.

Credit: 
Harvard Medical School