First flu strain children encounter may help explain why virus hits some harder than others

image: A computer-generated 3D rendering of a flu virus.

Image: 
Dan Higgins/Courtesy of CDC/Douglas Jordan

Why are some people better able to fight off the flu than others? Part of the answer, according to a new study, is related to the first flu strain we encounter in childhood.

Scientists from UCLA and the University of Arizona have found that people's ability to fight off the flu virus is determined not only by the subtypes of flu they have had throughout their lives, but also by the sequence in which they have been infected by those subtypes. Their study is published in the open-access journal PLoS Pathogens.

The research offers an explanation for why some people fare much worse than others when infected with the same strain of the flu virus, and the findings could help inform strategies for minimizing the effects of the seasonal flu.

In addition, UCLA scientists, including Professor James Lloyd-Smith, who also was a senior author of the PLoS Pathogens research, recently completed a study that analyzes travel-related screening for the novel coronavirus 2019-nCoV. (The research is under review; a preprint is online.)

The researchers report that screening travelers is not very effective for the 2019 coronavirus -- it will catch fewer than half of infected travelers, on average -- and that most infected travelers are undetectable because they have not yet developed symptoms and are unaware they have been exposed. So stopping the spread of the virus is not simply a matter of enhancing screening methods at airports and other travel hubs.

"This puts the onus on government officials and public health officials to follow up with travelers after they arrive, to isolate them and trace their contacts if they get sick later," said Lloyd-Smith, a UCLA professor of ecology and evolutionary biology. Many governments have started to impose quarantines, or even travel bans, as they realize that screening is not sufficient to stop the spread of the coronavirus.

One major concern, Lloyd-Smith said, is that other countries, especially developing nations, lack the infrastructure and resources for those measures, and are therefore vulnerable to importing the disease.

"Much of the public health world is very concerned about the virus being introduced into Africa or India, where large populations do not have access to advanced medical care," he said.

The researchers, including scientists from the University of Chicago and the London School of Hygiene and Tropical Medicine, have developed a free online app where people can calculate the effectiveness of travel screening based on a range of parameters.

Solving a decades-old question

The PLoS Pathogens study may help solve a problem that had vexed scientists and health care professionals for decades: why the same strain of the flu virus affects people with varying degrees of severity.

A team that included some of the same UCLA and Arizona scientists reported in 2016 that exposure to influenza viruses during childhood gives people partial protection for the rest of their lives against distantly related influenza viruses. Biologists call the idea that past exposure to the flu virus determines a person's future response to infections "immunological imprinting."

The 2016 research helped overturn a commonly held belief that previous exposure to a flu virus conferred little or no immunological protection against strains that can jump from animals into humans, such as those that cause swine flu or bird flu. Those strains, which have caused hundreds of spillover cases of severe illness and death in humans, are of global concern because they could acquire mutations that allow them not only to jump readily from animal populations to humans, but also to spread rapidly from person to person.

In the new study, the researchers investigated whether immunological imprinting could explain people's response to flu strains already circulating in the human population and to what extent it could account for observed discrepancies in how severely the seasonal flu affects people in different age groups.

To track how different strains of the flu virus affect people at different ages, the team analyzed health records that the Arizona Department of Health Services obtains from hospitals and private physicians.

Two subtypes of influenza virus, H3N2 and H1N1, have been responsible for seasonal outbreaks of the flu over the past several decades. H3N2 causes the majority of severe cases in high-risk elderly people and the majority of deaths from the flu. H1N1 is more likely to affect young and middle-aged adults, and causes fewer deaths.

The health record data revealed a pattern: People first exposed to the less severe strain, H1N1, during childhood were less likely to end up hospitalized if they encountered H1N1 again later in life than people who were first exposed to H3N2. And people first exposed to H3N2 received extra protection against H3N2 later in life.

The researchers also analyzed the evolutionary relationships between the flu strains. H1N1 and H3N2, they learned, belong to two separate branches on the influenza "family tree," said Lloyd-Smith, one of the study's senior authors. While infection with one does leave the immune system better prepared to fight a future infection from the other, protection against future infections is much stronger when one is exposed to strains from the same group one has battled before, he said.

The records also revealed another pattern: People whose first childhood exposure was to H2N2, a close cousin of H1N1, did not have a protective advantage when they later encountered H1N1. That phenomenon was much more difficult to explain, because the two subtypes are in the same group, and the researchers' earlier work showed that exposure to one can, in some cases, grant considerable protection against the other.

"Our immune system often struggles to recognize and defend against closely related strains of seasonal flu, even though these are essentially the genetic sisters and brothers of strains that circulated just a few years ago," said lead author Katelyn Gostic, who was a UCLA doctoral student in Lloyd-Smith's laboratory when the study was conducted and is now a postdoctoral fellow at the University of Chicago. "This is perplexing because our research on bird flu shows that deep in our immune memory, we have some ability to recognize and defend against the distantly related, genetic third cousins of the strains we saw as children.

"We hope that by studying differences in immunity against bird flus -- where our immune system shows a natural ability to deploy broadly effective protection -- and against seasonal flus -- where our immune system seems to have bigger blind spots -- we can uncover clues useful to universal influenza vaccine development."

Around the world, influenza remains a major killer. The past two flu seasons have been more severe than expected, said Michael Worobey, a co-author of the study and head of the University of Arizona's department of ecology and evolutionary biology. In the 2017-18 season, 80,000 people died in the U.S., more than in the swine flu pandemic of 2009, he said.

People who had their first bout of flu as children in 1955 -- when the H1N1 virus was circulating but the H3N2 virus was not -- were much more likely to be hospitalized with an H3N2 infection than an H1N1 infection last year, when both strains were circulating, Worobey said.

"The second subtype you're exposed to is not able to create an immune response that is as protective and durable as the first," he said.

The researchers hope that their findings could help predict which age groups might be severely affected during future flu seasons based on the subtype circulating. That information could also help health officials prepare their response, including decisions about who should receive certain vaccines that are only available in limited quantities.

Credit: 
University of California - Los Angeles

Simple solution to ensure raw egg safety

Salmonella is a key cause of foodborne gastroenteritis around the world, with most outbreaks linked to eggs, poultry meat, pork, beef, dairy, nuts and fresh produce.

Now Flinders University researchers have found a simple way to prevent salmonellosis caused by surface contamination of eggs, offering crucial help for the food services industry.

Raw eggs are used in many food products such as mayonnaise, mousse, eggnog, and artisanal ice cream. However, a problem is associated with eggshells being contaminated with the bacterium Salmonella enterica serovar Typhimurium (ST).

To address this issue, the Flinders research team aimed to develop a decontamination method that removed ST contamination from the eggshell without impacting the egg's usability.

Using equipment commonly found in commercial kitchens, the researchers decontaminated eggs by placing them in a sous-vide cooker with the water heated to 57C. Complete decontamination of ST was achieved by treating the eggs for nine minutes.

The study, published recently in the journal Foodborne Pathogens and Disease, is the first to examine decontamination of ST on the eggshell.

The decontaminated eggs were found by chefs, using measurements and acceptability scores, to have no significant difference in their quality or performance as an ingredient when compared with nontreated eggs.

A preview of the paper, 'A Successful Technique for the Surface Decontamination of Salmonella enterica Serovar Typhimurium Externally Contaminated Whole Shell Eggs Using Common Commercial Kitchen Equipment' (November 2019), by Thilini Keerthirathne, Kirstin Ross, Howard Fallowfield and Harriet Whiley, is online: DOI 10.1089/fpd.2019.2734.

A second study by the Flinders environmental health research team examined the effectiveness of current Australian guidelines that recommend raw egg mayonnaise should be prepared and stored under 5C and adjusted to a pH less than 4.6 or 4.2.

Despite these guidelines, a significant number of salmonellosis outbreaks continue to be recorded every year in Australia.

The researchers found that the survival of Salmonella Typhimurium in mayonnaise is significantly improved at 4C, with the lower temperature protecting ST from the bactericidal effect of low pH.

"We found that the preparation of mayonnaise at pH 4.2 or less and incubating it at room temperature for at least 24 hours could reduce the incidence of salmonellosis," says Flinders environmental health researcher Thilini Keerthirathne.

"But there is a risk in storing mayonnaise at 37C. If the pH is not correctly measured, the warmer temperatures will promote the growth of salmonella. As such, it is crucial to ensure the pH of the mayonnaise is at 4.2 or less."

Credit: 
Flinders University

New membranes for cellular recycling

image: Baker's yeast (Saccharomyces cerevisiae) is an ideal model organism for autophagy research. Its fundamental cellular structure is similar to animal cells and it is very easy to grow in culture.

Image: 
MPI for Biology of Ageing

There is a constant spring-cleaning in our cells: The cell's own recycling system, so-called autophagy, fills garbage bags with cellular waste, transports them to the recycling yard and makes the decomposed material available again. Researchers from the Max Planck Institute for Biology of Ageing in Cologne, Germany, have now been able to show in the model organism yeast that the membrane of the garbage bags, known as autophagosomes, is newly produced on the spot around the garbage and not built of already existing components.

The self-renewal of cells through autophagy is a central process in the body. It also plays a role in ageing and many age-related diseases. The rule of thumb is: the more recycling, the longer you live. "If we manage to optimize the autophagy machinery, this could improve health in old age, but of course we first need to understand precisely how it works," explains Martin Graef, research group leader at the Max Planck Institute.

Therefore, Maximilian Schütter, a doctoral student in Martin Graef's research group, took a close look at how the garbage bags are made. These so-called autophagosomes consist of phospholipid membranes that form around the cellular waste and then transport it for recycling. Until now, it has always been assumed that membranes already present in the cell are assembled around the waste. However, the researchers have now been able to show that the membrane is instead newly formed on the spot. To do this, a protein located on the membrane of the autophagosomes activates free fatty acids and makes them available for the production of phospholipids, which are then incorporated into the expanding membrane.

"This discovery is so fundamental that not only our view of autophagy has changed, but many new research approaches are opening up," explains Graef. It is known, for example, that recycling in the cells deteriorates when a diet is very rich in fat. "We may have found an explanation for this. Since free fatty acids are incorporated into the membrane, a change in the composition of the fats might have a direct effect on autophagy as a result of a different diet."

Credit: 
Max-Planck-Gesellschaft

Tunes for training: High-tempo music may make exercise easier and more beneficial

With the start of the new year, gyms are at their busiest and many people are trying to establish a workout routine to improve their health. Getting an edge by making exercise easier and more effective could be the difference between success and guiltily returning to the warm embrace of the couch. What if doing something as simple as listening to a particular type of music could give you that edge?

A new study in Frontiers in Psychology is the first to show that listening to music at a higher tempo reduces the perceived effort involved in exercise and increases its benefits. These effects were greater for endurance exercises, such as walking, than for high-intensity exercises, such as weightlifting. The researchers hope that the findings could help people to increase and improve their exercise habits.

Many people listen to music while exercising and previous studies have documented some of the benefits. For instance, music can distract from fatigue and discomfort and increase participation in exercise. However, "how" we experience music is highly subjective, with cultural factors and personal preferences influencing its effects on individuals. Music is multifaceted with various aspects such as rhythm, lyrics and melody contributing to the experience.

Until now, researchers did not understand the specific properties of music that affect us during exercise, including which types of music are best suited to enhancing certain types of exercise. Understanding these specifics could help to unlock the full potential of music as an exercise enhancer.

The researchers set out to investigate the effect of the tempo of a piece of music on female volunteers performing either an endurance exercise (walking on a treadmill) or a high-intensity exercise (using a leg press).

The volunteers completed exercise sessions in silence, or while listening to pop music at different tempos. The researchers recorded a variety of parameters, including the volunteers' opinions about the effort required to complete the exercises and their heart rate while exercising, as a higher heart rate would mean that the exercise was more beneficial for physical fitness.

"We found that listening to high-tempo music while exercising resulted in the highest heart rate and lowest perceived exertion compared with not listening to music," explained Professor Luca P. Ardigò of the University of Verona in Italy. "This means that the exercise seemed like less effort, but it was more beneficial in terms of enhancing physical fitness."

These effects were more noticeable in volunteers completing the endurance exercise sessions, compared with those performing high-intensity exercises, suggesting that people performing endurance activities such as walking or running may receive the greatest benefit from listening to high-tempo music.

The researchers hope that these results will provide a simple way to improve levels of physical activity. While the current study involved a small group of volunteer subjects, larger studies in the future will be needed to continue exploring the nuances of how music affects our training.

"In the current study, we investigated the effect of music tempo in exercise, but in the future we would also like to study the effects of other music features such as genre, melody, or lyrics, on endurance and high intensity exercise," said Ardigò.

So, you could try playing fast-tempo music next time you hit the gym for a turbo-charged workout. Otherwise, it might at least get your foot tapping while you sit on the couch and eat chocolate.

Credit: 
Frontiers

Study finds first major discovery in hydroformylation in 50 years

Baton Rouge, La.-- In a new study published in Science, an AAAS publication, LSU chemistry professor emeritus George Stanley and fellow LSU researchers from the Department of Chemistry and the Department of Biological Sciences discovered a new cationic cobalt bisphosphine hydroformylation catalyst system that is highly active and extremely robust.

Catalysts can be viewed as a parallel to the fabled philosopher's stone. They cannot change one element into another, but they can help transform one chemical substance into another while remaining unchanged themselves. Cobalt, a relatively common metal, readily accepts atoms from other molecules and helps form complex molecules.

Fellow researchers working on the study alongside Stanley include assistant professor of biological sciences David Vinyard and chemistry graduate students Drew Hood and Ryan Johnson. Researchers from ExxonMobil Chemical Company also contributed to the project.

The majority of industrial operations--about 75 percent--use rhodium-based catalysts because of their low-pressure technologies and cheaper-to-build facilities. But Stanley said that not only can cobalt-based catalysts make more--and better versions--of certain aldehyde products, the price of rhodium is also excessive by comparison.

"A cationic cobalt bisphosphine catalyst is only about 20 times slower than the best rhodium catalysts," he said, "despite being 10,000 times less expensive." Today, the price of rhodium has reached close to $9,800 an ounce, while cobalt has held steady at around 90 cents per ounce.

Louisiana, alone, has three large hydroformylation chemical plants: the ExxonMobil facility in Baton Rouge that uses the high-pressure cobalt catalyst technology; the Shell plant in Geismar that uses the medium-pressure phosphine-modified cobalt catalyst system; and the Dow chemical plant in Taft that uses low-pressure phosphine-modified rhodium catalysts.

"About 25 percent of products produced by hydroformylation require high-pressure cobalt or rhodium technologies," he explained. "This new cationic cobalt bisphosphine technology offers a far more energy efficient catalyst that can operate at medium pressures for these reactions."

Hydroformylation, or oxo, is the catalytic reaction that converts alkenes, carbon monoxide, and hydrogen into more complex organic products, like plasticizers--a substance added to produce flexibility and to reduce brittleness--and cleaning detergents.

Although the group's new cobalt catalyst has low selectivity to the generally desired linear aldehyde product for simple alkenes, Stanley said it has excellent activity and selectivity for internal branched alkenes that are difficult to hydroformylate.

For example, researchers are finding that the linear detergent molecules produced by rhodium catalysts are less likely to dissolve in cold water. Cobalt catalysts can make detergent molecules with more "branches" that react with grease and water more efficiently.

Stanley said this is the first major discovery in hydroformylation in at least 50 years.

"What excites me the most is to have a discovery that could have real-life practical applications," he said. "Coming up with a catalyst that is very energy efficient, very green, that can actually be used on the large-scale, industrial side of things is the dream of every chemist."

Credit: 
Louisiana State University

Updated shark tagging atlas provides more than 50 years of tagging and recapture data

A 52-year database of the distribution and movements of 35 Atlantic shark species revealed new information on some of the least known species. It also uncovered a few surprises about where sharks go and how long they live.

Scientists collected data for sharks tagged and/or recaptured between 1962 and 2013. The sharks were found in the Atlantic Ocean and associated areas, including the Gulf of Mexico, the Caribbean Sea, and the Mediterranean Sea. Participants tagged a total of 229,810 sharks of 35 species and recaptured 13,419 sharks of 31 species in that time span. The journal Marine Fisheries Review published the data in December 2019.

This new atlas updates an earlier version covering 1962 to 1993 and adds information on 22 species. Detailed profiles are provided for 14 shark species, including bull and tiger sharks and smooth dogfish. The updated data significantly extended their known ranges and movements.

Collaborative, Long-Running Program

The Cooperative Shark Tagging Program is the largest and longest-running in the world. The program is a collaborative effort among recreational anglers, the commercial fishing industry, biologists, and NOAA Fisheries. Its goal is to study the life history of sharks in the Atlantic Ocean.

The program was initiated in 1962 by biologist and shark researcher John "Jack" Casey at the Northeast Fisheries Science Center; the original group of 74 volunteer anglers began participating in 1963. Since then the program has expanded to include thousands of participants along the entire North American and European Atlantic coasts, including the Gulf of Mexico.

"The program's long-term data has shown the importance of tagging large numbers of each species and recording information in a database to determine shark movements," said Lisa Natanson, a shark researcher in the Apex Predators Program at the Northeast Fisheries Science Center's Narragansett Laboratory in Rhode Island. For example, until the tagging program was 34 years old, no one knew that tiger sharks cross the Atlantic.

An International Effort

Anglers from 32 countries tagged sharks, and people from 59 countries returned tags. There are two principal types of tags: the dart or M tag, in use since 1965, and the fin or rototag, used primarily by participating biologists.

Recreational fishermen, most using rod and reel, accomplished the majority of the tagging, followed by biologists using longline and net gear. Commercial fishermen using longline and net gear returned the most tags, followed closely by anglers using rod and reel.

Blue sharks accounted for 51 percent of the tags at nearly 118,000, with sandbar sharks a distant second at just under 36,000. Just over 8,200 blue sharks and 1,471 sandbar sharks were recaptured. Of 20 tagged crocodile sharks, none were recaptured. Most species had more than 100 sharks tagged.

A blue shark also set the record for traveling the greatest distance: 3,997 nautical miles. That shark was tagged off Long Island, New York and recaptured in the South Atlantic off Africa after more than 8 years. A sandbar shark holds the record for the longest time before recapture at 27.8 years.

Thousands of Volunteer Citizen Scientists

Atlas authors Nancy Kohler and Patricia Turner worked in the center's Apex Predators Program at the Narragansett Laboratory and are now both retired from NOAA Fisheries. They noted that the data collected through this program of citizen scientists would not have been possible for any single individual, institution, or agency to gather.

"A collective of thousands of knowledgeable volunteer recreational and commercial fishermen accomplished this for little more than the cost of the tags, making the cost/benefit ratio for this program extremely low," according to the authors. "The Cooperative Shark Tagging program creates an enormous body of scientific data for understanding distributions and migration patterns for shark species."

The geographic distributions and movements of most shark species--particularly over large space and time scales--remain largely unknown, but these data are filling in those gaps. This information is vital for developing appropriate management strategies and determining the usefulness of conservation measures.

"Sustainable management is a dynamic process that requires the best available science," said Karyl Brewster-Geisz, a fishery management specialist with NOAA Fisheries' Office of Sustainable Fisheries. "Data from the Cooperative Shark Tagging Program, one of the oldest shark data sets, plays an important role in establishing management measures that provide recreational and commercial fishing opportunities while preventing overfishing."

According to the authors, "Given the fact that shark species are slow growing, long-lived, and highly mobile, with relatively low return rates for tagged sharks, continued tagging efforts are essential to provide this critical life history and population dynamics information."

Credit: 
NOAA Northeast Fisheries Science Center

Genetic autism risk in PTEN patients clarified

image: In a newly published study, a team of researchers led by Charis Eng, MD, PhD, of Cleveland Clinic's Genomic Medicine Institute, identified for the first time an explanation of why patients with identical PTEN mutations often have vastly different clinical presentations.

Image: 
Cleveland Clinic

Cleveland Clinic researchers have identified for the first time an explanation of why patients with identical PTEN mutations often have vastly different clinical presentations.

In a new study published in JAMA Network Open, a team of researchers led by Charis Eng, MD, PhD, of Cleveland Clinic Lerner Research Institute's Genomic Medicine Institute, discovered that copy number variations (CNVs) may act as genomic modifiers that influence the risk of autism spectrum disorder (ASD) and/or developmental delay (DD) versus cancer risk in individuals with PTEN mutations.

Germline mutations of the tumor suppressor gene PTEN are associated with a group of genetic disorders that increase the risk of certain cancers, cognitive and behavioral deficits, benign growths and tumors (i.e., hamartomas), and macrocephaly. These disorders are known collectively as PTEN hamartoma tumor syndrome (PHTS), but they manifest as a broad, difficult-to-predict range of clinical outcomes and have been found to inexplicably result in distinct subsets of patients with either cancer or ASD/DD. In fact, PTEN is one of the most common genes associated with ASD.

Previous studies have indicated associations between CNVs, or large structural genetic changes involving the deletion and/or duplication of DNA segments, and both neurodevelopmental disorders and sporadic cancers. The researchers therefore hypothesized that specific CNVs may be linked with either ASD/DD or cancer incidence in individuals with PTEN mutations.

To investigate these associations, Dr. Eng's team quantified the total number of CNVs in patients from three PHTS phenotype groups (i.e., PHTS-ASD/DD, PHTS-no ASD/DD and PHTS-cancer) with similar PTEN mutations. They demonstrated an overall increased CNV burden per individual in patients with ASD/DD compared to those without ASD/DD or those with cancer. However, they found no difference in CNV burden between patients without ASD/DD and patients with cancer.

They also determined that 10% of the PHTS-ASD/DD patients carried CNVs associated with neurodevelopmental disorders - compared to only 2.6% of PHTS-no ASD/DD and 1.7% of PHTS-cancer patients - while no CNVs involved in known cancer-associated genes were identified in PHTS-cancer patients.

These findings suggest that CNVs operate as genomic modifiers of ASD/DD risk in individuals with PHTS, meaning they not only provide insight into the ASD/DD versus cancer phenotypes associated with PTEN mutations but also may aid in the prediction of clinical outcomes to inform PHTS medical management. Furthermore, the study demonstrates that CNV burden analysis may also be applied to other clinically heterogeneous disorders for which no outcome-specific predictors are known.

Dr. Eng was the first to link PTEN to Cowden Syndrome, which is a PHTS disorder, and subsequently to ASD. She is the inaugural chair of Cleveland Clinic Lerner Research Institute's Genomic Medicine Institute and inaugural director of the Center for Personalized Genetic Healthcare, which includes the PTEN Multidisciplinary Clinic for children and adults with a confirmed or possible diagnosis within the PHTS spectrum.

Credit: 
Cleveland Clinic

Jump in employment seen among Medicaid expansion enrollees, especially the most vulnerable

Getting covered by health insurance may have a major impact on a low-income person's ability to get a job or enroll in school, according to a new study that gives the first direct look at the relationship between the two.

The percentage of low-income people enrolled in Michigan's Medicaid expansion program who had jobs or were enrolled in school jumped six points in one year, the study shows. That outpaced employment gains among the state's general population during that same time.

Even larger increases in employment and school enrollment happened among African-Americans, men, people in their late 30s and 40s, and those with the lowest incomes.

The study of low-income people enrolled in what the state calls the Healthy Michigan Plan was performed by a team at the University of Michigan Institute for Healthcare Policy and Innovation, and published in a new paper in JAMA Network Open.

Year-over-year gains

For the study, the researchers surveyed more than 3,000 people with Healthy Michigan Plan coverage from across the state, first in 2016 and again in 2017 or the first month of 2018. The program is open to all people over age 18 in Michigan who have incomes up to 133% of the poverty level, or about $15,800 for an individual and $32,000 for a family of four in 2017. About 670,000 Michiganders currently have the coverage.

The six-point jump in those who said they had a job or were in school was seen both in those who stayed in the program for the entire period studied, and among the 23% who had left the program by the time the researchers contacted them.

The findings have implications for states that have not expanded Medicaid under the Affordable Care Act, and for those that have applied for or received permission from the federal government to require enrollees to report whether they are working, or doing other qualifying activities, in order to keep Medicaid coverage.

"While on a statewide level, both in the general population and the low-income population, employment levels didn't change much between 2016 and 2017, we saw a clear increase in employment or student status among those in the Healthy Michigan Plan, even among those with health conditions who we might think would need more time to achieve this," says Renuka Tipirneni, M.D., M.Sc., the lead author of the new study and an assistant professor of internal medicine at U-M. "The ability to get access to care, attention for existing or new health issues, and to gain function, appears to have a clear impact on the chances of getting a job or studying or training to get a job later."

Tipirneni is a member of the IHPI team that is carrying out a full evaluation of the Healthy Michigan Plan's effects for the state government. She notes that the study is unique because it includes longitudinal data on the Medicaid expansion enrollee population, including those who have left the program. The team continues to gather data on the longer-term status of these enrollees and former enrollees.

Larger jumps in some groups

The team had previously published findings from their 2016 survey of Healthy Michigan Plan enrollees that showed that nearly 49% were working, 27% were out of work, 11% said they were unable to work and 5% were already enrolled as students. The rest were homemakers or retired.

The new study shows that the combined percentage of those who were employed or students in 2017 was 60%, up from a combined 54% in 2016.
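
The reported gain is measured in percentage points, which is distinct from a relative increase; a quick sketch makes the difference concrete (the figures come from the study, the arithmetic is only illustrative):

```python
# A change from 54% to 60% is a 6-percentage-point gain,
# which corresponds to roughly an 11% relative increase.
before = 0.54  # share employed or in school in 2016
after = 0.60   # share employed or in school in 2017

point_gain = (after - before) * 100                   # percentage points
relative_increase = (after - before) / before * 100   # percent, relative

print(f"{point_gain:.0f} percentage points")          # 6 percentage points
print(f"{relative_increase:.1f}% relative increase")  # ~11.1%
```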

When the researchers delved deeper into the survey results, they found that people aged 35 to 50 had an 8-point increase, people whose incomes were less than one-third of the poverty level had a 9-point jump, and non-Hispanic black enrollees had nearly an 11-point jump. The increase among all men was 6.7 points, compared with 4.8 points for women.

The 6-point gain was an average for Medicaid expansion enrollees across the state. By comparison, the general Michigan population had a steady rate of employment from 2016 to 2017, with 75% of residents between the ages of 19 and 64 being employed or enrolled in school in 2017, according to data from the U.S. Census Bureau.

Among all Michiganders in this age range with incomes at or below a level that would be eligible for enrollment in Medicaid or the Healthy Michigan Plan, employment or student status didn't change from 2016 to 2017, when 43% of the entire group was employed or students.

"Our findings suggest states could achieve goals of fuller employment among low-income residents by expanding Medicaid coverage or maintaining an expansion program," says Susan Goold, M.D., MHSA, M.A., the senior author on the new paper and a professor of medicine. "Good health helps people gain employment or stay employed."

Goold and colleagues are looking at the patterns of coverage among those who enrolled in the Healthy Michigan Plan in its first five years, including how many left the plan because they obtained other coverage such as through an employer or for other reasons.

IHPI is conducting the evaluation required by the Centers for Medicare and Medicaid Services (CMS) of the Healthy Michigan Plan (HMP) under contract with the Michigan Department of Health and Human Services (MDHHS). Data collection for this paper was funded by MDHHS and CMS for the purposes of the evaluation, but the study findings do not represent the official views of either agency.

Credit: 
Michigan Medicine - University of Michigan

Rates of new colorectal cancer cases as people turn 50, historically begin screening

What The Study Did: Cancer registries representing about 28% of the U.S. population were used to examine how new cases of colorectal cancer increased from age 49 to 50, the age when many people of average risk for the disease historically began screening, although screening age recommendations vary.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Jordan J. Karlitz, M.D., of the Southeast Louisiana Veterans Health Care System and Tulane University School of Medicine in New Orleans, is the corresponding author.

(10.1001/jamanetworkopen.2019.20407)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Spike in colorectal cancer from age 49 to 50 suggests many undiagnosed before screenings

image: Lead study author Dr. Jordan Karlitz, associate clinical professor of Medicine at Tulane University School of Medicine and staff gastroenterologist at the Southeast Louisiana Veterans Healthcare System.

Image: 
Photograph by Cheryl Gerber

A year-by-year age analysis of colorectal cancer rates among U.S. adults finds a 46% increase in new diagnoses from ages 49 to 50, indicating that many latent cases of the disease are likely going undiagnosed until routine screenings begin at 50, according to a new study in JAMA Network Open.

Researchers found that almost 93% of the cases discovered at age 50 were invasive, meaning that most would require more aggressive treatment, including surgery, and had likely been present for some time before diagnosis.

"Our findings suggest a high case burden of preclinical, undetected early onset colorectal cancers in patients younger than 50 that is not reflected in observed incidence rates," said lead study author Dr. Jordan Karlitz, associate clinical professor of Medicine at Tulane University School of Medicine and staff gastroenterologist at the Southeast Louisiana Veterans Health Care System.

Colorectal cancer is the second leading cause of cancer deaths in the United States. As rates for younger adults continue to rise, there is considerable debate about whether to lower the age for recommended screenings. In 2018, the American Cancer Society called for routine screenings to start at 45. However, the U.S. Preventive Services Task Force, which sets federal screening standards, currently recommends average-risk screening begin at age 50. The agency is studying the issue to determine whether changes will improve outcomes.

Those against beginning screenings at age 45 have argued that incidence rates among those aged 45 to 49 are relatively low compared with those aged 50 to 54. The study authors suspected the risk for people in their mid-to-late 40s is underestimated because incidence data for those ages would likely include only cases caught because patients presented with symptoms and/or had a family history of cancer, in contrast to those 50 and older, whose cancers are also detected through screening.

To assess this, they examined colorectal cancer incidence rates in one-year increments between the ages of 30 and 60 from 2000 to 2015. They suspected that if many asymptomatic cases of the disease were going undetected, there would be a marked increase in cases between ages 49 and 50, when screenings begin.

Researchers found a steep increase from 34.9 diagnoses per 100,000 people at age 49 to 51 cases per 100,000 at age 50. Across this one-year age transition, sharp increases were also seen in both men (52.9%) and women (39.1%), in white (46.2%) and black (47.3%) populations, and in colon (51.4%) and rectal (37.6%) cancers. These incidence increases from age 49 to 50 were not seen in prior studies because only age-group ranges were analyzed.
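
The 46% figure reported earlier follows directly from these two incidence rates; a quick arithmetic check (illustrative only, not code from the study):

```python
# Incidence rises from 34.9 to 51 diagnoses per 100,000 between ages 49 and 50.
rate_age_49 = 34.9  # diagnoses per 100,000 at age 49
rate_age_50 = 51.0  # diagnoses per 100,000 at age 50

percent_increase = (rate_age_50 - rate_age_49) / rate_age_49 * 100
print(f"{percent_increase:.0f}%")  # 46%
```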

Researchers also examined the stage at which the cancers were caught and found a spike in localized and regional cancers, which would require surgery and possibly chemotherapy and radiation treatment.

The study adds fuel to the debate about whether screenings should begin at age 45. Karlitz said the combined burden of undetected and detected early onset colorectal cancer cases for those 45-49 may actually approach that of individuals in their early 50s.

"Our data support that the incidence of colorectal cancer increases substantially among individuals in their early 50s compared with individuals in their late 40s, not because rates are truly lower among those aged 45 to 49 years, but because colorectal cancers are present but undetected until diagnosed when screening is ultimately initiated," he said.

A limitation of the study is its population-based design, which limited researchers' ability to determine exactly which patients had cancers detected at age 50 through screening versus diagnostic testing.

"Nevertheless, the significantly high rate of invasive cases supports that almost all cancers accounted for in the rate increase from age 49 to 50 required aggressive treatment, regardless of how they were detected," Karlitz said.

Credit: 
Tulane University

Lung cancer screening decision aid delivered through tobacco quitlines improves informed decision-making

In the first comparative clinical trial of a lung cancer screening decision aid versus standard educational information, researchers from The University of Texas MD Anderson Cancer Center have shown that a decision aid delivered through tobacco quitlines effectively reaches a screening-eligible population and results in informed decisions about lung cancer screening.

Lung cancer is the second most common cancer diagnosis among adults in the U.S. and is the most common cause of cancer-related deaths. Screening for lung cancer with low-dose CT scans is the only secondary preventive service shown to decrease mortality from lung cancer; the primary action to prevent lung cancer is to avoid, or quit using, tobacco products.

While lung cancer screening can save lives, it also carries potential harms. For this reason, the Centers for Medicare & Medicaid Services (CMS) covers lung cancer screening as a preventive service as long as certain requirements are met, including use of a decision aid during a shared decision-making visit between the patient and physician or qualified practitioner. The U.S. Preventive Services Task Force recommends the annual screening for adults aged 55 to 80 years who have a 30 pack-year smoking history, meaning the equivalent of smoking a pack of cigarettes a day for 30 years.
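
The pack-year criterion is simple arithmetic: packs smoked per day multiplied by years of smoking. A small illustrative helper (not part of the study) shows why a pack a day for 30 years and two packs a day for 15 years both meet the threshold:

```python
# Pack-years quantify cumulative smoking exposure:
# pack-years = packs smoked per day x years smoked.
def pack_years(packs_per_day: float, years: float) -> float:
    return packs_per_day * years

print(pack_years(1, 30))   # 30.0 -> meets the 30 pack-year threshold
print(pack_years(2, 15))   # 30.0 -> also meets it
print(pack_years(0.5, 20)) # 10.0 -> does not
```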

In the study published today in JAMA Network Open, the research team worked with 13 tobacco quitlines to identify callers eligible for screening. The 516 quitline clients who enrolled in the study were randomized to receive a decision aid video called "Lung Cancer Screening: Is It Right for Me?" or a standard lung cancer screening brochure for the control group.

Primary outcomes for the study were preparation for decision-making and decisional conflict. Secondary outcomes included knowledge of lung cancer screening, intentions to be screened and completion of screening within six months after receiving the decision aid or brochure.

At the one-week follow up, 67.4% of participants who received the decision aid reported they were well prepared to make a screening decision, compared to 48.2% of participants who received the standard educational material. Among those who received the decision aid, 50% felt informed about their decision choice and 68% reported being clear about their values related to the harms and benefits of screening, compared to 28.3% and 47.4%, respectively, among the control group.

"The quitline clients who received the decision aid were more assured about what was important to them in making the choice about screening and felt better informed," said Robert Volk, Ph.D., professor of Health Services Research and lead author of the study. "Their knowledge of the harms and benefits of screening was much greater than people who received standard educational information. The clients in the control group were making screening choices while being less prepared and aware of the trade-offs between benefit and harms. We want to head off the uninformed choice and help patients make good, informed decisions about screening."

The decision aid video explains eligibility for screening, lung cancer epidemiology and risk factors, and potential harms from screening, including false-positive results, radiation exposure and risks of invasive diagnostic procedures. The decision aid encourages patients to consider their values while weighing the benefits and harms of screening. The video also shows a patient receiving a CT scan.

By the six-month follow up, approximately 30% of participants in both groups had scheduled a lung cancer screening; the difference in screening rates between groups was not statistically significant. Nationally, about 6% of people at risk for lung cancer due to smoking undergo screening, according to the National Institutes of Health.

Based on the study results, funding was received to begin implementing the model nationally by training quitline staff to identify callers who are eligible for screening and provide them with the decision aid.

"We've demonstrated that this is a very effective way to identify people at risk for lung cancer," Volk said. "There's potential to reach thousands of people who are eligible for screening and already addressing their risk for lung cancer by seeking cessation services."

Limitations of the study include reliance on self-reported screening behavior. In addition, quitline callers had to express interest in lung cancer screening when asked by quitline staff in order to participate in the study.

Credit: 
University of Texas M. D. Anderson Cancer Center

Virtual crossmatching improves quality of life for kidney transplant patients

image: Virtual antibody crossmatching is a safe and efficient way of selecting kidney transplant recipients.

Image: 
American College of Surgeons

CHICAGO (January 31, 2020): Virtual antibody crossmatching is a safe and efficient way of selecting kidney transplant recipients. Two years after implementing the process, the Medical University of South Carolina (MUSC) division of transplant surgery, Charleston, concluded that the technique was just as accurate and sensitive as physical crossmatch, the current gold standard, and much quicker. Virtual crossmatching reduced the time kidneys were kept on ice while awaiting identification of a suitable recipient, improved scheduling for surgeons and operating room staff, and alleviated emotional and logistical stress on patients who were called to the hospital only to be sent home hours later after a more suitable recipient was identified. A study of the process and its effects on clinical and surgical practice outcomes appears in an "article in press" on the Journal of the American College of Surgeons website in advance of print publication.

Since 1969, physical crossmatching has been used to determine if the immune systems of an organ donor and an intended recipient are compatible. Blood from donor and recipient are mixed in a test tube. If a match is incompatible, recipient blood cells attack and destroy donor cells.1

However, virtual crossmatching applies the recipient's serum to microbeads that detect signals from the donor's antibodies using a specialized device to find antibodies that may react. The process does not reveal an actual reaction between donor and recipient cells. Rather, it forecasts whether such a reaction would occur.2

A physical crossmatch is highly sensitive, but it eats up valuable time. Donor lymph nodes are shipped to the transplant center, cells are mixed with serum from the potential recipient, and surgeons, recipient, and the transplant center then must wait six hours to learn whether there is an antibody reaction before scheduling the operation.

Because of the concern about potential immune system incompatibility, transplant centers typically call in three possible recipients for every donated organ. "We ask three patients who are next on the transplantation list to come into the hospital just in case there are problems with incompatibility. Think about the time, effort, and stress that puts on a patient. It's not uncommon for a patient to be called in two, three, or four times before they actually go forward with transplantation," said David J. Taber, PharmD, a study coauthor and an associate professor of surgery in the division of transplant surgery at MUSC.

The delay prolongs the time a donated organ is kept on ice, called cold ischemia time (CIT). "You want to minimize the amount of time an organ is on ice because the longer it is outside the body, the lower quality it becomes. After prolonged CIT, there is the chance the organ will not function right away, called delayed graft function (DGF), which leads to poor outcomes in the long run," said Vinayak S. Rohan, MD, FACS, lead author of the study and an assistant professor of surgery in the division of transplant surgery at MUSC.

Delays are particularly problematic since the Kidney Allocation System was revised in 2014 and now allows transplantations to patients who are highly sensitized.3 "Previously, donor organs were obtained locally, so transplant centers could afford to wait six hours for the physical crossmatch results. Now, in South Carolina, we are getting organs from California. Shipping them takes long enough. If you cannot predict what is going to happen for six more hours, you may not be able to give the organ to the intended recipient and do a disservice to the purpose of the allocation system," said Dr. Rohan.

The study is a before-and-after comparison of patient outcomes two years after the transplant surgery team implemented virtual crossmatching. Of 825 patients who received a kidney transplant between 2014 and 2018, 505 underwent surgery before, and 227 after, virtual crossmatching was instituted.

Standard measures of clinical quality were the same in both groups. The incidence of delayed graft function was 19 percent before and 17 percent after implementation; graft failure within a year was 4 percent before and 3 percent after; mortality within a year was 2 percent before and 1 percent after.

CIT for long-distance donor organs decreased by 2.4 hours, and delayed graft function declined by 26 percent. Importantly, despite a highly sensitized population, there were no hyperacute rejections. "Hyperacute rejection is an extremely rare but catastrophic event when an organ rejects while the surgeon is operating. With sizable segments of patients who were already highly sensitized, there were similar rates of graft survival, patient survival, and kidney graft survival, confirming at a patient-outcomes level that we can employ virtual crossmatching without inducing any potential harm or risk to the patient," Dr. Taber said.

"Because we don't need to do physical crossmatching for the majority of patients, we also can improve surgeons' quality of life by being able to schedule the operation even before an organ arrives," Dr. Rohan said.

"This technique also reduces the time and money spent on having backup patients traveling to and waiting in the hospital. Emotionally, this approach helps patients a lot," Dr. Taber said.

Dr. Rohan's coauthors are Nicole Pilch, PharmD; Omar Moussa, PhD; Satish N. Nadig, MD, FACS; Derek Dubay, MD, FACS; and Prabhakar K. Baliga, MD, FACS.

Credit: 
American College of Surgeons

Characterization of unique PMEPA1 gene splice variants (isoforms d and e) from RNA Seq profiling pro

image: Model for biological function categorization of PMEPA1 isoforms (c, d and e) in the context of prostate cancer. Our study suggested a model in which evaluation of PMEPA1 isoforms revealed a potentially new mechanism of prostate cancer cell adaptation from androgen-dependent to hormone-independent, TGF-β controlled cell growth. PMEPA1-e was androgen responsive, whereas PMEPA1 isoforms c and d were TGF-β responsive, and only isoform d inhibited TGF-β signaling.

Image: 
Correspondence to - Hua Li - hli@cpdr.org and Shashwat Sharad - ssharad@cpdr.org

The cover for issue 4 of Oncotarget features Figure 8, "Model for biological function categorization of PMEPA1 isoforms (c, d, and e) in the context of prostate cancer," by Sharad, et al.

In addition to 4 reported PMEPA1 isoforms, one novel isoform, PMEPA1-e, was identified by RNA Seq analysis of hormone-responsive VCaP and LNCaP cells and human prostate cancer samples from The Cancer Genome Atlas dataset.

The researchers analyzed the structures, expressions, biological functions and clinical relevance of the PMEPA1-e isoform and the less characterized isoforms c and d in the context of prostate cancer and AR/TGF-β signaling.

The expression of PMEPA1-e was induced by androgen and AR.

Taken together, their findings are the first to define prostate tumorigenesis mediated by the PMEPA1-d and -e isoforms, providing novel insights into new strategies for prognostic evaluation and treatment of prostate tumors.

Dr. Hua Li and Dr. Shashwat Sharad from the Center for Prostate Disease Research, Department of Surgery, Uniformed Services University of the Health Sciences as well as the Walter Reed National Military Medical Center, Bethesda, Maryland, USA said in their Oncotarget article, "Prostate cancer is the most commonly diagnosed male malignancy and second leading cause of cancer-related deaths in the USA."

It was shown that the methylation of PMEPA1 gene promoter accounted for the silencing of PMEPA1 in prostate cancer cells in vitro and in vivo.

PMEPA1 was also reported as a TGF-β regulated gene in the context of both prostate cancer and non-prostate solid tumors, including colon, lung and breast cancers.

Further, a recent study showed that the loss of membrane-anchored PMEPA1 protein facilitated metastasis of prostate cancer by activating TGF-β signaling through sequestering SMAD2/3 in a proteasome-independent way.

Cumulatively, these findings underscored the multi-functional features of the PMEPA1 gene and further suggested that its expression and biological functions depend on the cellular context centering on androgen and TGF-β signaling.

The alternative splicing variant mechanism had also been shown to be important for diversifying functions of tumor-associated genes.

Further, earlier studies from their and other groups explored PMEPA1 gene isoforms in the initiation and development of prostate tumors via interrupting AR and/or TGF-β signaling.

Here, the Oncotarget authors focused on defining the expressions, regulations and biological behaviors/functions of understudied PMEPA1 isoforms in the context of both androgen and TGF-β signaling, and on further exploring the clinical significance and relevance of these isoforms in prostate tumors.

The Li/Sharad Research Team concluded in their Oncotarget study that gene isoform ratio could potentially predict the gene functional consequences and disease progression.

Credit: 
Impact Journals LLC

Do less and get stronger: Science proves you can lift less with better results

Weightlifters could do less and get stronger by changing the amount they lift each session, according to new research.

Sports scientists from the University of Lincoln, UK, compared the average weights lifted by two groups over six weeks: one using a traditional training method of a "one rep max" - the maximum weight an athlete could lift - and one using a load velocity profile, where the weights were tailored so that athletes lifted either more or less at each session.

All who used the load velocity profile became stronger despite lifting less overall during the six-week period.

Traditionally, the one rep max would be used to dictate the weight load for all sessions.

Researchers established the one rep max in both groups. They then used a linear positional transducer - essentially a specialised stopwatch and tape measure - to record the length of time it took to lift the weight and the distance the weight moved, establishing a "velocity measurement" in one of the groups. That, coupled with the one rep max, established the load velocity profile for the athlete.

At each session, the load velocity group completed a warm-up consisting of a series of repetitions where the weight load was gradually increased and their velocity measurement taken. Each rep was recorded and compared with their pre-established load-velocity profile.

This comparison enabled the participants' training load to be adjusted based on their performance that day: if the athlete was moving the same load at a faster velocity, the weight was increased, but if they were lifting slower, the weight load would be reduced.
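
The adjustment rule described above can be sketched as follows. The 5% step and 3% tolerance here are illustrative assumptions, not values reported by the researchers:

```python
# Minimal sketch of velocity-based load adjustment: compare the measured bar
# velocity against the athlete's load-velocity profile and adjust the weight.
def adjust_load(planned_load_kg: float, measured_velocity: float,
                profile_velocity: float, step: float = 0.05,
                tolerance: float = 0.03) -> float:
    """Raise the load if the bar moved faster than the profile predicts,
    lower it if slower, keep it if within tolerance."""
    if measured_velocity > profile_velocity * (1 + tolerance):
        return round(planned_load_kg * (1 + step), 1)  # moving fast: add weight
    if measured_velocity < profile_velocity * (1 - tolerance):
        return round(planned_load_kg * (1 - step), 1)  # moving slow: reduce weight
    return planned_load_kg                             # on profile: keep the load

print(adjust_load(100, 0.60, 0.50))  # faster than profile -> 105.0
print(adjust_load(100, 0.40, 0.50))  # slower than profile -> 95.0
```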

The findings can be used to improve muscular strength and power, and have positive implications for the management of fatigue during resistance training.

Dr Harry Dorrell from the University of Lincoln's School of Sport and Exercise Science led the study. He said: "There are a lot of factors which can contribute to an athlete's performance on a particular day, such as how much sleep they have had, nutrition, or motivational factors, but with traditional percentage-based methods we would have no insight into how this affects their strength.

"The velocity-based training enabled us to see if they were up or down on their normal performance and thus adjust the load accordingly. It's about making sure the athlete is lifting the optimal load for them, on that particular day. If you lift too little then you won't stimulate the body as you intend to; but if you lift too much you'll be fatigued, which increases the risk of injury.

"This fatigue won't necessarily happen immediately, either. You could lift too much regularly, and three weeks down the line this will catch up with you and you'll find that the muscles are too fatigued to manage what you believe should be in your ability."

Sixteen men aged 18 to 29, with body masses ranging from 70kg to 120kg and at least two years' weight training experience, took part in the trial, which included two training sessions a week over the course of six weeks.

They performed a back squat, bench press, strict overhead press, and a conventional deadlift, and the results at the start and end of the six weeks training were recorded.

Researchers also recorded the athletes' countermovement jump, a measure of explosive lower-body power, and found that only the velocity group's had improved.

Following the trial, those using the velocity based training method could lift an average of 15kg more on the back squat than when they started, rising from 147kg to 162kg, despite their training loads being an average of nine per cent less at each session; they lifted six per cent less on the bench press per session but could take on an extra 8kg by the final session; the overhead press saw a 4kg increase in the one rep max despite lifting six per cent less during training; and the deadlift rose from 176kg to 188kg even with an average decrease of two per cent on their training loads.

Dr Dorrell added: "While some of these changes could be considered as only 'small improvements' and were similar to the group using the traditional training method, the velocity group lifted significantly less in order to see the gains they made. The idea of velocity based training has been around for a while, but until now there hasn't been any science to prove that it actually works; the science has finally caught up."

Commercially available kinetic measuring devices, including apps, now mean that anyone could easily carry out the same training method at home or in the gym.

Credit: 
University of Lincoln

Pinpointing rare disease mutations

image: Artist's interpretation of mouse and human genetic data and immune cells representing human diseases.

Image: 
Spencer Phillips / EMBL-EBI

31 January 2020, Cambridge - Researchers at EMBL's European Bioinformatics Institute (EMBL-EBI) and Queen Mary University of London have led a study to categorise which genes are essential for supporting life. The results from this study could be a useful new resource to help researchers identify mutations responsible for rare childhood diseases.

Identifying which genes are linked to a rare disease is one of the most difficult challenges geneticists face. The low prevalence of these diseases within the population makes it difficult to research and fully understand their causes. However, huge advances in the diagnosis of rare diseases are now being made thanks to innovations in sequencing technology.

This research, published in Nature Communications, compares knockout mouse viability and phenotyping data from the International Mouse Phenotyping Consortium (IMPC) with human cell line data provided by the Broad Institute's Project Achilles to create categories indicating how crucial a gene is to producing viable life.

The researchers also identified new mutations likely responsible for rare childhood diseases by comparing their data with unsolved cases of genetic disorders identified in the 100,000 Genomes Project and the Deciphering Developmental Disorders (DDD) datasets.

Defining genes essential for life

"Loss of gene function is often referred to as a binary concept: lethal or viable," says Violeta Muñoz-Fuentes, Biologist, Mouse Informatics at EMBL-EBI. "In this study we show that gene essentiality is more of a spectrum: cellular lethal, developmental lethal, subviable, viable with a visible phenotype, and viable without a visible phenotype."

The scientists define these categories for 3,819 genes to create an open-access database that can be used to benefit other researchers and provide insight for clinical applications.

"When you sequence a person's genome it's not always one mutation that stands out as altering a gene's function," says Terry Meehan, Coordinator of Mouse Informatics at EMBL-EBI. "We currently don't have a handle on which genes are important for development and which have a minor impact."

Loss of function gene categories

Cellular lethal: genes essential for cell viability

Developmental lethal: genes essential for organism development

Viable: organisms fully develop

Subviable: organism survival is less than expected
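
The loss-of-function categories above form an ordered spectrum, from most to least essential, which could be represented as follows (a hypothetical sketch; only the category names come from the study):

```python
# The essentiality spectrum described in the study, ordered from
# most essential (cellular lethal) to least (viable, no phenotype).
ESSENTIALITY_SPECTRUM = [
    "cellular lethal",       # essential for cell viability
    "developmental lethal",  # essential for organism development
    "subviable",             # organism survival lower than expected
    "viable, visible phenotype",
    "viable, no visible phenotype",
]

def rank(category: str) -> int:
    """Lower rank = more essential on the spectrum."""
    return ESSENTIALITY_SPECTRUM.index(category)

print(rank("cellular lethal") < rank("subviable"))  # True
```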

Diagnosing rare disorders

"This study combines multiple sources of data from large scale projects to identify new candidate genes that, when mutated, are likely to have a causal relationship with rare human disorders," says Pilar Cacheiro, Research Fellow at Queen Mary University of London. "Nearly 6% of the population are affected by these diseases during their lives."

Advances in whole genome sequencing (WGS) are changing the way we research and diagnose rare genetic diseases. However, the majority of rare disease patients remain undiagnosed due to a lack of detection or because a previously unknown gene is disrupted. This study furthers our understanding of rare disease genes by providing clinicians and researchers with an open access resource, which can be used to identify high-quality candidates for rare disease mutations.

"Of particular interest for application to healthcare, we demonstrate that the set of genes that are essential for organism development is particularly associated with known human developmental disorders," says Damian Smedley, Reader in Computational Genomics at Queen Mary University of London. "This provides candidates for undiscovered causative genes for these conditions."

Several high-scoring candidates from this study have been added to the open access resource GeneMatcher, used by researchers and clinicians all over the globe to share gene information. You can also freely access the study data in EMBL-EBI's Biostudies and at the International Mouse Phenotyping Consortium.

Credit: 
European Molecular Biology Laboratory - European Bioinformatics Institute