Tech

Bone bandage soaks up pro-healing biochemical to accelerate repair

image: This graphic shows the healing progress of a fracture in a mouse treated with a new type of bone bandage that traps native adenosine (top), with a bandage preloaded with external adenosine (middle), and with a bandage containing no adenosine at all (bottom). The sequestered adenosine visibly helps the breaks heal faster through 14 days, with cross sections of the bones showing greater density and vascularization after 21 days.

Image: 
Shyni Varghese, Duke University

DURHAM, N.C. -- Researchers at Duke University have engineered a bandage that captures and holds a pro-healing molecule at the site of a bone break to accelerate and improve the natural healing process.

In a proof-of-principle study with mice, the bandage helped to accelerate callus formation and vascularization to achieve better bone repair by three weeks.

The research points toward a general method for improving bone repair after damage that could be applied to medical products such as biodegradable bandages, implant coatings or bone grafts for critical defects.

The results appear online on December 12 in the journal Advanced Materials.

In 2014, Shyni Varghese, professor of biomedical engineering, mechanical engineering and materials science, and orthopedics at Duke, was studying how popular biomaterials made of calcium phosphate promote bone repair and regeneration. Her laboratory discovered that the biomolecule adenosine plays a particularly large role in spurring bone growth.

After further study, they found that the body naturally floods the area around a new bone injury with the pro-healing adenosine molecules, but those locally high levels are quickly metabolized and don't last long. Varghese wondered if maintaining those high levels for longer would help the healing process.

But there was a catch.

"Adenosine is ubiquitous throughout the body in low levels and performs many important functions that have nothing to do with bone healing," Varghese said. "To avoid unwanted side effects, we had to find a way to keep the adenosine localized to the damaged tissue and at appropriate levels."

Varghese's solution was to let the body dictate the levels of adenosine while helping the biochemical stick around the injury a little bit longer. She and Yuze Zeng, a graduate student in her laboratory, designed a biomaterial bandage, applied directly to the broken bone, that contains boronate molecules that grab onto adenosine. Because the bonds between the molecules do not last forever, the bandage slowly releases adenosine at the injury without letting it accumulate elsewhere in the body.

In the current study, Varghese and her colleagues first demonstrated that porous biomaterials incorporated with boronates were capable of capturing the local surge of adenosine following an injury. The researchers then applied bandages primed to capture the host's own adenosine or bandages preloaded with adenosine to tibia fractures in mice.

After more than a week, the mice treated with either type of bandage were healing faster than those with bandages not primed to capture adenosine. After three weeks, while all mice in the study showed healing, those treated with either kind of adenosine-laced bandage showed better bone formation, higher bone volume and better vascularization.

The results showed that not only do the adenosine-trapping bandages promote healing, they work whether they're trapping native adenosine or are artificially loaded with it, which has important implications in treating bone fractures associated with aging and osteoporosis.

"Our previous work has shown that patients with osteoporosis don't produce adenosine when their bones break," Zeng said. "These early results indicate that these bandages could help deliver the needed adenosine to repair their injuries while avoiding potential side effects."

Varghese and Zeng see several other paths forward for biomedical applications as well. For example, they imagine a biodegradable bandage that traps adenosine to help heal broken bones and then harmlessly degrades in the body. For osteoporotic patients, they envision a permanent bandage that can be reloaded with adenosine at sites that suffer repeated injuries. They also picture a lubricating gel armed with adenosine that could help prevent bone injuries caused by the wear and tear associated with reconstructive joint surgeries or other medical implants.

"We've demonstrated that this is a viable approach and filed a patent for future devices and treatments, but we still have a long way to go," said Varghese. "The bandages could be engineered to capture and hold on to adenosine more efficiently. And of course we also have to find out whether these results hold in humans or could cause any side effects."

Credit: 
Duke University

Salmonella the most common cause of foodborne outbreaks in the European Union

Nearly one in three foodborne outbreaks in the EU in 2018 were caused by Salmonella. This is one of the main findings of the annual report on trends and sources of zoonoses published today by the European Food Safety Authority (EFSA) and the European Centre for Disease Prevention and Control (ECDC).

In 2018, EU Member States reported 5 146 foodborne outbreaks affecting 48 365 people. A foodborne disease outbreak is an incident during which at least two people contract the same illness from the same contaminated food or drink.

Slovakia, Spain and Poland accounted for 67% of the 1 581 Salmonella outbreaks. These outbreaks were mainly linked to eggs.
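The headline figure can be cross-checked with a quick calculation; the counts below are the ones reported in the article, and the rounding is my own.

```python
# Counts reported by EFSA/ECDC for 2018 (taken from the article above).
total_outbreaks = 5146
salmonella_outbreaks = 1581

# Salmonella's share of all reported foodborne outbreaks.
share = salmonella_outbreaks / total_outbreaks
print(f"Salmonella share of outbreaks: {share:.1%}")  # 30.7%, i.e. "nearly one in three"
```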

"Findings from our latest Eurobarometer show that less than one third of European citizens rank food poisoning from bacteria among their top five concerns when it comes to food safety. The number of reported outbreaks suggests that there's room for raising awareness among consumers, as many foodborne illnesses are preventable by improving hygiene measures when handling and preparing food," said EFSA's chief scientist Marta Hugas.

Salmonellosis was the second most commonly reported gastrointestinal infection in humans in the EU (91 857 cases reported), after campylobacteriosis (246 571).

West Nile virus and STEC infections at unusually high levels

By far the highest increase in the zoonoses covered by this report was in the number of West Nile virus infections. Cases of this zoonotic mosquito-borne disease were seven times higher than in 2017 (1 605 versus 212) and exceeded all cases reported between 2011 and 2017.

"The reasons for the peak in 2018 are not fully understood yet. Factors like temperature, humidity or rainfall have been shown to influence seasonal activity of mosquitoes and may have played a role. While we cannot predict how intense the next transmission seasons will be, we know that the West Nile virus is actively circulating in many countries in the EU, affecting humans, horses and birds. ECDC is stepping up its support to countries in the areas of surveillance, preparedness, communication and vector control", said ECDC's chief scientist Mike Catchpole.

Most locally acquired West Nile virus infections were reported by Italy (610), Greece (315) and Romania (277). Czechia and Slovenia reported their first cases since 2013. Italy and Hungary have also registered an increasing number of West Nile virus outbreaks in horses and other equine species in recent years.

Shiga toxin-producing E. coli (STEC) has become the third most common cause of foodborne zoonotic disease with 8 161 reported cases - replacing yersiniosis with a 37% increase compared to 2017. This may be partly explained by the growing use of new laboratory technologies, making the detection of sporadic cases easier.

The number of people affected by listeriosis in 2018 is similar to 2017 (2 549 in 2018 against 2 480 the previous year). However, the trend has been upward over the past ten years. Of the zoonotic diseases covered by the report, listeriosis accounts for the highest proportion of hospitalised cases (97%) and highest number of deaths (229), making it one of the most serious foodborne diseases.

The report also includes data on Mycobacterium bovis, Brucella, Yersinia, Trichinella, Echinococcus, Toxoplasma, rabies, Coxiella burnetii (Q fever), and tularaemia.

Credit: 
European Centre for Disease Prevention and Control (ECDC)

Standard pathology tests outperform molecular subtyping in bladder cancer

image: Dr. Vinata B. Lokeshwar and graduate students Sarrah S. Lahorewala and Daley S. Morera.

Image: 
Phil Jones, Senior Photographer, Augusta University

While trying to develop a comparatively easy, inexpensive way to give physicians and their patients with bladder cancer a better idea of likely outcome and best treatment options, scientists found that sophisticated new subtyping techniques designed to do this provide no better information than long-standing pathology tests.

They looked at several sets of data on cancer specimens from patients with muscle invasive bladder cancer, a high-grade cancer associated with high mortality rates. The datasets included the one used to determine emerging molecular subtypes, and had outcome information on patients.

They consistently found that molecular subtyping of bladder tumors, which is currently being offered to patients, was outperformed by standard tests long used by pathologists to characterize cancer as low- or high-grade and to determine the extent of its invasion into the bladder wall, surrounding fat, lymph nodes, blood vessels and beyond. They report the findings in a study featured on the cover of the Journal of Urology.

"Muscle invasive bladder cancer is aggressive, it often has a very bad prognosis," says Dr. Vinata B. Lokeshwar, chair of the Department of Biochemistry and Molecular Biology at the Medical College of Georgia at Augusta University. "Everyone is trying to find out how to improve diagnosis, treatment and survival."

"Genetic profiling of a patient's tumor definitely has value in enabling you to discover the drivers of growth and metastasis that help direct that individual's treatment, even as it helps to identify new treatment targets," says Lokeshwar, the study's corresponding author and a member of the Georgia Cancer Center. "But using this information to subtype tumors does not appear to add diagnostic or prognostic value for patients."

Rather the investigators suggest that more study is needed before molecular subtypes are used to help guide patient care.

Evolving diagnostic approaches include compiling databanks on gene expression and mutations present in a cancer type to find patterns of gene expression that are then used to subtype tumors that "pathologically look similar" but are molecularly different. The idea is that molecular subtypes are better equipped to indicate which cancer is more or less aggressive and to help steer treatment options like whether chemotherapy before surgery to remove a diseased bladder is better.

It was RNA sequencing, or RNA-Seq, and a federal databank of genetic material from a wide range of cancers that enabled investigators from around the globe to examine gene expression in a particular tumor type, looking for common expression of some genes that correlate with a particular clinical outcome. Two subtypes, luminal, which predicts better survival, and basal, which predicts poor prognosis, were first identified for muscle invasive bladder cancer, and a total of six subtypes have now emerged. The first paper on subtypes in muscle invasive bladder cancer was published in the journal Nature in 2014.

But in their search to find a simpler, cheaper, widely available test to provide similar insight, investigators found that these emerging subtypes were outperformed by the usual clinical parameters like the tumor's grade and its spread to lymph nodes or blood vessels, Lokeshwar says.

Their work began in earnest with an exhaustive review by graduate students Daley S. Morera and Sarrah S. Lahorewala of the datasets on patients and differing classification methods used to identify the molecular subtypes.

They found 11 genes that were common to all subtype classification methods. They reasoned that if they were going to develop a widely available test, subtyping based on these common genes might suffice. They called their new subtyping panel MCG-1.

Instead of doing RNA-Seq, which costs several thousand dollars, they used the readily available reverse transcription quantitative PCR method costing less than $10, which also looks at gene expression and is actually used to verify RNA-Seq data, Lokeshwar says.
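The article does not give the lab's exact quantification workflow; as an illustration, relative gene expression from RT-qPCR data is commonly computed with the 2^(-ΔΔCt) (Livak) method, sketched below with invented Ct values.

```python
# Illustrative sketch of the 2^(-delta-delta-Ct) method commonly used to
# quantify relative gene expression from RT-qPCR data. All Ct values
# below are invented for the example; they are not from the study.

def relative_expression(ct_gene_sample, ct_ref_sample,
                        ct_gene_control, ct_ref_control):
    """Fold change of a gene in a sample vs. a control, normalized
    to a reference (housekeeping) gene."""
    delta_ct_sample = ct_gene_sample - ct_ref_sample
    delta_ct_control = ct_gene_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# A gene crossing threshold 2 cycles earlier in the tumor sample
# (after normalization) corresponds to ~4-fold higher expression.
fold = relative_expression(ct_gene_sample=24.0, ct_ref_sample=18.0,
                           ct_gene_control=26.0, ct_ref_control=18.0)
print(fold)  # 4.0
```

Because each PCR cycle roughly doubles the product, expression differences fall out of cycle-threshold (Ct) differences as powers of two.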

They first looked at their own cohort of 52 patients with bladder cancer, 39 of whom had muscle invasive disease. They found MCG-1 was only 31-36% accurate at predicting important indicators like likelihood of metastasis; disease-specific survival, meaning surviving bladder cancer; or overall survival, meaning survival from all causes of death from the time of cancer diagnosis or beginning of treatment until the study's end.

Recognizing that the dataset they used was comparatively small and that they did not use RNA-Seq for analysis, they then used three patient datasets from the cancer database ONCOMINE, which had more patients -- 151 with muscle invasive bladder cancer -- and also used RNA-Seq to look at gene expression.

"We found the same thing: MCG-1 could not predict disease-specific mortality," Lokeshwar says. On some patients in this dataset, information on response to chemotherapy, like commonly used cisplatin-based chemotherapy following surgical removal of the bladder, was available but subtypes could not predict chemotherapy response either, she says.

Next they looked at the dataset that has been used by a large network of investigators to identify the subtypes, The Cancer Genome Atlas, or TCGA. TCGA is a project of the National Cancer Institute and National Human Genome Research Institute that started in 2006, and has collected genetic material for 33 different cancers. The dataset includes routine pathology information on 402 specimens from patients with muscle invasive bladder cancer. It also includes these patients' overall survival and recurrence-free survival - that is, when or if their cancer returned or progressed.

"Up until this point, we had been looking at patients that other groups had not looked at," Lahorewala says.

In this dataset MCG-1 predicted overall survival similar to findings reported from subtypes in several high profile publications.

"We were intrigued why MCG-1 could not predict anything in our cohort or the ONCOMINE dataset but predicted overall survival in the TCGA dataset," says Morera.

So they looked again at the 402 patients whose specimens were in the dataset and found that 21 patients' tumors were actually low-grade. Patients with low-grade tumors have higher survivability and a better prognosis than patients with high-grade muscle invasive disease.

When they removed the low-grade cases from the TCGA dataset, MCG-1 accurately predicted essentially nothing, not even overall survival. Then they included some patients with low-grade tumors in their own dataset, which they had looked at originally, and MCG-1 was now able to predict metastasis and disease-specific survival, the investigators say.

All the existing subtypes are categorized as bad or better based on the cancer prognosis, the investigators say. The presence of the low-grade tumors in the classification of subtypes skewed the data to make it look like subtypes were predicting overall survival when really it was the grade of the cancer itself that was predictive.
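The grade-confounding effect the investigators describe can be illustrated with a toy calculation (all numbers below are invented, not from the study): a marker that merely tracks tumor grade appears "predictive" of survival in a mixed-grade cohort, but the signal vanishes once low-grade cases are excluded.

```python
# Toy illustration (invented numbers) of confounding by tumor grade.
# Each patient is a tuple: (marker_positive, low_grade, survived).
# High-grade patients: the marker is pure noise with respect to survival.
# Low-grade patients: all marker-negative, and mostly survivors.
high_grade = ([(1, False, 0)] * 25 + [(1, False, 1)] * 25 +
              [(0, False, 0)] * 25 + [(0, False, 1)] * 25)
low_grade = [(0, True, 1)] * 18 + [(0, True, 0)] * 2

def survival_rate(patients, marker):
    """Fraction of patients with the given marker status who survived."""
    group = [survived for m, _, survived in patients if m == marker]
    return sum(group) / len(group)

mixed = high_grade + low_grade
print("mixed cohort:    marker- %.2f vs marker+ %.2f"
      % (survival_rate(mixed, 0), survival_rate(mixed, 1)))
print("high-grade only: marker- %.2f vs marker+ %.2f"
      % (survival_rate(high_grade, 0), survival_rate(high_grade, 1)))
```

In the mixed cohort the marker-negative group survives more often (0.61 vs 0.50) only because it absorbs the good-prognosis low-grade patients; restricted to high-grade disease, the marker predicts nothing (0.50 vs 0.50).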

"As investigators the first thing we did was to question our findings, since the results were so different than those reported by others," says Lokeshwar.

With the help of MCG biostatistician and coauthor Dr. Santu Ghosh, they also went back and looked at the same patients in the TCGA datasets and the subtypes they had been assigned by three different classification methods established by a network of bladder cancer researchers.

"Even with these established classification methods, the subtypes were accurate only about 50% of the time in predicting patients' overall survival. And once again, routine pathology parameters like invasion into lymph nodes or blood vessels were more accurate than the established subtypes in predicting patients' prognosis," says Lahorewala.

A recent study by investigators at Sweden's Lund University published in the journal Urologic Oncology supports the MCG investigators' findings. Their study of 519 patients who had their bladders removed because of bladder cancer found subtypes were not associated with cancer-specific survival.

Part of the problem with subtyping may be the inherent heterogeneity of tumors, says Morera. There is tremendous heterogeneity in the gene expression of tumors, even among the same tumor type, like bladder cancer, and within different parts of the same tumor as well. Furthermore, this pattern of heterogeneity can change both during tumor growth and treatment.

"Just because it's bladder cancer does not mean it's the same in all patients. We know that tumors are very dynamic and so there is heterogeneity," Lokeshwar says.

"Because there is heterogeneity, there could be problems when you want to categorize a tumor into a single subtype," says Morera.

As the name indicates, muscle invasive bladder cancer has already spread from the lining of the sac-like organ to its muscular wall. High-grade tumors, if not detected early, will spread into bladder muscle, whereas low-grade tumors are rarely invasive. Painless blood in the urine is the most common sign of bladder cancer, although only a small percentage of the individuals with it have cancer. Smoking is the major risk factor for bladder cancer.

Credit: 
Medical College of Georgia at Augusta University

A test of a customized implant for hip replacement

image: General distribution of stresses in the "endoprosthesis-skeleton" model of the biomechanical structure when a patient is standing on two legs. The stress range is from 1 to 100 MPa.

Image: 
Peter the Great St.Petersburg Polytechnic University

A team of scientists from the Advanced Manufacturing Technologies Center of the National Technology Initiative (NTI) of Peter the Great St. Petersburg Polytechnic University (SPbPU) (head -- Prof. A.I. Borovkov, the Vice-rector for Advanced Projects of Peter the Great St. Petersburg Polytechnic University) developed a mathematical model of an "endoprosthesis-skeleton" system. Special attention was paid to the geometry and internal structure of hip bones. Using advanced computer modeling technologies, the team assessed the integrity of the biomechanical structure for a typical case (a patient standing on two legs). Currently the team is working on a methodology that would reduce the time of such analysis to several days. The results of the study were published in the Vibroengineering PROCEDIA journal and presented at the 12th All-Russia Congress on Fundamental Issues of Theoretical and Applied Mechanics.

Hip joint arthroplasty is a relatively common procedure today. During arthroplasty the upper part of a patient's hip bone is replaced with a metal stem with a spherical joint element, and a cup to allow the head of the joint to rotate inside the pelvis. Medical companies manufacture standard elements with different parameters for ordinary hip replacement operations. However, after some time a certain share of patients experiences issues with implants and requires their replacement. As a rule, this happens due to the insufficient (or excessive) load the endoprosthesis puts on the hip bone causing its tissue to change. Moreover, bone strength can be affected by osteoporosis and other diseases. By the time of the second surgery (the removal of the initial implant and the installation of a revision one) a part of the hip bone becomes unfit as it is unable to bear the load. Therefore, when a patient comes to a secondary operation with a damaged hip bone, standard implants are of no use for them, and a regular cup (even if it is of a bigger size) might not work.

The manufacturers produce special sets of elements that can be combined with each other in different ways to be used in revision operations as well as in patients with compound fractures or cancer. However, such surgeries have high risk rates: any issue with a revision structure or additional bone tissue loss may cause grave health issues. It is extremely important to understand if the prosthesis is able to bear the load, and if the damage to the patient's bone can be avoided. Virtual testing before installation could help eliminate numerous post-surgical complications. However, there is currently no universal assessment method to do so. It takes a long time to build a model on the basis of bone CT results, while the patient's health parameters keep constantly changing. Therefore, the window between diagnostics and the surgery should be as short as possible.

A team of engineers from the Advanced Manufacturing Technologies Center of the National Technology Initiative (NTI) of Peter the Great St. Petersburg Polytechnic University (SPbPU) analyzed the integrity of an "endoprosthesis-skeleton" system for a case of hip joint revision arthroplasty and assessed the durability of the implanted structure and pelvis bones, as well as the distribution of load when a patient stands on two feet. The work describes the peculiarities of simulation model preparation. Currently, this process takes a long time, but the team is working on a method to reduce the whole calculation to several days.

Other groups tend to ignore the pelvic bones entirely in their studies or to consider only simplified models of them. In this case, however, the researchers paid special attention to a detailed description of the pelvic bones, including their external layer and porous internal layer, because the pelvis as a whole is often at risk.

"If we consider the work done by us as a virtual test, the article described the load we put on the patient's skeleton and the implant, as if they were tested in reality. Studies like this help reduce the risk of complications in patients with individually designed implants. Hopefully, they would help prioritize prevention over cure," commented Mikhail Zhmaylo, a lead engineer at the Advanced Manufacturing Technologies Center of the National Technology Initiative (NTI) of Peter the Great St. Petersburg Polytechnic University (SPbPU).

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

Better studying superconductivity in single-layer graphene

Made up of 2D sheets of carbon atoms arranged in honeycomb lattices, graphene has been intensively studied in recent years. As well as the material's diverse structural properties, physicists have paid particular attention to the intriguing dynamics of the charge carriers its many variants can contain. The mathematical techniques used to study these physical processes have proved useful so far, but they have had limited success in explaining graphene's 'critical temperature' of superconductivity, below which its electrical resistance drops to zero. In a new study published in EPJ B, Jacques Tempere and colleagues at the University of Antwerp in Belgium demonstrate that an existing technique is better suited for probing superconductivity in pure, single-layer graphene than previously thought.

The team's insights could allow physicists to understand more about the widely varied properties of graphene, potentially aiding the development of new technologies. The approach used in the study is typically applied to calculate critical temperatures in conventional superconductors. In this case, however, it was more accurate than current techniques in explaining how critical temperatures are suppressed at lower densities of charge carriers, as seen in pure, single-layer graphene. In addition, it proved more effective in modelling the conditions that give rise to interacting pairs of electrons named 'Cooper pairs', which strongly influence the electrical properties of the material.

Tempere's team made their calculations using the 'dielectric function method' (DFM), which accounts for the transfer of heat and mass within materials when calculating critical temperatures. Having demonstrated the advantages of the technique, they now suggest that it could prove useful for future studies aiming to boost and probe for superconductivity in single and bilayer graphene. As graphene research continues to be one of the most diverse, fast-paced fields in materials physics, the use of DFM could better equip researchers to utilise it for ever more advanced technological applications.

Credit: 
Springer

Colliding molecules and antiparticles

Antiparticles - subatomic particles that have exactly opposite properties to those that make up everyday matter - may seem like a concept out of science fiction, but they are real, and the study of matter-antimatter interactions has important medical and technological applications. Marcos Barp and Felipe Arretche from the Universidade Federal de Santa Catarina, Brazil have modelled the interaction between simple molecules and antiparticles known as positrons and found that this model agreed well with experimental observations. This study has been published in EPJ D.

Positrons, the antimatter equivalent of electrons, are the simplest and most abundant antiparticles, and they have been known and studied since the 1930s. Particle accelerators generate huge quantities of high-energy positrons, and most lab experiments require this energy to be reduced to a specific value. Typically, this is achieved by passing the positrons through a gas in an apparatus called a buffer-gas positron trap, so they lose energy by colliding with the molecules of the gas. However, we do not yet fully understand the mechanisms of energy loss at the atomic level, so it is difficult to predict the resulting energy loss precisely.

Some of this energy is lost as rotational energy, when the positrons collide with gas molecules and cause them to spin. Barp and Arretche developed a model to predict this form of energy loss when positrons collide with molecules often used in buffer-gas positron traps: the tetrahedral carbon tetrafluoride (CF4) and methane (CH4), and the octahedral sulphur hexafluoride (SF6). They found that this model compared very well to experimental results.
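The rotational energy a molecule can absorb in such a collision is quantized. As a rough sketch (the paper's actual scattering model is more detailed), a rigid spherical-top rotor like CH4, CF4 or SF6 has levels E_J = B·J(J+1); the rotational constant below is an approximate literature value for methane and is used here only for illustration.

```python
# Rigid-rotor sketch of rotational energy levels for a spherical-top
# molecule: E_J = B * J * (J + 1). This is a simplification of the
# scattering model in the paper; B is an approximate literature value
# for methane (CH4), about 5.2 cm^-1.

B_CH4 = 5.2  # rotational constant, in cm^-1 (approximate)

def rotational_energy(J, B):
    """Energy of rotational level J (same units as B)."""
    return B * J * (J + 1)

# Illustrative spacing: energy a positron would transfer to excite
# CH4 from J=0 to J=2.
delta = rotational_energy(2, B_CH4) - rotational_energy(0, B_CH4)
print(f"J=0 -> J=2 spacing: {delta:.1f} cm^-1")  # 31.2 cm^-1
```

Because these spacings are tiny compared with typical positron energies, many collisions are needed to cool a beam, which is consistent with the buffer-gas trap design described above.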

This model can be applied to collisions between positrons and any tetrahedral or octahedral molecules. Barp and Arretche hope that this improved understanding of how positrons interact with molecules will be used to improve techniques for positron emission tomography (PET) scanning in medicine, for example.

Credit: 
Springer

New methods could help researchers watch neurons compute

Since the 1950s at least, researchers have speculated that the brain is a kind of computer in which neurons make up complex circuits that perform untold numbers of calculations every second. Decades later, neuroscientists know that these brain circuits exist, yet technical limitations have kept most details of their computations out of reach.

Now, neuroscientists reported December 12 in Cell, they may finally be able to reveal what circuits deep in the brain are up to, thanks in large part to a molecule that lights up brighter than ever before in response to subtle electrical changes that neurons use to perform their computations.

Currently, one of the best ways to track neurons' electrical activity is with molecules that light up in the presence of calcium ions, a proxy for a neuron spike, the moment when one neuron passes an electrical signal to another. But calcium flows too slowly to catch all the details of a neuron spike, and it doesn't respond at all to the subtle electrical changes that lead up to a spike. (One alternative is to implant electrodes, but those implants ultimately damage neurons, and it isn't practical to place electrodes in more than a handful of neurons at once in living animals.)

To solve those problems, researchers led by Michael Lin, an associate professor of neurobiology and of bioengineering and a member of the Wu Tsai Neurosciences Institute, and Stéphane Dieudonné, an INSERM research director at the École Normale Supérieure in Paris, focused on fluorescent molecules whose brightness responds directly to voltage changes in neurons, an idea Lin and his team had been working on for years.

Still, those molecules had a problem of their own: Their brightness hadn't always been very responsive to voltage. So Lin and his team at Stanford turned to a well-known method in biology called electroporation, in which researchers use electrical probes to zap holes in cell membranes, with the side effect that the cells' voltage drops rapidly to zero, like a punctured battery. By zapping a library of candidate molecules, Lin and colleagues could select those whose brightness was most responsive to the voltage shift. The resulting molecule, called ASAP3, is the most responsive voltage indicator to date, Lin said.

Dieudonné and his lab focused on another problem: how to scan neurons deep in the brain more efficiently. To make fluorescent molecules such as ASAP3 light up deep in the brain, researchers often use a technique called two-photon imaging, which employs infrared laser beams that can penetrate through tissue. Then, in order to scan multiple neurons fast enough to see a spike, which itself lasts only about a thousandth of a second, researchers must move the laser spot quickly from neuron to neuron -- something hard to do reliably in moving animals. The solution, Dieudonné and colleagues found, was a new algorithm called ultrafast local volume excitation, or ULoVE, in which a laser rapidly scans several points in the volume around a neuron, all at once.

"Such strategies, where each laser pulse is shaped and sent to the right volume within the tissue, constitute the optimal use of light power and will hopefully allow us to record and stimulate millions of locations in the brain each second," Dieudonné said.

Putting those techniques together, the researchers showed in mice that they could track fine details of brain activity in much of the mouse cortex, the top layers of the brain that control movement, decision making and other higher cognitive functions.

"You can now look at neurons in living mouse brains with very high accuracy, and you can track that for long periods of time," Lin said. Among other things, that opens the door to studying not only how neurons process signals from other neurons and how they decide, so to speak, when to spike, but also how neurons' calculations change over time.

In the meantime, Lin and colleagues are focused on further improving on their methods. "ASAP3 is very usable now, but we're confident there will be an ASAP4 and ASAP5," he said.

Credit: 
Stanford University

IU School of Medicine team learns how to predict triple negative breast cancer recurrence

Indiana University School of Medicine researchers have discovered how to predict whether triple negative breast cancer will recur, and which women are likely to remain disease-free. They will present their findings on December 13, 2019, at the San Antonio Breast Cancer Symposium, the most influential gathering of breast cancer researchers and physicians in the world.

Milan Radovich, PhD, and Bryan Schneider, MD, discovered that women whose plasma contained genetic material from a tumor - referred to as circulating tumor DNA - had only a 56 percent chance of being cancer-free two years following chemotherapy and surgery. Patients who did not have circulating tumor DNA, or ctDNA, in their plasma had an 81 percent chance that the cancer would not return after the same amount of time.
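The reported two-year rates translate directly into a relative risk of recurrence; the percentages below are the ones quoted in the article, and the derived ratio is my own arithmetic.

```python
# Two-year disease-free rates reported in the article.
disease_free_ctdna_pos = 0.56  # ctDNA detected in plasma
disease_free_ctdna_neg = 0.81  # no ctDNA detected

# Convert disease-free rates to recurrence risks, then compare.
risk_pos = 1 - disease_free_ctdna_pos  # 0.44
risk_neg = 1 - disease_free_ctdna_neg  # 0.19
relative_risk = risk_pos / risk_neg
print(f"relative risk of recurrence with ctDNA: {relative_risk:.1f}")  # 2.3
```

In other words, by these figures a ctDNA-positive patient faced roughly 2.3 times the risk of recurrence within two years.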

Triple negative breast cancer is one of the most aggressive and deadliest types of breast cancer because it lacks common traits used to diagnose and treat most other breast cancers. Developing cures for the disease is a priority of the IU Precision Health Initiative Grand Challenge.

The study also examined the impact of circulating tumor cells, or CTCs, which are live tumor cells that are released from tumors somewhere in the body and float in the blood.

"What we found is that if patients were negative for both ctDNA and CTC, 90 percent of the women with triple negative breast cancer remained cancer-free after two years," said Radovich, who is lead author of this study and associate professor of surgery and medical and molecular genetics at IU School of Medicine.

Advocates for breast cancer research say they are excited to hear about these results.

"The implications of this discovery will change the lives of thousands of breast cancer patients," said Nadia E. Miller, who is a breast cancer survivor and president of Pink-4-Ever, which is a breast cancer advocacy group in Indianapolis. "This is a huge leap toward more favorable outcomes and interventions for triple negative breast cancer patients. To provide physicians with more information to improve the lives of so many is encouraging!"

Radovich and Schneider are researchers in the Indiana University Melvin and Bren Simon Cancer Center and the Vera Bradley Foundation Center for Breast Cancer Research. They lead the Precision Health Initiative's triple negative breast cancer team.

The researchers, along with colleagues from the Hoosier Cancer Research Network, analyzed plasma samples taken from the blood of 142 women with triple negative breast cancer who had undergone chemotherapy prior to surgery. Using the FoundationOne Liquid Test, they identified circulating tumor DNA in 90 of the women; the other 52 tested negative.

The women were participants in BRE12-158, a clinical study that tested genomically directed therapy versus treatment of the physician's choice in patients with stage I, II or III triple negative breast cancer.

Detection of circulating tumor DNA was also associated with poor overall survival. Specifically, the study showed that patients with circulating tumor DNA were four times more likely to die from the disease when compared to those who tested negative for it.

The authors say the next step is a new clinical study expected to begin in early 2020, which utilizes this discovery to enroll patients who are at high risk for recurrence and evaluates new treatment options for them.

"Just telling a patient they are at high risk for recurrence isn't overly helpful unless you can act on it," said Schneider, who is senior author of this study and Vera Bradley Professor of Oncology at IU School of Medicine. "What's more important is the ability to act on that in a way to improve outcomes."

Organizers of the San Antonio Breast Cancer Symposium selected the research to highlight from more than 2,000 scientific submissions.

This study was funded by the Vera Bradley Foundation for Breast Cancer and the Walther Cancer Foundation. It is part of the Indiana University Precision Health Initiative Grand Challenge. The study was managed by the Hoosier Cancer Research Network and enrolled at 22 clinical sites across the United States.

What they're saying:

IU School of Medicine Dean Jay L. Hess, MD, PhD, MHSA: "While we have made extraordinary progress in treating many types of breast cancer, triple negative disease remains a formidable challenge. We are dedicating substantial expertise and resources to this disease, and this discovery is an important step forward. We will continue to press ahead until we have new therapies to offer women with this most aggressive form of breast cancer."

IU School of Medicine Executive Associate Dean for Research Anantha Shekhar, MD, PhD: "I could not be more proud of our research team here at IU School of Medicine and the IU Precision Health Initiative Grand Challenge. A few years ago, I gave the teams the challenge to come up with targeted treatments, cures and preventions for triple negative breast cancer, where there had been none. The findings, announced today, show we are well on our way to achieving these bold goals."

Indiana University Melvin and Bren Simon Cancer Center Director Patrick J. Loehrer, MD: "Addressing an issue of importance in Indiana and globally, our IU cancer researchers are making novel discoveries that have the real potential to impact women with triple negative breast cancer. This work does not happen in a vacuum, but is a product of 'team science,' which characterizes the fabric of our National Cancer Institute-designated Comprehensive Cancer Center."

Credit: 
Indiana University School of Medicine

Success in metabolically engineering marine algae to synthesize valuable antioxidant astaxanthin

image: Estimated metabolic changes induced by astaxanthin production.

Image: 
Kobe University

A research group led by Professor HASUNUMA Tomohisa of Kobe University's Engineering Biology Research Center has succeeded in synthesizing the natural pigment astaxanthin using the fast-growing marine cyanobacterium Synechococcus sp. PCC7002.

The process requires only light, water and CO2, and produces the valuable antioxidant astaxanthin from the cyanobacterial host at a faster rate and with a lower contamination risk than previous methods of biologically synthesizing this useful substance. In addition, dynamic metabolic analysis revealed that astaxanthin production enhances the central metabolism of Synechococcus sp. PCC7002.

It is hoped that these developments could be utilized to meet the demand for natural astaxanthin in the pharmaceutical and nutritional industries, amongst others, in the future.

The results of this study were published in the international journal ACS Synthetic Biology on October 25, 2019.

Introduction

Carotenoids are pigments found in nature; the best-known is the orange β-carotene (beta-carotene), found in carrots among other vegetables, fruits and plants. Various studies of different carotenoids have suggested that they can protect against cancers, premature aging and degenerative diseases.

Astaxanthin, a pink carotenoid, is the strongest antioxidant among known carotenoids. Owing to its immune-enhancing and anti-inflammatory properties, it is used as a natural coloring in the aquaculture, cosmetics, nutrition and pharmaceutical industries, among others. For example, it is used as an additive in chicken and fish feed.

Currently, the majority of commercial astaxanthin is chemically synthesized from petrochemicals. This enables large amounts to be produced in order to meet demand. However, there are concerns about the safety of consuming astaxanthin synthesized from petrochemicals, and as a result the demand for natural astaxanthin is increasing.

Research Background

The freshwater alga Haematococcus pluvialis produces astaxanthin naturally and is responsible for the pink spots of astaxanthin commonly seen in birdbaths. Commercial astaxanthin production from Haematococcus requires a complex two-stage process (Figure 1, top). After the first growth stage, Haematococcus is placed under inductive stress conditions, such as nitrogen starvation or high light irradiation, which cause the alga to form haematocysts and produce astaxanthin in the second stage. However, the slow growth in the first stage and subsequent cell deterioration in the second stage increase the risk of contamination, and the high light irradiation drives up production costs.

Consequently, current methods of producing natural astaxanthin are not commercially viable enough for large-scale production. It is hoped that this powerful antioxidant carotenoid could be further utilized for human consumption in the nutrition and pharmaceutical industries if more efficient ways of producing it biologically are developed.

The current study sped up growth and reduced contamination risk in biosynthesizing astaxanthin. The researchers succeeded in producing astaxanthin using the fast-growing marine blue-green alga (cyanobacterium) Synechococcus sp. PCC7002 as a host. This cyanobacterium does not inherently produce astaxanthin; however, by integrating genes for β-carotene-modifying enzymes into Synechococcus, the researchers enabled it to produce astaxanthin from only water, light and CO2. This single-stage method (Figure 1, bottom) does not require subjecting the cells to stress conditions and produced astaxanthin in a shorter time than the Haematococcus method. In addition, the researchers propose that the high salt concentration of the Synechococcus culture could further lower the risk of contamination.

Research Methodology:

As previously mentioned, Synechococcus sp. PCC7002 does not inherently produce astaxanthin. Therefore it was necessary to take the encoding genes for β-carotene hydroxylase and β-carotene ketolase from the marine bacterium Brevundimonas sp. SD212 and integrate them into the Synechococcus. The genes were then expressed to biosynthesize astaxanthin. The host Synechococcus sp. PCC7002 was then able to produce astaxanthin via photosynthesis under a stress-free level of light similar to natural sunlight. As shown in Figure 2, the production of pink astaxanthin makes the solution turn a darker green color.

This study is thought to be the first in the world to succeed in producing astaxanthin using this particular marine cyanobacterium. With CO2 as the sole carbon source, the modified strain of Synechococcus sp. PCC7002 yielded 3 mg of astaxanthin per gram of dry cell weight, at a production rate of 3.35 mg/L/day. This is believed to be the highest rate achieved so far using green algae.
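As a rough consistency check on the figures above (an illustration, not a calculation from the paper), the cellular yield and volumetric productivity together imply the biomass production rate of the culture:

```python
# Reported figures: 3 mg astaxanthin per g dry cell weight (DCW),
# produced at 3.35 mg astaxanthin per liter of culture per day.
yield_mg_per_g_dcw = 3.0
productivity_mg_per_l_per_day = 3.35

# Implied biomass productivity needed to sustain that volumetric rate:
biomass_g_dcw_per_l_per_day = productivity_mg_per_l_per_day / yield_mg_per_g_dcw
print(round(biomass_g_dcw_per_l_per_day, 2))  # → 1.12
```

That is, the culture would need to add roughly 1.1 g of dry cell weight per liter per day, consistent with a fast-growing host.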

A dynamic metabolic profiling method developed by Professor Hasunuma and colleagues was used to analyze the metabolites inside the cells during astaxanthin production. This analysis revealed an increase in phosphorylated intermediates, in particular glyceraldehyde 3-phosphate (GAP) and methylerythritol 4-phosphate (MEP). The level of deoxyxylulose 5-phosphate (DXP), an intermediate in the non-mevalonate pathway (a precursor metabolic pathway for astaxanthin production), also increased (Figure 3). In vivo carbon labelling revealed that carbon flow through central metabolism was enhanced by the expression of the introduced genes.

These results indicate that astaxanthin-producing cells upregulate their early metabolism. This is believed to occur because the cyanobacterium enhances its central metabolism and the non-mevalonate pathway to replenish pigments such as β-carotene that are consumed in synthesizing astaxanthin. In other words, astaxanthin production enhances the central metabolism of Synechococcus sp. PCC7002 to compensate for the loss of light-harvesting pigments.

It is hoped that further metabolic pathway engineering could reduce potential bottlenecks and further increase astaxanthin production.

Further Research:

Overall, this study showed that the modified Synechococcus sp. PCC7002 is a promising host for producing astaxanthin biologically through photosynthesis. This could be investigated further by trying to synthesize various other useful substances utilizing Synechococcus sp. PCC7002.

In addition, it is hoped that the dynamic metabolic profiling method developed during this research could be utilized to improve understanding of metabolic processes in microorganisms, plants and animals.

Credit: 
Kobe University

Transformative change can save humans and nature

The survival of Earth's life is not a battle of humans versus nature. In this week's Science, an independent group of international experts, including one from Michigan State University (MSU), delivers a sweeping assessment of nature, concluding that both humans and nature must thrive for either to prevail.

"Pervasive human-driven decline of life on Earth points to the need for transformative change" explores how human impacts on life on Earth are unprecedented, requiring transformative action to address root economic, social and technological causes.

It's a notable assessment not just for its unflinching examination of "living nature" - Earth's fabric of life, which provides food, water, energy and health security. The article takes up where the recent Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) Global Assessment leaves off, following an intergovernmental process from start to end. It covers not only the history of humanity's interactions with nature - with particular focus on the last 50 years - but also how these might change in the future.

"We cannot save the planet - and ourselves - until we understand how tightly woven people and the natural benefits that allow us to survive are," said Jianguo "Jack" Liu, MSU Rachel Carson Chair in Sustainability and a co-author. "We have learned new ways to understand these connections, even as they spread across the globe. This strategy has given us the power to understand the full scope of a problem, which allows us to find true solutions."

Nature's capacity to provide beneficial regulation of environmental processes, such as modulating air and water quality, sequestering carbon, building healthy soils, pollinating crops, and providing coastal protection from hazards such as storms and storm surges, has decreased globally, but not evenly. Scientists, the paper notes, have gotten better at collecting information and modeling situations to more accurately reflect how the world truly works.

Among the methodologies increasingly adopted by scientists across the world is telecoupling, introduced by Liu in 2008 and since applied in more than 500 scientific papers. The telecoupling framework is an integrative way to study coupled human and natural systems that are linked over long distances. It keeps both the human and the natural in focus and shows how changes can reverberate far beyond their origin, and then even double back.

The group's dedication to integrative approaches has produced a litany of human impacts: 70% of land surfaces altered, 77% of major rivers no longer flowing from source to sea, a rising tally of animal extinctions, and continuing loss of biodiversity.

The group applies different scenarios to see how plausible changes have an effect. Starkly, they note nothing on Earth ultimately wins in the "business as usual" scenario.

They say that what our planet needs - quickly - is transformative change: a new way of doing business, what they term "a system-wide reorganization across technological, economic and social factors, making sustainability the norm rather than the altruistic exception."

"We humans have advanced to the point where we are able to understand our world as never before," Liu said. "Now we must use that knowledge wisely, quickly. The stakes are high, the benefits can be enormous, but true sustainability will absolutely involve informed change."

Credit: 
Michigan State University

Significant potential demonstrated by digital agricultural advice

Boston, MA, USA -- 6 December 2019
The near-ubiquitous penetration of mobile phones among smallholder farmers in developing countries has enabled a powerful new tool for dispensing agricultural advice to farmers. Low acquisition and marginal costs make digital extension far cheaper to scale than traditional in-person extension practices.

A new paper co-authored by Nobel Prize winner and Precision Agriculture for Development (PAD) co-founder Michael Kremer and his colleagues Raissa Fabregas (University of Texas) and Frank Schilbach (MIT), published today in Science, demonstrates that practices recommended through digital extension are adopted at rates comparable to those achieved through traditional in-person extension, and at significantly lower cost.

The paper emphasizes the research utility implicit in digital extension and the potential for research and experimentation to further improve the impact of digital advisory systems and the advice they deliver: "Running these systems at scale allows for testing variations... and feedback loops to improve accuracy and effectiveness of messages over time". The authors posit that realizing the "full promise of digital agriculture... will require sustained cycles of iteration and testing".

Dr. Tomoko Harigaya, PAD's Chief Economist and Director of Research, remarked that "Understanding the impact of an agricultural intervention can be challenging because of a large fluctuation in agricultural outcomes across seasons. This paper provides an extremely useful insight on the potential value of digital agricultural extension services by taking stock of the existing experimental evidence and highlighting unexploited opportunities for digital interventions. The impact estimates, with the declining marginal cost of service per farmer PAD has seen, suggest a very high benefit-cost ratio of digital extension. As PAD continues to scale, innovate, and iterate, we see huge opportunities to enhance our impact and the inclusiveness of our services."

Shawn Cole, Co-Founder of PAD, and John G. McLean Professor of Business Administration at Harvard Business School, reflected that, "PAD's mission is to design, evaluate, and assist with the scaling of mobile phone based agricultural advice to help smallholder farmers. This paper suggests there is potential for tremendous welfare by delivering mobile phone-based advice to improve farmers' lives, though it also shows there is significantly more research and development to be done. Two things in particular excite me about the potential: first, trusted high-quality advice could change behavior in a number of important domains (e.g., health, education, etc.); and second, the unique value of digital delivery--it can reach anywhere, including conflict areas; and at scale may have close to zero marginal cost."

Credit: 
Precision Agriculture for Development

Why whales are so big, but not bigger

Whales' large bodies help them consume their prey at high efficiencies, a more-than-decade-long study of around 300 tagged whales now shows, but their gigantism is limited by prey availability and foraging efficiency. These results, though seemingly intuitive, have been difficult to confirm with quantitative data because of the challenges of studying these gargantuan mammals in the field. However, this information is a necessary beginning to efforts to preserve these endangered giants, says Terrie Williams in a related Perspective.

Growing large depends on the delicate balance between energy gained from food and energy expended. On land, this balance typically works out so that small creatures feed on small prey and big creatures feed on big prey. This paradigm breaks down in the ocean, however, where the largest predators in the world feed on tiny prey, and explanations for the phenomenon remain inconclusive.

Using submersible wildlife tags designed with microprocessor technology, Jeremy Goldbogen and colleagues tagged toothed and filter-feeding whales - from the smallest porpoise to the largest animal on Earth, the blue whale - and calculated their energetic efficiency (energy from captured prey divided by energy spent). They found that larger body size increased energy efficiency in both kinds of whales by allowing greater consumption of prey and more effective prey capture. For toothed whales of all sizes, however, the energy gained from deep-sea hunting was ultimately constrained by the limited abundance of prey attainable during a single dive. By contrast, filter feeders consistently exhibited rapid increases in energy from food, with the total biomass and energetic content of their tiny prey exceeding, on average, those of the largest toothed whale prey. For these toothless whales, size might be limited by their biology (the ability to gulp as much krill-rich water as quickly as possible) rather than by prey availability, the authors postulate.

Altogether, their analyses suggest that filter feeding fueled an evolutionary pathway to gigantism not available to toothed whales, by exploiting vast quantities of small prey at high efficiencies.
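The efficiency metric at the heart of the study can be sketched as follows; the prey and dive energy values below are hypothetical, chosen only to illustrate the calculation:

```python
def energetic_efficiency(energy_from_prey_kj: float,
                         energy_expended_kj: float) -> float:
    """Energy gained from captured prey per unit energy spent foraging."""
    return energy_from_prey_kj / energy_expended_kj

# Hypothetical lunge-feeding dive on a dense krill patch:
print(energetic_efficiency(energy_from_prey_kj=40_000,
                           energy_expended_kj=2_000))  # → 20.0
```

A ratio above 1 means a dive pays for itself; the study's finding is that this ratio grows with body size, but for toothed whales it is capped by how much prey a single dive can yield.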

Credit: 
American Association for the Advancement of Science (AAAS)

New algorithm detects even the smallest cancer metastases across the entire mouse body

video: Details of how DeepMACT technology can help pre-clinical drug development.

Image: 
©Helmholtz Zentrum München

Cancer is one of the leading causes of death worldwide. More than 90% of cancer patients die of distant metastases rather than as a direct result of the primary tumor. Cancer metastases usually develop from single disseminated cancer cells, which evade the body's immune surveillance system. Up to now, comprehensive detection of these cells within the entire body has not been possible, owing to the limited resolution of imaging techniques such as bioluminescence and MRI. This has left a relative lack of knowledge of the specific dissemination mechanisms of diverse cancer types, which is a prerequisite for effective therapy, and has hampered efforts to assess the efficacy of new drug candidates for tumor therapy.

Transcending human detection capabilities with deep learning

In order to develop new techniques to overcome these hurdles, the team led by Dr. Ali Ertürk, Director of the Institute for Tissue Engineering and Regenerative Medicine at Helmholtz Zentrum München, had previously developed vDISCO - a method of tissue clearing and fixation which transforms mouse bodies into a transparent state, allowing the imaging of single cells. Using laser-scanning microscopes, the researchers were able to detect the smallest metastases, down to individual cancer cells, in the cleared tissue of the mouse bodies.

However, manually analyzing such high-resolution imaging data would be extremely time-consuming. Given the limited reliability and processing speed of the algorithms currently available for this kind of analysis, the team developed a novel deep-learning-based algorithm called DeepMACT. The researchers can now automatically detect and analyze cancer metastases and map the distribution of therapeutic antibodies in vDISCO preparations. The DeepMACT algorithm matched the performance of human experts in detecting the metastases - but did so more than 300 times faster. "With a few clicks only, DeepMACT can do the manual detection work of months in less than an hour. We are now able to conduct high-throughput metastasis analysis down to single disseminated tumor cells as a daily routine", says Oliver Schoppe, co-first author of the study and PhD student in the group of Prof. Dr. Bjoern Menze at TranslaTUM, the Center for Translational Cancer Research at TUM.

Detecting cells, gathering data, learning about cancer

Using DeepMACT, the researchers have gained new insights into the unique metastatic profiles of different tumor models. Characterization of the dissemination patterns of diverse cancer types could enable tailored drug targeting for different metastatic cancers. By analyzing the progression of breast-cancer metastases in mice, DeepMACT has uncovered a substantial increase in small metastases throughout the mouse body over time. "None of these features could be detected by conventional bioluminescence imaging before. DeepMACT is the first method to enable the quantitative analysis of metastatic process at a full-body scale", adds Dr. Chenchen Pan, a postdoctoral fellow at Helmholtz Zentrum München and also joint first author of the study. "Our method also allows us to analyze the targeting of tumor antibody therapies in more detail."

How effective are current cancer therapies?

With DeepMACT, the researchers now have a tool with which to assess the targeting of clinical cancer therapies that employ tumor-specific monoclonal antibodies. As a representative example, they have used DeepMACT to quantify the efficacy of a therapeutic antibody named 6A10, which had been shown to reduce tumor growth. The results demonstrated that 6A10 can miss up to 23% of the metastases in the bodies of affected mice. This underlines the importance of the analysis of targeting efficacy at the level of single metastases for the development of novel tumor drugs. The method can potentially also track the distribution of small-molecule drugs when they are conjugated to fluorescent dyes.

On the way to stopping the metastatic process

Taken together, these results show that DeepMACT not only provides a powerful method for the comprehensive analysis of cancer metastases, but also provides a sensitive tool for therapeutic drug assessment in pre-clinical studies. "The battle against cancer has been underway for decades and there is still a long way to go before we can finally defeat the disease. In order to develop more effective cancer therapies, it is critical to understand the metastatic mechanisms in diverse cancer types and to develop tumor-specific drugs that are capable to stop the metastatic process," explains Ertürk.

DeepMACT is publicly available and can easily be adopted by other laboratories working on diverse tumor models and treatment options. "Today, the success rate of clinical trials in oncology is around 5%. We believe that the DeepMACT technology can substantially improve the drug development process in preclinical research, and thus help to find much more powerful drug candidates for clinical trials - and hopefully to save many lives."

Credit: 
Helmholtz Munich (Helmholtz Zentrum München Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH))

Depression, anxiety may hinder healing in young patients with hip pain

image: Physiatrist Abby Cheng, MD, examines a young patient with hip pain. Cheng has found that when patients with hip pain have depression or anxiety, they also may have worse outcomes following arthroscopic surgery to correct their hip problems.

Image: 
Matt Miller

New research suggests that physicians evaluating young patients with hip pain should consider more than such patients' physical health. They also should consider screening those patients for clinical depression and anxiety -- impairments that researchers at Washington University School of Medicine in St. Louis have found can have a negative impact on outcomes following hip surgery, such as pain, slower recoveries and inadequate return to activity.

The findings are published online Dec. 12 in the American Journal of Sports Medicine.

In one of the first large studies to focus on mental health effects associated with hip pain, the researchers analyzed data gathered in 12 smaller studies conducted since 2014. The results suggest it may be advisable to start screening young patients with hip pain for depression and anxiety, especially before they undergo arthroscopic hip procedures.

"In a perfect world, we would screen patients for anxiety and depression before surgery and offer treatment, if needed," said first author Abby L. Cheng, MD, an assistant professor of orthopedic surgery. "But that's not usually what happens with these patients right now. Plus, many patients think that if their pain goes away, their anxiety or depression will go away, too. But that doesn't seem to be the case."

Cheng, a physiatrist trained in physical medicine and rehabilitation, works with patients who have hip pain, but she does not perform hip surgery herself. She analyzed data from more than 5,600 hip surgery patients, ages 29 to 41.

All of the studies in the analysis included evaluations of the effects of depression or anxiety on postsurgical clinical outcomes, such as use of pain-killing drugs after an operation, return to pre-surgery activities, and overall patient satisfaction following surgery. In every study, patients with anxiety and depression prior to surgery were statistically less likely to have good outcomes after their operations.

All of the patients had undergone arthroscopic surgery to correct hip problems, the most common of which was femoroacetabular impingement, a condition in which the hip socket is too deep, causing the thigh bone to rub against the socket. The condition can be painful and can significantly increase arthritis risk and the need for eventual hip-replacement surgery. Cheng said patients with these hip problems also often have unexpectedly high rates of depression and anxiety.

"There are people who may have anxiety or depression, who then develop a hip problem, and that can make things worse because rather than having the mindset that the hip problem is a small, fixable issue, their extra worry actually can increase the impact of the hip problem on their lives," she said. "Or sometimes people who have hip problems may then develop new depressive or anxious symptoms because their hip issues are preventing them from doing things they want and need to do. That combination of things can play a role in making each of the problems worse."

In all of the studies reviewed, the patients were otherwise healthy and active before being limited by hip pain. The patients in the studies were young primarily because of the type of hip surgery studied. Older patients tend to be candidates for more extensive operations, such as hip-replacement surgery. But in younger patients, doctors often perform arthroscopic procedures to correct defects, attempting to delay or prevent the need for the total joint-replacement operations that are so common in older adults.

"These young people often were involved in sports activities such as soccer or dance, but their pain prevented them from participating in these things they had enjoyed," Cheng said. "Often those activities are good outlets for stress, so the inability to participate affects quality of life."

Doctors who treat young hip patients don't routinely screen for depression or anxiety, much less refer their patients to behavioral health services as part of the treatment plan for hip pain. But Cheng now proposes research into accessible, affordable behavioral-health interventions for these patients, especially before considering hip surgery.

"We need to start screening for symptoms of psychological impairment, and we need to be able to offer our high-risk patients easier access to behavioral health professionals," Cheng said. "There's an understanding, for example, that back pain is associated with stress, but we're just now starting to examine the relationships between anxiety and hip pain, as well as shoulder pain or pain in other parts of the body. The more we look at it, the more it becomes clear that the mind and the body are connected, and we can't separate them and treat one without treating the other."

Credit: 
Washington University School of Medicine

Students do better in school when they can understand, manage emotions

WASHINGTON -- Students who are better able to understand and manage their emotions effectively, a skill known as emotional intelligence, do better at school than their less skilled peers, as measured by grades and standardized test scores, according to research published by the American Psychological Association.

"Although we know that high intelligence and a conscientious personality are the most important psychological traits necessary for academic success, our research highlights a third factor, emotional intelligence, that may also help students succeed," said Carolyn MacCann, PhD, of the University of Sydney and lead author of the study. "It's not enough to be smart and hardworking. Students must also be able to understand and manage their emotions to succeed at school."

The research was published in the journal Psychological Bulletin.

The concept of emotional intelligence as an area of academic research is relatively new, dating to the 1990s, according to MacCann. Although there is evidence that social and emotional learning programs in schools are effective at improving academic performance, she believes this may be the first comprehensive meta-analysis on whether higher emotional intelligence relates to academic success.

MacCann and her colleagues analyzed data from more than 160 studies, representing more than 42,000 students from 27 countries, published between 1998 and 2019. More than 76% were from English-speaking countries. The students ranged in age from elementary school to college. The researchers found that students with higher emotional intelligence tended to get higher grades and better achievement test scores than those with lower emotional intelligence scores. This finding held true even when controlling for intelligence and personality factors.

What was most surprising to the researchers was that the association held regardless of age.

As for why emotional intelligence can affect academic performance, MacCann believes a number of factors may come into play.

"Students with higher emotional intelligence may be better able to manage negative emotions, such as anxiety, boredom and disappointment, that can negatively affect academic performance," she said. "Also, these students may be better able to manage the social world around them, forming better relationships with teachers, peers and family, all of which are important to academic success."

Finally, the skills required for emotional intelligence, such as understanding human motivation and emotion, may overlap with the skills required to master certain subjects, such as history and language, giving students an advantage in those subject areas, according to MacCann.

As an example, MacCann described the school day of a hypothetical student named Kelly, who is good at math and science but low in emotional intelligence.

"She has difficulty seeing when others are irritated, worried or sad. She does not know how people's emotions may cause future behavior. She does not know what to do to regulate her own feelings," said MacCann.

As a result, Kelly does not recognize when her best friend, Lucia, is having a bad day, making Lucia mad at her for her insensitivity. Lucia then does not help Kelly (as she usually does) later in English literature class, a class she often struggles in because it requires her to analyze and understand the motivations and emotions of characters in books and plays.

"Kelly feels ashamed that she can't do the work in English literature that other students seem to find easy. She is also upset that Lucia is mad at her. She can't seem to shake these feelings, and she is not able to concentrate on her math problems in the next class," said MacCann. "Because of her low emotion management ability, Kelly cannot bounce back from her negative emotions and finds herself struggling even in subjects she is good at."

MacCann cautions against widespread testing of students to identify and target those with low emotional intelligence, as it may stigmatize those students. Instead, she recommends interventions that involve the whole school, including additional teacher training and a focus on teacher well-being and emotional skills.

"Programs that integrate emotional skill development into the existing curriculum would be beneficial, as research suggests that training works better when run by teachers rather than external specialists," she said. "Increasing skills for everyone - not just those with low emotional intelligence - would benefit everyone."

Credit: 
American Psychological Association