Culture

Common liverwort study has implications for crop manipulation

A new study on genetic pathways in the common liverwort could have future implications for crop manipulation.

The findings of the US-led study, co-authored by genetic biologist Professor John Bowman from the Monash University School of Biological Sciences, are published today in Nature Plants.

Earlier this year researchers confirmed a new role for the well-known plant molecule 1-aminocyclopropane-1-carboxylic acid (ACC), providing the first clear example of it acting on its own as a likely plant hormone.

ACC is the precursor of the plant hormone ethylene, which has many roles in growth and development.

"Ethylene was the first gaseous hormone identified over 100 years ago in flowering plants," explains Professor Bowman.

"It is the 'ripening hormone', that is, the one bad apple spoils the lot," he said.

"The ethylene signalling pathway has been characterised in flowering plants, and its disruption results in a number of defects, including fruit ripening.

"However, land plants evolved from an aquatic alga, and genes encoding the ethylene signalling pathway can be found in extant algae. These genes were likely acquired from the cyanobacterial endosymbiont that evolved into the chloroplast, suggesting the pathway long predated the evolution of fruit ripening.

"Being able to understand and control ethylene production has major implications for agriculture and horticulture."

Professor Bowman and collaborators investigated the ethylene signalling pathway in the liverwort Marchantia and showed that while ethylene acted as a signalling molecule, much as it does in flowering plants, ACC - the enzymatic precursor to ethylene in flowering plants - was a biologically active molecule in its own right.

"As liverworts do not make ethylene via ACC, it suggests that ACC was a biologically active molecule in the ancestral land plant, and that the ethylene pathway as we know it in flowering plants evolved via co-option of pre-existing pathways," Professor Bowman said.

"These pathways likely still exist in flowering plants and may be able to be manipulated to affect ethylene signalling, and its incumbent biological processes in crop plants," he said.

Credit: 
Monash University

Haunted house researchers investigate the mystery of playing with fear

video: Haunted houses, horror movies, and ghost stories can be chilling delights, provided the fear they evoke remains in a "Goldilocks zone" that is neither too terrifying nor too tame.

Image: 
APS

Chainsaw-wielding maniacs and brain-munching zombies are common tropes in horror films and haunted houses, which, in normal years, are popular Halloween-season destinations for thrill seekers. But what makes such fearsome experiences so compelling, and why do we actively seek them out in frightful recreational settings?

New research accepted for publication in the journal Psychological Science reveals that horror entertains us most effectively when it triggers a distinct physical response--measured by changes in heart rate--but is not so scary that we become overwhelmed. That fine line between fun and an unpleasant experience can vary from person to person.

"By investigating how humans derive pleasure from fear, we find that there seems to be a 'sweet spot' where enjoyment is maximized," said Marc Malmdorf Andersen, a researcher at the Interacting Minds Center at Aarhus University and lead author of the paper. "Our study provides some of the first empirical evidence on the relationship between fear, enjoyment, and physical arousal in recreational forms of fear."

For years, researchers have suspected that physiological arousal, such as a quickening pulse and a release of hormones in the brain, may play a key role in explaining why so many people find horror movies and haunted houses so attractive.

Until now, however, a direct relationship between arousal and enjoyment from these types of activities has not been established. "No prior studies have analyzed this relationship on subjective, behavioral, as well as physiological levels," said Andersen.

To explore this connection, Andersen and his colleagues studied how a group of 110 participants responded to a commercial haunted house attraction in Vejle, Denmark. The researchers fitted each participant with a heart rate monitor, which recorded real-time data as they walked through the attraction. The nearly 50-room haunted house produced an immersive and intimate live-action horror experience. The attraction used a variety of scare tactics to frighten guests, including frequent jump scares, in which zombies or other monstrous abominations suddenly appeared or charged toward the guest.

The researchers also studied the participants in real time through closed-circuit monitors inside the attraction. This enabled the team to make first-hand observations of participants' reactions to the most frightening elements, and, subsequently, to have independent coders analyze participants' behavior and responses. After the experience, participants evaluated their level of fright and enjoyment for each encounter. By comparing these self-reported experiences with the data from the heart rate monitors and surveillance cameras, the researchers were able to compare the fear-related and enjoyment-related elements of the attraction on subjective, behavioral, and physiological levels.

What Is Recreational Fear?

Recreational fear refers to the mixed emotional experience of feeling fear and enjoyment at the same time. Fear is generally considered to be an unpleasant emotion that evolved to protect people from harm. Paradoxically, humans sometimes seek out frightening experiences for purely recreational purposes. "Past studies on recreational fear, however, have not been able to establish a direct relationship between enjoyment and fear," said Andersen.

Studies on fearful responses to media, for example, have mostly been conducted in laboratory settings with relatively weak stimuli, such as short video clips from frightening films. Such experimental setups can sometimes make it difficult to measure physiological arousal because responses may be modest in a laboratory context.

"Conducting our study at a haunted attraction, where participants are screaming with both fear and delight, made this task easier," said Andersen. "It also presented unique challenges, such as the immensely complex logistics associated with conducting empirical studies in a 'messy' real-world context like a haunted house."

Discovering the "Goldilocks Zone"

Plotting the relationship between self-reported fear and enjoyment, the researchers discovered an inverted U-shaped trend, revealing an apparent sweet spot for fear where enjoyment is maximized.

"If people are not very scared, they do not enjoy the attraction as much, and the same happens if they are too scared," said Andersen. "Instead, it seems to be the case that a 'just-right' amount of fear is central for maximizing enjoyment."

The data also showed a similar inverted U-shape for the participants' heart rate signatures, suggesting that enjoyment is related to just-right deviations from a person's normal physiological state. However, when fearful encounters trigger large and long-lasting deviations from this normal state, as measured by pulse rates going up and down frequently over a longer period of time, unpleasant sensations often follow.
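
The shape of that relationship is easy to illustrate. Below is a minimal sketch (simulated ratings, not the study's data; the peak location and scales are invented) of how an inverted U can be detected by fitting a quadratic and locating its vertex:

```python
# Minimal sketch: fit enjoyment as a quadratic function of fear and check
# that the curvature (the squared term) is negative, i.e. an inverted U.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 110 participants: fear ratings on a 1-9 scale and
# enjoyment scores constructed to peak at a mid-level of fear.
fear = rng.uniform(1, 9, size=110)
enjoyment = -0.4 * (fear - 5.0) ** 2 + 7 + rng.normal(0, 0.8, size=110)

# Least-squares fit: enjoyment ~ a*fear^2 + b*fear + c
a, b, c = np.polyfit(fear, enjoyment, deg=2)

assert a < 0, "no inverted U: curvature is not negative"
print(f"enjoyment peaks at a fear rating of ~{-b / (2 * a):.1f}")  # vertex
```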

"This is strikingly similar to what scientists have found to characterize human play," said Andersen. "We know, for instance, that curiosity is often aroused when individuals have their expectations violated to a just-right degree, and several accounts of play stress the importance of just-right doses of uncertainty and surprise for explaining why play feels enjoyable."

In other words, when horror fans are watching Freddy Krueger on TV, reading a Stephen King novel, or screaming their way through a haunted attraction, they are essentially playing with fear.

Credit: 
Association for Psychological Science

Phytoplasma effector proteins devastate host plants through molecular mimicry

Phytoplasmas are bacteria that live within plant cells and cause devastating diseases. For example, in many cases plants infected with phytoplasma are no longer able to develop flowers. These plants have actually been described as "zombies," since they allow the reproduction of phytoplasma but are unable to reproduce themselves anymore. A group of biologists based at Friedrich Schiller University and the Fritz Lipmann Institute in Germany are working to better understand exactly how phytoplasma cells bring about this so-called zombification of plants.

"Our group has been studying the proteins that are targeted by the phytoplasma effector proteins for almost 30 years," said Günter Theißen, one of the scientists involved in the study. "In our latest research, based on just few data and some simple assumptions, we predicted the structure of the respective effector protein (termed SAP54) about 5 years ago. With the new work, we tested our hypothesis experimentally, and found that our prediction was quite accurate."

Phytoplasma cells bring about devastating changes in plants by secreting effector proteins that interact with certain molecules of the plant host, which leads to developmental abnormalities. This interaction is highly specific: only very particular host molecules are recognized by the phytoplasma effector molecules.

"This specificity is achieved by the effector proteins adopting a special structure that somewhat mimics part of the structure of the host molecules bound," explained Theißen. "This way, structural analyses at the molecular level help explain an important group of plant diseases."

Almost simultaneously, two other groups of scientists determined the crystal structure of very similar and highly related proteins, providing strong confirmation of the findings of Theißen and his colleagues. The team at Friedrich Schiller University also found that the effector protein SAP54 binds better to multimeric complexes of the target proteins than to protein dimers (pairs of proteins), suggesting an exciting avenue for future research.

"We are doing basic research," said Theißen. "However, there is no effective cure for phytoplasma infections that can be used in agronomy yet so, for example, when an orchard is affected, the only solution is to cut down all the infected trees, with dramatic economic ramifications. We hope that the more we know about how phytoplasma cells affect their hosts, the more we can help avoid the damage."

Credit: 
American Phytopathological Society

Single brain region links depression and anxiety, heart disease, and treatment sensitivity

image: The researchers used brain imaging to explore other brain regions affected by sgACC over-activity during threat. Over-activation of sgACC increased activity within the amygdala and hypothalamus, two key parts of the brain's stress network. By contrast, it reduced activity in parts of the lateral prefrontal cortex - a region important in regulating emotional responses and shown to be underactive in depression.

"The brain regions we identified as being affected during threat processing differed from those affected during reward processing," said Professor Angela Roberts in the University of Cambridge's Department of Physiology, Development and Neuroscience, who led the study.

"This is key, because the distinct brain networks might explain the differential sensitivity of threat-related and reward-related symptoms to treatment."

Image: 
Laith Alexander

Over-activity in a single brain region underlies multiple symptoms of stress-related disorders

Targeting this region with ketamine only treats some of the symptoms

The region disrupts different brain networks: one involved in threat responses and one involved in responses to rewards

Distinct brain networks may explain the differential sensitivity of symptoms to treatments

Over-activity in a single brain region called the subgenual anterior cingulate cortex (sgACC) underlies several key symptoms of mood and anxiety disorders, but an antidepressant only successfully treats some of the symptoms. A new study, published today in the journal Nature Communications, suggests that sgACC is a crucial region in depression and anxiety, and targeted treatment based on a patient's symptoms could lead to better outcomes.

Depression is a debilitating disorder affecting hundreds of millions of people worldwide, but people experience it differently. Some mainly have symptoms of elevated negative emotion such as guilt and anxiety; some lose the ability to experience pleasure (called anhedonia); and others experience a mix of the two.

Research at the University of Cambridge has found that increased activity in sgACC - a key part of the emotional brain - could underlie increased negative emotion, reduced pleasure and a higher risk of heart disease in depressed and anxious people. More revealing still is the discovery that these symptoms differ in their sensitivity to treatment with an antidepressant, despite being caused by the same change in brain activity.

Using marmosets, a type of non-human primate, the team of researchers infused tiny concentrations of an excitatory drug into sgACC to over-activate it. Marmosets are used because their brains share important similarities with those of humans and it is possible to manipulate brain regions to understand causal effects.

The researchers found that sgACC over-activity increases heart rate, elevates cortisol levels and exaggerates animals' responsiveness to threat, mirroring the stress-related symptoms of depression and anxiety.

"We found that over-activity in sgACC promotes the body's 'fight-or-flight' rather than 'rest-and-digest' response, by activating the cardiovascular system and elevating threat responses," said Dr Laith Alexander, one of the study's first authors from the University of Cambridge's Department of Physiology, Development and Neuroscience.

"This builds on our earlier work showing that over-activity also reduces anticipation and motivation for rewards, mirroring the loss of ability to experience pleasure seen in depression."

To explore threat and anxiety processing, the researchers trained marmosets to associate a tone with the presence of a rubber snake, an imminent threat which marmosets find innately stressful. Once marmosets learnt this, the researchers 'extinguished' the association by presenting the tone without the snake. They wanted to measure how quickly the marmosets could dampen down and 'regulate' their fear response.

"By over-activating sgACC, marmosets stayed fearful for longer as measured by both their behaviour and blood pressure, showing that in stressful situations their emotion regulation was disrupted," said Alexander.

Similarly, when the marmosets were confronted with a more uncertain threat in the form of an unfamiliar human, they appeared more anxious following over-activation of sgACC.

"The marmosets were much more wary of an unfamiliar person following over-activation of this key brain region - keeping their distance and displaying vigilance behaviours," said Dr Christian Wood, one of the lead authors of the study and senior postdoctoral scientist in Cambridge's Department of Physiology, Development and Neuroscience.

The researchers used brain imaging to explore other brain regions affected by sgACC over-activity during threat. Over-activation of sgACC increased activity within the amygdala and hypothalamus, two key parts of the brain's stress network. By contrast, it reduced activity in parts of the lateral prefrontal cortex - a region important in regulating emotional responses and shown to be underactive in depression.

"The brain regions we identified as being affected during threat processing differed from those we've previously shown are affected during reward processing," said Professor Angela Roberts in the University of Cambridge's Department of Physiology, Development and Neuroscience, who led the study.

"This is key, because the distinct brain networks might explain the differential sensitivity of threat-related and reward-related symptoms to treatment."

The researchers have previously shown that ketamine - which has rapidly acting antidepressant properties - can ameliorate anhedonia-like symptoms. But they found that it could not improve the elevated anxiety-like responses the marmosets displayed towards the human intruder following sgACC over-activation.

"We have definitive evidence for the differential sensitivity of different symptom clusters to treatment - on the one hand, anhedonia-like behaviour was reversed by ketamine; on the other, anxiety-like behaviours were not," Professor Roberts explained.

"Our research shows that the sgACC may sit at the head and the heart of the matter when it comes to symptoms and treatment of depression and anxiety."

Credit: 
University of Cambridge

Emerging treatment helps reverse heart failure in some patients

For the more than 6.2 million Americans living with heart failure, the disease is a cruel thief. It robs patients of vitality, making even the simplest tasks seem exhausting, and steals years from their lives.

But a glimmer of hope is rising. In a new multicenter study, researchers report that an emerging heart failure treatment could potentially reverse structural damage to the heart, allowing it to heal itself over time. They say the treatment could eliminate the need for heart transplants and long-term use of artificial heart pumps in certain cases.

The research team, co-led by University of Utah Health physicians, concluded that this approach, which combines the use of medications with temporary use of an artificial heart pump, known as a left ventricular assist device (LVAD), could help a select group of patients live longer, healthier, and more productive lives.

"For decades, heart transplantation and LVADs have been the therapeutic cornerstones of advanced heart failure," says Stavros Drakos, M.D., PhD, co-corresponding author of the study, cardiologist and director of Cardiovascular Research for the U of U Health Division of Cardiology. "But this alternative approach is different. It appears to be a bridge to heart recovery without requiring transplantation or long-term use of an artificial heart pump."

The study, which was conducted at six specialized medical centers nationwide, appears in Circulation, an American Heart Association journal.

What hadn't been appreciated until relatively recently is that LVADs significantly reduce the strain on failing hearts. In some cases, using LVADs for limited periods of time has allowed hearts to "rest" and remodel their damaged structures. As a result of these repairs, described as "reverse remodeling," heart function improves to the point that the LVAD can be removed.

In initial trials, researchers developed a treatment that combines LVAD use with standard heart failure drugs known to enhance repair of heart tissue. The treatment appeared to be successful, with several patients surviving more than three years after their LVADs were removed.

The new study sought to broaden the reach of the research with a multicenter trial involving physicians and scientists at U of U Health, the University of Louisville, University of Pennsylvania, the Albert Einstein College of Medicine/Montefiore Medical Center, the Cleveland Clinic, and the University of Nebraska Medical Center.

The researchers recruited 40 advanced heart failure patients between the ages of 18 and 59 who were so severely ill that they required surgical implantation of an LVAD pump to remain alive. Overall, 19 of the 40 patients (47.5%) treated with this protocol combining LVAD support with standard heart failure medications had sufficient improvement in their cardiovascular health that the LVAD could be removed. These improvements included successfully completing a six-minute treadmill test and having sustained blood flow and blood pressure with the LVAD at low pump settings.

Following up, the researchers found that 90% of these patients were still alive one year after the removal of the LVAD, while 77% survived and were doing very well at two and three years of follow-up. This rate of recovery is significantly higher than expected at this stage of heart failure.

The research team concluded that this strategy of LVAD support combined with standardized medication and regular cardiovascular checkups led to high rates of LVAD removal, demonstrating that it is a feasible alternative to transplantation or lifelong LVAD use. LVADs are physically burdensome and can be a source of complications, while heart transplants are difficult to come by.

"People that look at heart disease are usually trying to figure out why a heart is becoming worse," says Craig Selzman, M.D., a study co-author and chief of U of U Health's Division of Cardiothoracic Surgery. "This research is important because it's an alternative approach that is trying to figure out why these failing hearts are actually getting better."

The study had limitations, including a small number of patients, a lack of a control group, and a focus on patients younger than 60 years old. However, the researchers note that this does not necessarily mean that older patients and those who have varying degrees of heart failure would not benefit from this treatment.

Moving forward, the U of U Health researchers in conjunction with others plan to continue exploring how and why this treatment works. In particular, they have been focused on the metabolic pathways in the heart. Based on recent findings the team also published in Circulation, Drakos says these pathways appear to play a key role in the restructuring and remodeling that occurs.

"This multi-center study builds on and validates more than a decade of research conducted by Utah physician-scientists in this exciting field of heart recovery," Drakos says. "We are invested in figuring how to help a failing heart recover its function with just a little help from us."

Credit: 
University of Utah Health

Oncotarget: An integrative microenvironment approach for follicular lymphoma

image: Haplotype estimates in follicular lymphoma patients for (A) chromosome 1 (IL10) and (B) chromosome 3 (IL12A). In each square, the linkage disequilibrium (LD) was estimated between the groups of single nucleotide polymorphisms. The higher LD values (expressed as D′) are shown in the red squares.

Image: 
Correspondence to - Guilherme Rossi Assis-Mendonça - guilhermeram13@yahoo.com.br

Oncotarget Volume 11, Issue 33 features Figure 8, "Haplotype estimates in follicular lymphoma patients," by Assis-Mendonça et al., in which the authors tested associations between SNPs, clinicopathological features and TME composition, and proposed survival models for R-CHOP/R-CVP-treated patients.
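
For readers unfamiliar with the linkage disequilibrium statistic shown in the figure, here is a minimal sketch of how D′ is computed for a pair of biallelic SNPs; the allele and haplotype frequencies below are invented, not taken from the paper:

```python
# Minimal sketch of pairwise linkage disequilibrium (LD) between two
# biallelic SNPs, normalized as D' (values near 1 plot as red squares).
def d_prime(p_a: float, p_b: float, p_ab: float) -> float:
    """p_a, p_b: allele frequencies at the two loci; p_ab: A-B haplotype frequency."""
    d = p_ab - p_a * p_b                                   # raw disequilibrium
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

# Invented example: two fairly strongly linked SNPs
print(d_prime(p_a=0.3, p_b=0.4, p_ab=0.25))                # ~0.72
```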

Presence of the IL12A rs568408 "A" allele was associated with the follicular pattern of FOXP3 cells.

The IL12A AA haplotype, comprising rs583911 and rs568408, was an independent predictor of worse survival, together with the follicular patterns of T-cells and high IL-17F tumor levels.

The patterns of CD3, CD4, and CD8 cells, displayed as a principal component, were also associated with survival.

The survival of FL patients who were treated in the rituximab era shows a strong dependence on TME signals, especially the T-cell infiltration patterns and IL-17F tumor levels.

Dr. Guilherme Rossi Assis-Mendonça from the University of Campinas said, "Follicular lymphoma (FL) is the most common low-grade non-Hodgkin lymphoma (NHL) subtype and is characterized by an indolent clinical course and frequent relapses."

For instance, SNPs in key inflammatory genes, such as IL10 and IL2, are more consistently studied in large NHL cohorts and have been implicated in disease risk or in prognosis in the pre-rituximab era, including certain FL cohorts.

Nevertheless, the respective SNPs have been examined in only a few studies, and in a non-integrated fashion with other components of the TME.

No study has yet evaluated the function of SNPs within immune genes in the TME composition of FL.

This study aimed to verify whether SNPs in immune response genes modify the TME composition and clinical features of FL.

"This study aimed to verify whether SNPs in immune response genes modify the TME composition and clinical features of FL"

Another goal was to test the prognostic impact of these SNPs and the TME components in a cohort of patients who had been treated with rituximab-containing regimens.

The Assis-Mendonça Research Team concluded in their Oncotarget Research Paper, "We demonstrated the adverse prognostic role of several TME elements in FL, considering both protein and genetic data."

Importantly, the follicular pattern of CD8 T-cells, the high expression of IL-17F, and the AA haplotype of IL12A were all independently associated with a worse prognosis.

The latter variable was also associated with the pattern of FOXP3 cell infiltration, which was validated in this study as a prognostic factor in the rituximab era.

The Oncotarget authors believe that the simultaneous study of tumor biopsies and immune response-related SNPs in patients' peripheral blood allowed for a novel and more integrative approach that will provide new insights into follicular lymphoma biology.

Credit: 
Impact Journals LLC

New York City's coronavirus outbreak spread from more European sources than first reported

The COVID-19 outbreak in New York City and Long Island began earlier than previously thought, seeded by dozens of people infected mostly with strains from Europe. A new analysis also shows that most of the spread occurred within the community, as opposed to coming from people who had traveled.

Previous testing had detected the first case of the virus on March 3 before infections exploded throughout the metropolitan area, leading to 260,600 positive cases by mid-May.

Led by NYU Grossman School of Medicine researchers, the new study used gene testing to trace the origins of SARS-CoV-2, the pandemic virus, throughout the New York City region in the spring. It showed that the virus first took root in late February, seeded by at least 109 different sources that burst into chains of infection, rather than by a single "patient zero."

Notably, the study authors say, more than 40 percent of people who tested positive had no known contact with another infected person before they contracted the virus.

"Our findings show that New York's early screening test methods missed the onset and roots of the outbreak by several days at the minimum," says study co-lead author Matthew Maurano, PhD, an assistant professor in the Department of Pathology at NYU Langone Health. "The work strongly suggests that to nip future outbreaks in the bud, we need a system of rapid, plentiful real-time genetic surveillance as well as traditional epidemiologic indicators."

The investigators also found that more than 95 percent of New Yorkers with COVID-19 had a strain of the virus with a mutation that may make it easier to transmit to others. This finding, Maurano says, helps explain why the virus spread so aggressively in New York, even when accounting for the city's high population density.

In gene sequencing, researchers compare small snippets of genetic code to identify mutations that are found only in a particular strain of the virus. These "flags," Maurano says, can then be used to track how the strain has spread over time, similarly to tests used to trace ancestry in people. Experts have previously used this technique to follow outbreaks of influenza, methicillin-resistant Staphylococcus aureus (MRSA), and Ebola, among other infections.
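
As a rough illustration of that idea (toy sequences rather than real SARS-CoV-2 genomes, and skipping the alignment step a real pipeline would need), the sketch below records the positions where a sample differs from a reference and treats that set of substitutions as the strain's flag:

```python
# Minimal sketch: call substitutions relative to a reference sequence and
# use the resulting set of mutations as a strain "flag".
REF = "ATGGCTACCATTGGC"     # hypothetical reference snippet
SAMPLE = "ATGGTTACCATTAGC"  # hypothetical patient sample

def mutations(ref: str, sample: str) -> set:
    """Return substitutions such as 'C5T' (ref base, 1-based position, alt base)."""
    return {
        f"{r}{i}{s}"
        for i, (r, s) in enumerate(zip(ref, sample), start=1)
        if r != s
    }

print(sorted(mutations(REF, SAMPLE)))  # ['C5T', 'G13A']

# Samples sharing the same flag set likely sit in the same transmission
# chain; a sample whose flags are a superset suggests onward evolution.
```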

The new study, published online Oct. 22 in the journal Genome Research, is the largest effort to date to trace the COVID-19 pandemic using gene sequencing, according to Maurano.

For the study, the researchers collected viral genetic information on 864 nasal swabs taken from New Yorkers who had tested positive for COVID-19 between March 12 and May 10. Most of the people were from Manhattan, Brooklyn, and Nassau County on Long Island. Then, the investigators compared the gene sequences of the virus from these samples to those seen in the original strain isolated last winter from patients in Wuhan, China, where the pandemic is believed to have begun.

The study revealed that the genetic codes of the virus in New York more closely matched those of strains from Europe or other U.S. states rather than those from China, where the virus originated. In addition, some of the early chains of infection from person to person ran at least 50 people long.

"Our gene sequencing techniques allowed us to evaluate the precision of our screening tests," says senior study author Adriana Heguy, PhD, a professor in the Department of Pathology at NYU Langone. "Based on these results, it is clear that we need a system of plentiful testing that provides rapid results."

She notes that the sequences analyzed in the study accounted for just 10 percent of COVID-19 patients within a single hospital system in New York. The true scale of the community infection was therefore likely much higher, and the original introduction of the virus to New York City was possibly earlier.

Heguy says the team next plans to investigate whether the mutations uncovered by genetic testing could affect the coronavirus in other ways, such as causing new or more severe symptoms of COVID-19.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Liver cancer diagnoses and deaths impacted by geography and household income

An analysis of information from a large U.S. cancer database indicates that patients with liver cancer from rural regions and lower income households often have more advanced cancer at the time of diagnosis and face a higher risk of death compared with other patients. The findings are published early online in CANCER, a peer-reviewed journal of the American Cancer Society (ACS).

Screening for liver cancer is important for detecting tumors at an early stage, when treatment is most effective. To explore the impact of different factors on liver cancer stage at the time of diagnosis and on survival of patients with the disease, Robert J. Wong, MD, MS, of the Veterans Affairs Palo Alto Health Care System and Stanford University School of Medicine, and his colleagues analyzed the most recently updated Surveillance, Epidemiology and End Results (SEER) cancer database from the National Cancer Institute. This database includes information from 21 U.S. regions, representing approximately 35 percent of the U.S. population.

From 2004 to 2017, the database included 83,237 adults with liver cancer, of whom 49.1 percent had localized disease at the time of diagnosis and 14.4 percent had advanced disease that had spread.

The team found that compared with patients in large metro areas with a population of more than 1 million people, patients in more rural regions had 10 percent higher odds of having advanced liver cancer at the time of diagnosis and 5 percent higher odds of dying. Also, compared with patients with an annual household income of at least $70,000, patients with an annual household income below $40,000 had 15 percent higher odds of having advanced cancer at the time of diagnosis and 23 percent higher odds of dying.
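
Because "higher odds" is easy to misread as "higher probability," the short sketch below shows the conversion; the 14.4% baseline is the advanced-disease figure quoted above, while its use as the reference group's rate is a simplifying assumption:

```python
# Minimal sketch: what an odds ratio (OR) does to a baseline probability.
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    odds = p_baseline / (1 - p_baseline)  # probability -> odds
    odds *= odds_ratio                    # apply the odds ratio
    return odds / (1 + odds)              # odds -> probability

# If 14.4% of patients present with advanced disease, 15% higher odds
# (OR = 1.15) corresponds to roughly a 16.2% rate.
print(apply_odds_ratio(0.144, 1.15))      # ~0.162
```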

"While our study could not specifically investigate the reasons for the worse liver cancer outcomes, we hypothesize that patients living in more rural regions and among lower income households likely experience healthcare disparities leading to sub-optimal access to high quality liver disease care, including timely receipt of liver cancer surveillance and access to liver disease specialists," said Dr. Wong. "Our study highlights the need to focus on understanding the drivers of poor liver cancer outcomes among underserved and vulnerable populations, including those in rural geographic regions or among low income households, so that targeted quality improvement interventions can more specifically address the needs of these populations. We also hope that our findings will raise greater awareness of challenges and limited resources that contribute to sub-optimal liver disease care experienced by patients from low-income and rural households."

October is Liver Cancer Awareness Month.

Credit: 
Wiley

Concrete structure's lifespan extended by a carbon textile

image: Failure test of a concrete slab strengthened with TRM panel

Image: 
Korea Institute of Civil Engineering and Building Technology (KICT)

The Korea Institute of Civil Engineering and Building Technology (KICT) has announced the development of an effective structural strengthening method using a noncombustible carbon textile grid and cement mortar, which can double the load-bearing capacity of structurally deficient concrete structures and triple their usable lifespan.

More than 90% of infrastructure in South Korea, such as bridges and tunnels, as well as residential buildings, is constructed of concrete. For deteriorated or structurally deficient concrete structures in need of strengthening, carbon fiber sheets are typically bonded to the surface using organic adhesives. However, organic adhesives are susceptible to fire and cannot be applied to structures with wet surfaces, and the carbon fiber sheets may detach and fall from the structure if they are exposed to moisture.

A research team at KICT, led by Dr. Hyeong-Yeol Kim, has developed an effective and efficient strengthening method for deteriorated concrete structures. The method uses thin precast textile-reinforced mortar (TRM) panels, made of a carbon textile grid and a thin layer of cement mortar, and can also be applied in the form of cast-in-place construction. Employing KICT's method, 20 mm-thick TRM panels are attached to the surface of the existing structure, and the space between the structure and the panels is then filled with cement grout, which serves as the adhesive.

Both the carbon textile and cement mortar are noncombustible materials that have a high resistance to fire, meaning that they can be effectively used to strengthen concrete buildings that may be exposed to fire hazards. The construction method can also be applied to wet surfaces as well as in the winter, and the panels do not fall off even in the event of water ingress. Additionally, unlike steel reinforcing bars, the carbon textile does not corrode, and thus it can be effectively used to strengthen highway facilities and parking buildings, where deicing agents are often used, as well as to strengthen offshore concrete structures that are exposed to a chloride-rich environment.

A failure test conducted at KICT indicates that the failure load of concrete structures strengthened with the TRM panel increased by at least 1.5 times compared to that of an unstrengthened structure. Furthermore, the chloride resistance of the TRM panel has been evaluated in order to assess its service life in a chloride-rich environment. The durability test and analysis of the TRM panel indicate that the lifespan of the panel is more than 100 years. This can be attributed to the cement mortar, developed by KICT, which contains 50% ground granulated blast furnace slag, an industrial byproduct generated at ironworks. The cement mortar, which has a higher fire resistance than conventional cement mortar, is also advantageous because its cost is half that of conventional mortar. In terms of economic efficiency, the newly developed method can reduce construction costs by about 40% compared to existing carbon sheet attachment methods.

The newly developed strengthening method uses thin TRM panels that are very versatile and can be used as building facades, repair and strengthening materials, and in other applications. In the future, if the panels can be fabricated with thermal insulators, it is expected that they will replace building insulation materials that are susceptible to fires.

Dr. Kim said, "For easier production and shipping, the TRM panels are manufactured in a relatively small size of 1 m by 2 m and must be connected at the construction site. A method for effectively connecting the panels is currently being developed, and performance tests of the method will be conducted by the end of 2020."

Credit: 
National Research Council of Science & Technology

Next generation BRAF inhibitor cancer drug shows promise in early patient trial

A new drug designed to work on cancers with an altered BRAF gene has shown promise in an early patient trial presented at the 32nd EORTC-NCI-AACR [1] Symposium on Molecular Targets and Cancer Therapeutics, which is taking place online.

The BRAF gene is involved in telling healthy cells when to grow and form new cells, but it is also known to go wrong, or mutate, in several types of cancer, including types of bowel, brain and skin cancer. A few BRAF inhibitor drugs have already proved effective in treating patients. However, these 'first generation' BRAF inhibitors do not work on all BRAF mutated cancers and in other cases cancers become resistant to the treatment.

The new drug, PLX8394, is a 'next generation' BRAF inhibitor, designed to avoid this resistance and work against cancers with a wider range of BRAF mutations.

Results of the phase I/II trial were presented to the Symposium by Dr Filip Janku, Associate Professor for Investigational Cancer Therapeutics (Phase I Clinical Trials Program) and Center Medical Director for Clinical and Translational Research Center at The University of Texas MD Anderson Cancer Center in Houston, Texas, USA.

In the trial so far, 75 patients have been treated with the next generation BRAF inhibitor PLX8394, taken twice a day by mouth, with or without another drug called cobicistat. Data on 45 of these patients with BRAF alterations, who received PLX8394 and cobicistat, were available for researchers to evaluate. These patients had advanced cancers and most had already received three different types of treatments before joining the trial.

The researchers reported that the addition of cobicistat increased the level of PLX8394 in the blood two- to three-fold.

Ten of the 45 patients (22%) had a partial response to the new drug, meaning their tumours shrank by at least 30%. This included three people with glioma (a type of brain tumour), two with ovarian cancer and others with bowel cancer, thyroid cancer or melanoma (a type of skin cancer). Ten of the 45 patients had remained on the treatment for at least two years when the data were analysed.
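
The 30% figure reflects standard response thresholds; below is a minimal sketch of the rule as described above, simplified to a single tumour measurement, whereas formal RECIST criteria sum lesion diameters and include more categories:

```python
# Minimal sketch of RECIST-style response classification from tumour size.
def response(baseline_mm: float, current_mm: float) -> str:
    change = (current_mm - baseline_mm) / baseline_mm
    if change <= -0.30:
        return "partial response"      # shrank by at least 30%, as in the trial
    if change >= 0.20:
        return "progressive disease"   # RECIST-style growth threshold
    return "stable disease"

print(response(baseline_mm=50.0, current_mm=33.0))  # -34% -> 'partial response'
```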

Serious side effects of the treatment experienced by some patients were high levels of a liver enzyme and bilirubin in the blood, indicating a risk of liver damage; these levels fell when PLX8394 was interrupted and the dose reduced. Some patients also experienced diarrhoea.

Dr Janku said: "Although we already have some BRAF inhibitor drugs, unfortunately they do not work for all patients with BRAF mutated cancers. In some cases, even when these drugs do work at first, cancers develop resistance. First generation BRAF inhibitors can also cause unpleasant skin lesions and skin cancers in some patients.

"The next generation BRAF inhibitor that we gave to patients in this trial was designed to avoid those problems. These results suggest that the combination of drugs we tested is relatively safe and may be effective for some patients."

Dr Janku and his colleagues continue to study the combination of PLX8394 and cobicistat for treating patients, particularly to discover the optimum dose of the drugs.

William R. Sellers, Professor of Medicine at the Dana-Farber Cancer Institute, Harvard Medical School, USA, is co-chair of the EORTC-NCI-AACR Symposium on behalf of the NCI and was not involved with the research. He commented: "Understanding which genes go wrong in cancer and how they are mutated is a crucial step towards finding treatments that are targeted to work effectively in individual patients. BRAF is a gene mutated in approximately half of melanoma patients as well as in smaller fractions of colorectal and lung cancer. It is, therefore, an important therapeutic target and, indeed, BRAF inhibitors have significant clinical benefit in such patients.

"This trial shows positive signs for using a next generation BRAF inhibitor to treat patients with a variety of different cancer types and we look forward to hearing further results from the next stage of this research."

Credit: 
European Organisation for Research and Treatment of Cancer

Cause of Alzheimer's disease traced to mutation in common enzyme

image: The mutant MARK4 creates a form of tau which accumulates easily in brain cells, causing neurons to die.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have discovered a new mechanism by which clumps of tau protein are created in the brain, killing brain cells and causing Alzheimer's disease. A specific mutation to an enzyme called MARK4 changed the properties of tau, usually an important part of the skeletal structure of cells, making it more likely to aggregate, and more insoluble. Getting to grips with mechanisms like this may lead to breakthrough treatments.

Alzheimer's disease is a life-changing, debilitating condition, affecting tens of millions of people worldwide. According to the World Health Organization, it is the most common cause of senile dementia, with numbers worldwide expected to double every 20 years if left unchecked.

Alzheimer's is said to be caused by the build-up of tangled clumps of a protein called "tau" in brain cells. These sticky aggregates cause neurons to die, leading to impairment in memory and motor functions. It is not yet clear how and why tau builds up in the brain cells of Alzheimer's patients. Understanding the cause and mechanism behind this unwanted clumping would open up the way to new treatments and ways to prevent the disease.

A team led by Associate Professor Kanae Ando of Tokyo Metropolitan University has been exploring the role played by the MARK4 (Microtubule Affinity Regulating Kinase 4) enzyme in Alzheimer's disease. When everything is working normally, the tau protein is an important part of the cytoskeleton, the structural framework of cells. To keep the arms of the cytoskeleton, known as microtubules, constantly building and disassembling, MARK4 helps tau detach from them.

Problems start when a mutation occurs in the gene that provides the blueprint for making MARK4. Previous work had already associated this with an increased risk of Alzheimer's, but it was not known why. The team artificially introduced mutations into transgenic Drosophila fruit flies that also produce human tau, and studied how the proteins changed in vivo. They discovered that this mutant form of MARK4 makes changes to the tau protein, creating a pathological form of tau. Not only did this "bad" tau have an excess of certain chemical groups that caused it to misfold, it also aggregated much more easily and was no longer soluble in detergents. This made it easier for tau to form the tangled clumps that cause neurons to degenerate.

MARK4 has also been implicated in a wide range of other diseases that involve the aggregation and buildup of other proteins, so the team's insights into tau buildup may lead to new treatments and preventative measures for an even wider variety of neurodegenerative conditions.

Credit: 
Tokyo Metropolitan University

Oncotarget: Evaluation of cellular alteration & inflammatory profile of cells

image: Percentage of apoptosis in PMC, A549 and/or MCF7 after 24 hours of exposure to talc. PMC = pleural mesothelial cells; NC = neoplastic cells. *p < 0.05 comparing 100% A549 and 100% MCF7 with 100% PMC; #p < 0.05 comparing 100% A549 with mixed A549, mixed MCF7 and 100% PMC.

Image: 
Correspondence to - Milena Marques Pagliarelli Acencio - milena.acencio@incor.usp.br

Oncotarget recently published "Evaluation of cellular alterations and inflammatory profile of mesothelial cells and/or neoplastic cells exposed to talc used for pleurodesis," which reported that PMC cultures and human lung and breast adenocarcinoma cells (A549 and MCF7) were divided into 5 groups: 100% PMC; 100% NC; 25% PMC + 75% NC; 50% of each type; and 75% PMC + 25% NC. High IL-6, IL-1β and TNFRI levels were found in PMC and NC exposed to talc. In pure cultures, TNFRI was highest in A549, followed by PMC and MCF7. LDH was higher in A549 than in PMC. Apoptosis after talc exposure was higher in pure cultures of NC than in PMC. Mixed cultures of PMC and A549 showed lower levels of apoptosis in cultures with more NC.

Dr. Milena Marques Pagliarelli Acencio from the University of São Paulo said, "Metastatic neoplasms are the most common type of pleural neoplastic disease and the principal primary sites are lung, breast, stomach and ovary."

The Oncotarget authors described that, in an experimental model of pleurodesis, an acute inflammatory reaction to talc was observed, with increased pleural fluid concentrations of IL-8, VEGF and TGF-β detected after intrapleural injection of talc, and noted that the mesothelial cell layer was preserved.

Thus, mesothelial cells appear to participate in the response to talc and contribute to the acute inflammatory response.

Some authors discuss the importance of cell death, mainly by apoptosis, in mesothelial and/or neoplastic cells in determining the success or failure of pleurodesis, or even in reducing the tumor.

They explain that in preliminary experimental studies it has also been suggested that talc can induce apoptosis in tumor cells and inhibit angiogenesis, thus contributing to a better control of malignant pleural effusion.

The ultimate goal of the Oncotarget study was to determine the role of mesothelial and/or neoplastic cells in the initiation and regulation of the acute inflammatory response following the instillation of talc into the pleural space, evaluating cellular aspects such as apoptosis and inflammatory mediators.

"The ultimate hypothesis of the Oncotarget study is to determine the role of mesothelial and/or neoplastic cells in the initiation and regulation of the acute inflammatory response."

The Acencio Research Team concluded in their Oncotarget Research Paper that these results permit them to infer that contact between the talc particles and the normal mesothelium is the main stimulus in the genesis of the inflammatory process.

Mesothelial activation drives the production of molecular mediators, which probably contributes to the dynamics of the local inflammatory process and the subsequent production of pleural fibrosis; these mechanisms are necessary to induce effective pleurodesis.

The data also show that talc acts on neoplastic cells, inducing higher rates of apoptosis than observed in normal mesothelial cells; this may even contribute, in a modest way, to tumor reduction.

They also note that different types of tumor cells may respond differently to talc exposure.

Credit: 
Impact Journals LLC

Coating implants with 'artificial bone' to prevent inflammation

image: Schematic diagram of the laser-induced single-step coating process: hydroxyapatite synthesis, formation of the HAp-substrate mixed molten layer, and formation of the HAp coating layer. In the figure, the green and red circles indicate Ca2+ and PO43- ions, respectively.

Image: 
Korea Institute of Science and Technology (KIST)

Bone disease is becoming increasingly prevalent in modern society due to population aging, among other factors, and the use of dental and orthopedic implants to treat bone disease has been on the rise. The history of implants can be traced all the way back to A.D. 1, when wrought iron dental implants were used in Ancient Rome. Despite this long history, however, a number of issues are still associated with implant procedures, such as loosening caused by slow integration into the bone tissue, or inflammation necessitating a secondary surgical procedure.

To mitigate these issues, there have been attempts to coat the implant material with "artificial bone" that has the same composition as actual human bone. Conventional coating methods, however, require a synthesis process to manufacture the artificial bone material and a separate coating process, which takes a long time. Moreover, the binding between the substrate and the artificial bone coating layer tends to be weak, resulting in damage or even detachment, and strong coating methods that could be applied to actual patients in a clinical setting have been rare.

Under these circumstances, Dr. Hojeong Jeon's research team at the Korea Institute of Science and Technology (KIST) Center for Biomaterials announced that they have developed a ceramic artificial bone coating with triple the adhesion strength of conventional coating materials.

The research team developed a technology that induces artificial bone coating, a process that previously took at least a day and dozens of steps, in just one hour using a single process. With this technique, there is no need to synthesize the raw material for the artificial bone coating separately, and the coating can be created with a nanosecond laser without any expensive equipment or heat treatment.

The method also forms a coating layer with stronger binding power than the few artificial bone coating techniques applied clinically today. In addition, it produces a robust coating not only on metal surfaces but also on polymer materials such as orthopedic plastic implants, which has not been possible with conventional processes.

To reduce the number of steps involved in the process as well as its duration, while at the same time ensuring a robust coating, Dr. Jeon's team positioned the material to be coated in a solution containing calcium and phosphorus, the main components of bone, and irradiated it with a laser. The temperature rose locally at the laser's target site, causing the calcium and phosphorus to react, producing ceramic artificial bone (hydroxyapatite) and forming a coating layer.

Unlike conventional coating methods, the laser induces synthesis of the artificial bone component and, at the same time, heats the substrate surface above its melting point, so that the artificial bone material is adsorbed onto the melted surface and hardens in place, which maximizes the binding strength.

Dr. Jeon said, "The hydroxyapatite coating method using nanosecond laser is a simple way to induce bioactivity in non-bio-active materials such as titanium and PEEK that are commonly used as biomaterials. I anticipate that it will become a game changer in that it will have wide applications to diverse medical devices where osseointegration is needed.

Credit: 
National Research Council of Science & Technology

'Patient activation' may improve quality of life in individuals with kidney disease

Highlights

In individuals with chronic kidney disease who received online peer mentoring, improved patient activation correlated with improvements in various aspects of quality of life.

Results from the study will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

Washington, DC (October 23, 2020) -- Researchers previously demonstrated that online peer mentoring for individuals with chronic kidney disease (CKD) improves patient activation--or patients' willingness and ability to take actions to manage their health and care--and quality of life (QOL). Now the investigators have examined the correlation between QOL and patient activation among patients with CKD who participated in an online peer mentoring program, which provides guidance from others who live with CKD. The study will be presented online during ASN Kidney Week 2020 Reimagined, October 19-October 25.

The study randomized 155 patients with stage 4 or stage 5 CKD to online peer mentoring, face-to-face peer mentoring, or usual care. Among the online peer mentoring group, improvements in patient activation correlated with improvements in various aspects of QOL related to physical symptoms and burdens of kidney disease. There was no correlation between patient activation and mental aspects of QOL.

"Results from our study suggest that improved QOL in patients with CKD who received online peer mentoring may be a result of improved patient activation," said co-author Nasrollah Ghahramani, MD, MS (Pennsylvania State University).

Study: "The Correlation between Patient Activation and Quality of Life among Patients with Chronic Kidney Disease"

This research was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (CDR-1310-07055). The statements presented are solely the responsibility of the authors and do not necessarily represent the views of PCORI or its Board of Governors or Methodology Committee.

ASN Kidney Week 2020 Reimagined, the largest nephrology meeting of its kind, will provide a forum for more than 13,000 professionals to discuss the latest findings in kidney health research and engage in educational sessions related to advances in the care of patients with kidney and related disorders. Kidney Week 2020 Reimagined will take place October 19-October 25.

Credit: 
American Society of Nephrology

Why do minorities have higher rates of kidney failure?

Highlights

A new study indicates that Blacks and Hispanics have experienced higher rates of kidney failure compared with whites due to more rapid kidney function decline.

Results from the study will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

Washington, DC (October 23, 2020) -- A new study investigates the reasons behind higher incidences of kidney failure among US minorities. The findings will be presented online during ASN Kidney Week 2020 Reimagined October 19-October 25.

In the United States, Blacks and Hispanics have higher incidences of kidney failure than whites, but it's unclear if this is driven by inherently faster progression to kidney failure or by lower death rates prior to kidney failure (with more Blacks and Hispanics living longer to develop kidney failure).

To investigate, Guofen Yan, PhD (University of Virginia) and her colleagues examined information on 834,270 individuals who were diagnosed with chronic kidney disease (CKD) in the US Veterans Health Administration between 2002 and 2015 and were followed through 2016.

Ten years after CKD onset, the cumulative incidence of kidney failure was 1.3-2.5 times greater for Blacks and Hispanics than for whites across 6 age groups. The kidney failure risk was 2.1-2.9 times greater for Blacks and 1.2-2.7 times greater for Hispanics vs. whites. The risk of death before kidney failure was similar for Blacks and only modestly lower for Hispanics vs. whites across ages.
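
The quantity being compared here, cumulative incidence in the presence of a competing risk (death before kidney failure), can be estimated with an Aalen-Johansen-style calculation. Below is a minimal sketch with synthetic follow-up data (not the VA cohort), assuming untied event times:

```python
# Minimal sketch: cumulative incidence of kidney failure (event 1) when
# death before failure (event 2) competes, estimated Aalen-Johansen style.
import numpy as np

def cumulative_incidence(times, events, horizon):
    """times: years from CKD onset to first event or censoring;
    events: 0 = censored, 1 = kidney failure, 2 = death before failure."""
    times, events = np.asarray(times), np.asarray(events)
    order = np.argsort(times)
    at_risk = len(times)
    surv, cif = 1.0, 0.0                 # event-free survival, incidence
    for t, e in zip(times[order], events[order]):
        if t > horizon:
            break
        if e == 1:
            cif += surv / at_risk        # failure while still event-free
        if e in (1, 2):
            surv *= 1 - 1 / at_risk      # any first event ends event-free time
        at_risk -= 1
    return cif

# Toy cohorts: the ratio of the two 10-year estimates is the kind of
# "times greater" comparison reported above.
g1 = cumulative_incidence([2, 3, 5, 7, 9, 12], [1, 0, 2, 1, 0, 1], horizon=10)
g2 = cumulative_incidence([4, 6, 8, 10, 11, 12], [0, 2, 1, 0, 0, 1], horizon=10)
print(round(g1, 3), round(g2, 3), round(g1 / g2, 2))  # 0.375 0.2 1.88
```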

"Following CKD onset, Blacks and Hispanics were 2 times more likely than whites to develop kidney failure, and this was truly driven by a greater risk of kidney failure due to faster decline in kidney function after CKD onset, rather than because of lower risks of death prior to kidney failure," said Dr. Yan. "Delineation and elimination of the causes of faster kidney function declines in Blacks and Hispanics are therefore the appropriate strategies to improve clinical outcomes in Blacks and Hispanics with CKD. Slowing the faster progression in Blacks and Hispanics with CKD should be a major focus in research, practice, and healthcare policy to achieve the goal of reducing the disparities in CKD."

Study: "Mechanism of Higher Incidence of End-Stage Kidney Disease (ESKD) among Blacks and Hispanics versus Whites in the U.S."


Credit: 
American Society of Nephrology