Tech

Huge potential for electronic textiles made with new cellulose thread

image: The cellulose yarn, which the researchers present in the article, is practical to work with and could be used to make clothing with smart functions. Using a standard household sewing machine, the researchers have sewn the electrically conductive cellulose yarn into a fabric and produced a thermoelectric textile, which generates a small amount of electricity when heated on one side, for example by a person's body heat - typically 0.2 microwatts at a temperature difference of 37 degrees Celsius.

Image: 
Anna-Lena Lundqvist/Chalmers University of Technology

Electronic textiles offer revolutionary new opportunities in various fields, in particular healthcare. But to be sustainable, they need to be made of renewable materials. A research team led by Chalmers University of Technology, Sweden, now presents a thread made of conductive cellulose, which offers fascinating and practical possibilities for electronic textiles.

"Miniature, wearable, electronic gadgets are ever more common in our daily lives. But currently, they are often dependent on rare, or in some cases toxic, materials. They are also leading to a gradual build-up of great mountains of electronic waste. There is a real need for organic, renewable materials for use in electronic textiles," says Sozan Darabi, doctoral student at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology and the Wallenberg Wood Science Center, and lead author of the scientific article which was recently published in ASC Applied Materials & Interfaces.

Together with Anja Lund, researcher in the same group, Sozan Darabi has been working with electrically conductive fibres for electronic textiles for several years. The focus was previously on silk, but now the discoveries have been taken further through the use of cellulose.

The results now presented by the researchers show how cellulose thread offers huge potential as a material for electronic textiles and can be used in many different ways.

Sewing the electrically conductive cellulose threads into a fabric using a standard household sewing machine, the researchers have now succeeded in producing a thermoelectric textile that produces a small amount of electricity when it is heated on one side - for example, by a person's body heat. At a temperature difference of 37 degrees Celsius, the textile can generate around 0.2 microwatts of electricity.
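
The press release does not report the yarn's Seebeck coefficient or the device's internal resistance, but as general context, the maximum power a thermoelectric element can deliver to an electrically matched load is

$$P_{\max} = \frac{(S\,\Delta T)^2}{4R},$$

where S is the Seebeck coefficient, ΔT the temperature difference and R the internal resistance; the quoted 0.2 microwatts at ΔT = 37 degrees Celsius reflects the particular S and R of the sewn device.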

"This cellulose thread could lead to garments with built-in electronic, smart functions, made from non-toxic, renewable and natural materials," says Sozan Darabi.

The production process for the cellulose thread has been developed by co-authors from Aalto University in Finland. In a subsequent step, the Chalmers researchers made the thread conductive by dyeing it with an electrically conductive polymeric material. The researchers' measurements show that the dyeing process gives the cellulose thread a record-high conductivity, which can be increased even further through the addition of silver nanowires. In tests, the conductivity was maintained after several washes.

Electronic textiles could improve our lives in several ways. One important area is healthcare, where functions such as regulating, monitoring, and measuring various health metrics could be hugely beneficial.

In the wider textile industry, where conversion to sustainable raw materials is a vital ongoing question, natural materials and fibres have become an increasingly common choice to replace synthetics. Electrically conductive cellulose threads could have a significant role to play here too, the researchers say.

"Cellulose is a fantastic material that can be sustainably extracted and recycled, and we will see it used more and more in the future. And when products are made of uniform material, or as few materials as possible, the recycling process becomes much easier and more effective. This is another perspective from which cellulose thread is very promising for the development of e-textiles," says Christian Müller, research leader for the study and a professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology.

The Chalmers research team carried out this work within the national research center Wallenberg Wood Science Center, in cooperation with colleagues in Sweden, Finland and South Korea.

More about: Developing expertise in conductive fibres

Both Sozan Darabi and Christian Müller believe the research has resulted in much more than just the latest scientific publication. Sozan Darabi has developed from a student into a leading expert in electrically conductive fibre materials, something Christian Müller views as very rewarding and a great strength for their research team.

Through the national Swedish research center Wallenberg Wood Science Center, a group from Stockholm's Royal Institute of Technology (KTH) has also been involved in the research and publication of the study. The KTH researchers focus on the electrochemical aspects of the fibres. Together with this group from KTH, the Chalmers research team is now planning ways to take the ideas to the next level.

Read earlier press release: Electric textile lights a lamp when stretched

More about: The cellulose thread

The electrically conductive yarn is produced in a "layer-on-layer" coating process with an ink based on the biocompatible polymer-polyelectrolyte complex poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS). The e-textile thread developed by the researchers has a record-high conductivity for cellulose thread, relative to volume, of 36 S/cm, which can be increased to 181 S/cm by adding silver nanowires. The thread coated with PEDOT:PSS can handle at least five machine washes without losing its conductivity. By integrating the cellulose yarn into an electrochemical transistor, the researchers have also been able to demonstrate its electrochemical function.
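
To make the quoted conductivities concrete, here is a minimal sketch in Python that converts them into resistance per metre of thread; the 0.2 mm diameter is a hypothetical value chosen for illustration, as the article does not report the thread's dimensions.

import math

def resistance_per_metre(conductivity_S_per_cm, diameter_mm):
    """Resistance (ohms) of a 1-metre length of round thread: R = L / (sigma * A)."""
    sigma = conductivity_S_per_cm * 100.0            # convert S/cm to S/m
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2   # cross-sectional area in m^2
    return 1.0 / (sigma * area)                      # L = 1 m

# Hypothetical 0.2 mm-diameter thread at the two reported conductivities.
for sigma_cm in (36.0, 181.0):
    print(f"{sigma_cm:5.0f} S/cm -> about {resistance_per_metre(sigma_cm, 0.2):,.0f} ohms per metre")

On these assumptions, the PEDOT:PSS-coated thread would measure on the order of kilo-ohms per metre, and the silver-nanowire-enhanced version several times less, which is why conductivity relative to volume is the figure of merit quoted above.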

More about: Textiles from nature and fashion industry interest

Throughout human history, textiles have been made from natural fibre and cellulose. But since the middle of the 20th century, synthetic fibres have become more common in our clothing, particularly in the fashion industry. With the greater focus and awareness now on sustainable alternatives, interest in natural fibres and textiles is returning and growing. Large Swedish chains such as H&M and Lindex have set high goals for increasing the proportion of garments produced from more sustainable materials.

The cellulose fibre that the researchers have used is of the Ioncell® type, developed by the Finnish group, led by professor and co-author Herbert Sixta.

Credit: 
Chalmers University of Technology

The mystery of the missing energy - solved

image: Yuttapoom Puttisong, Senior Lecturer in the Department of Physics, Chemistry and Biology at Linköping University.

Image: 
Thor Balkhed

The efficiency of solar cells can be increased by exploiting a phenomenon known as singlet fission. However, unexplained energy losses during the reaction have until now been a major problem. A research group led by scientists at Linköping University, Sweden, has discovered what happens during singlet fission and where the lost energy goes. The results have been published in the journal Cell Reports Physical Science.

Solar energy is one of the most important fossil-free and eco-friendly sustainable sources of electricity. The silicon-based solar cells currently in use can at most use approximately 33% of the energy in sunlight and convert it to electricity. This is because the packets of light, or photons, in the sun's beams have an energy that is either too low to be absorbed by the solar cell, or too high, so that part of the energy is dissipated to waste heat. This maximum theoretical efficiency is known as the Shockley-Queisser limit. In practice, the efficiency of modern solar cells is 20-25%.
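
In the simplest single-junction picture, the two loss channels described above can be written compactly: a photon of energy hν contributes nothing if it falls below the absorber's band gap E_g, and at most E_g of useful energy otherwise, with the excess dissipated as heat.

$$E_{\text{useful}} = \begin{cases} 0, & h\nu < E_g \\ E_g, & h\nu \ge E_g \end{cases} \qquad E_{\text{heat}} = h\nu - E_g \quad (h\nu \ge E_g)$$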

However, a phenomenon in molecular photophysics known as singlet fission can allow photons with higher energy to be used and converted to electricity without heat loss. In recent years, singlet fission has attracted increasing attention from scientists, and intense activity is under way to develop the optimal material. However, unexplained energy losses during singlet fission have until now made it difficult to design such a material. Researchers have not been able to agree on the origin of these energy losses.
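
In the standard description of the process, one photoexcited singlet state shares its energy with a neighbouring ground-state molecule, producing two triplet excitons from a single high-energy photon - which is why singlet fission can in principle double the current extracted from that part of the spectrum:

$$S_1 + S_0 \longrightarrow {}^1(TT) \longrightarrow T_1 + T_1$$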

Now, researchers at Linköping University, together with colleagues in Cambridge, Oxford, Donostia and Barcelona, have discovered where the energy goes during singlet fission.

"Singlet fission takes place in less than a nanosecond, and this makes it extremely difficult to measure. Our discovery allows us to open the black box and see where the energy goes during the reaction. In this way we will eventually be able to optimise the material to increase the efficiency of solar cells", says Yuttapoom Puttisong, senior lecturer in the Department of Physics, Chemistry and Biology at Linköping University.

Part of the energy disappears in the form of an intermediate bright state, and this is a problem that must be solved to achieve efficient singlet fission. The discovery of where the energy goes is a major step on the way to significantly higher solar cell efficiency - from the current 33% to over 40%.

The researchers used a refined magneto-optical transient method to identify the location of the energy loss. This technique has unique advantages in that it can examine the 'fingerprint' of the singlet fission reaction on a nanosecond timescale. A monoclinic crystal of a polyene, diphenyl hexatriene (DPH), was used in this study, but the new technique can be applied to study singlet fission in a much broader library of materials. Yuqing Huang, a former doctoral student in the Department of Physics, Chemistry and Biology at Linköping University, is first author of the article, published in the newly established journal Cell Reports Physical Science:

"The actual singlet fission process takes place in the crystalline material. If we can optimise this material to retain as much as possible of the energy from the singlet fission, we will be significantly closer to application in practice. In addition, the singlet fission material is solution processable, which makes it cheap to manufacture and suitable for integration with existing solar cell technology", says Yuqing Huang.

Credit: 
Linköping University

Multicellular liver-on-a-chip for modeling fatty liver disease

image: Liver microtissues growing as multicellular spheroids in custom-designed microwells. The spheroids are used in modeling non-alcoholic fatty liver disease.

Image: 
Khademhosseini Lab

(LOS ANGELES) - Non-alcoholic fatty liver disease (NAFLD) is the most prevalent chronic liver disease worldwide. It is found in 30% of people in developed countries and occurs in approximately 25% of people in the United States. Risk factors for the disease include obesity, diabetes, high cholesterol and poor eating habits, although the disease can also occur in people without these risk factors.

There is normally a small amount of fat in the liver; however, if fat makes up 5% or more of the liver, this is considered NAFLD, and it must be managed to avoid serious disease progression. In its worst-case scenario, NAFLD can result in swelling and inflammation of the liver, fibrosis (scarring) of the liver, liver failure or liver cancer.

The liver is a complex organ, and the physiological events involved in NAFLD progression involve multiple steps and different kinds of cells found in the liver. Abnormal accumulation of fatty acids in the liver triggers immune cells called Kupffer cells, as well as endothelial cells lining the liver's blood vessels, to release both signaling and chemically reactive molecules. These molecules stimulate cells called stellate cells to synthesize and deposit fibrosis proteins, which lead to scar formation in the liver.

NAFLD can be managed by a healthy diet and exercise and by losing and/or controlling weight, but there is currently no definitive treatment or cure. One of the main difficulties in developing treatments is that there are no accurate NAFLD models with which to test treatment efficacy. Animal models do not truly represent the steps involved in human NAFLD, and human models developed in the past have also shown limitations. A relatively advanced "artificial fibrosis model" to simulate the final, fibrosis-development part of the disease sequence has been created, but it did not fully correlate with observations of the full disease progression.

A collaborative team from the Terasaki Institute for Biomedical Innovation (TIBI) has developed an advanced, multicellular, structurally representative liver-on-a-chip model which mimics the full progression sequence of NAFLD. The model contains all four types of human primary cells which are involved in the sequence: Kupffer cells, endothelial cells, stellate cells and hepatocytes.

The cells were mixed together and grown as multicellular microtissues in an array of pyramid-shaped microwells; the cell numbers and proportions were adjusted to yield microtissues of optimum shape to maintain nutrient and oxygen concentrations similar to naturally-occurring liver tissues. The microtissues were then enclosed within a gelatinous substance to enable the tissues to more fully match the structure and cell-to-cell interactions of native liver tissues.

The team used their multicellular liver-on-a-chip model in tests designed to validate the role of stellate cells in NAFLD. They concluded that stellate cells appear to be involved in blood vessel formation, the production of chemically reactive compounds and, to some degree, fatty acid accumulation. They also concluded that stellate cells may interact with Kupffer and other liver cells to produce certain signaling molecules. All these events are crucial components of full NAFLD progression.

The team also tested the effects of fatty acid addition to their model. The results indicated that the presence of fatty acids and active stellate cells accelerated inflammatory responses and increased the production of fibrotic proteins, further validating the model's mimicry of the natural progression of NAFLD.

Subsequent tests compared TIBI's NAFLD model to the previously mentioned artificial fibrosis model driven by transforming growth factor (TGF)-β. In this comparison, TIBI's model more closely mimicked the natural progression of NAFLD, exhibiting higher levels of fat accumulation and fibrotic molecules than the artificial fibrosis model.

In addition, the team tested the response to anti-fibrotic drugs in both models. The tests showed that TIBI's NAFLD model exhibited a more robust drug transport and metabolic response upon delivery of the drugs. This finding is significant if the model is to be used for screening of potential drugs for the treatment of NAFLD.

"Our multicellular liver-on-a-chip system is a laboratory model that is more advanced and more fully represents the natural progression of NAFLD than in previous models," said Junmin Lee, Ph.D., a member of the TIBI team. "This is supported by the experimental results from our comparison studies."

The hope is that TIBI's NAFLD model can be used for elucidating the mechanisms behind this disease and for testing possible drug treatments for efficacy. It also has the potential to personalize treatments by obtaining the full array of cells needed from individual patients and using them to create custom disease models.

"The multicellular model described here is an exemplary demonstration of the level of work that we do in developing personalized physiological models," said Ali Khademhosseini, Ph.D., director and CEO of the Terasaki Institute. "It has the potential for helping to discover treatments for a prevalent and potentially fatal disease."

Credit: 
Terasaki Institute for Biomedical Innovation

Soft contact lenses eyed as new solutions to monitor ocular diseases

image: New contact lens technology to help diagnose and monitor medical conditions may soon be ready for clinical trials. The team enabled commercial soft contact lenses to be a bioinstrumentation tool for unobtrusive monitoring of clinically important information associated with underlying ocular health conditions.

Image: 
Purdue University/Chi Hwan Lee

WEST LAFAYETTE, Ind. - New contact lens technology to help diagnose and monitor medical conditions may soon be ready for clinical trials.

A team of researchers from Purdue University worked with biomedical, mechanical and chemical engineers, along with clinicians, to develop the novel technology. The team enabled commercial soft contact lenses to be a bioinstrumentation tool for unobtrusive monitoring of clinically important information associated with underlying ocular health conditions.

The team's work is published in Nature Communications. The Purdue Research Foundation Office of Technology Commercialization helped secure a patent for the technology and it is available for licensing.

"This technology will be greatly beneficial to the painless diagnosis or early detection of many ocular diseases including glaucoma" said Chi Hwan Lee, the Leslie A. Geddes assistant professor of biomedical engineering and assistant professor of mechanical engineering at Purdue who is leading the development team. "Since the first conceptual invention by Leonardo da Vinci, there has been a great desire to utilize contact lenses for eye-wearable biomedical platforms."

Sensors or other electronics previously couldn't be used for commercial soft contact lenses because the fabrication technology required a rigid, planar surface incompatible with the soft, curved shape of a contact lens.

The team has paved a unique way that enables the seamless integration of ultrathin, stretchable biosensors with commercial soft contact lenses via wet adhesive bonding. The biosensors embedded on the soft contact lenses record electrophysiological retinal activity from the corneal surface of human eyes, without the need of topical anesthesia that has been required in current clinical settings for pain management and safety.

"This technology will allow doctors and scientists to better understand spontaneous retinal activity with significantly improved accuracy, reliability, and user comfort," said Pete Kollbaum, the Director of the Borish Center for Ophthalmic Research and an associate professor of optometry at Indiana University who is leading clinical trials.

Credit: 
Purdue University

New Study Shows 24-72 Hours of Poor Oral Hygiene Impacts Oral Health

image: Longitudinal multi-omics and microbiome meta-analysis identify an asymptomatic gingival state that links gingivitis, periodontitis and aging

Image: 
LIU Yang

Poor oral hygiene promotes gum-disease bacteria and accelerates the aging of the oral microbiome faster than previously thought.

A new study shows that within 24-72 hours of the interruption of oral hygiene, there was a steep decrease in the presence of 'good oral bacteria' and the beneficial anti-inflammatory chemicals they are associated with. An increase of 'bad bacteria' typically present in the mouths of patients with periodontitis, a severe gum disease which can lead to tooth damage or loss, was also discovered.

The research team, led by scientists from Single-Cell Center, Qingdao Institute of BioEnergy and Bioprocess Technology (QIBEBT) of the Chinese Academy of Sciences (CAS) and Procter & Gamble Company (P&G), published their findings in the journal mBio on Mar. 9, 2021.

The researchers asked 40 study participants with different levels of naturally occurring gingivitis to perform optimal oral hygiene for three weeks. This led to reduced gingivitis and a healthy baseline for the study. Gingivitis was then induced by interrupting the participants' oral hygiene routine over the course of four weeks; resuming oral hygiene led to recovery, since gingivitis is reversible.

The researchers performed genetic analyses on the population of bacteria in the participants' gums as it changed. Chemical analyses of the molecules produced by the bacteria were performed, and the immune responses of the study participants were recorded.

Within just 24-72 hours of the cessation of oral hygiene, the researchers found there was a steep decrease in the presence of multiple Rothia species as well as the chemical betaine, which was reported to play an anti-inflammatory role in several inflammatory diseases.

In addition, there was a swift, full activation of multiple salivary cytokines - proteins and other molecules produced by immune system cells associated with inflammation. And just as the presence of the 'good bacteria' had declined, there was a sharp increase in the presence of the types of bacteria typically present in the mouths of patients with periodontitis even though there weren't any symptoms of the illness yet.

Taken together, the positive association with betaine and the negative association with gingivitis suggest that Rothia may be 'good bacteria' beneficial to gum health, contributing to the production of betaine in some way.

"We also found a sudden 'aging' of the bacteria in the mouth," said XU Jian, Director of Single-Cell Center at QIBEBT and senior author of the study. "Their oral microbiome had aged the equivalent of about a year in less than a month."

Previous studies have demonstrated that the composition of the population of oral bacteria (the oral microbiome) is a good predictor of the age of a patient. As one ages, one sees less of some species of bacteria and more of others. Older people, for example, tend to have far fewer Rothia species of bacteria.

"After only 28 days of gingivitis, we found the study participants had the 'oral microbial age' of those a year older," said HUANG Shi, one investigator leading this study.

The researchers now want to continue to study the link between Rothia, betaine and inflammation to see if they can come up with better early-stage responses to gingivitis.

Credit: 
Chinese Academy of Sciences Headquarters

'Big' step towards improved healthcare: new strategy makes big data analytics easier

image: Researchers propose a novel, state-of-the-art architecture, called "Med-BDA," to address the challenges underlying the management of big data analytics in healthcare.

Image: 
IEEE/CAA Journal of Automatica Sinica

The efficient provision of medical care is integral to society. Over time, the healthcare industry has tapped into modern technology in order to keep up its quality of service. This has, unsurprisingly, led to huge volumes of patient data. But it's not just patients whose data need to be stored; doctors, physicians, clinical staff, and even smart wearable gadgets are contributing to what is coming to be known as "healthcare big data."

Big data analytics (BDA), which involves the use of specially designed architectures to manage, store, and analyze complex data, is an important tool in healthcare. But it is hard to implement, owing to its high failure rate, its resource-intensive processes and, most importantly, the lack of clear guidelines to aid practitioners.

In a recent study published in IEEE/CAA Journal of Automatica Sinica (Volume 8, Issue 1, January 2021), researchers from Pakistan and Australia addressed this issue, offering a roadmap for the successful implementation of BDA in healthcare. They proposed a standard architecture that promises to solve all the challenges currently associated with BDA. Prof. Tariq Mahmood from the Institute of Business Administration (IBA), Pakistan, explains the motivation behind the study: "In reality, healthcare analytics has been in use for more than two decades but has not yet catered for healthcare big data. In our study, we proposed an architecture that has the potential to solve a large number of healthcare analytics problems in the next 5 years."

In the past, researchers have attempted to summarize the research work on BDA applications to improve patient healthcare. The summarization is ideally accomplished through a systematic and standardized review of published academic research papers. In the present study, the team took this approach to the next level by conducting the review through five different activities: 1) focusing on the use of all big data technologies, 2) identifying all limitations and challenges mentioned in previous studies, 3) proposing a novel, state-of-the-art design architecture called "Med-BDA" to solve these challenges, 4) identifying strategies for its successful implementation in the healthcare domain, and 5) comparing their work with all previously published studies.

The new Med-BDA architecture uses Apache Spark technology to analyze both real-time and batch data, along with social network data, to understand the bottlenecks in the treatment process and make critical predictions regarding, for instance, in-patient cost estimates and expected mortality. Doctors can use these predictions to anticipate a patient's condition in real time and provide better, more effective treatments. Moreover, by comparing their work with selected papers, the researchers confirmed that their Med-BDA architecture is unique, with no similar strategies proposed previously.
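
As a minimal, hypothetical sketch of the kind of batch analysis a Spark-based architecture such as Med-BDA enables (the paper's actual pipelines, schemas and file names are not described in this release, so the CSV file and column names below are illustrative assumptions), the following Python/PySpark snippet aggregates admission records to estimate average in-patient cost per diagnosis.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("med-bda-sketch").getOrCreate()

# Hypothetical admissions extract; column names are illustrative only.
admissions = spark.read.csv("admissions.csv", header=True, inferSchema=True)

cost_by_diagnosis = (
    admissions
    .groupBy("diagnosis_code")
    .agg(
        F.avg("total_cost").alias("avg_inpatient_cost"),
        F.avg("length_of_stay_days").alias("avg_length_of_stay"),
        F.count("*").alias("n_admissions"),
    )
    .orderBy(F.desc("avg_inpatient_cost"))
)

cost_by_diagnosis.show(20)
spark.stop()

In Spark, essentially the same DataFrame operations can also be applied to streaming sources, which is what makes a single architecture for both batch and real-time analysis plausible.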

The research team is excited about the future prospects of Med-BDA. Prof. Mahmood says, "Med-BDA can digest gigabytes of information collectively from different sources and analyze them concurrently to build a clear picture of the patients' treatment processes for both real-time and batch-level analyses. Moreover, with the increasing popularity of 'Internet-of-Things' (IoT) in healthcare, Med-BDA will have the potential to digest big IoT data and thus improve BDA."

Perhaps we're on the brink of a "big" revolution in healthcare!

Credit: 
Chinese Association of Automation

School closures may have wiped out a year of academic progress for pupils in Global South

As much as a year's worth of past academic progress made by disadvantaged children in the Global South may have been wiped out by school closures during the COVID-19 pandemic, researchers have calculated.

The research, by academics from the University of Cambridge and RTI International, attempts to quantify the scale of learning loss that children from poor and marginalised communities in the Global South may have experienced, and the extent to which home support and access to learning resources could ameliorate it. While it is known that the education of these children has suffered disproportionately during the pandemic, it is much harder to measure exactly how much their academic progress has been impeded while schools have been closed.

The researchers used data from Ghana to model the likely impact of closures for children in remote and deprived parts of that country. They found that on average, 66% of the learning gains made in foundational numeracy during the academic year are lost during three months out of school. The outcome is, however, far worse for children without adequate home learning resources or support.

The authors suggest these findings provide a glimpse of a much wider pattern of learning loss that is being experienced by millions of disadvantaged children around the world.

Co-author Professor Ricardo Sabates, from the REAL Centre in the University's Faculty of Education, said: "Despite teachers' best efforts, we know school closures have held up, or reversed, the progress of millions of children. This study is one approach to estimate how much learning could have been lost, and how much worse this may have been for children from disadvantaged settings."

"These figures represent an estimate of learning loss for children who spent 3 to 4 months out of school. We expect that as schools remained closed for longer, losses could be higher. We also acknowledge the important support that many families and communities provided with supplementary learning, which may have in turn limited the potential loss overall."

The study built on earlier research that highlighted the significant learning losses that occur when certain groups of children in developing countries move from one academic year to the next, particularly those who change language of instruction, and disadvantaged girls.

The researchers used data charting the progress of more than 1,100 students on Ghana's Complementary Basic Education (CBE) programme between 2016 and 2018. This programme supports children aged eight to 14 who would not normally attend school, providing them with education in their own language and at flexible times. On completion, students are encouraged to enrol at a local government school, but the start of that school year occurs after a three-month gap, during which they receive no education.

The researchers compared participants' scores in foundational maths tests at four stages: when they started the CBE, when they finished, when they joined a government school, and after their first year in government school. They also accessed data about how much home learning support the students had - for example, whether they had books at home, or could seek help from an adult when struggling with homework.

During the CBE programme, the students' test scores improved, on average, by 27 percentage points. When they were tested again after the three-month gap, however, their scores had reduced by an average of 18 percentage points. Two-thirds of the gains these students had made during the previous academic year were therefore lost while they were out of school. The researchers argue that this is an upper estimate of the expected scale of loss during an equivalent period of school closures due to COVID-19. Fortunately, during the pandemic community efforts to enhance learning may have mitigated this effect for some children.
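
The two-thirds figure follows directly from the reported scores:

$$\frac{18\ \text{percentage points lost}}{27\ \text{percentage points gained}} \approx 0.67,$$

that is, roughly two-thirds of the year's gains.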

In spite of this, they also found that the basic learning loss was compounded among children who lacked support to study at home. For example:

* Children without access to reading and learning resources at home (such as books) experienced a learning loss above 80%.

* Children who said that they never asked adults in their household for help experienced a learning loss of around 85%.

Encouragingly, the study showed that in the first year of formal education, students not only recouped their learning loss, but improved, while the attainment gap between more and less advantaged students narrowed.

In many countries, however, it is becoming clear that many disadvantaged students - especially marginalised groups such as disabled children and many girls - are not returning to school. Therefore, the researchers suggest supporting access to diverse forms of education for students from less-advantaged backgrounds. There is evidence to show that community-based programmes, for example, can enhance a range of learning skills for these children. "Learning at home and in communities has to be reimagined if rapid gains are to be achieved as we continue to face the COVID-19 situation," the authors say.

The pattern of learning loss charted in Ghana may also apply far beyond the Global South. "This is an international challenge," said co-author Emma Carter, also from the REAL Centre. "In Europe and the US, children from lower socio-economic backgrounds will similarly be experiencing severe learning loss. The levels of attainment may differ between countries, but it is highly likely that the pattern of loss remains."

The evaluation data used in the study was commissioned and funded by FCDO Ghana. The research is published in the International Journal of Educational Development.

Credit: 
University of Cambridge

JNCCN: New evidence on need to address muscle health among patients with cancer

image: JNCCN March 2021 Cover

Image: 
JNCCN

PLYMOUTH MEETING, PA [March 9, 2021] -- New research in the March 2021 issue of JNCCN--Journal of the National Comprehensive Cancer Network from Mass General Hospital Cancer Center, Harvard Medical School, and Dana-Farber Cancer Institute finds muscle mass (quantity) correlated with survival, while muscle radiodensity (quality) was associated with symptom burden, healthcare use, and survival in patients with advanced cancer undergoing an unplanned hospitalization. The researchers also found nearly two-thirds of the patients in that population had significant muscle loss (sarcopenia), and that those with a higher body mass index (BMI) tended to have lower muscle quality despite higher quantity. They highlight the need for additional work to continue investigating how best to utilize computerized tomography (CT) scans to measure muscle mass and density to improve clinical outcomes.

"We hope that our work leads to future efforts for assessing patients' muscle health--potentially using CT scans--as a strategy for identifying patients who may benefit from fitness or nutrition interventions, in order to enhance clinical outcomes," said lead researcher Ryan D. Nipp, MD, MPH, Mass General Hospital Cancer Center and Harvard Medical School. "These findings build upon existing research showing unfavorable outcomes associated with poor muscle health in cancer patients, while also underscoring the added utility of assessing muscle radiodensity to measure muscle health. Muscle radiodensity provides information on the amount of intramuscular adipose tissue (fatty tissue within the muscle), and our findings suggest that higher BMI may contribute to that infiltration, resulting in lower muscle radiodensity."

The researchers evaluated muscle data from the CT scans of 677 patients with advanced cancer who had an unplanned hospitalization between September 2014 and May 2016. The CT scans were performed as part of routine clinical care within 45 days before study enrollment, and results were compared against clinical outcomes as well as patient-reported psychological assessments. Findings showed that older patients and female patients tended to have lower muscle mass and radiodensity. Sixty-four percent of patients met the criteria for sarcopenia. Higher muscle radiodensity was significantly associated with better patient outcomes, including lower physical symptom burden and less depression and anxiety. However, it remains unclear whether poorer muscle radiodensity was a result of other symptoms that limit mobility, or vice versa.
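
The release does not spell out how sarcopenia was defined here, but CT-based oncology studies commonly quantify muscle mass as a skeletal muscle index measured at the level of the third lumbar vertebra and normalized to height (a general convention offered for orientation, not the study's stated method):

$$\text{SMI} = \frac{\text{skeletal muscle cross-sectional area at L3 (cm}^2)}{\text{height}^2\ (\text{m}^2)},$$

with sarcopenia flagged below sex-specific thresholds, and radiodensity reported separately as the muscle's mean attenuation in Hounsfield units.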

"It's possible that lower muscle radiodensity could lead to functional decline, and thus exacerbates physical and psychological symptoms," said Dr. Nipp, who was a recipient of an NCCN Foundation® Young Investigators Award in 2016 for research on perioperative geriatric care intervention for older patients with gastrointestinal cancers undergoing surgical resection. "Conversely, patients with a higher symptom burden could have lower physical activity, which could have an effect on their muscle quality and quantity."

"Increasing the quality of one's weight through muscle development could be more important than simply trying to regain body weight to address cancer-related sarcopenia," commented Scott J. Capozza, PT, MSPT, Board Certified Clinical Specialist in Oncologic Physical Therapy, Smilow Cancer Hospital and Yale Cancer Center, who was not involved with this research. "Skilled clinicians, such as oncology certified dietitians and physiotherapists, are able to develop evidence-based interventions to safely increase the quality of muscle mass. Having dietitians and physiotherapists included in the care team aligns with recommendations in the NCCN Guidelines for Survivorship. I look forward to future studies where these clinicians can be incorporated to address the quality of life and overall survival of patients with advanced cancers through nutrition, exercise, and physical rehabilitation."

Credit: 
National Comprehensive Cancer Network

Breaking the warp barrier for faster-than-light travel

image: Artistic impression of different spacecraft designs considering theoretical shapes of different kinds of "warp bubbles".

Image: 
E Lentz

If travel to distant stars within an individual's lifetime is going to be possible, a means of faster-than-light propulsion will have to be found. To date, even recent research about superluminal (faster-than-light) transport based on Einstein's theory of general relativity would require vast amounts of hypothetical particles and states of matter that have "exotic" physical properties such as negative energy density. This type of matter either cannot currently be found or cannot be manufactured in viable quantities. In contrast, new research carried out at the University of Göttingen gets around this problem by constructing a new class of hyper-fast 'solitons' using sources with only positive energies that can enable travel at any speed. This reignites debate about the possibility of faster-than-light travel based on conventional physics. The research is published in the journal Classical and Quantum Gravity.

The author of the paper, Dr Erik Lentz, analysed existing research and discovered gaps in previous 'warp drive' studies. Lentz noticed that there existed yet-to-be-explored configurations of space-time curvature organized into 'solitons' that have the potential to solve the puzzle while being physically viable. A soliton - in this context also informally referred to as a 'warp bubble' - is a compact wave that maintains its shape and moves at constant velocity. Lentz derived the Einstein equations for unexplored soliton configurations (where the space-time metric's shift vector components obey a hyperbolic relation), finding that the altered space-time geometries could be formed in a way that worked even with conventional energy sources. In essence, the new method uses the very structure of space and time arranged in a soliton to provide a solution to faster-than-light travel, which - unlike other research - would only need sources with positive energy densities. No "exotic" negative energy densities needed.

If sufficient energy could be generated, the equations used in this research would allow space travel to Proxima Centauri, our nearest star, and back to Earth in years instead of decades or millennia. That means an individual could travel there and back within their lifetime. In comparison, the current rocket technology would take more than 50,000 years for a one-way journey. In addition, the solitons (warp bubbles) were configured to contain a region with minimal tidal forces such that the passing of time inside the soliton matches the time outside: an ideal environment for a spacecraft. This means there would not be the complications of the so-called "twin paradox" whereby one twin travelling near the speed of light would age much more slowly than the other twin who stayed on Earth: in fact, according to the recent equations both twins would be the same age when reunited.

"This work has moved the problem of faster-than-light travel one step away from theoretical research in fundamental physics and closer to engineering. The next step is to figure out how to bring down the astronomical amount of energy needed to within the range of today's technologies, such as a large modern nuclear fission power plant. Then we can talk about building the first prototypes," says Lentz.

Currently, the amount of energy required for this new type of space propulsion drive is still immense. Lentz explains, "The energy required for this drive, travelling at light speed and encompassing a spacecraft of 100 meters in radius, is on the order of hundreds of times the mass of the planet Jupiter. The energy savings would need to be drastic, of approximately 30 orders of magnitude, to be in range of modern nuclear fission reactors." He goes on to say: "Fortunately, several energy-saving mechanisms have been proposed in earlier research that can potentially lower the energy required by nearly 60 orders of magnitude." Lentz is currently in the early stages of determining if these methods can be modified, or if new mechanisms are needed, to bring the energy required down to what is currently possible.
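
A rough back-of-envelope check of those orders of magnitude, written as a short Python sketch (the factor of 300 standing in for "hundreds" and the one-year output of a 1 GW fission plant are illustrative assumptions, not figures from the paper):

import math

C = 2.998e8            # speed of light, m/s
M_JUPITER = 1.898e27   # mass of Jupiter, kg

# "Hundreds of times the mass of Jupiter", taken as mass-energy via E = m * c^2.
drive_energy = 300 * M_JUPITER * C**2        # roughly 5e46 joules

# Roughly one year of output from a large (1 GW electrical) fission plant.
plant_energy = 1e9 * 3600 * 24 * 365         # roughly 3e16 joules

gap = math.log10(drive_energy / plant_energy)
print(f"warp-drive energy  ~ {drive_energy:.1e} J")
print(f"fission plant-year ~ {plant_energy:.1e} J")
print(f"gap                ~ {gap:.0f} orders of magnitude")

On these assumptions the gap comes out at about 30 orders of magnitude, consistent with the figure Lentz quotes.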

Credit: 
University of Göttingen

Deforestation's effects on malaria rates vary by time and distance

Deforestation may cause an initial increase in malaria infections across Southeast Asia before leading to later decreases, a study published today in eLife suggests.

The results may help malaria control programs in the region develop better strategies for eliminating malaria infections and educating residents on how to protect themselves from infection.

Mosquitoes spread the malaria parasite to humans, causing infections that can be severe and sometimes deadly. In the area along the Mekong river in Southeast Asia, many residents hunt or harvest wood in the surrounding forests, which can increase their risk of infection. Yet recent outbreaks of malaria in the region have also been linked to deforestation.

"As countries in the region focus their malaria control and elimination efforts on reducing forest-related transmission, understanding the impact of deforestation on malaria rates is essential," says first author Francois Rerolle, Graduate Student Researcher at the University of California San Francisco (UCSF), US, who works within the UCSF Malaria Elimination Initiative.

To better understand the effects of deforestation on malaria transmission, Rerolle and colleagues examined both forest cover data and village-level malaria incidence data from 2013-2016 in two regions within the Greater Mekong Sub-region.

They found that in the first two years following deforestation activities, malaria infections increased in villages in the area, but then decreased in later years. This trend was mostly driven by infections with the malaria parasite Plasmodium falciparum. Deforestation in the immediate 1-10-kilometer radius surrounding villages did not affect malaria rates, but deforestation in a wider 30-kilometer radius around the villages did. The authors say this is likely due to the effect that wider deforestation can have on human behaviour. "We suspect that people making longer and deeper trips into the forest results in increased exposure to mosquitoes, putting forest-goers at risk," Rerolle explains.

Previously, studies on the Amazon in South America have found increased malaria infections in the first 6-8 years after deforestation, after which malaria rates fall. The difference in timing may be due to regional differences. The previous studies in the Amazon looked at deforestation driven by non-indigenous people moving deeper into the forest, while communities in the current study have long lived at the forest edges and rely on subsistence agriculture.

"Our work provides a more complete picture of the nuanced effects of deforestation on malaria infections," says senior author Adam Bennett, Program Lead at the UCSF Malaria Elimination Initiative. "It may encourage more in-depth studies on the environmental and behavioural drivers of malaria to help inform strategies for disease elimination."

Credit: 
eLife

Therapy sneaks into hard layer of pancreatic cancer tumor and destroys it from within

image: Andrew Lowy, MD, is the co-corresponding author of the study, professor of surgery at UC San Diego School of Medicine and chief of the Division of Surgical Oncology at Moores Cancer Center at UC San Diego Health.

Image: 
UC San Diego Health Sciences

Every 12 minutes, someone in the United States dies of pancreatic cancer, which is often diagnosed late, spreads rapidly and has a five-year survival rate of approximately 10 percent. Treatment may involve radiation, surgery and chemotherapy, though the cancer often becomes resistant to drugs.

Researchers at University of California San Diego School of Medicine and Moores Cancer Center, in collaboration with Sanford-Burnham-Prebys Medical Discovery Institute and Columbia University, demonstrated that a new tumor-penetrating therapy, tested in animal models, may enhance the effects of chemotherapy, reduce metastasis and increase survival.

The study, published online March 9, 2021 in Nature Communications, showed how a tumor-targeting peptide, called iRGD, can sneak inside the armor that the tumor built to protect itself and use the fibrous tissue as a highway to reach deeper inside, destroying the tumor from within.

The pancreas is a large gland located behind the stomach. It makes enzymes that aid digestion and hormones that regulate blood-sugar levels. Pancreatic ductal adenocarcinoma (PDAC) is a subtype of pancreatic cancer that is highly drug-resistant due, in part, to the hard, shell-like outer layer surrounding the tumor.

"This type of tumor is made up of a dense fibrous tissue that acts as a barrier to drugs trying to get through. Many drugs can reach the vessels of the tumor, but they are not able to get deep into the tissue, making treatment less effective, and that is one reason why this type of cancer is so challenging to treat," said Tatiana Hurtado de Mendoza, PhD, first author of the study and assistant project scientist at UC San Diego School of Medicine and Moores Cancer Center.

"Our study found that the tumor-penetrating peptide iRGD is able to use this fibrous network to deliver chemotherapy drugs deep into the tumor and be more effective."

The research team examined the microenvironment of PDAC tumors in a mouse model. They found that after targeting the tumor blood vessels, iRGD binds to high levels of β5 integrin, a protein produced by cells known as carcinoma-associated fibroblasts (CAFs) that produce much of the tumor's protective fibrous cover.

"We were able to closely replicate human disease in our mouse model and found that when iRGD was injected with chemotherapy in mice with high levels of β5 integrin, there was a significant increase in survival and a reduction in the cancer spreading to other organs in the body compared to chemotherapy alone. This could be a powerful treatment strategy to target aggressive pancreatic cancer," said Andrew Lowy, MD, co-corresponding author of the study, professor of surgery at UC San Diego School of Medicine and chief of the Division of Surgical Oncology at Moores Cancer Center at UC San Diego Health.

"What is also exciting about this finding is the iRGD therapy did not produce any additional side effects. This is critically important when considering treatments for patients."

The researchers said next steps include a national human clinical trial. They estimate the trial could begin in one year.

"The knowledge gained from our study has the potential to be directly applied to patient care. We also believe that the levels of β5 integrin within a pancreatic cancer could tell us which patients would benefit the most from iRGD-combination therapy," said Lowy.

Credit: 
University of California - San Diego

The neoliberal city needs to change, argues Concordia professor Meghan Joy

image: Meghan Joy: "The neoliberal urban model has had time to prove whether it works for all the people in a city. It is clear that it does not."

Image: 
Concordia University

What would a truly progressive city look like? A city that pays more than lip service to issues that directly affect low-income residents, seniors, marginalized communities and others whom neoliberal policies have seemingly left behind?

Meghan Joy, an assistant professor of political science, argues that urban studies, and particularly urban political scientists, should re-assess the concept of the progressive city. The once widely embraced notion fell out of favour over the past several decades as local politicians embraced neoliberal policies that she says prioritized wealth generation over liveability and accessibility for all city residents. In a new paper recently published in Urban Affairs Review, Joy and co-author Ronald K. Vogel of Ryerson University lay out a policy agenda for urban policy thinkers who believe it may be time to shift the thinking around how cities are run and for whose benefit.

"The neoliberal urban model has had time to prove whether it works for all the people in a city," Joy says. "It is clear that it does not, especially for vulnerable people or those living on low incomes."

Policy problems and progressive solutions

Joy believes many cities are at a point of crisis, especially in four key areas: housing, employment, transportation and climate change. The authors do not address issues of policing in this paper, though they do acknowledge that rethinking current approaches to crime and enforcement is essential to a progressive city policy agenda.

In their agenda, Joy and Vogel identify each area's major problem and offer progressive solutions.

They write that affordable housing often depends on incentives offered to developers who in turn wind up building more homes for residents of moderate rather than low income. Joy and Vogel point to creative solutions employed by cities as disparate as Vienna and Hong Kong: governments either own large amounts of housing stock directly or back non-market-driven developers to ensure low-income earners are not pushed out of their city.

Deindustrialization has had a major effect on the nature of employment in many cities, as well as their overall finances. Neoliberal policies have led to a surge in the service economy, but wage inequality and job precarity have led to increasing poverty and a squeezing of the middle class. The authors believe city governments should spend more on hiring employees to provide public services and rely more on in-house talent rather than contracting out to for-profit consultants.

Neoliberalism's effects on transportation include chronic underfunding and underservicing of areas that need public transit the most. The issue is closely tied to housing, with homes conveniently located near transit access points often priced well beyond the means of low-income earners. The authors call for the implementation of social equity transit planning to better serve disadvantaged communities, including subsidies and expanded access.

Finally, climate change has increased the probability and severity of natural disasters such as floods, hurricanes, winter storms and tornadoes, which often require mass evacuations or emergency assistance. Many low-income residents lack the ability to get out of harm's way even with advance warning and often lack relocation options. Joy and Vogel want to see options other than those relying on market forces and entrepreneurialism and push for policies that reduce the overall carbon footprint.

Grassroots movements growing

Joy does believe that grassroots resistance to neoliberalism is growing in cities and there is evidence of movements increasingly pushing on the levers of municipal power.

"We are certainly seeing a groundswell of movement building, especially with Black Lives Matter and the call to defund the police, and around COVID-19 and housing," she says.

"There is more visibility around the question of who we are thinking about when we make urban policy and who benefits in the city. We need to think about how to translate this movement building into an urban policy agenda."

Credit: 
Concordia University

Cochrane Review finds stopping smoking is linked to improved mental health

Evidence published in the Cochrane Library today will reassure people who want to stop smoking that quitting for at least 6 weeks may improve their mental wellbeing, by reducing anxiety, depression, and stress. People's social relationships are unlikely to suffer if they stop smoking.

Smoking is the world's leading cause of preventable illness and death. One in every two people who smoke will die of a smoking-related disease unless they quit. Some people believe that smoking helps reduce stress and other mental health symptoms, and that quitting smoking might make their mental health problems worse. People who smoke may also worry that stopping smoking will have a negative impact on their social lives and friendships.

The review found that people who stopped smoking for at least 6 weeks experienced less depression, anxiety, and stress than people who continued to smoke. People who quit also experienced more positive feelings and better psychological wellbeing. Giving up smoking did not have an impact on the quality of people's social relationships and it is possible that stopping smoking may be associated with a small improvement in social wellbeing.

The review summarises evidence from 102 observational studies involving over 169,500 people. The review authors combined the results from 63 of these studies that compared changes in mental health symptoms in people who stopped smoking with changes in people who continued to smoke. They also combined results from 10 studies that measured how many people developed a mental health disorder during the study period. The studies involved a wide range of people, including people with mental health conditions and people with long-term physical illnesses. The length of time the studies followed people varied, with the shortest being 6 weeks and some following people for up to 6 years. The certainty of the evidence ranged from very low to moderate.
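
For readers unfamiliar with how results from many studies are combined, the Python sketch below shows a generic inverse-variance (fixed-effect) pooling of standardized mean differences; the numbers are invented for illustration and this is not the review's actual data or its exact statistical model.

import math

# Hypothetical study results: standardized mean difference (SMD) in anxiety change
# for quitters versus continuing smokers, with its standard error. Negative values
# favour quitting. These figures are illustrative only.
studies = [(-0.30, 0.10), (-0.15, 0.08), (-0.25, 0.12), (-0.05, 0.09)]

# Generic inverse-variance (fixed-effect) pooling.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * smd for (smd, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled SMD = {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")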

The lead author of this Cochrane Review, Dr Gemma Taylor from the Addiction & Mental Health Group at the University of Bath, said, "We found stopping smoking was associated with small to moderate improvements in mood. The benefits of smoking cessation on mood seem to be similar in a range of people, and most crucially, there is no reason to fear that people with mental health conditions will experience a worsening of their health if they stop smoking. Our confidence in the precise size of the benefit is limited due to the way the studies were designed and future studies that can overcome those challenges will greatly strengthen the evidence about the impacts of smoking cessation on mental health."

Dr. Taylor continued, "Many people who smoke are concerned that quitting could disrupt their social networks, and lead to feelings of loneliness. People can be reassured that stopping smoking does not seem to have a negative impact on social quality of life. People may also be concerned that quitting is stressful. The evidence shows that stress is reduced in people who stop smoking and that there are likely longer-term benefits for people's mental health."

A team of researchers from the Universities of Bath, Birmingham and Oxford and from New York University worked together to produce this review.

Credit: 
Wiley

New lung cancer screening recommendation expands access but may not address inequities

image: "The revised U.S. Preventive Services Task Force's recommendations are sound and based on well-conceived evidence and modeling studies, but they alone are not enough, as we have seen limited uptake of the prior recommendations," Ethan Basch, MD, MSc, said. "Implementation will require broader efforts by payers, health systems and professional societies, and, in the future, a more tailored, individual risk prediction approach may be preferable."

Image: 
UNC Lineberger Comprehensive Cancer Center

CHAPEL HILL, NC -- Calling the U.S. Preventive Services Task Force's newly released recommendation statement to expand eligibility for annual lung cancer screening with low-dose computed tomography a step forward, UNC Lineberger Comprehensive Cancer Center researchers say future changes should address equity and implementation issues.

In an editorial published in JAMA, Louise M. Henderson, PhD, professor of radiology at UNC School of Medicine, M. Patricia Rivera, MD, professor of medicine at UNC School of Medicine, and Ethan Basch, MD, MSc, the Richard M. Goldberg Distinguished Professor in Medical Oncology and chief of oncology at the UNC School of Medicine, outlined their concerns and offered potential approaches to make the screening recommendation more inclusive of populations that have been historically underserved.

"The revised U.S. Preventive Services Task Force's recommendations are sound and based on well-conceived evidence and modeling studies, but they alone are not enough, as we have seen limited uptake of the prior recommendations," Basch said. "Implementation will require broader efforts by payers, health systems and professional societies, and, in the future, a more tailored, individual risk prediction approach may be preferable."

The task force has made two significant changes to the screening recommendation it issued in 2013: annual screening will now begin at age 50, instead of 55, and the smoking-intensity threshold has been reduced from a 30 pack-year to a 20 pack-year history. According to some estimates, these more inclusive criteria could expand the number of adults eligible for lung cancer screening by about 6.4 million, to roughly 14.5 million, an increase of approximately 81%.

Henderson, Rivera and Basch are encouraged that lung cancer screening will be available to more people, but they point out that expanding access alone won't reduce racial inequities, especially as measured by lung cancer deaths prevented and life-years gained.

It may be possible to counter this shortcoming, they said, by adding risk-prediction models that identify high-benefit individuals who do not meet USPSTF criteria. This could reduce or eliminate some, though not all, racial disparities, according to one study. Also, future research should explore risks such as family history of lung cancer and genetic susceptibility to develop risk assessment strategies that may identify individuals who never smoked and still have a high risk for lung cancer but currently are not eligible to be screened.

Financial barriers are also an issue. Expanding screening access to include people as young as 50 may lead to greater inequities for those who are enrolled in Medicaid, the state-based public health insurance program.

"Medicaid is not required to cover the USPSTF recommended screenings and even when screening is covered, Medicaid programs may use different eligibility criteria," Henderson said. She adds this is problematic because people who receive Medicaid are twice as likely to be current smokers than those with private insurance (26.3% compared to 11.1%), and they are disproportionately affected by lung cancer. "This is a significant issue, particularly in the nine states where Medicaid does not cover lung cancer screening."

Putting the screening recommendation into practice will be a substantial challenge, Rivera said. Primary care providers are critical to implementing the screening process because they initiate the conversation with their patients about the potential benefits and risk of lung cancer screening and make the screening referral. However, Rivera said many already have an overburdened workload, and it may be unrealistic to expect them to be able to spend the necessary time to have these complex conversations.

"A significant barrier to implementation of lung cancer screening is provider time. Many primary care providers do not have adequate time to have a shared decision-making conversation and to conduct a risk assessment," Rivera said. "Although a lung cancer screening risk model that incorporates co-morbidities and clinical risk variables may be the best tool for selecting high risk individuals who are most likely to benefit from screening, such a model requires input of additional clinical information, thereby increasing the time a provider will spend; the use of such a model in clinical practice has not been established."

Despite these limitations and challenges, the new recommendation can expand access to lung cancer screening, the researchers said in the editorial. "Beyond implementation challenges, the future of screening strategies lies in individualized risk assessment including genetic risk. The 2021 USPSTF recommendation statement represents a leap forward in evidence and offers promise to prevent more cancer deaths and address screening disparities. But the greatest work lies ahead to ensure this promise is actualized."

Credit: 
UNC Lineberger Comprehensive Cancer Center

NYU Abu Dhabi study predicts motion sickness severity

image: Dr. Bas Rokers, Associate Professor and Director of the Neuroimaging Center at NYU Abu Dhabi

Image: 
NYU Abu Dhabi

"It was clear that the greater an individual's sensitivity to motion parallax cues, the more severe the motion sickness symptoms," says lead NYU Abu Dhabi researcher

Fast facts:

The visual system is often studied in relative isolation, but it has clear connections to other components of the nervous system.

A notable example of this is motion sickness, which affects certain people much more severely than others.

Motion sickness is typically associated with traveling in cars, boats, and airplanes; however, discomfort or "cybersickness" can also arise with technology use, such as in virtual reality (VR).

Abu Dhabi, UAE, March 9, 2021: A new study led by Bas Rokers, head of the Rokers Vision Laboratory and NYUAD associate professor of psychology, explored why the severity of motion sickness varies from person to person by investigating sources of cybersickness during VR use.

In the new study, Variations in visual sensitivity predict motion sickness in virtual reality, published in the journal Entertainment Computing, Rokers and his team used VR headsets to simulate visual cues and present videos that induced moderate levels of motion sickness. They found that a person's ability to detect visual cues predicted the severity of motion sickness symptoms. Specifically, discomfort was tied to sensitivity to a particular sensory cue called motion parallax, which is defined as the relative movement of different parts of the environment.

A previously reported source of variability in motion sickness severity, gender, was also evaluated but not confirmed. The researchers conclude that previously reported gender differences may have been due to poor personalization of VR displays, most of which default to male settings.

These findings suggest a number of strategies to mitigate motion sickness in VR, including reducing or eliminating specific sensory cues, and ensuring device settings are personalized to each user. Understanding the sources of motion sickness, especially while using technology, not only has the potential to alleviate discomfort, but also to make VR technology a more widely accessible resource for education, job training, healthcare, and entertainment.

"As we tested sensitivity to sensory cues, a robust relationship emerged. It was clear that the greater an individual's sensitivity to motion parallax cues, the more severe the motion sickness symptoms," said Rokers. "It is our hope that these findings will help lead to the more widespread use of powerful VR technologies by removing barriers that prevent many people from taking advantage of its potential."

Credit: 
New York University