Tech

Human contact plays big role in spread of some hospital infections, but not others

An observational study conducted in a French hospital showed that human contact was responsible for 90 percent of the spread of one species of antibiotic-resistant bacteria to new patients, but less than 60 percent of the spread of a different species. These findings suggest that hand hygiene is key, but that additional measures are needed to fight multidrug-resistant infections. Audrey Duval of the Versailles Saint Quentin University and the Institut Pasteur in Paris, France, and colleagues present these results in PLOS Computational Biology.

People treated in hospitals and other health care settings are increasingly at risk of infection with multidrug-resistant bacteria. Many of these microbes produce enzymes called extended-spectrum β-lactamases (ESBLs), which make them resistant to antibiotics. Understanding how ESBL bacteria spread from person to person is key to developing effective prevention strategies.

In the new study, Duval and colleagues distributed wearable sensors to hundreds of patients and health care workers in a French hospital. Equipped with RFID tags, the sensors allowed the researchers to track patterns of human contact between patients over an eight-week period. Meanwhile, they systematically screened patients for ESBL-producing Escherichia coli and Klebsiella pneumoniae.

The scientists found that 90 percent of the spread of ESBL K. pneumoniae to new patients could be explained by direct or indirect contact with patients who had carried the same bacteria within the previous eight weeks; for ESBL E. coli, the figure was less than 60 percent. The findings suggest that contact-prevention strategies, primarily hand hygiene, can be highly effective in limiting transmission of ESBL K. pneumoniae. However, additional measures, such as environmental decontamination or more appropriate antibiotic use, may be necessary to prevent the spread of ESBL E. coli.
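In outline, the attribution works by asking whether each newly colonized patient can be linked, through the recorded contact network, to an earlier carrier of the same strain. The sketch below illustrates that idea with the networkx library and hypothetical patients; the study's actual statistical model is considerably more elaborate.

# Minimal sketch: estimate the fraction of new ESBL acquisitions that can be
# explained by direct or indirect contact with a prior carrier.
# The patient identifiers and contacts below are hypothetical illustrations,
# not data from the study.
import networkx as nx

# Undirected contact network built from RFID sensor records
contacts = nx.Graph()
contacts.add_edges_from([
    ("patient_A", "nurse_1"), ("nurse_1", "patient_B"),   # indirect contact A-B
    ("patient_C", "patient_D"),                           # direct contact C-D
    ("patient_E", "nurse_2"),                             # E has no link to a carrier
])

prior_carriers = {"patient_A", "patient_C"}          # colonized earlier in the study
new_cases = ["patient_B", "patient_D", "patient_E"]  # newly colonized patients

def explained_by_contact(case, carriers, graph):
    """A new case is 'explained' if any contact path links it to an earlier carrier."""
    return any(
        carrier in graph and case in graph and nx.has_path(graph, carrier, case)
        for carrier in carriers
    )

explained = sum(explained_by_contact(c, prior_carriers, contacts) for c in new_cases)
print(f"{100 * explained / len(new_cases):.0f}% of new cases explained by contact")
# -> 67% in this toy example; the study reports about 90% for K. pneumoniae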

The researchers suggest that the same kind of wearable-sensor analysis could be extended to other multidrug-resistant species. Investigation of more detailed genomic data could further illuminate how ESBL-producing bacteria spread.

"By combining digital epidemiology and rapid microbiological diagnostic tools, we may be entering a new era to understand and control the risk of hospital-acquired infection with multidrug-resistant bacteria," Duval says.

Credit: 
Institut Pasteur

Immortal quantum particles

image: Strong quantum interactions prevent quasiparticles from decaying.

Image: 
K. Verresen / TUM

As the saying goes, nothing lasts forever. The laws of physics confirm this: on our planet, all processes increase entropy, and thus molecular disorder. A broken glass, for example, never puts itself back together again.

Theoretical physicists at the Technical University of Munich (TUM) and the Max Planck Institute for the Physics of Complex Systems have discovered that things which seem inconceivable in the everyday world are possible on a microscopic level.

"Until now, the assumption was that quasiparticles in interacting quantum systems decay after a certain time. We now know that the opposite is the case: strong interactions can even stop decay entirely," explains Frank Pollmann, Professor for Theoretical Solid-State Physics at the TUM. Collective lattice vibrations in crystals, so-called phonons, are one example of such quasiparticles.

The concept of quasiparticles was coined by the physicist and Nobel prize winner Lev Davidovich Landau. He used it to describe collective states of many particles, or more precisely their interactions via electric or magnetic forces. Because of these interactions, many particles behave like a single one.

Numerical methods open up new perspectives

"Up until now, it wasn't known in detail which processes influence the fate of these quasiparticles in interacting systems," says Pollmann. "It is only now that we possess numerical methods with which we can calculate complex interactions, as well as computers powerful enough to solve these equations."

"The result of the elaborate simulation: quasiparticles do decay, but new, identical particles emerge from the debris," says the lead author, Ruben Verresen. "If this decay proceeds very quickly, an inverse reaction occurs after a certain time and the debris converges again. This process can recur endlessly, and a sustained oscillation between decay and rebirth emerges."

From a physical point of view, this oscillation is a wave transforming into matter and back, which is possible according to quantum-mechanical wave-particle duality. The immortal quasiparticles therefore do not violate the second law of thermodynamics: their entropy remains constant, and the decay has been halted.

The reality check

The discovery also explains phenomena that were baffling until now. Experimental physicists had measured that the magnetic compound Ba3CoSb2O9 is astonishingly stable. Magnetic quasiparticles called magnons are responsible for this. Other quasiparticles, rotons, ensure that helium, a gas at the earth's surface, becomes a liquid near absolute zero that can flow without resistance.

"Our work is purely basic research," emphasizes Pollmann. However, it is perfectly possible that one day the results will even allow for applications, for example the construction of durable data memories for future quantum computers.

Credit: 
Technical University of Munich (TUM)

A rapid, easy-to-use DNA amplification method at 37°C

image: A DNA strand (purple) primes exponential amplification of DNA (red), providing signals that direct light emission from DNA nanodevices.

Image: 
Organic and Biomolecular Chemistry

Scientists in Japan have developed a way of amplifying DNA on a scale suitable for use in the emerging fields of DNA-based computing and molecular robotics. By enabling highly sensitive nucleic acid detection, their method could improve disease diagnostics and accelerate the development of biosensors, for example, for food and environmental applications.

Researchers from Tokyo Institute of Technology (Tokyo Tech), Abbott Japan Co., Ltd, and the University of Electro-Communications, Japan, report a way to achieve million-fold DNA amplification and targeted hybridization[1] that works at body temperature (37°C/98.6°F).
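For a sense of scale, million-fold amplification corresponds to roughly 20 doublings of the target strand, since 2^20 is about 10^6. This is generic exponential-amplification arithmetic rather than a figure reported by the authors:

# Million-fold amplification corresponds to ~20 doublings, since 2**20 is about 1e6.
# Generic exponential-amplification arithmetic, not figures from the paper.
import math

fold_target = 1_000_000
doublings = math.ceil(math.log2(fold_target))
print(f"{fold_target:,}-fold amplification needs about {doublings} doublings "
      f"(2**{doublings} = {2**doublings:,})")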

The method, named L-TEAM (Low-TEmperature AMplification), is the result of more than five years of research and offers several advantages over traditional PCR[2], the dominant technique used to amplify DNA segments of interest.

With its easy-to-use, 'one-pot' design, L-TEAM avoids the need for the heating and cooling steps and specialized equipment usually associated with PCR. That makes it an efficient, inexpensive method that, importantly, avoids protein denaturation[3], thereby opening a new route to real-time analysis of living cells.

In their study published in Organic & Biomolecular Chemistry, the researchers introduced synthetic molecules called locked nucleic acids (LNAs) into the DNA strands, as these molecules are known to help achieve greater stability during hybridization.

The addition of LNA led to an unexpected, but beneficial, outcome. The team observed a reduced level of "leak" amplification, a type of non-specific amplification that has long been an issue in DNA amplification studies because it can lead to errors in disease diagnosis, that is, false positives.

"We were surprised to discover the novel effect of LNA in overcoming the common leak problem in DNA amplification reactions," says Ken Komiya, assistant professor at Tokyo Tech's School of Computing. "We plan to investigate the mechanisms behind leak amplification in detail and further improve the sensitivity and speed of L-TEAM."

In the near future, the method could be used to detect short nucleic acids such as microRNA for medical diagnostics. In particular, it could facilitate point-of-care testing and early disease detection. MicroRNAs are now increasingly recognized as promising biomarkers for cancer detection and may hold the key to uncovering many other aspects of human health and environmental science.

In addition, Komiya explains that L-TEAM paves the way to practical use of DNA computing and DNA-controlled molecular robotics. "The original motivation behind this work was the construction of a novel amplified module that is essential to build advanced molecular systems," he says. "Such systems could provide insights into the operational principle behind living things."

Credit: 
Tokyo Institute of Technology

Development of durable MTJ under harsh environment for STT-MRAM at 1Xnm technology node

image: Figure 1: (a) The developed MTJ structure in this study compared with (b) the conventional MTJ structure.

Image: 
Tohoku University

Researchers at Tohoku University have announced the development of a new magnetic tunnel junction, with which the team has demonstrated an extended retention time for digital information without an increase in active power consumption.

Non-volatile memories are essential constituents in integrated circuits because they offer low power consumption. Among proposed non-volatile memories, spin-transfer-torque magnetoresistive random access memory (STT-MRAM) has been intensively researched and developed because of its high read/write speed, low-voltage operation capability, and high endurance.

Currently, the application area of STT-MRAM is limited to consumer electronics. In order to use STT-MRAM in areas such as automotive and social infrastructure, it is vital to develop a magnetic tunnel junction (MTJ) with a high thermal stability factor, which determines the retention time for digital information, while keeping the power consumption low.

The research team, led by Professor Tetsuo Endoh, has developed a new magnetic tunnel junction with high reliability for STT-MRAM at the reduced dimensions of the 1Xnm technology node. To increase the thermal stability factor, it is necessary to increase the interfacial magnetic anisotropy originating at the CoFeB/MgO interface.

To increase the interfacial anisotropy, the research team has invented a structure with twice the number of CoFeB/MgO interfaces compared with a conventional one (Figs. 1a and 1b). Although the increase in the number of interfaces can enhance the thermal stability factor, it might also increase the writing current (the active power consumption) and degrade the tunnel magnetoresistance ratio of STT-MRAM cells, resulting in a lower reading operation frequency. The team has mitigated these effects by engineering the MTJ structure to keep the power consumption low and tunnel magnetoresistance ratio high.

The research team has demonstrated that the thermal stability factor can be increased by a factor of 1.5 - 2, without increasing the writing current and thus the active power consumption (Figs. 2a and 2b) or degrading the tunnel magnetoresistance ratio.
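The practical payoff of a higher thermal stability factor (delta) is exponential: in the standard thermal-activation picture, retention time scales roughly as tau = tau0 * exp(delta), with an attempt time tau0 of about a nanosecond. A back-of-the-envelope sketch with illustrative delta values, not the team's measured figures:

# Rough illustration of why a 1.5-2x higher thermal stability factor matters:
# retention time grows exponentially, tau ~ tau0 * exp(delta).
# The delta values below are illustrative, not measurements from this study.
import math

tau0 = 1e-9  # attempt time in seconds (commonly taken to be about 1 ns)
seconds_per_year = 3600 * 24 * 365

for delta in (40, 60, 80):  # e.g. a baseline versus 1.5x and 2x improvements
    tau_years = tau0 * math.exp(delta) / seconds_per_year
    print(f"delta = {delta:2d}  ->  retention time ~ {tau_years:.1e} years")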

Therefore, the research team is optimistic that this new MTJ technology can widen the application areas of STT-MRAM at the 1Xnm technology node to harsh environments such as automotive and social infrastructure. The team has also adopted the same material set as that used in STT-MRAM currently in mass production, retaining compatibility with the existing process and simultaneously achieving high cost-effectiveness for mass production.

Credit: 
Tohoku University

One class in all languages

Advances in communication technology have had a major impact on all sorts of industries, but perhaps nowhere more than in education. Now anyone around the world can listen live to a Nobel Prize laureate's lecture or earn credits from the most reputable universities with nothing more than internet access. However, the information to be gained from watching and listening online is lost if the audience cannot understand the lecturer's language. Scientists at the Nara Institute of Science and Technology (NAIST), Japan, presented a machine learning solution to this problem at the 240th meeting of the Special Interest Group of Natural Language Processing, Information Processing Society of Japan (IPSJ SIG-NL).

Machine translation systems have made it remarkably simple for someone to ask for directions to their hotel in a language they have never heard or seen before. The systems sometimes make amusing and innocent errors, but overall achieve coherent communication, at least for short exchanges usually only a sentence or two long. For a presentation that can extend past an hour, such as an academic lecture, they are far less robust.

"NAIST has 20% foreign students and, while the number of English classes is expanding, the options these students have are limited by their Japanese ability," explains NAIST Professor Satoshi Nakamura, who led the study.

Nakamura's research group acquired 46.5 hours of archived lecture videos from NAIST with their transcriptions and English translations, and developed a deep learning-based system to transcribe Japanese lecture speech and to translate it into English. While watching the videos, users would see subtitles in Japanese and English that matched the lecturer's speaking.

One might expect the ideal output would be simultaneous translations that could be done with live presentations. However, live translations limit the processing time and thus the accuracy.

"Because we are putting videos with subtitles in the archives, we found better translations by creating subtitles with a longer processing time," he says.

The archived footage used for the evaluation consisted of lectures on robotics, speech processing and software engineering. Interestingly, the word error rate in speech recognition correlated with disfluency in the lecturers' speech. Another factor contributing to the different error rates was how long a lecturer spoke without pausing. The corpus used for training was still insufficient and needs to be expanded for further improvements.
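The word error rate is the standard measure of speech recognition quality: the minimum number of word substitutions, deletions and insertions needed to turn the recognized text into the reference transcript, divided by the number of words in the reference. A minimal sketch of that calculation, for illustration only rather than the group's evaluation code:

# Minimal word error rate (WER) sketch: word-level edit distance divided by
# the length of the reference. Illustration only, not the group's evaluation code.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the lecture starts at ten", "the lecture start at ten"))  # 0.2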

"Japan wants to increase its international students and NAIST has a great opportunity to be a leader in this endeavor. Our project will not only improve machine translation, it will also bring bright minds to the country," he continued.

Credit: 
Nara Institute of Science and Technology

New research on the prevalence of JUUL use and awareness amongst US youth aged 13 to 17

Within the United States there has been mounting concern over the reported increase in the use of e-cigarettes by young people. In widespread media reporting, one particular product (the JUUL vaporizer) has been characterized as largely responsible for that increase. At the Global Forum on Nicotine, to be held in Warsaw on 13th June, researchers from the Centre for Substance Use Research in Glasgow will outline the results of research assessing the extent of JUUL use and awareness amongst a representative sample of 13 to 17 year olds in the U.S.

The researchers found that approximately 45.5% of 15 to 17 year olds in the U.S. and 29.1% of 13 to 14 year olds had heard of or seen a JUUL. Amongst the 15 to 17 year olds surveyed, 7.6% had used a JUUL in the past and 4.0% had done so within the last 30 days. Amongst the 13 to 14 year olds surveyed, 1.5% had used a JUUL in the past and 0.8% had done so in the last 30 days.

Any level of e-cigarette use by teens is of concern; however, efforts aimed at tackling such use need to be based on accurate assessments of prevalence. The research by the Glasgow team shows that, at present, JUUL use is less prevalent amongst young people in the U.S. than many media commentators have been suggesting. However, e-cigarette use by young people needs to be measured on a regular basis to assess the effectiveness of interventions aimed at reducing it.

Credit: 
Centre for Substance Use Research

Researchers take two steps toward green fuel

image: Researchers designed a two-step process to break down rice straw into sugars for fuel

Image: 
Figure adapted from <em>Ind. Eng. Chem. Res.</em> 2019 58 (14), 5686-5697. Copyright © 2019 American Chemical Society

An international collaboration led by scientists at Tokyo University of Agriculture and Technology (TUAT), Japan, has developed a two-step method to more efficiently break down carbohydrates into their single sugar components, a critical process in producing green fuel.

The researchers published their results on April 10th in the American Chemical Society journal Industrial & Engineering Chemistry Research.

The breakdown process is called saccharification. The single sugar components produced, called monosaccharides, can be fermented into bioethanol or biobutanol, alcohols that can be used as fuel.

"For a long time, considerable attention has been focused on the use of homogeneous acids and enzymes for saccharification," said Eika W. Qian, paper author and professor in the Graduate School of Bio-Applications and Systems Engineering at the Tokyo University of Agriculture and Technology in Japan. "Enzymatic saccharification is seen as a reasonable prospect since it offers the potential for higher yields and lower energy costs, and it is more environmentally friendly."

The use of enzymes to break down the carbohydrates can be hindered, however, especially in practical biomass such as rice straw. A byproduct of the rice harvest, rice straw consists of three complex carbohydrates: starch, hemicellulose and cellulose. Enzymes cannot reach hemicellulose or cellulose, due to their cell wall structure and surface area, among other characteristics. These carbohydrates must be pre-treated to become receptive to enzymatic activity, which can be costly.

One answer to the cost and inefficiency of enzymes is the use of solid acid catalysts, which are acids that cause chemical reactions without dissolving and becoming a permanent part of the reaction. They're particularly appealing because they can be recovered after saccharification and reused.

Still, it's not as easy as swapping the enzymes for the acids, according to Qian, because the carbohydrates are not uniform. Hemicellulose and starch degrade at 180 degrees Celsius and below, and if the resulting components are heated further, the sugars produced decompose and are converted to other byproducts. Degradation of cellulose, on the other hand, only happens at temperatures of 200 degrees Celsius and above.

That's why, in order to maximize the resulting yield of sugar from rice straw, the researchers developed a two-step process - one step for the hemicellulose and another for the cellulose. The first step requires a gentle solid acid at low temperatures (150 degrees Celsius and below), while the second step consists of harsher conditions, with a stronger solid acid and higher temperatures (210 degrees Celsius and above).

Overall, the two-step process not only proved effective, it also produced about 30 percent more sugars than traditional one-step processes.

"We are now looking for a partner to evaluate the feasibility of our two-step saccharification process on rice straw and other materials such as wheat straw and corn stover in a pilot unit," Qian said. "Our ultimate goal is to commercialize our process to manufacture monosaccharides from this type of material in the future."

Credit: 
Tokyo University of Agriculture and Technology

Scientists investigate climate and vegetation drivers of terrestrial carbon fluxes

image: This is a photo of rainforest with a positive net carbon assimilation rate in Xishuangbanna, China.

Image: 
Shutao Chen

A better understanding of terrestrial flux dynamics will come from elucidating the integrated effects of climate and vegetation constraints on gross primary productivity (GPP), ecosystem respiration (ER), and net ecosystem productivity (NEP), according to Dr. Shutao Chen, Associate professor at Nanjing University of Information Science and Technology.

Dr. Chen and his team--a group of researchers from the Jiangsu Key Laboratory of Agricultural Meteorology/School of Applied Meteorology of Nanjing University of Information Science and Technology, the College of Resources and Environmental Sciences of Nanjing Agricultural University, and the Climate Center of Anhui Weather Bureau, China--have had their findings published in Advances in Atmospheric Sciences, and the study is featured on the cover of the July issue of the journal.

"The terrestrial carbon cycle plays an important role in global climate change, but the vegetation and environmental drivers of carbon fluxes are poorly understood. Many more data on carbon cycling and vegetation characteristics in various biomes (e.g., forest, grassland, wetland) make it possible to investigate the vegetation drivers of terrestrial carbon fluxes," says Dr. Chen.

"We established a global dataset with 1194 available data across site-years including GPP, ER, NEP, and relevant environmental factors to investigate the variability in GPP, ER and NEP, as well as their covariability with climate and vegetation drivers. The results indicated that both GPP and ER increased exponentially with the increase in MAT [mean annual temperature] for all biomes. Besides MAT, AP [annual precipitation] had a strong correlation with GPP (or ER) for non-wetland biomes. Maximum LAI [leaf area index] was an important factor determining carbon fluxes for all biomes. The variations in both GPP and ER were also associated with variations in vegetation characteristics," states Dr. Chen.

"The model including MAT, AP and LAI explained 53% of the annual GPP variations and 48% of the annual ER variations across all biomes. The model based on MAT and LAI explained 91% of the annual GPP variations and 93% of the annual ER variations for the wetland sites. The effects of LAI on GPP, ER or NEP highlighted that canopy-level measurement is critical for accurately estimating ecosystem-atmosphere exchange of carbon dioxide."

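Relationships of this kind, with fluxes rising exponentially with mean annual temperature and modulated by precipitation and leaf area index, can be fitted by ordinary least squares on log-transformed fluxes. Below is a minimal sketch with made-up site-year values, not the study's dataset or its exact model:

# Minimal sketch of the kind of climate/vegetation regression described above:
# ln(GPP) modeled as a linear function of mean annual temperature (MAT),
# annual precipitation (AP) and maximum leaf area index (LAI).
# The numbers are made up for illustration; they are not the study's data.
import numpy as np

# Hypothetical site-year records: MAT (deg C), AP (mm), max LAI, GPP (g C m-2 yr-1)
mat = np.array([2.0, 8.0, 12.0, 18.0, 24.0, 26.0])
ap = np.array([400, 650, 800, 1100, 1600, 2100])
lai = np.array([1.5, 2.5, 3.0, 4.5, 5.5, 6.0])
gpp = np.array([350, 800, 1100, 1900, 2800, 3100])

# Design matrix with an intercept; an exponential MAT response is linear in ln(GPP)
X = np.column_stack([np.ones_like(mat), mat, ap, lai])
coeffs, *_ = np.linalg.lstsq(X, np.log(gpp), rcond=None)

predicted = np.exp(X @ coeffs)
r2 = 1 - np.sum((gpp - predicted) ** 2) / np.sum((gpp - gpp.mean()) ** 2)
print("coefficients (intercept, MAT, AP, LAI):", np.round(coeffs, 4))
print("variance explained:", round(r2, 2))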
"This synthesis study highlights that the responses of ecosystem-atmosphere exchange of CO2 to climate and vegetation variations are complex, which poses great challenges to models seeking to represent terrestrial ecosystem responses to climatic variation," he adds.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

'Power shift' needed to improve gender balance in energy research, report says

Women still face significant barriers in forging successful and influential careers in UK energy research, a new high-level report has revealed.

A team of experts from the University of Exeter's Energy Policy Group has analysed gender balance within the crucial field of energy research and spoken to female researchers about their experiences of academic life. The study, launched today (14th June 2019), sets out how research funders and universities can ensure female talent and expertise are mobilised in transforming our energy systems.

The report is particularly timely, as the UK parliament has declared a climate emergency and the government has committed to legislate for a 2050 net-zero greenhouse gas emissions target. It is clear that energy research needs to harness 100 per cent of the available talent in order to meet the challenge of rapidly decarbonising energy systems.

The study revealed that women are still significantly under-represented in energy research and that application rates from women are low. It also found that grants applied for by and awarded to women tend to be of smaller value, although when they do apply, female academics are equally, and sometimes more, likely to be funded than male academics.

The report also highlighted the 'significant drop-off' between the number of female PhD students and funded researchers - meaning the sector loses a substantial pool of potential talent at an early stage.

The research presents four key ways in which funders and universities can work together to improve gender balance: look at the data, fund more women, stimulate career progression for female energy academics, and build on what's already working.

Jess Britton, a Postdoctoral Research Fellow at the University of Exeter and co-author of the report, said: "Progress on gender balance in research has been too slow for too long, but we think now is the time to bring together action across funders and universities to ensure that female talent is capitalised on. Taking action across the funding, institutional and systemic issues we identify could drive a real shift in inclusion in the sector."

The new report, commissioned by the UK Energy Research Centre (UKERC) and funded by the Engineering and Physical Sciences Research Council (EPSRC), saw the researchers speak to 59 female academics conducting energy research across various disciplines, institutions and career stages. They also analysed available data on gender and energy research funding.

Crucially, interviews with the researchers unearthed an array of issues that were felt to be holding women back from career progression - including the detrimental impact of part-time work or maternity leave, and inherent institutional and funding bias towards established, male academics.

While the report recognised that since 2017 there has been some progress in the gender balance of Peer Review Panel Members and small increases in awards granted to female researchers, progress has remained slow.

The study suggests that any progress should be accompanied by systemic change within the institutional structures and cultural environment of institutions involved with energy research.

Jim Watson, Director of UKERC, added: "This report shows that there is an urgent need to address the poor gender balance within the UK energy research community - particularly with respect to leadership of grants and career progression.

"It not only reveals the extent of the problem with new evidence, but makes a series of practical recommendations that should be required reading for funders and universities alike."

The research identified four key ways in which UKRI, other funders and universities can work to improve gender balance. They are:

Look at the data - There remain significant difficulties in accessing meaningful data on gender balance in energy research. Data should be published, used to set targets, monitor progress and provide annual updates. The report also suggested using quantitative and qualitative data to identify key intervention points, speaking to more female energy academics to identify biases and barriers, and continuing to improve gender balance in funding review processes.

Fund more women - the report identified that funding structures can be a barrier, and that both part-time working and career breaks are perceived to slow progress. It suggests that the assessment of part-time working and maternity leave needs to be standardised across funder eligibility criteria and in the review process. It also identified that a lack of diversity of funding types impacts on women, and suggested trialling innovative approaches to allocating funding and supporting early career researchers.

Stimulate career progression for female energy academics - The report highlighted the need to acknowledge and take action on the individualistic, long-hours culture of academia and to overhaul existing institutional structures and cultures. Early career stages are often characterised by precarious fixed-term contracts and over-reliance on quantitative measures of progress. It also recommended building suitable training, mentoring and support networks to help more women progress and to ensure the visibility of female researchers.

Build on what is working - The study recommended identifying key points of engagement to build gender balance: combine specific targeted actions, such as UKRI and university frameworks and targeted funding initiatives, with long-term action on structural issues that promote cultural change in our institutions. It also identified the need to ensure equality of voice - so that female academic voices are heard.

Alison Wall, Deputy Director for Equality, Diversity and Inclusion at EPSRC said: "We welcome this report, its findings and recommendations. Many of the issues raised are ones we recognise more widely in our research community.

"Enhancing diversity and inclusion is one of the priorities in our new Delivery Plan. For example, we plan to make further progress on embedding EDI into the grant application process, developing our peer review processes, provision of further data and increased flexibility in our funding."

A copy of the report and the full list of recommendations can be found here: https://geography.exeter.ac.uk/media/universityofexeter/schoolofgeography/pdfs/Power_Shift.pdf

Credit: 
University of Exeter

Patients at a reduced risk of venous thromboembolism and persistent pain after partial versus total knee replacement

Annual European Congress of Rheumatology
(EULAR 2019)
Madrid, Spain, 12-15 June 2019

Madrid, Spain, 13 June 2019: The results of a study presented today at the Annual European Congress of Rheumatology (EULAR 2019) demonstrate reduced risk of venous thromboembolism and persistent pain, but increased risk of revision in partial versus total knee replacement in patients with osteoarthritis.1

In severe knee osteoarthritis, there are two main types of surgical intervention: partial or total knee replacement. In partial knee replacement, only the part of the knee affected by osteoarthritis is replaced, whereas in total knee replacement the entire joint is replaced. Although partial knee replacement has been associated with significant advantages, high rates of revision have been reported; revision is where implant components are removed, added or replaced. While partial knee replacement is cheaper than total knee replacement, there is uncertainty as to which surgery is better for patients. This is reflected in the variable use of partial knee replacement: in the UK, for example, 50% of patients are eligible for either procedure, but less than 10% receive a partial knee replacement.

"Our study clearly demonstrates significant short-term advantages of partial knee replacement over total knee replacement and although the long-term risk of revision is higher for partial knee replacement, this is likely, at least partly, explained by a greater willingness to revise a partial knee replacement", said Edward Burn, DPhil student, Centre for Statistics in Medicine, University of Oxford, United Kingdom. "The results of our study based on real-world data will complement those from a forthcoming randomised controlled trial comparing the two procedures, the Total Or Partial Knee Arthroplasty Trial (TOPKAT)".

The study by Mr Burn and colleagues from across Europe and the United States, replicated the design of the TOPKAT trial in real-world data and included 32,379 and 250,377 patients who received partial or total knee replacement respectively. They found partial knee replacement is associated with a 25-50% reduction in the 60-day risk of venous thromboembolism after surgery, and a 15-30% lower risk of persistent pain after surgery. However, partial knee replacement was also associated with an increased risk of revision, with the five-year risk of revision increasing from around 2.5-5% for total knee replacement to 5-7.5% for partial knee replacement.1

"There is a lack of clinical consensus on the profile of patients with osteoarthritis suitable for partial versus total knee replacement," said Professor Hans Bijlsma, President, EULAR. "We welcome these data as they will help inform both patients and physicians to support an individualised approach to care."

This multi-database propensity-score matched cohort study included data from four US claims databases (IBM MarketScan® Commercial Database (CCAE), IBM MarketScan® Medicare Supplemental Database (MDCR), Optum® de-identified Clinformatics Datamart Extended - Date of Death (Optum), and Pharmetrics) and one UK primary care electronic medical record database (THIN). All people aged 40 years or older at the time of first knee replacement surgery were included and followed for up to five years. Outcomes included short-term (60-day) post-operative complications (infection, venous thromboembolism, mortality, readmission), opioid use in the three to 12 months post-surgery as a proxy for persistent pain, and five-year revision risk. Propensity score matching (up to 1:10) was used to control for all available confounders, while negative control outcomes and calibration were used to minimise the impact of residual confounding.
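Propensity-score matching of this kind pairs each partial knee replacement patient with up to ten total knee replacement patients who had a similar estimated probability of receiving the partial procedure, given their observed characteristics. The following is a simplified sketch using scikit-learn and synthetic data; the study's actual pipeline, covariate set and calibration steps are far more extensive.

# Simplified propensity-score matching sketch (1:k nearest neighbour on the
# estimated probability of receiving partial knee replacement). Synthetic data;
# the study used far richer covariates, negative controls and calibration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(68, 9, n)
bmi = rng.normal(29, 5, n)
treated = rng.random(n) < 1 / (1 + np.exp(-(0.04 * (age - 68) - 0.05 * (bmi - 29) - 2)))

X = np.column_stack([age, bmi])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]  # propensity scores

k = 10  # match each treated patient to up to k untreated patients (with replacement)
treated_idx = np.where(treated)[0]
control_idx = np.where(~treated)[0]
control_ps = ps[control_idx]

matches = {}
for i in treated_idx:
    nearest = control_idx[np.argsort(np.abs(control_ps - ps[i]))[:k]]
    matches[i] = nearest.tolist()

print(f"{len(matches)} treated patients matched to {k} controls each")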

Abstract number: OP0174

Credit: 
European Alliance of Associations for Rheumatology (EULAR)

Using data to decide when to transfer patients by medical helicopter

CLEVELAND--The increased use of medical helicopters over the last half-century has saved countless lives by quickly getting patients from the scene of trauma to the emergency room (ER) within the so-called "golden hour."

But a growing number of medical experts contend that emergency helicopters may be overused in some transfer situations. Their concern: patients get stuck with an exorbitant cost for a service that may not have been necessary and isn't fully covered by their insurance.

Now, a researcher at the Frances Payne Bolton School of Nursing at Case Western Reserve University is leading a study he believes will support what he says is a much-needed change in how medical helicopters are used--especially for transfers between hospitals.

"For true emergencies, it is quicker and better to transport someone by air, but that's not the majority of the transfers being made," said researcher Andrew Reimer, an assistant professor and longtime flight nurse who has made hundreds of emergency flights before and during his nursing career at Case Western Reserve.

The ability to quickly move patients by medical helicopter is especially vital in what is known as the golden hour--that first hour after a traumatic injury, considered the most critical for successful emergency treatment.

"I have moved a lot of people who didn't necessarily benefit from moving, and many of us had felt it was too automatic to just make the helicopter transfer, but we didn't have the numbers to back it up," he said. "Now we do, so it's time to re-imagine the way we do non-time-sensitive transports."

Reimer said most previous studies of the issue were "confounded by not having pre-transfer data about the patients." However, an electronic medical record (EMR) dataset developed by Reimer with Damon Kralovic, medical director of critical care transport at Cleveland Clinic, helped him advance existing research by providing detailed medical data on thousands of patients before, during and after they were transferred by helicopter from one hospital to another.

Reimer and his nursing and computer science students are feeding that information into a computer algorithm they had built to identify which patients actually need the speed of a helicopter transfer--and which could either stay put or be transferred by ambulance.

They hope to turn that information into a computer-generated checklist to determine which patients will benefit from an air transfer. Using data such as age, pre-existing illnesses and key vital signs would place patients in categories about their "very specific risk of mortality, based on every possible combination in the data," he said.
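In spirit, such a checklist is a risk model that maps pre-transfer variables to a predicted risk and then buckets patients into transport recommendations. The sketch below is deliberately simplified, with invented variables, weights and thresholds; it is not the algorithm built by Reimer's team.

# Deliberately simplified triage sketch: score a patient from pre-transfer EMR
# variables and bucket them into transport recommendations. The variables,
# weights and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    systolic_bp: int          # mmHg
    gcs: int                  # Glasgow Coma Scale, 3-15
    chronic_conditions: int   # count of pre-existing illnesses

def mortality_risk_score(p: Patient) -> float:
    score = 0.0
    score += 0.02 * max(p.age - 50, 0)          # older patients score higher
    score += 0.03 * max(90 - p.systolic_bp, 0)  # hypotension
    score += 0.05 * (15 - p.gcs)                # depressed consciousness
    score += 0.04 * p.chronic_conditions
    return score

def transport_recommendation(p: Patient) -> str:
    risk = mortality_risk_score(p)
    if risk >= 1.0:
        return "helicopter transfer"
    if risk <= 0.3:
        return "ground ambulance"
    return "clinical review (the middle group)"

print(transport_recommendation(Patient(age=45, systolic_bp=120, gcs=15, chronic_conditions=1)))
print(transport_recommendation(Patient(age=78, systolic_bp=80, gcs=9, chronic_conditions=3)))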

Eventually, Reimer said, this information could be included in each patient's EMR to provide guidance on whether he or she would benefit from an air transfer.

"There are obvious cases on both ends, but this guide helps us decide on a whole group of patients who are sort of in the middle," he said. "This data can help us know ahead of time who will benefit--and who won't."

Reimer recently wrote about the methods behind his current project in the journal Biomedical Informatics Insights. The ongoing work is being supported by a $500,000 National Institutes of Health grant.

His work builds on previous research, which has considered "overuse of helicopter transport (of) the minimally injured," and follows a spate of news reports over the last several years, from a story in The Atlantic in 2016 and a National Public Radio report the same year to, more recently, a Consumer Reports story documenting a spike in patient complaints.

"There's no doubt that this is becoming a hot-button topic," Reimer said, referencing an earlier 2015 New York Times report, "Air Ambulances Offer a Lifeline, and Then a Sky-High Bill."

"Let's face it: For many, many patients, a $5,000 ambulance trip is better than a $50,000 helicopter ride," he said. "So, if we can make better, data-driven decisions about who should be transferred by air and when, it will save money--and save the helicopter resource for when it is needed most."

Credit: 
Case Western Reserve University

The app teaching anorexics to eat again

video: Control of eating behavior using the Mandometer app.

Image: 
Copyright © 2018, <i>Journal of Visualized Experiments</i>

Swedish scientists say that eating disorders should be considered just that - eating disorders, rather than mental disorders. The proof, they say, is in the eating.

"Anorexic patients can normalize their eating rate by adjusting food intake to feedback from a smartphone app," says Professor Per Sodersten, lead author of an article in Frontiers in Neuroscience defending his pioneering method. "And in contrast to failing standard treatments, most regain a normal body weight, their health improves, and few relapse."

The approach is based on the theory that slow eating and excessive physical exertion, both hallmarks of anorexia, are evolutionarily conserved responses to short food supply that can be triggered by dieting - and reversed by practicing normal eating.

Which came first: the diet or the anorexia?

Attempts to treat anorexia as a mental illness have largely failed, claim the authors.

"The standard treatment worldwide, cognitive behavioral therapy (CBT), targets cognitive processes thought to maintain the disorder. The rate of remission from eating disorders is at most 25% one year after CBT, with unknown outcomes in the long-term. Psychoactive drugs have proven even less effective."

According to Sodersten, we need to flip the perspective: to target eating behaviors that maintain dysfunctional cognitive processes.

"This new perspective is not so new: nearly 40 years ago, it was realized that the conspicuous high physical activity of anorexia is a normal, evolutionarily conserved response - i.e., foraging for food when it is in short supply - that can be triggered by dietary restriction.

"In striking similarity to human anorexics, rats and mice given food only once a day begin to increase their running activity and decrease their food intake further to the point at which they lose a great deal of body weight and can eventually die."

More recently, the theory has been elaborated and validated by studies of brain function.

"We find that chemical signaling in the starved brain supports the search for food, rather than eating itself," reports Sodersten.

How to eat

To prove that the evolutionary perspective works in practice, Sodersten and his team have put their money where their (patient's) mouth is. Their private clinics - which reinvest 100% of profits into research and development - are now the largest provider of eating disorders services in Sweden.

"We first proposed teaching anorexics to eat back in 1996. At the time, it was thought that this was misplaced and even dangerous; today, no-one can treat patients with eating disorders in the Region of Stockholm without a program for restoring their eating behavior."

At the Mandometer clinics, the control of eating behavior is outsourced to a machine that provides feedback on how quickly to eat.

"Subjects eat food from a plate that sits on a scale connected to their smartphone. The scale records the weight loss of the plate during the meal, and via an app creates a curve of food intake, meal duration and rate of eating," explains Sodersten. "At regular intervals, a rating scale appears on the screen and the subject is asked to rate their feeling of fullness."

"A reference curve for eating rate and a reference curve for the feeling of fullness are also displayed on the screen of the smartphone. The subject can thus adapt their own curves in real time to the reference curves, which are based on eating behavior recorded in healthy controls."

Through this feedback, patients learn to visualize what normal portions of food look like and how to eat at a normal rate.
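The core of the feedback loop is straightforward: repeated plate weighings give cumulative intake over time, the slope of that curve is the eating rate, and the app compares it against a reference curve from healthy eaters. Below is a bare-bones sketch of that comparison with illustrative numbers; it is not the Mandometer software itself.

# Bare-bones sketch of the Mandometer-style feedback idea: compare cumulative
# food intake (from repeated plate weighings) against a reference curve.
# All numbers are illustrative.

meal_duration_min = 15
reference_total_g = 300  # healthy controls eat about 300 g over 15 minutes

def reference_intake(t_min: float) -> float:
    """Linear reference curve: grams eaten by time t."""
    return reference_total_g * t_min / meal_duration_min

# (time in minutes, plate weight in grams) from the connected scale
plate_readings = [(0, 350), (3, 330), (6, 318), (9, 310), (12, 305)]
initial_weight = plate_readings[0][1]

for t, weight in plate_readings[1:]:
    eaten = initial_weight - weight
    gap = reference_intake(t) - eaten
    advice = "eat a little faster" if gap > 20 else "pace is fine"
    print(f"t={t:2d} min: eaten {eaten:3d} g, reference {reference_intake(t):5.0f} g -> {advice}")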

Satisfying results

The method has now been used to treat over 1500 patients to remission by practicing eating.

"The rate of remission is 75% after, on average, one year of treatment; the rate of relapse is 10% over five years of follow-up, and no patient has died."

This appears to be a vast improvement compared to the current best standard treatment of CBT. All the more so, considering that overall Sodersten's patients started off sicker than average.

"The difference in outcome is so big that, according to our medical statistician, a randomized control trial [RCT] is now redundant. Nevertheless, we invite a head-to-head RCT by independent researchers - so far, there are no takers."

Credit: 
Frontiers

Small currents for big gains in spintronics

image: This diagram shows how magnetization reverses in a GaMnAs crystal.

Image: 
© 2019 Tanaka-Ohya Laboratory

UTokyo researchers have created an electronic component that demonstrates functions and abilities important to future generations of computational logic and memory devices. It is between one and two orders of magnitude more power efficient than previous attempts to create a component with the same kind of behavior, which could help realize developments in the emerging field of spintronics.

If you're a keen technophile and like to keep up to date with current and future developments in the field of computing, you might have come across the emerging field of spintronic devices. In a nutshell, spintronics explores the possibility of high-performance, low-power components for logic and memory. It's based around the idea of encoding information into the spin -- a property related to angular momentum -- of an electron, rather than by using packets of electrons to represent logical bits, 1s and 0s.

One of the keys to unlock the potential of spintronics lies in the ability to quickly and efficiently magnetize materials. University of Tokyo Professor Masaaki Tanaka and colleagues have made an important breakthrough in this area. The team has created a component -- a thin film of ferromagnetic material -- the magnetization of which can be fully reversed with the application of very small current densities. These are between one and two orders of magnitude smaller than current densities required by previous techniques, so this device is far more efficient.
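Because resistive (Joule) dissipation scales with the square of the current density, a one-to-two order-of-magnitude reduction in switching current density translates, all else being equal, into roughly a hundredfold to ten-thousandfold reduction in switching dissipation. A back-of-the-envelope illustration with generic numbers, not measurements from the device:

# Back-of-the-envelope: Joule dissipation scales as J^2, so reducing the
# switching current density by 10x-100x cuts dissipation by a factor of 1e2-1e4.
# Generic illustration only; not measured values from the UTokyo device.
baseline_j = 1.0  # normalized current density of a conventional approach

for reduction in (10, 100):
    j = baseline_j / reduction
    power_ratio = (j / baseline_j) ** 2
    print(f"J reduced {reduction}x -> relative switching dissipation {power_ratio:.0e}")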

"We are trying to solve the problem of the large power consumption required for magnetization reversal in magnetic memory devices," said Tanaka. "Our ferromagnetic semiconductor material -- gallium manganese arsenide (GaMnAs) -- is ideal for this task as it is a high-quality single crystal. Less ordered films have an undesirable tendency to flip electron spins. This is akin to resistance in electronic materials and it's the kind of inefficiency we try to reduce."

The GaMnAs film the team used for their experiment is special in another way, too. It is especially thin thanks to a fabrication process known as molecular beam epitaxy. With this method, devices can be constructed more simply than in analogous experiments that use multiple layers rather than single-layer thin films.

"We did not expect that the magnetization can be reversed in this material with such a low current density; we were very surprised when we found this phenomenon," concludes Tanaka. "Our study will promote research of material development for more efficient magnetization reversal. And this in turn will help researchers realize promising developments in spintronics."

Credit: 
University of Tokyo

One-fifth of US surgeons still overusing riskier procedure to create kidney dialysis access

image: A diagram showing how kidney dialysis is conducted after an arteriovenous (AV) fistula surgery has provided access to the circulatory system. A new Johns Hopkins Medicine study reveals that many physicians are still performing a different, riskier procedure with a prosthetic graft for gaining access.

Image: 
National Institute of Diabetes and Digestive and Kidney Diseases/National Institutes of Health

Long-term hemodialysis is a lifesaver for approximately half a million patients in the United States with kidney failure (also known as end-stage renal disease, or ESRD) who are either waiting on or unsuitable for a kidney transplant. But before the external machinery can take over the function of the kidneys -- filtering and cleansing wastes from the blood -- a minor surgical procedure is needed to create a stable, functional and reusable access to the circulatory system, usually through blood vessels in the arm.

Two surgical methods for creating this "vascular access" are available, one overwhelmingly preferred whenever possible for its better durability, performance and safety. However, in a study using Medicare claims data, Johns Hopkins Medicine researchers report that one-fifth of seasoned U.S. surgeons are statistically performing the less-preferred procedure too often, even when unnecessary, and that providing them with a peer evaluation of such performance may lead to improved practices.

The findings are reported in a new study in the current issue of the Journal of the American Medical Association Surgery.

"The good news from our survey data is that progress has been made in the last decade toward reducing the number of inappropriate vascular access surgeries performed in the United States," says Caitlin Hicks, M.D., M.S., assistant professor of surgery at the Johns Hopkins University School of Medicine and lead author on the JAMA Surgery paper. "But the numbers also show that we still have a ways to go to meet established standards that are already the norm in Europe and Asia," she adds.

The two types of vascular access procedures available for extended hemodialysis are the arteriovenous fistula (AVF) and the AV graft (AVG). The AVF is made by connecting a vein, most often in a patient's arm, to a nearby artery. Over a period of two to three months, this bridge, known as a fistula, increases blood flow and pressure to the vein to enlarge and strengthen it. Once matured, the "supervein" will withstand repeated needle insertions that would collapse an untreated vessel.

In contrast, the AVG uses an artificial device, a plastic tube, to make the artery-vein connection. Because it does not require maturation, the graft can be used within three to four weeks after surgery. However, studies have shown that it is more likely than an AVF to have problems with infection and blood clots, and may need repair or replacement within a year. Additionally, according to Medicare data, the average annual cost for creating and maintaining an AVG is higher than that for an AVF -- nearly $73,000 per person annually compared with $60,000.

Recognizing the distinct advantages of the fistula, the federal Centers for Medicare & Medicaid Services and ESRD treatment networks across the United States created the Fistula First Breakthrough Initiative (FFBI) in 2003 to increase the use of AVF to 50% of all vascular access surgeries performed. When that goal was reached in 2009, the AVF over AVG standard was raised to 66% in an effort to match the 60% to 90% rates in Europe and Asia.

In a bid to document progress toward that goal, and to identify "physician characteristics" linked to higher-than-appropriate AVG use, Johns Hopkins Medicine researchers used Medicare fee-for-service claims data for more than 85,000 adult kidney failure patients who received first-time vascular access surgeries between Jan. 1, 2016, and Dec. 31, 2017. They calculated an AVG rate (the number of AVG surgeries divided by the total number of vascular access operations) for each of the 2,397 physicians who performed 10 or more procedures -- either AVF or AVG -- during that time. While the median, or midpoint, rate for the whole group was 18.2% (meaning that the median rate for AVF surgeries was 81.8%), there were a significant number of outliers.
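Expressed concretely, each surgeon's AVG rate is simply the share of their vascular access operations performed with a graft; against the FFBI benchmark, anything above roughly 34% falls short of the 66% AVF target. A small worked example with hypothetical surgeons and case counts:

# Per-surgeon AVG rate: graft (AVG) cases divided by total vascular access
# cases, flagging surgeons above the ~34% threshold implied by the 66% AVF goal.
# The surgeon names and case counts are hypothetical.
cases = {
    "surgeon_A": {"avg": 3, "avf": 27},   # 10% AVG
    "surgeon_B": {"avg": 6, "avf": 14},   # 30% AVG
    "surgeon_C": {"avg": 12, "avf": 8},   # 60% AVG -> flagged
}

THRESHOLD = 0.34

for name, counts in cases.items():
    total = counts["avg"] + counts["avf"]
    rate = counts["avg"] / total
    flag = "  <-- high AVG use" if rate > THRESHOLD else ""
    print(f"{name}: AVG rate {rate:.1%} ({counts['avg']}/{total}){flag}")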

"We found that 498 physicians, approximately 21% or 1 in 5, performed AVG surgeries in more than 34% of their total cases," Hicks says. "This means that they failed to meet the 66% or higher FFBI target standard for AVF use."

The study indicated that most of the physicians associated with high AVG use rates -- including some who were using AVG in more than 80% of their cases -- had long-established practices (a median of 35.5 years since medical school graduation), were located in a metropolitan setting and specialized in vascular surgery more than general surgery.

"Since the FFBI best practice guidelines have only been around since 2003, perhaps older physicians are simply less familiar with them and have continued 'doing business as usual,'" Hicks says. "Or they may just be seeing more complex cases and believe that fistula access is less suitable. In either case, we believe that more education and targeted intervention using peer-to-peer evaluations that urge a change in practice may help address the problem, because that approach has worked before."

Another Johns Hopkins Medicine study recently reported that a "Dear Colleague" performance evaluation letter successfully convinced physicians nationwide to reduce the amount of tissue they removed in a common surgical treatment for skin cancer to meet a professionally recognized benchmark of good practice.

Hicks says that the research team hopes to conduct such an intervention in a future study and document its impact on improving behavior for vascular access surgery.

The AV fistula versus graft study is part of a larger Johns Hopkins Medicine effort to develop, establish and disseminate quality measures that will capture the appropriateness of care and help reduce low-value care in favor of a more patient-centered approach.

"By identifying practices that are not in the best interest of the patient and providing interventions to address them, we can help physicians who are outliers, and in turn, improve the quality of care for the hundreds of patients each one treats," says Martin A. Makary, M.D., M.P.H., co-author of the study, professor of surgery at the Johns Hopkins University School of Medicine and an authority on health care quality. "Physicians, for the most part, want to do the right thing, and measures of appropriateness can help guide them."

Credit: 
Johns Hopkins Medicine

Materials informatics reveals new class of super-hard alloys

image: An X-ray energy-dispersive spectroscopy (EDS) map of the as-cast microstructure of a hard alloy predicted from Lehigh University researchers' analysis. Lettered panels are X-ray intensity maps associated with different elements comprising the alloy that permit deduction of the spatial distributions of these elements.

Image: 
Lehigh University

A new method of discovering materials using data analytics and electron microscopy has found a new class of extremely hard alloys. Such materials could potentially withstand severe impact from projectiles, thereby providing better protection of soldiers in combat. Researchers from Lehigh University describe the method and findings in an article, "Materials Informatics For the Screening of Multi-Principal Elements and High-Entropy Alloys," that appears today in Nature Communications.

"We used materials informatics - the application of the methods of data science to materials problems - to predict a class of materials that have superior mechanical properties," said primary author Jeffrey M. Rickman, professor of materials science and engineering and physics and Class of '61 Professor at Lehigh University.

Researchers also used experimental tools, such as electron microscopy, to gain insight into the physical mechanisms that led to the observed behavior in the class of materials known as high-entropy alloys (HEAs). High-entropy alloys contain many different elements that, when combined, may result in systems having beneficial and sometimes unexpected thermal and mechanical properties. For that reason, they are currently the subject of intense research.

"We thought that the techniques that we have developed would be useful in identifying promising HEAs," Rickman said. "However, we found alloys that had hardness values that exceeded our initial expectations. Their hardness values are about a factor of 2 better than other, more typical high-entropy alloys and other relatively hard binary alloys."

All seven authors are from Lehigh University, including Rickman; Helen M. Chan, New Jersey Zinc Professor of materials science and engineering; Martin P. Harmer, Alcoa Foundation Professor of materials science and engineering; Joshua Smeltzer, graduate student in materials science and engineering; Christopher Marvel, postdoctoral research associate in materials science and engineering; Ankit Roy, graduate student in mechanical engineering and mechanics; and Ganesh Balasubramanian, assistant professor of mechanical engineering and mechanics.

Rise of High-Entropy Alloys and Data Analysis

The field of high-entropy, or multi-principal element, alloys has recently seen exponential growth. These systems represent a paradigm shift in alloy development, as some exhibit new structures and superior mechanical properties, as well as enhanced oxidation resistance and magnetic properties, relative to conventional alloys. However, identifying promising HEAs has presented a daunting challenge, given the vast palette of possible elements and combinations that could exist.

Researchers have sought a way to identify the element combinations and compositions that lead to high-strength, high-hardness alloys and other desirable qualities, a relatively small subset of the large number of potential HEAs that could be created.

In recent years, materials informatics, the application of data science to problems in materials science and engineering, has emerged as a powerful tool for materials discovery and design. The relatively new field is already having a significant impact on the interpretation of data for a variety of materials systems, including those used in thermoelectrics, ferroelectrics, battery anodes and cathodes, hydrogen storage materials, and polymer dielectrics.

"Creation of large data sets in materials science, in particular, is transforming the way research is done in the field by providing opportunities to identify complex relationships and to extract information that will enable new discoveries and catalyze materials design," Rickman said. The tools of data science, including multivariate statistics, machine learning, dimensional reduction and data visualization, have already led to the identification of structure-property-processing relationships, screening of promising alloys and correlation of microstructure with processing parameters.

Lehigh University's research contributes to the field of materials informatics by demonstrating that this suite of tools is extremely useful for identifying promising materials from among myriad possibilities. "These tools can be used in a variety of contexts to narrow large experimental parameter spaces to accelerate the search for new materials," Rickman said.

New Method Combines Complementary Tools

Lehigh University researchers combined two complementary tools to employ a supervised learning strategy for the efficient screening of high-entropy alloys and to identify promising HEAs: (1) a canonical-correlation analysis and (2) a genetic algorithm with a canonical-correlation analysis-inspired fitness function.

They implemented this procedure using a database for which mechanical property information exists, highlighting new alloys with high hardness. The methodology was validated by comparing the predicted hardnesses with those measured for alloys fabricated in a laboratory by arc-melting, identifying alloys with very high measured hardness.
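A genetic algorithm for this kind of screening keeps a population of candidate compositions, scores each with a fitness function (here a made-up stand-in for the canonical-correlation-based score described in the paper), and evolves the population through selection, crossover and mutation. Below is a toy sketch of that loop, not the authors' actual model.

# Toy genetic-algorithm screening loop over five-element alloy compositions.
# The fitness function is a made-up stand-in for the paper's
# canonical-correlation-based hardness score; it only shows the loop structure.
import random

ELEMENTS = ["Al", "Cr", "Fe", "Ni", "Ti"]

def random_composition():
    fractions = [random.random() for _ in ELEMENTS]
    total = sum(fractions)
    return [f / total for f in fractions]  # atomic fractions summing to 1

def fitness(comp):
    # Placeholder score: reward Cr/Ti-rich, compositionally balanced alloys.
    balance = -sum((c - 0.2) ** 2 for c in comp)
    return 2.0 * comp[1] + 1.5 * comp[4] + balance

def crossover(a, b):
    child = [(x + y) / 2 for x, y in zip(a, b)]
    total = sum(child)
    return [c / total for c in child]

def mutate(comp, rate=0.1):
    comp = [max(c + random.uniform(-rate, rate), 0.01) for c in comp]
    total = sum(comp)
    return [c / total for c in comp]

population = [random_composition() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
print({el: round(frac, 3) for el, frac in zip(ELEMENTS, best)})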

"The methods employed here involved a novel combination of existing methods adapted to the high-entropy alloy problem," Rickman said. "In addition, these methods may be generalized to discover, for example, alloys having other desirable properties. We believe that our approach, which relies on data science and experimental characterization, has the potential to change the way researchers discover such systems going forward."

Credit: 
Lehigh University