Tech

Study finds biases in widely used dementia identification tests

MINNEAPOLIS - Quick tests used in primary care settings to identify whether people are likely to have dementia may often be wrong, according to a study published in the November 28, 2018, online issue of Neurology® Clinical Practice, an official journal of the American Academy of Neurology.

The tests, called brief cognitive assessments, evaluate thinking and memory skills. They help doctors decide who may benefit from a full diagnostic assessment for dementia. The three tests examined in this study were the Mini-Mental State Examination, which looks at orientation to time and place and the ability to remember words; the Memory Impairment Screen, which focuses on the ability to remember words; and Animal Naming, which involves naming as many animals as possible in 60 seconds.

"Our study found that all three tests often give incorrect results that may wrongly conclude that a person does or does not have dementia," said study author David Llewellyn, PhD, of the University of Exeter Medical School in the United Kingdom. "Each test has a different pattern of biases, so people are more likely to be misclassified by one test than another depending on factors such as their age, education and ethnicity."

For the study, 824 people in the United States with an average age of 82 were given full dementia assessments that included a physical exam, genetic testing for the APOE gene, which is linked to a risk of Alzheimer's disease, psychological testing and comprehensive memory and thinking tests.

Researchers divided participants into two groups based on the comprehensive dementia diagnosis. Of the participants, 35 percent had dementia and 65 percent did not.

Participants took each of the three quick tests and researchers found that 36 percent of participants were wrongly classified by at least one of the tests, but only 2 percent were misclassified by all three tests. Overall rates of misclassification by these tests individually ranged from 14 to 21 percent, including both false-positive and false-negative results.
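
As a minimal illustration of how these figures relate, the sketch below computes per-test, "at least one", and "all three" misclassification rates from simulated true/false flags. The flags, error rates and their independence are assumptions made purely for demonstration, so the output will not reproduce the study's exact 36 percent and 2 percent figures.

```python
import numpy as np

# Simulated flags: True means a brief test misclassified that participant
# relative to the comprehensive dementia diagnosis. Rates are illustrative only.
rng = np.random.default_rng(0)
n = 824
mmse_wrong = rng.random(n) < 0.21      # Mini-Mental State Examination (assumed error rate)
mis_wrong = rng.random(n) < 0.16       # Memory Impairment Screen (assumed error rate)
animal_wrong = rng.random(n) < 0.14    # Animal Naming (assumed error rate)

wrong = np.column_stack([mmse_wrong, mis_wrong, animal_wrong])

print("per-test misclassification rates:", wrong.mean(axis=0).round(2))
print("misclassified by at least one test:", f"{wrong.any(axis=1).mean():.0%}")
print("misclassified by all three tests:  ", f"{wrong.all(axis=1).mean():.0%}")
```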

Researchers also found that different tests had different biases. One test had an education bias, in that those with higher education were more likely to be misclassified as not having dementia and those with lower education were more likely to be misclassified as having dementia. Older age, having an ethnic background other than white and living in a nursing home also led to misclassification. Across all tests, a lack of information on whether a family member or friend rated the participant's memory to be poor resulted in an increased risk of misclassification.

"Failing to detect dementia can delay access to treatment and support, whereas false alarms lead to unnecessary investigations, causing pressure on health care systems," said Llewellyn. "Identifying people with dementia in a timely fashion is important, particularly as new methods of treatment come onstream. Our findings show that we desperately need more accurate and less biased ways of detecting dementia swiftly in clinic."

A limitation of the study was that other brief cognitive assessments in clinical use were not examined.

Credit: 
American Academy of Neurology

How much do you trust Dr. Google?

Women experiencing signs of breast cancer vary in how they value, use, and trust 'Dr Google' when making sense of their symptoms, a new study in the journal Health, Risk & Society reports.

Researchers from the University of Surrey, led by Dr Afrodita Marcu, investigated whether women sought health information online when experiencing potential breast cancer symptoms and, if so, whether they found it useful. Interviewing 27 women, aged between 47 and 67 years old, researchers found different levels of engagement with the internet for health information that were driven by a range of attitudes and levels of trust.

Some women, particularly those with no formal educational qualifications or with fewer than two O levels, were found to be less positive about the usefulness of 'Dr Google' and were largely against using the internet for health information, claiming that this could lead to misdiagnosis or to unnecessary worry about what their symptoms might mean.

Researchers also found that some women, although open to using the internet for health information, reported feeling overwhelmed by what they found and became reluctant to conduct further searches. The majority of women who experienced such feelings went to see their GP, mostly because they felt that only a health care professional could resolve concerns about their symptoms and provide appropriate answers.

Other women in the study were, however, confident in looking up information online about their breast changes and used it to interpret and act upon their symptoms. These women did not view online health information as problematic, nor did they express mistrust in 'Dr Google.' Some even supplemented the information received from the GP with further searches on the internet.

Dr Afrodita Marcu, Research Fellow at the University of Surrey, said: "The internet is a valuable source of medical information. However, it also contains a lot of poor quality information, or information which cannot be easily interpreted by lay people or applied to an individual situation, so it is not surprising that some people feel they cannot trust it.

"The way that a person will capitalize on the internet for health purposes depends on many factors, like the nature of their symptoms or their fear about coming across misleading information, so we should not assume that 'Dr Google' is valuable and credible to all".

Credit: 
University of Surrey

Promising new imaging method aids fracture prediction

Boston, November 28, 2018 - A study published today in The Lancet Diabetes & Endocrinology reports that high-resolution peripheral quantitative computed tomography (HR-pQCT) represents an effective tool in predicting an individual's fracture risk. The research team included scientists from the Hinda and Arthur Marcus Institute for Aging Research (Marcus Institute) at Hebrew SeniorLife and Beth Israel Deaconess Medical Center, both Harvard Medical School affiliates. Dual-energy X-ray absorptiometry (DXA) has been considered the clinical standard for determining fracture risk, along with the Fracture Risk Assessment Tool (FRAX). A FRAX assessment considers factors such as age, gender, weight, alcohol use, smoking history and fracture history.

However, many older adults who sustain a fracture do not meet the diagnostic criteria for osteoporosis, a disease characterized by bone loss. Bones become fragile not only due to low bone mineral density (BMD), but also from deterioration in bone structure. This study indicates that while DXA measurement of BMD predicts fracture, HR-pQCT adds additional information about risk. This allows clinicians to analyze a patient's bone microarchitecture and is an important step in identifying additional bone traits that predict fracture risk.

Fragility fractures, which lead to significant morbidity, mortality and expense, are a large public health concern. Notably, the number of women who will experience a fracture in any given year exceeds the combined number who will experience a stroke, breast cancer or myocardial infarction. Annual costs associated with fragility fractures exceed $19 billion in the US. Given the predicted growth in the number of older adults, fractures and associated costs are projected to increase two- to four-fold worldwide in the coming decades.

This study included more than 7,000 participants from the United States and four other countries, among whom 765 fractures occurred, and represents the largest prospective study of HR-pQCT indices and incident fracture to date. Bone is generally classified into two tissue types: cortical bone, also known as compact bone, and trabecular bone, also known as cancellous or spongy bone.

"Results from this large international cohort of women and men suggest deficits in trabecular and cortical bone density and structure contribute to fracture risk independently of BMD and FRAX," said Lisa Samelson, Ph.D., who is an epidemiologist at the Marcus Institute and Harvard Medical School, and lead author of the study. "Further, assessment of cortical and trabecular bone microstructure may be useful in those who would not otherwise have been identified as being at high risk for fracture," Added Douglas Kiel, M.D., and Mary Bouxsein, Ph.D., Co-Principal Investigators of the study.

Credit: 
Hebrew SeniorLife Hinda and Arthur Marcus Institute for Aging Research

Authenticating the geographic origin of hazelnuts

Hazelnuts, like olive oil, cheese and other agricultural products, differ in flavor depending on their geographic origin. Because consumers and processors are willing to pay more for better nuts -- especially in fine chocolates and other delicacies -- testing methods are needed to reliably authenticate the nuts' country of origin. Researchers now report in ACS' Journal of Agricultural and Food Chemistry that NMR analysis could fill the bill.

People have eaten hazelnuts since at least the Mesolithic era. Today, they're the third most commonly grown nut, after almonds and walnuts. Italian hazelnuts fetch the highest price, followed by those from Turkey, the U.S., Georgia and Azerbaijan. A few previous studies evaluated analytical techniques for chemically profiling hazelnuts, but they focused either on a small region or on particular hazelnut varieties. Thomas Hackl and colleagues wanted to find a method that could pinpoint geographic origin regardless of variety.

The researchers ground up 262 nut samples from different regions around the world and extracted the metabolites, which they identified with proton NMR spectroscopy. The spectra showed that nuts from different regions had different metabolite profiles, with certain compounds proving distinctive for specific areas. For example, the amount of betaine, an amino acid derivative, varied significantly in nuts from different countries. Thus, betaine could potentially be a good biomarker in a future test to identify the source of a particular batch of nuts, the researchers say. For an even more accurate determination, the team's new NMR method -- which had an accuracy of 96 percent -- could be used in combination with a previously devised test that assessed a different group of hazelnut metabolites using liquid chromatography and mass spectrometry.
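
As a rough sketch of how such a test might work in software, the example below trains a cross-validated origin classifier on a table of metabolite signal intensities. The data, feature count and class labels are invented stand-ins; the study's actual NMR processing and statistical model are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical data: rows = ground-up nut samples, columns = metabolite signal
# intensities read off 1H NMR spectra (betaine would be one such column).
rng = np.random.default_rng(1)
X = rng.normal(size=(262, 20))        # 262 samples, 20 metabolite features (placeholder values)
y = rng.integers(0, 5, size=262)      # origin label: e.g. Italy, Turkey, US, Georgia, Azerbaijan

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)   # cross-validated origin-classification accuracy
print("mean classification accuracy:", scores.mean().round(3))
```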

Credit: 
American Chemical Society

Intelligent framework aims to optimize data transfer in 5G networks

A North Carolina State University researcher has developed technology designed to allow cellular communication nodes in 5G systems to partition bandwidth more efficiently in order to improve end-to-end data transmission rates. In simulations, the tech is capable of meeting the international goal of 10 gigabits per second in peak performance areas.

"End-to-end transfer means that the technology accounts for all of the connections between a data source and the end user," says Shih-Chun Lin, an assistant professor of electrical and computer engineering at NC State and author of a paper on the work.

"My technology, incorporating both hardware and software, is a framework that takes into account data transfer rates, wired and wireless bandwidth availability, and the power of base stations - or eNodeBs - in a 5G network," Lin says. "It then uses stochastic optimization modeling to determine the most efficient means of transferring and retrieving data - and it does this very quickly, without using a lot of computing power."

Lin says that simulation testing of the framework is promising, and he and his research team are in the process of building a fully functional prototype.

"The prototype will allow us to conduct tests on a 5G testbed platform, since full-scale 5G networks are not yet online," Lin says. "But simulation results suggest that we'll be able to meet the 3GPP goal of 10 gigabits per second data transfer in peak coverage areas.

"We are currently seeking industry partners to work with us on developing, testing and deploying the framework to better characterize its performance prior to widespread adoption of 5G networks," Lin says.

The paper, "End-to-End Network Slicing for 5G&B Wireless Software-Defined Systems," will be presented Dec. 11 at IEEE GLOBECOM'18, being held in Abu Dhabi, UAE.

Credit: 
North Carolina State University

Researchers rise to challenge of predicting hail, tornadoes three weeks in advance

People living in Kansas, Nebraska and other states in the Plains are no strangers to tornadoes and hail storms - among the most costly and dangerous severe weather threats in the United States.

Meteorologists and computer models do a good job forecasting severe thunderstorm activity up to a week in advance. Scientists can also read long-term, seasonal signals of severe weather months in advance. But there's a middle ground - a prediction lead time of about 2 to 5 weeks - that's sorely lacking in current forecasting capabilities.

In a new paper in Journal of Geophysical Research: Atmospheres, Colorado State University atmospheric scientists demonstrate the ability to make skillful predictions of severe weather across the Plains and southeastern United States, including hail and tornadoes, in that coveted 2-to-5-weeks-in-advance period. To do it, they use a reliable tropical weather pattern called the Madden-Julian Oscillation, which can influence weather in distant parts of the Earth, including the U.S., by sending out powerful atmospheric waves.

"When the Madden-Julian Oscillation is active, it is capable of setting up atmospheric patterns that are favorable for severe weather across the United States over the next several weeks," explained Cory Baggett, research scientist in atmospheric science and the paper's lead author. "We have found that an active Madden-Julian Oscillation, which periodically goes around the equator in 30 to 60 days, is a really good source of predictability on these subseasonal time scales." Atmospheric scientists typically consider "subseasonal" to mean about three weeks to three months in advance.

Weather forecasting weeks in advance cannot pinpoint where individual tornadoes or hail storms will occur, Baggett explained, but the researchers have shown they can forecast expected environmental conditions that are favorable for the formation of severe thunderstorms. That includes atmospheric instability and rotational vertical wind shear.

Using available datasets, researchers looked at what the Madden-Julian Oscillation was doing about three weeks ahead of severe weather in the Plains and southeastern United States, during the typical severe-weather months of March through June. They used 37 years of data to cross-validate their predictions.

They found "forecasts of opportunity" where they were able to make skillful predictions of severe weather activity about 60 percent to 70 percent of the time. Meteorologists would consider this rate of success "great," according to Sam Childs, a Ph.D. student in atmospheric science who co-authored the work.

"We're judging ourselves against climatology," Childs said. "If we predicted normal thunderstorm activity, we would be right about 50 percent of the time. Three weeks out, we're getting it right about 2-1." They also found consistently stronger ability to forecast hail and tornado activity during certain phases of the Madden-Julian Oscillation.

To understand whether this new method of predicting severe weather would be useful to forecasters, the researchers hope they can transition the work to operational experts who could test it out. "In essence, these forecasts of opportunity would allow a forecaster to better alert the public of a period in which severe weather may be more likely a few weeks in advance," Childs said.

"I think we were all surprised at how good some of our forecasts were," he added. "That's motivation enough to carry forward, so that we might be able to have more useful forecasting products in that coveted 2-to-5-week lead time."

Credit: 
Colorado State University

Researchers successfully train computers to identify animals in photos

image: This photo of a bull elk was one of millions of images used to develop a computer model that identified North American wildlife species in nearly 375,000 images with 97.6 percent accuracy.

Image: 
Jim Beasley

A computer model developed by University of Wyoming researchers and others has demonstrated remarkable accuracy and efficiency in identifying images of wild animals from camera-trap photographs in North America.

The artificial-intelligence breakthrough, detailed in a paper published in the scientific journal Methods in Ecology and Evolution, is described as a significant advancement in the study and conservation of wildlife. The computer model is now available in a software package for Program R, a widely used programming language and free software environment for statistical computing.

"The ability to rapidly identify millions of images from camera traps can fundamentally change the way ecologists design and implement wildlife studies," says the paper, whose lead authors are recent UW Department of Zoology and Physiology Ph.D. graduate Michael Tabak and Ryan Miller, both of the U.S. Department of Agriculture's Center for Epidemiology and Animal Health in Fort Collins, Colo.

The study builds on UW research published earlier this year in the Proceedings of the National Academy of Sciences (PNAS) in which a computer model analyzed 3.2 million images captured by camera traps in Africa by a citizen science project called Snapshot Serengeti. The artificial-intelligence technique called deep learning categorized animal images at a 96.6 percent accuracy rate, the same as teams of human volunteers achieved, but at a much more rapid pace.

In the latest study, the researchers trained a deep neural network on Mount Moran, UW's high-performance computer cluster, to classify wildlife species using 3.37 million camera-trap images of 27 species of animals obtained from five states across the United States. The model then was tested on nearly 375,000 animal images at a rate of about 2,000 images per minute on a laptop computer, achieving 97.6 percent accuracy -- likely the highest accuracy to date in using machine learning for wildlife image classification.
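
The sketch below shows the general shape of such a training run in PyTorch: a standard convolutional network fine-tuned to sort camera-trap images into 27 species classes. It is not the MLWIC package or the authors' architecture, and the "camera_traps/<species>/*.jpg" directory layout is a hypothetical example.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumed layout: camera_traps/<species_name>/*.jpg, one folder per species.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("camera_traps", transform=tfm)
loader = DataLoader(data, batch_size=64, shuffle=True)

model = models.resnet18(weights=None)            # or start from pretrained weights
model.fc = nn.Linear(model.fc.in_features, 27)   # 27 species in the training set

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                           # a few epochs, for illustration only
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)    # cross-entropy over the 27 species
        loss.backward()
        opt.step()
```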

The computer model also was tested on an independent subset of 5,900 images of moose, cattle, elk and wild pigs from Canada, producing an accuracy rate of 81.8 percent. And it was 94 percent successful in removing "empty" images (without any animals) from a set of photographs from Tanzania.

The researchers have made their model freely available in a software package in Program R. The package, "Machine Learning for Wildlife Image Classification in R (MLWIC)," allows other users to classify their images containing the 27 species in the dataset, but it also allows users to train their own machine learning models using images from new datasets.

Credit: 
University of Wyoming

Putting hybrid-electric aircraft performance to the test

image: This illustration shows the a) parallel and b) series drivetrain models.

Image: 
University of Illinois Department of Aerospace Engineering.

Although hybrid-electric cars are becoming commonplace, similar technology applied to airplanes comes with significantly different challenges. University of Illinois aerospace engineers are addressing some of them toward the development of a more sustainable alternative to fossil fuels to power airplanes.

"Jet fuel and aviation gasoline are easy to store on an airplane. They are compact and lightweight when compared to the amount of energy they provide. Unfortunately, the actual combustion process is very inefficient. We're harnessing only a small fraction of that energy but we currently don't have electrical storage systems that can compete with that," said Phillip Ansell, assistant professor in the Department of Aerospace Engineering in the College of Engineering at the University of Illinois.

Ansell said adding more batteries to fly farther may seem logical, but it works against the goal to make an aircraft as lightweight as possible. "That's one of the big barriers we run into when designing battery-powered electrified aircraft. The current technology has very significant range disadvantages. But strong fuel-burn advantages."

He, along with former aerospace undergraduate student Tyler Dean and current doctoral student Gabrielle Wroblewski, utilized a series of simulations to model the performance of hybrid-electric aircraft.

"We started with an existing twin-engine aircraft and looked at how we might create a hybrid-electric drivetrain for it using existing off-the-shelf hardware," Ansell said. "We wanted to know how well it would perform. If I used a certain set of drivetrain components, I want to know how far the aircraft could fly, how much fuel does it burn, how fast can if climb--all of the overall flight performance changes."

A flight-performance simulator was created to accurately represent the true flight performance of a Tecnam P2006T on a generic mission including takeoff, climb, cruise, descent, and landing, along with sufficient reserves to meet FAA regulations. Transition segments were incorporated into the simulation during climb and descent, where the throttle setting, flap deployment, propeller rotation rate, and all other flight control variables were either set to mimic input from a typical pilot or prescribed in accordance with the aircraft flight manual.

After configuring the simulator to collect baseline performance data, a parallel hybrid drivetrain was integrated into the simulation. The researchers compared the sensitivity of range and fuel economy to the level of electrification, battery specific energy density, and electric motor power density. The same sensitivities were studied with a series hybrid-electric drivetrain.

Ansell said that, overall, a hybrid-electric drivetrain can lead to substantial improvements in fuel efficiency of a given aircraft configuration, though these gains depend strongly on the coupled variations in the degree of drivetrain electrification and the required mission range. Both of these factors influence the weight allocation of battery and fuel systems, as well as the weight scaling imposed by internal combustion engine and electrical motor components. In general, to obtain the greatest fuel efficiency a hybrid architecture should be used with as much electrification in the drivetrain as is permissible within a given range requirement.
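
To make this electrification/range trade-off concrete, the toy model below splits a fixed energy-storage mass between fuel and batteries and estimates range from the usable energy, a constant lift-to-drag ratio, and fixed conversion efficiencies. It is not the team's flight-performance simulator, and every number in it (masses, efficiencies, specific energies, L/D) is an assumed placeholder.

```python
G = 9.81  # m/s^2

def range_km(electrification, battery_wh_per_kg=250.0):
    """electrification = fraction of the storage mass carried as batteries (0..1)."""
    storage_mass = 200.0                       # kg of fuel + battery combined (assumed)
    aircraft_mass = 1200.0                     # kg takeoff mass of a light twin (assumed)
    fuel_mass = storage_mass * (1 - electrification)
    batt_mass = storage_mass * electrification
    fuel_wh = fuel_mass * 11_900 * 0.28        # avgas ~11.9 kWh/kg, ~28% engine efficiency
    batt_wh = batt_mass * battery_wh_per_kg * 0.90   # ~90% motor/inverter efficiency
    lift_to_drag = 12.0                        # assumed cruise L/D
    energy_j = (fuel_wh + batt_wh) * 3600.0
    return energy_j * lift_to_drag / (aircraft_mass * G) / 1000.0

for e in (0.0, 0.25, 0.5):
    print(f"{e:.0%} electrified: ~{range_km(e):,.0f} km with today's batteries, "
          f"~{range_km(e, battery_wh_per_kg=500.0):,.0f} km with lighter future batteries")
```

Even this crude model reproduces the qualitative behaviour described above: the more of the storage mass that goes to batteries, the shorter the range, and improvements in battery specific energy recover part of that loss.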

The fuel-efficiency improvements were most pronounced for short-range missions, which is encouraging given that range limitations are one of the key bottlenecks in hybrid aircraft feasibility. The study also allowed the researchers to forecast how the aircraft's range capabilities would change as hybrid component technologies advance. "For example," Ansell said, "the propulsion system today could be configured to have 25 percent of its propulsive power come from an electric motor. However, it would only be able to fly about 80 nautical miles. Fast forward to projections for lighter battery technologies for roughly the year 2030 and the same aircraft could fly two and a half to three times as far. The range increase is nonlinear, so the largest improvements can be seen for the most immediate improvements with battery specific energy density, with gradually diminishing returns for that same proportional increase in specific energy."

"One interesting and unexpected result we observed, however, came about when comparing the parallel and series hybrid architectures. Since the parallel architecture mechanically couples the shaft power of the engine and motor together, only one electrical machine is needed. For the series architecture, a generator is also needed to convert the engine power to electrical power, along with a larger motor than the parallel hybrid configuration to drive the propulsor. Unexpectedly, this aspect made the parallel architecture more beneficial for improved range and fuel burn almost across the board due to its lighter weight. However, we did observe that if significant improvements are made in maturing electrical motor components in the very long term, we may actually someday see better efficiency out of series-hybrid architectures, as they permit a greater flexibility in the placement and distribution of propulsors."

The team chose to model the Tecnam P2006T using a series of performance variables found in published articles by the aircraft manufacturer. They selected that particular aircraft, in part, because NASA has been working on their X-57 aircraft, which has leading-edge propellers for high lift. "This study was being conducted for NASA, and use of this aircraft also allowed our results to be better applicable to the X-57 concept vehicle," Ansell said. "Using our data, they will be able to have at least a ballpark idea about how the hybrid system will perform without the other distributed propulsion modifications."

Ansell said propulsion electrification is still very much an unknown in terms of how a vehicle should be built, engineered, and flown. "Our study helps inform those discussions. We looked only at battery storage systems, though there are many more that can be implemented, each with their own advantages and disadvantages. This study allowed us to look at what types of advancements need to be made in motor technology, in battery technology, etc."

Credit: 
University of Illinois Grainger College of Engineering

Using fine-tuning for record-breaking performance

Materials scientists at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have achieved a new record in the performance of organic non-fullerene-based single-junction solar cells. Using a series of complex optimisations, they achieved a certified power conversion efficiency of 12.25 percent on a surface area measuring one square centimetre. This standardised surface area is the preliminary stage for prototype manufacture. The results, achieved in conjunction with partners from the South China University of Technology (SCUT), have now been published in the renowned journal Nature Energy.

Organic photovoltaic systems have undergone rapid development during the last few years. In most cases, organic solar cells consist of two layers of semiconductors - one acts as the donor by supplying the electrons, and the second acts as an acceptor or electron conductor. In contrast to the silicon conventionally used, which must be drawn from a melt or precipitated in vacuum systems, the polymer layers in this system can be deposited from a solution directly on a supporting film. On the one hand, this means comparably low manufacturing costs, and on the other, these flexible modules can be used more easily than silicon solar cells in urban spaces. For a long time, fullerenes, which are carbon-based nanoparticles, were considered ideal acceptors; however, the intrinsic losses of fullerene-based composites still severely limit their potential efficiency. The work carried out at FAU has thus resulted in a paradigm shift. 'With our partners in China, we have discovered a new organic molecule that absorbs more light than fullerenes and is also very durable', says Prof. Dr. Christoph Brabec, Chair of Materials Science (Materials in Electronics and Energy Technology) at FAU.

Complex standardisation

The significant improvements in performance and durability mean the organic hybrid printed photovoltaics are now becoming interesting for commercial use. However, to develop practical prototypes, the technology must be transferred from laboratory dimensions of a few square millimetres to the standardised dimension of one square centimetre. 'Significant losses frequently occur during scaling', says Dr. Ning Li, a materials scientist at Prof. Brabec's Chair. During a project funded by the German Research Foundation (DFG), Ning Li and his colleagues at SCUT in Guangzhou were able to significantly reduce these losses. In a complex process, they adjusted the light absorption, energy levels and microstructures of the organic semiconductors. The main focus of this optimisation was the compatibility of donor and acceptor and the balance of short-circuit current density and open-circuit voltage, which are important prerequisites for a high output of electricity.
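
The reason the short-circuit current density and open-circuit voltage need to be balanced is that, together with the fill factor, they set the power conversion efficiency. The small worked example below uses assumed device values (not those reported in the paper) to show the arithmetic.

```python
# Power conversion efficiency from basic device parameters (illustrative values only).
j_sc = 20.0e-3      # short-circuit current density, A/cm^2 (assumed)
v_oc = 0.85         # open-circuit voltage, V (assumed)
fill_factor = 0.72  # dimensionless (assumed)
p_in = 100.0e-3     # incident power under standard AM1.5G illumination, W/cm^2

pce = j_sc * v_oc * fill_factor / p_in
print(f"power conversion efficiency: {pce:.1%}")   # ~12.2% with these assumed values
```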

Certified record efficiency

'I think the best way to describe our work is by imagining a box of Lego bricks', says Li. 'Our partners in China inserted and adjusted single molecular groups into the polymer structure and each of these groups influences a special characteristic that is important for the function of solar cells.' This results in a power conversion efficiency of 12.25 percent - a new certified record for solution-based organic single-junction solar cells with a surface area of one square centimetre, where the acceptor does not consist of fullerenes. It is also interesting to note that the researchers succeeded in keeping the scaling losses to such low levels that the highest value in the lab on a small surface was only marginally under 13 percent. At the same time, they were able to demonstrate a stability relevant to production under simulated conditions such as temperature and sunlight.

The next step involves scaling up the model to module size at the Solar Factory of the Future at Energie Campus Nürnberg (EnCN) before development of practical prototypes begins.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Ambulances in Syria deliberately and repeatedly targeted as part of war tactics

Half of the ambulances targeted in attacks during the Syrian conflict sustained serious damage and/or had to be withdrawn from service, the findings show.

Now in its eighth year, the Syrian conflict has taken a heavy toll on medical facilities and health professionals from airstrikes, bombings, shootings, kidnappings and lootings.

This is despite the fact that healthcare facilities and the ambulances servicing them are protected under International Humanitarian Law and the Geneva Conventions, and under UN resolution 2286, passed in 2016, which condemns attacks on medical facilities and staff.

To try and quantify the extent of the damage inflicted on the country's ambulance service, the researchers analysed data from individual reports submitted to the Syrian Network for Human Rights (SNHR) throughout 2016 and 2017 and reviewed published research on attacks on ambulances since the start of the war in 2011.

Analysis of the SNHR data showed that there were 204 individual attacks involving 243 ambulances in 2016 and 2017. Half (52%) of the vehicles were deliberately targeted.

Most attacks occurred in areas with large factions of opposition forces: Aleppo, Idlib, and Damascus. Only 1-2 per cent of the attacks occurred in pro-government areas.

Half of the vehicles (49%) were either heavily damaged or had to be withdrawn from service. Only 12 per cent of vehicles sustained mild damage.

The principal perpetrators were the Syrian regime (123 incidents; 60%) and Russian armed forces (60; 29%). Air-to-surface missiles and shelling were most often used in the attacks.

Cluster bombs, which scatter munitions over a wide area, leaving unexploded remnants in the ground, and barrel bombs - unguided drums filled with metal fragments and explosives with a massive blast radius - are frequently used by Syrian government and Russian armed forces, note the researchers.

The high degree of indiscriminate and widespread destruction caused by these bombs will most likely have contributed to the large number of ambulances becoming collateral damage during attacks on medical facilities, they suggest.

The use of cluster bombs was banned in 2008 by the Convention on Cluster Munitions, indicating an "intentional disregard for international treaties on medical neutrality by the [Syrian] government," they point out.

The review of the published research (18 articles) included the impact of the attacks - care delayed or prevented altogether; blocking and other forms of violence, such as looting; the withdrawal of humanitarian agencies to protect workers; and 'double-tap' attacks, where a location is bombed twice, several minutes apart, with the intention of targeting first responders, including paramedics.

"This 'weaponisation of healthcare' turns the essential need for healthcare into a war tactic that aims to destabilise, intimidate, and demoralise," write the researchers.

"The intentional, highly destructive and repetitive targeting of ambulances throughout the Syrian conflict has had an immeasurable and devastating impact on the people of Syria and the healthcare system.

"As the most dangerous place in the world to be a healthcare provider, no cadre of health worker or health facility is immune to the attacks," they add.

Collecting data on attacks on ambulances and pre-hospital providers is challenging, but is nevertheless essential for ensuring that these incidents don't become "usual war-time tactics," insist the researchers.

"The UN Security Council and global humanitarian community must do more to protect the sacred space of medical neutrality in conflict and bring harsher punishments to perpetrators of violence against healthcare in Syria," they conclude.

Credit: 
BMJ Group

Study identifies a genetic driver of deadly prostate cancer

image: Graph shows elevated activity of a transcription-factor network that includes the molecule Onecut2 in tumors of patients whose prostate cancer resisted hormone therapy (above purple bar) compared with other types. (Michael Freeman, Ph.D.)

Image: 
Nature Publishing Group (Nature Research)

A new study has identified a novel molecular driver of lethal prostate cancer, along with a molecule that could be used to attack it. The findings were made in laboratory mice. If confirmed in humans, they could lead to more effective ways to control certain aggressive types of prostate cancer, the second-leading cause of cancer death for men in the U.S.

Men whose prostate cancer tumors are localized typically survive many years after diagnosis, whether they have surgery, radiation therapy or no treatment at all. But for a minority of men whose cancer spreads to other parts of the body and resists hormone therapy, the prognosis is poor, with fewer than a third surviving five years after diagnosis. More than 29,000 men in the U.S. die from prostate cancer each year, according to the American Cancer Society.

"We need fresh strategies to prevent prostate cancer from turning deadly for the thousands of men whose disease metastasizes and withstands hormone therapy," said Michael Freeman, PhD, co-director of the Cancer Biology Program in the Samuel Oschin Comprehensive Cancer Institute at Cedars-Sinai. Hormone therapy blocks the activity of male hormones, which fuel the growth of the most common type of prostate cancer.

Freeman was the corresponding author of the multi-institutional study, published in the journal Nature Medicine. The co-first authors were project scientist Mirja Rotinen, PhD, and Sungyong You, PhD, assistant professor of Surgery and Biomedical Sciences, both at Cedars-Sinai.

For the research, the team analyzed genetic and molecular data from cancer patients in a large database. They found evidence of elevated activity of the molecule Onecut2 in tumors of patients whose prostate cancer resisted hormone therapy. Onecut2, a type of transcription factor, is needed for the body to make certain proteins.

The team found Onecut2 interfered with the activity of androgen receptor proteins, the targets of hormone therapy for prostate cancer. This process could allow the cancer to become less dependent on hormones for growth. At the same time, Onecut2 drove some of the cancer cells to change into a more aggressive variety that resists hormone therapy. "These twin actions of Onecut2 could help explain how certain prostate cancers evade hormone therapy and turn more aggressive," said Freeman, professor of Surgery and Biomedical Sciences.

In additional experiments involving human tissue samples, pharmaceutical databases and laboratory animals, the investigators identified a compound, CSRM617, that counteracted Onecut2. They showed CSRM617 significantly reduced the size of prostate cancer metastases in mice. "Our research suggested that Onecut2 is a master regulator of lethal prostate cancer that may be a useful therapeutic target in up to a third of patients whose cancer spreads and evades hormone therapy," Freeman said.

Freeman also was the corresponding author of a recent study in the journal Cancer Research that identified a new method to gauge the aggressiveness of prostate cancer. This method is based on how stable the shapes of the cancer's cell nuclei are. Instability was associated with cancer types that spread. The investigators also found they could detect nuclei instability in the bloodstream through cells shed by the tumor. These findings could lead to less invasive techniques for identifying potentially dangerous prostate cancer and monitoring its progression through blood samples, the research team said.

"These discoveries are emblematic of the paradigm-shifting work that is being carried out in cancer at Cedars-Sinai," said Dan Theodorescu, MD, PhD, director of the cancer institute. "They show how our investigators are bridging scientific discovery to clinical development of novel therapies that will impact patients."

Credit: 
Cedars-Sinai Medical Center

Artificial intelligence may help reduce gadolinium dose in MRI

image: Imaging protocol used with 3 different MR series at different contrast doses.

Image: 
Radiological Society of North America

CHICAGO - Researchers are using artificial intelligence to reduce the dose of a contrast agent that may be left behind in the body after MRI exams, according to a study being presented today at the annual meeting of the Radiological Society of North America (RSNA).

Gadolinium is a heavy metal used in contrast material that enhances images on MRI. Recent studies have found that trace amounts of the metal remain in the bodies of people who have undergone exams with certain types of gadolinium. The effects of this deposition are not known, but radiologists are working proactively to optimize patient safety while preserving the important information that gadolinium-enhanced MRI scans provide.

"There is concrete evidence that gadolinium deposits in the brain and body," said study lead author Enhao Gong, Ph.D., researcher at Stanford University in Stanford, Calif. "While the implications of this are unclear, mitigating potential patient risks while maximizing the clinical value of the MRI exams is imperative."

Dr. Gong and colleagues at Stanford have been studying deep learning as a way to achieve this goal. Deep learning is a sophisticated artificial intelligence technique that teaches computers by example. Through use of models called convolutional neural networks, the computer can not only recognize images but also find subtle distinctions among the imaging data that a human observer might not be capable of discerning.

To train the deep learning algorithm, the researchers used MR images from 200 patients who had received contrast-enhanced MRI exams for a variety of indications. They collected three sets of images for each patient: pre-contrast scans, done prior to contrast administration and referred to as the zero-dose scans; low-dose scans, acquired after 10 percent of the standard gadolinium dose administration; and full-dose scans, acquired after 100 percent dose administration.

The algorithm learned to approximate the full-dose scans from the zero-dose and low-dose images. Neuroradiologists then evaluated the images for contrast enhancement and overall quality.
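
A minimal PyTorch sketch of this idea is shown below: a small convolutional network takes the zero-dose and low-dose images as two input channels and is trained to reproduce the full-dose scan. The architecture, loss and tensor shapes are assumptions for illustration, not the Stanford group's actual model.

```python
import torch
import torch.nn as nn

class DoseReductionNet(nn.Module):
    """Toy image-to-image network: (zero-dose, low-dose) -> synthesized full-dose image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, zero_dose, low_dose):
        x = torch.cat([zero_dose, low_dose], dim=1)   # stack the two scans as channels
        return self.net(x)

model = DoseReductionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One hypothetical training step on random stand-in image tensors [batch, channel, H, W].
zero, low, full = (torch.randn(4, 1, 256, 256) for _ in range(3))
loss = loss_fn(model(zero, low), full)   # match the synthesized image to the true full-dose scan
loss.backward()
opt.step()
```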

Results showed that the image quality was not significantly different between the low-dose, algorithm-enhanced MR images and the full-dose, contrast-enhanced MR images. The initial results also demonstrated the potential for creating the equivalent of full-dose, contrast-enhanced MR images without any contrast agent use.

These findings suggest the method's potential for dramatically reducing gadolinium dose without sacrificing diagnostic quality, according to Dr. Gong.

"Low-dose gadolinium images yield significant untapped clinically useful information that is accessible now by using deep learning and AI," he said.

Now that the researchers have shown that the method is technically possible, they want to study it further in the clinical setting, where Dr. Gong believes it will ultimately find a home.

Future research will include evaluation of the algorithm across a broader range of MRI scanners and with different types of contrast agents.

"We're not trying to replace existing imaging technology," Dr. Gong said. "We're trying to improve it and generate more value from the existing information while looking out for the safety of our patients."

Credit: 
Radiological Society of North America

Virtual models provide real knowledge in the grass family

video: This method for creating high-quality, three-dimensional (3D) digital representations of plant structures uses images of thin sections of plant material taken with a light microscope and reconstructed into a 3D model using computer-assisted design (CAD) software. The technique is demonstrated by creating models of grass spikelets.

Image: 
Phillip C. Klahs, Timothy J. Gallaher, and Lynn G. Clark

The structures of flowers and other plant parts represent a rich and complex source of botanical information with great potential to answer a variety of taxonomic, evolutionary, and ecological questions. As computational approaches become ever more central to biological research, there is a pressing need to translate this information into tractable digital data for analysis. In research presented in a recent issue of Applications in Plant Sciences, Phillip Klahs and colleagues refined a method for creating high-quality, three-dimensional (3D) digital representations of plant structures. They demonstrated the effectiveness of this technique by creating models of the flowers of three species in the grass family, Poaceae.

The wind-pollinated flowers of grasses are notoriously difficult to study due to their small, compact, and concealed structure. "Grasses are often underappreciated as flowering plants. The taxonomic keys are difficult, there is so much diversity, and because they are wind pollinated the flowers are labeled 'not showy,'" said Klahs, a graduate student at Iowa State University and the lead author of the study. 3D digital representations of these structures could help botany students learning to identify different species of grass by floral structure. "My personal top application is bringing some love back to standard botany. I really want to help people become excited about keying out plants, especially scary plants like grasses and sedges," said Klahs.

Beyond the classroom, the enigmatic flowers of grasses have enormous economic importance, as their proper pollination leads to the production of grains such as rice, wheat, and corn. "There may be huge implications for agriculture. Understanding seed set and the conditions at which certain grains are being fertilized is valuable biologically and economically," said Klahs. He also pointed out the inverse application, noting that "There are a lot of wind-pollinated weeds, and understanding their dispersal and sexual timing is important as well."

The technique developed by Klahs and colleagues involves taking images of thin sections of plant material with a light microscope and reconstructing these two-dimensional images into a 3D model using computer-assisted design (CAD) software; an animation of the 3D modeling steps can be viewed in the accompanying video. This method has several advantages over existing techniques for creating 3D models of plant structures, such as optical photogrammetry and X-ray tomography. It is cheaper than X-ray tomography, simultaneously produces usable microscope slides, and unlike optical photogrammetry, produces a model of the internal structure of the flower.
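
For readers who prefer a script to CAD software, the fragment below sketches the same basic idea in Python: aligned serial-section images are stacked into a voxel volume that preserves internal structure. The file names, section thickness and pixel size are hypothetical, and this is only a rough stand-in for the CAD workflow described in the paper.

```python
import glob
import numpy as np
from PIL import Image

# Hypothetical inputs: aligned grayscale images of successive thin sections.
section_files = sorted(glob.glob("spikelet_sections/section_*.png"))
sections = [np.array(Image.open(f).convert("L")) for f in section_files]

volume = np.stack(sections, axis=0)      # (n_sections, height, width) voxel array
tissue = volume < 128                    # crude threshold separating tissue from background

# Assumed scale: 10 micrometre section thickness, 2 x 2 micrometre pixels.
voxel_volume_um3 = 10.0 * 2.0 * 2.0
print("approximate tissue volume (um^3):", tissue.sum() * voxel_volume_um3)
```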

The bioinformatic revolution that has swept through botany over recent decades followed the digitization of botanical information such as DNA or protein sequences. Plant anatomy and morphology have been more challenging to represent digitally in a cost-effective and accurate way. But the information contained in detailed representations of plant anatomy can answer questions that DNA cannot, for example, about environmental determinants of effective wind pollination.

The technique presented in this study helps bring plant morphology further into the digital age. The high-quality 3D digital representations of plant structures reported here, once created, can be made cheaply and easily available to scientists and educators around the world to use as they please. The scientists creating these models have likely not imagined all the potential uses for them, much like DNA sequence data in decades past. As for Klahs, he is "currently undertaking a systematic approach utilizing the 3D models to address macroevolutionary questions, with side benefits of showing off how grass flowers are actually really beautiful."

Credit: 
Botanical Society of America

Complex systems help explain how democracy is destabilised

Complex systems theory is usually used to study things like the immune system, global climate, ecosystems, transportation or communications systems.

But with global politics becoming more unpredictable - highlighted by the UK's vote for Brexit and the presidential elections of Donald Trump in the USA and Jair Bolsonaro in Brazil - it is being used to examine the stability of democracies.

An international, interdisciplinary team including mathematicians, economists, psychologists, philosophers, sociologists and political scientists publishes a collective examination of the work in this field today in the European Journal of Physics.

Dr Karoline Wiesner, from the University of Bristol's school of mathematics, is the lead author. She explains the premise of the team's work: "There is little work on the circumstances under which instability of democracy might happen. So, we lack the theory to show us how a democracy destabilises to the point it is not describable as a democracy anymore.

"This reflects the way we in the west have lived in the past 50 to 60 years. But times have changed. Citizens of democracies are becoming less content with their institutions. They are increasingly willing to ditch institutions and norms that have been central to democracy. They are more attracted to alternative, even autocratic regime types.

"Furthermore, we recently saw elected officials in Hungary and Poland put pressure on critical media and undermine institutions like independent courts. This illustrates the need to rethink the idea of democracies as stable institutions."

The team's paper focuses on two features of complex social systems in general, and of democratic systems in particular: feedback and stability, and their mutual relationship.

They examined how the stability of the social institutions that democracy relies on is affected by feedback loops.

They looked at several strands, including economic inequality, political divergence, and the impact of media and social media on societal 'norms'.

The authors say: "Economic inequality and the health of democracy are closely linked. We know greater inequality associates with poorer health and social problems. But it is also linked to political polarisation.

"This is because democracy presupposes a basic equality of influence. But when economic inequality increases, so do differences in influence over institutions. Those who have large financial resources can better influence institutional change than those who do not.

"A shock increase in economic inequality - such as resulted from the policy responses to the 2008 financial crisis - leads to corrosion of the relationship between less well-off voters' choices and institutional outcomes. It may even lead to effective or actual non-democratic rule.

The team also shows that extreme diversity of opinion can sometimes be a cause of instability. While a degree of diversity and partisan disagreement is healthy and even necessary in a democracy, too much may lead to an inability to understand and solve joint problems.

Radicalisation and polarisation compound this. Radicalisation occurs when political elites try to reshape politics to secure a permanent advantage by bending rules, ignoring norms, and pursuing strategies that seemed off limits.

Polarisation involves a breakdown of common faith. It leads members of one partisan coalition to ignore potential threats to democracy, based on the belief that having their opponents in power would be worse.

Prof Farrell, one of the co-authors from the U.S., said: "In the US, where the media is less-heavily regulated than in other comparable democracies, we have seen this happen. Talk radio and Fox News have long catered to a conservative constituency hungry for information and perspectives that confirm its beliefs.

"This creates a feedback loop fed by commercial imperatives between the media and its listeners. In a similar way, partisan competition and the need to support or thwart policy goals may lead to feedback loops between media and political actors."

Finally, the authors explored how social institutions can be destabilised by the erosion of social norms.

"Much of democracy relies on norms, conventions and expectations of people's behaviour," said Prof Lewandowsky, the psychologist on the team. "This means numerous psychological processes can contribute to the stability or instability of democracy.

"Social media can have a profound impact on these processes. There is a lot of evidence that the strength with which people hold an opinion is proportionate to the extent to which they believe it to be shared by others.

"But what if this signal is distorted? Extreme views can move into the mainstream when they are legitimised by actual or presumed majority endorsement. It serves to entrench extreme opinions and make them resilient to change.

"The fact that any opinion, no matter how absurd, will be shared by at least some of the more than one billion Facebook users worldwide creates an opportunity for the emergence of a false consensus effect around any fringe opinion, because the social signal is distorted by global interconnectivity."

The researchers also note the algorithms used by social media platforms to determine what appears in users' feeds. They point to the recent Brexit referendum in the U.K. and the 2016 U.S. presidential election, where highly personalised data was available to political operatives, and was used to open the door to micro-targeting of messages that exploited people's unique vulnerabilities.

Dr Wiesner concluded: "One of our important messages in this paper is that a stabilising feature of a democratic system - opinion exchange - breaks down when this possibility for engagement and debate is destroyed because messages are disseminated in secret, targeting individuals based on their personal vulnerabilities to persuasion, without their knowledge and without the opponent being able to rebut any of those arguments.

"These impacts of social media on public discourse show how democracies can be vulnerable in ways against which institutional structures and historical traditions offer little protection. Complex systems science offers a unique entry point to study such phenomena."

Credit: 
IOP Publishing

Treating COPD patients for anxiety using CBT reduces hospital visits and is cost-effective

Cognitive behavioural therapy (CBT) delivered by respiratory nurses is cost-effective and reduces anxiety symptoms in chronic obstructive pulmonary disease (COPD) patients, according to research published in ERJ Open Research [1].

COPD is a long-term condition that causes inflammation in the lungs, narrowing of the airways and damaged lung tissue, making breathing difficult. Anxiety often occurs alongside COPD and can mean that patients do less physical activity, leading to loss of fitness, isolation, and deteriorating health overall. The new study found that brief CBT sessions with respiratory nurses reduced feelings of anxiety for patients with COPD and resulted in less frequent use of A&E and hospital services.

Dr Karen Heslop-Marshall, a Nurse Consultant at Newcastle-upon-Tyne NHS Foundation Trust and Newcastle University, UK, was lead researcher on the study. She explains: "One of the main symptoms of COPD is breathlessness. This is very frightening and often leads to feelings of anxiety. Many healthcare professionals do not currently screen COPD patients for symptoms of anxiety, even though it can have an impact on their overall health.

"Feeling anxious has a negative impact on patients' quality of life and leads to more frequent use of healthcare resources. We wanted to test whether one-to-one CBT sessions delivered by respiratory nurses could reduce symptoms of anxiety and whether this could be a cost-effective intervention."

A total of 236 patients with a diagnosis of mild to very severe COPD took part in the trial.

Each patient had also been screened for anxiety using the HADS-Anxiety Subscale. This is a simple questionnaire that asks patients about their feelings of anxiety and depression over the past week. Scores of between eight and ten are considered to show mild symptoms, 11-14 indicate moderate symptoms, and scores of more than 15 indicate severe symptoms.

All the patients entered into the study scored eight or higher on the HADS scale. In total, 59% of those screened for entry into the study had raised HADS scores, suggesting anxiety is very common in COPD.

Over a three-month period, patients were either given leaflets on anxiety management or given leaflets as well as CBT. The CBT sessions coached patients on how to develop coping strategies to deal with the anxiety caused by breathlessness, to help to improve physical activity levels.

All patients also received standard medical care, including lung function testing, a medical review and appropriate drug treatments. If they were eligible, they also received pulmonary rehabilitation, which is a supervised exercise programme designed for COPD patients.

After three months, patients completed the HADS-Anxiety questionnaire again to assess how the different treatment methods affected their levels of anxiety.

The researchers found that CBT was more effective in reducing anxiety symptoms in COPD patients compared to leaflets alone; on average, the HADS-Anxiety scale scores of CBT patients improved by 3.4, while patients in the leaflet group improved by just 1.9.

After checking the hospital attendance records of patients in the study, the researchers found that for each patient who attended CBT, there was an average saving of £1,089 for hospital admissions and £63 for emergency room attendances.

The data also showed no link between a patient's lung function, measured by how much air a person can breathe out in one second, and their anxiety score. The researchers say this suggests that even patients with mild COPD can feel extremely anxious, and so would benefit from this intervention.

Dr Heslop-Marshall said: "We found that one-to-one CBT sessions delivered by respiratory nurses could reduce symptoms of anxiety and that this could be a cost-effective intervention. Although the CBT intervention initially resulted in added costs, as respiratory nurses required training in CBT skills, this was balanced by the savings made thanks to less frequent need of hospital and A&E services.

"Reducing the levels of anxiety patients experience has a significant impact on their quality of life as well as their ability to keep physically active and may improve survival in the long-term. Our research shows that front-line respiratory staff can deliver this intervention efficiently and effectively."

The researchers say it was not possible to blind participants to what method of treatment they received, which may have had an impact on their responses to the second HADS questionnaire. They are also unable to determine which specific element of the CBT intervention, such as the coping strategies to address frightening thoughts, pacing, breathing control, distraction, or encouraging physical activity, was most effective at reducing feelings of anxiety.

Dr Thierry Troosters, from the Katholieke Universiteit Leuven, Belgium, is President-Elect of the European Respiratory Society and was not involved in the research. He said: "COPD is a major burden to individuals, societies and healthcare systems across the world. This is partly due to the continued exposure to risk factors for COPD, such as smoking and air pollution, and partly due to ageing populations.

"This research highlights how using a multidisciplinary approach in the treatment of COPD can reduce the burden on patients and healthcare services. Treating patients for co-existing conditions such as anxiety contributes greatly to improving their overall health, and these methods can be cost-effective. Care provided by dedicated and properly trained healthcare professionals also allows for early referral of patients with more serious mental health conditions to even more specialised care tracks."

Credit: 
European Respiratory Society