Tech

Children 'not scared' by PPE, says study

Since the outbreak of SARS-CoV-2, it has quickly become apparent that children are extremely unlikely to suffer severe COVID-19 illness. Nevertheless, children have had to adjust to the new world of medical staff dressed in personal protective equipment (PPE) in the same way as all other patients. A new study from one of the UK's leading children's hospitals - Alder Hey Children's Hospital in Liverpool - shows that children are not scared by PPE, and can in fact feel reassured by it.

The study was conducted by Drs Charlotte Berwick, Jacinth Tan, Ijeoma Okonkwo and their colleagues at Alder Hey. It was performed in conjunction with the Procedure Induced Anxiety Network (PIANo-UK), led by Dr Richard Martin of Great Ormond Street Children's Hospital. This piece of work forms part of a larger multi-centre study evaluating the impact of PPE on children and young people undergoing anaesthesia.

The study results from Alder Hey were presented at the Winter Scientific Meeting of the Association of Anaesthetists, held online this year.

Anxiety before, during and after surgery in children is common and leads to complications including pain and delayed recovery. The requirement for staff to don PPE in the context of the coronavirus pandemic was thought to potentially contribute further to anxiety. The authors thus set out to investigate the true impact of PPE on fear and anxiety in children and young people.

Data were collected from June 22 to July 5, 2020, on children aged between 2 and 16 years, using validated scales to score anxiety behaviour in the anaesthetic room.

A total of 63 children with a median age of 9 years were studied: 38/63 (60%) were boys and 25/63 (40%) were girls. Almost half of the children (31/63; 49%) scored zero for anxiety, indicating a perfect induction of anaesthesia. There was no significant difference in anxiety when a sedative pre-medication was given, indicating that PPE did not affect the non-sedated children any more than those children who had been given a sedative to help manage their anxiety.

The authors explain: "At the start of the pandemic, there were real fears that we would have to separate hospitalised children from their parents, prior to theatre. It was thought that we would be using massive quantities of sedatives in all our patients to enable them to come to theatre safely and minimise the potential trauma from this forced separation."

In the day surgery cohort, 45 families were surveyed and asked to use a numerical rating scale from 0-10 in answer to the question "How scary do you find staff wearing PPE?" In another question, children and young people were asked to choose one or more words from a list of descriptors about how PPE made them feel. Patients could enter free text and/or select multiple descriptors including: I don't know, happy, safe, nervous, anxious, excited, giggly, scared, confident, worried, the same or sad.

Just over half of the responses (23/45; 51%) included answers from the child or young person undergoing surgery. A total of 42/45 (93%) families had expected staff to be wearing PPE; 15/23 (65%) children reported it made them feel happy and 15/23 (65%) safe. None of the 23 (0%) chose anxious, nervous or scared as descriptors. Parents overestimated their child's fear of PPE.

The authors conclude: "This study suggests that PPE does not contribute to anxiety in children and young people who need anaesthesia and surgery. Most patients experienced extremely low levels of anxiety at induction. PPE provided reassurance and increased a child's confidence in anaesthesia. Two thirds of children reported staff PPE made them feel safe and happy, and none reported being scared by PPE."

The authors admit these results were a little surprising: "We did expect some children to be scared because staff were wearing PPE. With such a mix of different masks, hoods and gowns in use at our hospital, we had thought at the very least that some patients would feel anxious about being cared for by a mixture of muffled-sounding staff resembling beekeepers, graffiti artists, blacksmiths and astronauts!"

"The finding that surprised us most were the overwhelmingly positive descriptors utilised by the children to describe how staff wearing PPE made them feel with most of them stating it made them feel happy and/or safe. Not a single child chose anxious, nervous or scared to describe how PPE made them feel. We were less surprised to find that parents overestimated their child's anxiety and fear. This is a more common protective response brought about by worry and concern."

While noting their study is limited by the absence of pre-COVID-19 data, they believe their work "highlights a probable psychological shift to a 'new normal' that warrants further study. PPE will likely remain commonplace in anaesthesia; understanding its impact will allow further improvements to the quality of the patient experience."

Credit: 
AAGBI

Burial practices point to an interconnected early Medieval Europe

Early Medieval Europe is frequently viewed as a time of cultural stagnation, often given the misnomer of the 'Dark Ages'. However, new analysis reveals that ideas could spread rapidly because communities were interconnected, creating a surprisingly unified culture across Europe.

Dr Emma Brownlee of the Department of Archaeology, University of Cambridge, examined how a key change in Western European burial practices spread across the continent faster than previously believed: between the 6th and 8th centuries AD, the practice of burying people with regionally specific grave goods was largely abandoned in favour of a more standardised, unfurnished burial.

"Almost everyone from the eighth century onwards is buried very simply in a plain grave, with no accompanying objects, and this is a change that has been observed right across western Europe," said Dr Brownlee.

To explore this change, Dr Brownlee examined over 33,000 graves from this period in one of the largest studies of its kind. Statistical analysis was used to create a 'heat map' of the practice, tracking how its frequency changed over time.
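To give a concrete sense of what such an analysis involves, the toy Python sketch below (using made-up numbers, not Dr Brownlee's dataset) computes the share of furnished graves per region and 25-year period - the kind of table that can then be rendered as a heat map.

import pandas as pd

# Hypothetical grave records: region, approximate date, and whether the
# burial contained grave goods (1 = furnished, 0 = unfurnished).
graves = pd.DataFrame({
    "region":    ["England", "England", "France", "France", "Germany", "Germany"],
    "year":      [560, 690, 575, 710, 600, 720],
    "furnished": [1, 0, 1, 0, 1, 0],
})

# Bin dates into 25-year periods and compute the proportion of furnished
# burials per region and period.
graves["period"] = (graves["year"] // 25) * 25
heat = (graves
        .groupby(["region", "period"])["furnished"]
        .mean()
        .unstack(fill_value=0))
print(heat)  # rows: regions, columns: 25-year bins, values: furnished share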

The results of this analysis, published in the journal Antiquity, reveal that grave good use began to decline from the mid-sixth century in England, France, Germany, and the Low Countries, and that by the early eighth century it had been abandoned entirely.

"The most important finding is that the change from burial with grave goods to burial without them was contemporary across western Europe," said Dr Brownlee. "Although we knew this was a widespread change before, no one has previously been able to show just how closely aligned the change was in areas that are geographically very far apart."

Crucially, this contemporary transition provides strong evidence that early Medieval Europe was a well-connected place, with regular contact and exchange of ideas across vast areas.

Evidence of increasing long-distance trade is seen around this period, which may have been how these connections were facilitated. As the idea spread between communities, social pressure drove more people to adopt it. As more people did, this pressure grew - explaining why the spread of unfurnished funerals appeared to accelerate over time.

As people came to share more similarities, the connections themselves were likely reinforced as well.

"The change in burial practice will have further reinforced those connections; with everyone burying their dead in the same manner, a medieval traveller could have gone anywhere in Europe and seen practices they were familiar with," said Dr Brownlee.

An interconnected Europe with long-distance trade and travel facilitating the spread of new ideas to create a shared culture may sound modern, but in reality, Europe has been 'global' for over a millennium.

Credit: 
University of Cambridge

Size of connections between nerve cells determines their signaling strength

image: The size of synapses in the cerebral cortex directly determines the strength of their signal transmission - illustrated as three nerve cell connections of different size and brightness.

Image: 
Kristian Herrera and authors

The neocortex is the part of the brain that humans use to process sensory impressions, store memories, give instructions to the muscles, and plan for the future. These computational processes are possible because each nerve cell is a highly complex miniature computer that communicates with around 10,000 other neurons. This communication happens via special connections called synapses.

The bigger the synapse, the stronger its signal

Researchers in Kevan Martin's laboratory at the Institute of Neuroinformatics at the University of Zurich (UZH) and ETH Zurich have now shown for the first time that the size of synapses determines the strength of their information transmission. "Larger synapses lead to stronger electrical impulses. Finding this relationship closes a key knowledge gap in neuroscience," explains Martin. "The finding is also critical for advancing our understanding of how information flows through our brain's circuits, and therefore how the brain operates."

Reconstructing the connections between nerve cells of the neocortex

First, the neuroscientists set about measuring the strength of the synaptic currents between two connected nerve cells. To do this, they prepared thin sections of a mouse brain and, under a microscope, inserted glass microelectrodes into two neighboring nerve cells of the neocortex. This enabled the researchers to artificially activate one of the nerve cells and at the same time measure the strength of the resulting synaptic impulse in the other cell. They also injected a dye into the two neurons to reconstruct their branched-out cellular processes in three dimensions under a light microscope.

Synapse size correlates with signaling strength

Since synapses are so tiny, the scientists used the high resolution of an electron microscope to be able to reliably identify and precisely measure the neuronal contact points. First, in their light microscope reconstructions, they marked all points of contact between the cell processes of the activated neuron that forwarded the signal and the cell processes of the neuron that received the synaptic impulse. Then, they identified all synapses between the two nerve cells under the electron microscope. They correlated the size of these synapses with the synaptic impulses they had measured previously. "We discovered that the strength of the synaptic impulse correlates directly with the size and form of the synapse," says lead author Gregor Schuhknecht, formerly a PhD student in Kevan Martin's team.
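As a rough illustration of this kind of analysis, the short Python sketch below correlates synapse sizes with the amplitudes of the recorded synaptic responses and fits a line for estimating strength from size; the numbers are entirely hypothetical, not the study's measurements.

import numpy as np

# Hypothetical paired measurements for several synapses.
synapse_area_um2 = np.array([0.05, 0.08, 0.12, 0.20, 0.31])
epsp_amplitude_mv = np.array([0.21, 0.35, 0.48, 0.83, 1.20])

# Pearson correlation between synapse size and response strength.
r = np.corrcoef(synapse_area_um2, epsp_amplitude_mv)[0, 1]

# Least-squares line for estimating amplitude from measured size.
slope, intercept = np.polyfit(synapse_area_um2, epsp_amplitude_mv, 1)
print(f"r = {r:.2f}; amplitude ~ {slope:.2f} * area + {intercept:.2f}")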

Gaining a deeper understanding of the brain's wiring diagrams

This correlation can now be used to estimate the strength of information transmission on the basis of the measured size of the synapse. "This could allow scientists to use electron microscopy to precisely map the wiring diagrams of the neocortex and then simulate and interpret the flow of information in these wiring diagrams in the computer," explains Schuhknecht. Such studies will enable a better understanding of how the brain functions under normal circumstances and how "wiring defects" can lead to neurodevelopmental disorders.

More computing power and storage capacity than thought

The team was also able to resolve another longstanding puzzle in neuroscience. Until now, the conventional doctrine had been that only a single neurotransmitter-filled packet (a so-called vesicle) is released at a synapse upon activation. The researchers were able to use a novel mathematical analysis to prove that each synapse in fact has several sites that can release packets of neurotransmitter simultaneously. "This means that synapses are much more complex and can regulate their signal strength more dynamically than previously thought. The computational power and storage capacity of the entire neocortex therefore seems to be much greater than was previously believed," says Kevan Martin.

Credit: 
University of Zurich

Investigational combo therapy shows benefit for TP53 mutant MDS and AML patients

TAMPA, Fla. -- Myelodysplastic syndromes (MDS) and acute myeloid leukemia (AML) are rare hematologic malignancies of the bone marrow. They can occur spontaneously or secondary to treatment for other cancers (so-called therapy-related disease), which is frequently associated with a mutation of the tumor suppressor gene TP53. Standard treatment for these patients includes hypomethylating agents such as azacitidine or decitabine, but unfortunately outcomes are very poor.

"Patients with TP53-mutant disease, which is roughly 10% to 20% of AML and de novo MDS cases, don't have many options for therapy with nondurable responses to standard therapy," said David Sallman, M.D., assistant member of the Malignant Hematology Department at Moffitt Cancer Center. "There is clearly a need for new targeted therapies for this patient population."

Sallman is leading a national, multicenter clinical trial investigating a new therapy option for this group of patients. It builds upon the standard of care, combining eprenetapopt (APR-246) with the chemotherapy azacitidine. Eprenetapopt is a first-in-class mutant p53 reactivator. It is infused into the body and induces cell death in TP53-mutant cancer cells. It also has a synergistic effect when combined with azacitidine, meaning that not only do the drugs work well on their own, but together they provide an amplified response.

Results of the phase 1b/2 trial to determine the safety, recommended dose and efficacy of the combination therapy were published in the Journal of Clinical Oncology.

Fifty-five patients (40 MDS, 11 AML, 4 MDS/myeloproliferative neoplasms) with at least one TP53 mutation were treated. The overall response rate was 71% with 44% having a complete response (50% for MDS patients), meaning no sign of disease with return of normal blood cell production. The median overall survival for patients was 10.8 months. Patients who responded to treatment had significantly improved overall survival at 14.7 months. Additionally, 35% of patients were able to proceed with allogeneic stem cell transplantation with favorable outcomes versus historical outcomes in this patient population.

"The data is promising and supports the current phase 3, multicenter trial, which we hope will lead to FDA approval and a new much-needed treatment option for this patient population," said Sallman.

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

RUDN University neurosurgeon created a method to collect mental activity data of software developers

image: A neurosurgeon from RUDN University studied the mental activity of developers at work. In his novel method, he combined mobile EEG devices and software that analyzes neurophysiological data.

Image: 
RUDN University

A neurosurgeon from RUDN University studied the mental activity of developers at work. In his novel method, he combined mobile EEG devices and software that analyzes neurophysiological data. The results of the study were published in the materials of the 23rd Euromicro Conference on Digital System Design (DSD).

To collect data about the activity of specific areas of the brain, one can use functional magnetic resonance imaging (fMRI). However, this method involves massive equipment and is only available at clinics or laboratories. Therefore, it is quite difficult to register human mental activity in a natural environment. Even if usual conditions are reproduced in a lab, the very fact that it is an experiment would still affect the behavior of the participants. To study the human brain in everyday situations, for example at work, scientists need portable technologies, such as devices that record an EEG through the scalp and skull. EEG registers the brain's electrical activity, and the accuracy of this method largely depends on the algorithm used to process the electric signals and render them into an image. A neurosurgeon from RUDN University confirmed the effectiveness of the open-source software MNE for interpreting EEG data.

"fMRI measures mental activity using blood oxygenation parameters and produces around one image per second, while EEG allows one to collect data with much higher frequency. Moreover, modern-day EEG devices can be used in various situations, unlike fMRI equipment that requires a participant to lie still in a tomographer," said Prof. Aldo Spallone, MD, from the Department of Neurology and Neurosurgery at RUDN University.

MNE is a software package that has been used to process MEG and EEG data since 2011. To conduct the experiment, the team invited three groups of developers with different levels of experience. Each group was given a task that had to be completed in 10 to 20 minutes, and each participant wore a portable EEG device on their head. The participants worked individually in an open-space office. The team also conducted separate experiments during which the participants worked in pairs and listened to music. Using MNE, the team managed to process EEG data in real time and obtain images similar to MRI scans. To make the measurements more accurate in the future, Prof. Spallone suggested combining EEG data with MRI and magnetoencephalography results, because EEG is unable to provide information about brain structure.
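For readers curious what working with MNE looks like, the minimal Python sketch below shows the kind of EEG preprocessing the package supports. The simulated recording and every parameter choice are illustrative only and are not the pipeline used in the study.

import numpy as np
import mne

# Simulated 8-channel EEG at 250 Hz standing in for a portable-headset
# recording (the study used real recordings; this is only a placeholder).
sfreq = 250.0
info = mne.create_info(ch_names=[f"EEG{i}" for i in range(8)],
                       sfreq=sfreq, ch_types="eeg")
data = np.random.randn(8, int(sfreq * 60)) * 1e-5  # 60 s of fake signal
raw = mne.io.RawArray(data, info)

# Typical cleanup: band-pass filter and average reference.
raw.filter(l_freq=1.0, h_freq=40.0)
raw.set_eeg_reference("average", projection=False)

# Fixed-length epochs and per-channel power spectra, e.g. to compare
# activity in different frequency bands across task conditions.
epochs = mne.make_fixed_length_epochs(raw, duration=2.0, preload=True)
psd = epochs.compute_psd(fmin=1.0, fmax=40.0)
print(psd.get_data().shape)  # (n_epochs, n_channels, n_freqs)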

"It is extremely important to understand how our brain works in different situations. In the case of software developers, it may help create an optimal working environment that would promote high efficiency and reduce the incidence of errors. We have confirmed that EEG devices can be used to study the human brain in everyday conditions. In the future, models of mental activity could be developed based on this method," added Prof. Spallone fron RUDN University.

Credit: 
RUDN University

Electron transfer discovery is a step toward viable grid-scale batteries

The liquid electrolytes in flow batteries provide a bridge to help carry electrons into electrodes, and that changes how chemical engineers think about efficiency.

The way to boost electron transfer in grid-scale batteries is different from what researchers had believed, a new study from the University of Michigan has shown.

The findings are a step toward being able to store renewable energy more efficiently.

As governments and utilities around the world roll out intermittent renewable energy sources such as wind and solar, we remain reliant on coal, natural gas and nuclear power plants to provide energy when the wind isn't blowing and the sun isn't shining. Grid-scale "flow" batteries are one proposed solution, storing energy for later use. But because they aren't very efficient, they need to be large and expensive.

In a flow battery, the energy is stored in a pair of "electrolyte" fluids which are kept in tanks and flow through the working part of the battery to store or release energy. An active metal gains or loses electrons from the electrode on either side, depending on whether the battery is charging or discharging. One efficiency bottleneck is how quickly electrons move between the electrodes and the active metal.

"By maximizing the charge transfer, we can reduce the overall cost of flow batteries," said study first author Harsh Agarwal, a chemical engineering Ph.D. student who works in the lab of Nirala Singh, U-M assistant professor of chemical engineering.

Researchers have been trying different chemical combinations to improve this transfer, but they haven't really known what is going on at the molecular level. This study, published in Cell Reports Physical Science, is one of the first to explore it.

Researchers had believed that the negatively charged molecular groups from the acids provided more spots for electron transfer to take place on the battery's negative electrode. The findings from the team co-led by Singh and Bryan Goldsmith, the Dow Corning Assistant Professor of Chemical Engineering, tell a different story. Instead, the acid groups lowered the energy barrier of the electron transfer by serving as a sort of bridge between the active metal in the fluid--vanadium in this case--and the electrode. This helps the vanadium give up its electron.

"Our findings suggest that bridging may play a critical yet underexplored role in other flow battery chemistries employing transition metals," Singh said. "This discovery is not only relevant to energy storage but also fields of corrosion and electrodeposition."

The study shows that the reaction rate in flow batteries can be tuned by controlling how well the acid in the liquid electrolyte binds with the active metal.

"Researchers can apply this knowledge to electrolyte engineering or electrocatalyst development, both of which are important disciplines in sustainable energy," Agarwal said.

Agarwal and Singh measured the reaction rate between the vanadium and electrode for five different acidic electrolytes. To get a clearer picture of the details at the atomic level, the team used a form of quantum mechanical modeling, known as density functional theory, to calculate how well the vanadium-acid combinations bind to the electrode. This part of the study was undertaken by Goldsmith and Jacob Florian, a chemical engineering senior undergraduate student working in the Goldsmith lab.

At Argonne National Laboratory, Agarwal and Singh used X-ray spectroscopy to discover details about how the vanadium ions configured themselves when in contact with different acids. Density functional theory calculations helped interpret the X-ray measurements. The study also provides the first direct experimental verification of how water attaches to vanadium ions.

Credit: 
University of Michigan

Climate change puts hundreds of coastal airports at risk of flooding

Even a modest sea level rise, triggered by increasing global temperatures, would place 100 airports below mean sea level by 2100, a new study has found.

Scientists from Newcastle University modelled the risk of disruption to flight routes as a result of increasing flood risk from sea level rise.

Publishing the findings in the journal Climate Risk Management, Professor Richard Dawson and Aaron Yesudian of Newcastle University's School of Engineering analysed the location of more than 14,000 airports around the world and their exposure to storm surges for current and future sea level. The researchers also studied airports' pre-COVID-19 connectivity and aircraft traffic, and their current level of flood protection.

They found that 269 airports are at risk of coastal flooding now. A temperature rise of 2°C - consistent with the Paris Agreement - would lead to 100 airports being below mean sea level and 364 airports at risk of flooding. If global mean temperature rise exceeds this, then as many as 572 airports will be at risk by 2100, leading to major disruptions without appropriate adaptation.

The team developed a global ranking of airports at risk from sea level rise, which considers the likelihood of flooding from extreme sea levels, the level of flood protection, and the impacts in terms of flight disruption. Airports are at risk in Europe, North America and Oceania, with those in East and Southeast Asia and the Pacific dominating the top 20 list of airports at the highest risk.
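As a purely illustrative example of how such a ranking can be assembled, the toy Python sketch below combines an assumed sea level rise and storm-surge height with hypothetical airport elevations, flood defences and route counts; none of the numbers or weightings come from the study.

# Hypothetical airports: name, elevation (m), flood protection (m), routes.
airports = [
    ("Airport A", 1.5, 0.5, 320),
    ("Airport B", 4.0, 0.0, 150),
    ("Airport C", 0.8, 1.0, 900),
]

SEA_LEVEL_RISE_M = 0.8  # assumed projection for 2100
EXTREME_SURGE_M = 2.5   # assumed storm-surge height

def risk_score(elevation, protection, routes):
    # Water level the airport must withstand under this scenario.
    water_level = SEA_LEVEL_RISE_M + EXTREME_SURGE_M
    # Exposure: how far the water level exceeds ground height plus defences.
    exposure = max(0.0, water_level - (elevation + protection))
    # Weight the exposure by how connected the airport is.
    return exposure * routes

ranked = sorted(airports, key=lambda a: risk_score(*a[1:]), reverse=True)
for name, elev, prot, routes in ranked:
    print(name, round(risk_score(elev, prot, routes), 1))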

Suvarnabhumi Airport in Bangkok (BKK) and Shanghai Pudong (PVG) topped the list, while London City is the UK airport with the highest risk.

Professor Dawson said: "These coastal airports are disproportionately important to the global airline network, and by 2100 between 10 and 20% of all routes will be at risk of disruption. Sea level rise therefore poses a serious risk to global passenger and freight movements, with considerable cost of damage and disruption."

"Moreover, some airports, for example in low-lying islands, play critical roles in providing economic, social and medical lifelines"

Adaptation options for coastal airports include increased flood protection, raising land and relocation.

Professor Dawson added: "The cost of adaptation will be modest in the context of global infrastructure expenditure. However, in some locations the rate of sea level rise, limited economic resources or space for alternative locations will make some airports unviable."

Credit: 
Newcastle University

A display that completely blocks off counterfeits

image: Schematic illustration of switchable metasurfaces. The full-color image is switched on (top) and concealed (bottom) by the polarization angle (0° and 90°) of the incident light, making it applicable to cryptography.

Image: 
POSTECH

Despite the anticounterfeiting devices attached to luxury handbags, marketable securities, and identification cards, counterfeit goods are on the rise. There is demand for next-generation anticounterfeiting technologies that surpass the traditional ones - technologies that are not easily forged and can hold various data.

A POSTECH research team led by Professor Junsuk Rho of the departments of mechanical engineering and chemical engineering, with Ph.D. candidates Chunghwan Jung of the Department of Chemical Engineering and Younghwan Yang of the Department of Mechanical Engineering, has succeeded in making a switchable display device using nanostructures that is capable of encrypting full-color images depending on the polarization of light. These findings were recently published in Nanophotonics.

The new device developed by the research team is built on a metasurface, a structure about one thousand times thinner than a strand of hair. Various colors can be expressed through microstructures uniformly arranged within the metasurface. Because the microstructures produced here form very small pixels, the device boasts high resolution (approximately 40,000 dpi) and a wide viewing angle while remaining thin enough to be produced in the form of stickers.

In addition, unlike previous studies that focused on expressing various colors, this device's on and off states can be switched according to the polarization of the incident light: it displays a full-color image in the on state and shows no image in the off state.

Besides turning an image on and off, the device can switch between different images. Specifically, by arranging three consecutive nanostructures, it achieves a higher colorization rate than previous studies. The researchers configured a total of 125 types of structures to encode a full-color image and showed experimentally that the image turns off completely according to the polarization.

This feature can be utilized in real life as an anti-forgery device. For example, it can be designed into a security label that appears to be a simple color image to the naked eye, but reveals the serial number when a special filter is used. Moreover, by utilizing its ultrahigh resolution feature and inserting high-capacity data security algorithm, it can be used as a new security device that can replace the traditional labeling method.

Chunghwan Jung, the first author of the paper, commented, "This new device is practically impossible to forge because it requires an electron microscope with a magnification of several thousand and nanometer-scale fabrication equipment."

"This device is an ultra-high-resolution device-type display that can turn on and turn off full-color images according to the polarization component of the incident light," remarked the corresponding author Professor Junsuk Rho who led the study. "These displays can store multiple images simultaneously and can be applied to in optical cryptography."

Credit: 
Pohang University of Science & Technology (POSTECH)

Neuronal recycling: This is how our brain allows us to read

Letters, syllables, words and sentences--spatially arranged sets of symbols that acquire meaning when we read them. But is there an area and cognitive mechanism in our brain that is specifically devoted to reading? Probably not; written language is too recent an invention for the brain to have developed structures specifically dedicated to it.

According to this new paper published in Current Biology, underlying reading is an evolutionarily ancient function that is more generally used to process many other visual stimuli. To prove it, SISSA researchers subjected volunteers to a series of experiments in which they were shown different symbols and images. Some were very similar to words, others were very much unlike reading material, such as nonsensical three-dimensional tripods or entirely abstract visual gratings; the results showed no difference in the way participants learned to recognise novel stimuli across these three domains.

According to the scholars, these data suggest that we process letters and words much as we process any other visual stimulus we use to navigate the world: we recognise the basic features of a stimulus - shape, size, structure and, yes, even letters and words - and we capture their statistics: how many times they occur, how often they appear together, how well one predicts the presence of the other. Thanks to this system, based on the statistical frequency of specific symbols (or combinations thereof), we can recognise orthography, understand it and therefore immerse ourselves in the pleasure of reading.

Reading is a cultural invention, not an evolutionary acquisition

"Written language was invented about 5000 years ago, there was no enough time in evolutionary terms to develop an ad hoc system", explain Yamil Vidal and Davide Crepaldi, lead author and coordinator of the research, respectively, which was also carried out by Eva Viviani, a PhD graduate from SISSA and now post-doc at the university of Oxford, and Davide Zoccolan, coordinator of the Visual Neuroscience Lab, at SISSA, too.

"And yet, a part of our cortex would appear to be specialised in reading in adults: when we have a text in front of us, a specific part of the cortex, the left fusiform gyrus, is activated to carry out this specific task. This same area is implicated in the visual recognition of objects, and faces in particular". On the other hand, explain the scientists, "there are animals such as baboons that can learn to visually recognise words, which suggests that behind this process there is a processing system that is not specific for language, and that get "recycled" for reading as we humans become literate".

Pseudocharacters, 3D objects and abstract shapes to prove the theory

How to shed light on this question? "We started from an assumption: if this theory is true, some effects that occur when we are confronted with orthographic signs should also be found when we are subjected to non-orthographic stimuli. And this is exactly what this study shows." In the research, volunteers were subjected to four different tests. In the first two, they were shown short "words" composed of a few pseudocharacters, similar to numbers or letters but with no real meaning. The scholars explain that this was done to prevent the participants, all adults, from being influenced in their performance by their prior knowledge. "We found that the participants learned to recognise groups of letters - words, in this invented language - on the basis of the frequency of co-occurrence between their parts: words that were made up of more frequent pairs of pseudocharacters were identified more easily." In the third experiment, they were shown 3D objects characterised by triplets of terminal shapes, much as the invented words were characterised by triplets of letters. In experiment 4, the images were even more abstract and dissimilar from letters. In all the experiments, the response was the same, giving full support to the theory.
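The Python toy sketch below illustrates this co-occurrence idea with an invented mini-corpus (not the study's stimuli or analysis): strings built from frequently co-occurring symbol pairs receive higher familiarity scores than strings built from rare pairs.

from collections import Counter

def pairs(word):
    # Adjacent symbol pairs, e.g. "abk" -> [("a", "b"), ("b", "k")].
    return list(zip(word, word[1:]))

# Hypothetical "exposure" corpus of invented words.
corpus = ["abk", "abq", "abz", "xyk", "qrs"]
pair_counts = Counter(p for w in corpus for p in pairs(w))
total = sum(pair_counts.values())

def familiarity(word):
    # Average relative frequency of the word's symbol pairs.
    ps = pairs(word)
    return sum(pair_counts[p] / total for p in ps) / len(ps)

print(familiarity("abk"))  # higher: "ab" occurs often in the corpus
print(familiarity("qxk"))  # lower: its pairs were rarely or never seen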

From human beings to artificial intelligence: unsupervised learning

"What emerged from this investigation", explain the authors, "not only supports our hypothesis but also tells us something more about the way we learn. It suggests that a fundamental part of it is the appreciation of statistical regularities in the visual stimuli that surround us". We observe what is around us and, without any awareness, we decompose it into elements and see their statistics; by so doing, we give everything an identity. In jargon, we call it "unsupervised learning". The more often these elements compose themselves in a precise organisation, the better we will be at giving that structure a meaning, be it a group of letters or an animal, a plant or an object. And this, say the scientists, occurs not only in children, but also in adults. "There is, in short, an adaptive development to stimuli which regularly occur. And this is important not only to understand how our brain functions, but also to enhance artificial intelligence systems that base their "learning" on these same statistical principles".

Credit: 
Scuola Internazionale Superiore di Studi Avanzati

Fans of less successful football clubs are more loyal to one another

Research led by the universities of Kent and Oxford has found that fans of the least successful Premier League football teams have a stronger bond with fellow fans and are more 'fused' with their club than supporters of the most successful teams.

The study, which was carried out in 2014, found that fans of Crystal Palace, Hull, Norwich, Sunderland, and West Bromwich Albion had higher loyalty towards one another and even expressed greater willingness to sacrifice their own lives to save the lives of other fans of their club. This willingness was much higher than that of Manchester United, Arsenal, Chelsea, Liverpool or Manchester City fans. A decade of club statistics from 2003-2013 was used to identify the five most consistently successful and the five least successful clubs in the Premier League.

Crystal Palace fans were the most likely to report willingness to sacrifice themselves for fellow fans (34.5%) and Arsenal fans were the least likely (9.4%). Manchester City fans appeared to bond with one another more like fans of the less successful clubs, although their willingness to sacrifice themselves did not differ significantly from that of fans of the other most successful clubs, such as local rivals Manchester United.

The fans most likely to cite social ties as the origin of their football passion supported the least successful club, Hull (92.2%), whereas the club whose fans reported the fewest social ties was Chelsea (63%), historically one of the most successful.

The survey-based study, which was conducted by Dr Martha Newson from Kent's School of Anthropology and Conservation, in collaboration with colleagues at Oxford, concluded that social bonding is significantly higher among fans of consistently unsuccessful clubs because they have experienced more dysphoria - the emotional difficulty of supporting a club that has been relegated or lost many games. Across clubs, memories of past football defeats formed an essential part of fans' self-concepts, fusing them to their club. These fans also considered their peers to be more like kin than did fans of consistently successful clubs.

Dr Newson, an expert in group bonding, said: 'This research has helped to unpick the psychological causes of bonds among fans and is relevant to other coalitional groups, such as the military. This could be incredibly valuable, with the long-term sustainability of football clubs depending on their ability to attract and retain supporters who will support their club through thick and thin.

'These findings may also help football clubs to think about broadening inclusion and diversity among fans, and to use their links to charitable foundations to make a difference. The Twinning Project is an example, which pairs major clubs with their local prison to deliver football-based qualifications in a bid to reduce reoffending.'

Professor Harvey Whitehouse, the senior author on the paper and Director of Oxford's Centre for the Study of Social Cohesion, said: 'This is the latest in a string of studies we have conducted showing that shared suffering can produce incredibly strong social glue - a finding that is not only relevant to sports fans but to all of us as we emerge from a year of lockdowns and personal losses. A key question is whether the bonds forged through collective ordeals can be put to practical use by enabling us to pull together more effectively in the future.'

While there have historically been cases where this extreme bonding among fans has turned to hostility and violence, the researchers argue that the extreme, pro-group sentiments of fans who are highly fused to their club need not be violent. Nevertheless, understanding what motivates devoted fans may help football clubs and policy makers better manage crowd behaviour. Furthermore, clubs could benefit from tailoring brand management and fan-retention strategies by extracting the best from dysphoric events and treating them as opportunities to remind fans that they are 'in it together'.

Dr Newson added: 'I hope that further studies can encourage clubs with high corporate social responsibility to implement more research-driven policies to improve other critical social areas, such as sexism, racial and ethnic relations, and homophobia.'

Credit: 
University of Kent

Pioneering new technique could revolutionise super-resolution imaging systems

Scientists have developed a pioneering new technique that could revolutionise the accuracy, precision and clarity of super-resolution imaging systems.

A team of scientists, led by Dr Christian Soeller from the University of Exeter's Living Systems Institute, which champions interdisciplinary research and is a hub for new high-resolution measurement techniques, has developed a new way to improve the very fine, molecular imaging of biological samples.

The new method builds upon the success of an existing super-resolution imaging technique called DNA-PAINT (Point Accumulation for Imaging in Nanoscale Topography) - where molecules in a cell are labelled with marker molecules that are attached to single DNA strands.

Matching DNA strands are then also labelled with a fluorescent chemical compound and introduced in solution - when they bind the marker molecules, a 'blinking effect' is created that makes imaging possible.

However, DNA-PAINT has a number of drawbacks in its current form, which limit the applicability and performance of the technology when imaging biological cells and tissues.

In response, the research team has developed a new technique, called Repeat DNA-PAINT, which is capable of suppressing background noise and nonspecific signals, as well as decreasing the time taken for the sampling process.

Crucially, Repeat DNA-PAINT is straightforward to use and carries no known drawbacks; it is routinely applicable, consolidating the role of DNA-PAINT as one of the most robust and versatile molecular-resolution imaging methods.

The study is published in Nature Communications on 21st January 2021.

Dr Soeller, lead author of the study and a biophysicist at the Living Systems Institute, said: "We can now see molecular detail with light microscopy in a way that a few years ago seemed out of reach. This allows us to directly see how molecules orchestrate the intricate biological functions that enable life in both health and disease."

The research was enabled by colleagues from physics, biology, medicine, mathematics and chemistry working together across traditional discipline boundaries. Dr Lorenzo Di Michele, co-author from Imperial College London said: "This work is a clear example of how quantitative biophysical techniques and concepts can really improve our ability to study biological systems".

Credit: 
University of Exeter

New combination of immunotherapies shows great promise for treating lung cancer

image: McMaster researchers Ali Ashkar and Sophie Poznanski have developed a promising new form of treatment for lung cancer using a combination of two immunotherapies.

Image: 
Georgia Kirkos, McMaster University

HAMILTON, ON, Jan. 21, 2021 -- McMaster University researchers have established in lab settings that a novel combination of two forms of immunotherapy can be highly effective for treating lung cancer, which causes more deaths than any other form of cancer.

The new treatment, yet to be tested on patients, uses one form of therapy to kill a significant number of lung tumor cells, while triggering changes to the tumor that enable the second therapy to finish the job.

The first therapy employs suppressed "natural killer" immune cells, extracting them from patients' tumours or blood and supercharging them for three weeks. The researchers condition the cells by expanding and activating them using tumour-like feeder cells to improve their effectiveness before sending them back into battle against notoriously challenging lung tumours.

The supercharged cells are very effective on their own, but in combination with another form of treatment called checkpoint blockade therapy, they create a potentially revolutionary treatment.

"We've found that re-arming lung cancer patients' natural killer immune cells acts as a triple threat against lung cancer," explains Sophie Poznanski, the McMaster PhD student and CIHR Vanier Scholar who is lead author of a paper published today in the Journal for ImmunoTherapy of Cancer.

"First, these highly activated cells are able to kill tumour cells efficiently. Second, in doing so, they also reactivate tumour killing by exhausted immune cells within the patients' tumours. And third, they release factors that sensitize patients' tumours to another immunotherapy called immune checkpoint blockade therapy.

"As a result, we've found that the combination of these two therapies induces robust tumour destruction against patient tumours that are initially non-responsive to therapy."

Previous breakthroughs in checkpoint blockade therapy earned Japanese researcher Tasuku Honjo and American immunologist James Allison the 2018 Nobel Prize in Physiology or Medicine.

Checkpoint blockade therapy works by unlocking cancer's defence against the body's natural immune response. The therapy can be highly effective in resolving even advanced cases of lung cancer - but it only works in about 10 per cent of patients who receive it.

The research team, featuring 10 authors in total, has shown that the supercharged immune cells, when deployed, release an agent that breaks down tumors' resistance to checkpoint blockade therapy, allowing it to work on the vast majority of lung-cancer patients whose tumors would otherwise resist the treatment.

Once activated, the natural killer cells are able to secrete inflammatory factors that help enhance the target of the checkpoint blockade used in the other immunotherapy.

"We needed to find a one-two punch to dismantle the hostile lung tumor environment," says Ali Ashkar, a professor of Medicine and a Canada Research Chair who is Poznanski's research supervisor and the corresponding author on the paper. "Not only is this providing a new treatment for hard-to-treat lung cancer tumors with the natural killer cells, but that treatment also converts the patients who are not responsive to PD1-blockade therapy into highly responsive candidates for this effective treatment".

Such progress is possible because of the close collaboration among clinical practitioners and lab-based researchers at McMaster and its partner institutions, Ashkar says.

He said the team's clinical practitioners, who work with cancer patients every day, provided critical wisdom and collected vital samples from patients at St. Joseph's Healthcare Hamilton. Ashkar says those clinicians' insights and the samples were integral to the research.

Co-author Yaron Shargall, a professor in and Chief of the Division of Thoracic Surgery at McMaster's Michael G. DeGroote School of Medicine and a thoracic surgeon at St. Joseph's Healthcare Hamilton, says the promising outcome is the result of close links between basic science and clinical medicine.

"It was successful mostly due to the facts that the two groups have spent long hours together, discussing potential ways of combining forces and defining a linkage between a highly specific basic science technology and a very practical clinical, day-to-day dilemmas," he said. "This led to a flawless collaboration which resulted in a very elegant, potentially practice-changing, study."

The researchers are now working to organize a human clinical trial of the combined therapies, a process that could be under way within months, since both immunotherapies have already been approved for individual use.

Credit: 
McMaster University

Squeezing a rock-star material could make it stable enough for solar cells

image: Scientists at SLAC National Accelerator Laboratory and Stanford University discovered that squeezing a promising lead halide material in a diamond anvil cell (left) produces a so-called "black perovskite" (right) that's stable enough for solar power applications.

Image: 
Greg Stewart/ SLAC National Accelerator Laboratory

Among the materials known as perovskites, one of the most exciting is a material that can convert sunlight to electricity as efficiently as today's commercial silicon solar cells and has the potential for being much cheaper and easier to manufacture.

There's just one problem: Of the four possible atomic configurations, or phases, this material can take, three are efficient but unstable at room temperature and in ordinary environments, and they quickly revert to the fourth phase, which is completely useless for solar applications.

Now scientists at Stanford University and the Department of Energy's SLAC National Accelerator Laboratory have found a novel solution: Simply place the useless version of the material in a diamond anvil cell and squeeze it at high temperature. This treatment nudges its atomic structure into an efficient configuration and keeps it that way, even at room temperature and in relatively moist air.

The researchers described their results in Nature Communications.

"This is the first study to use pressure to control this stability, and it really opens up a lot of possibilities," said Yu Lin, a SLAC staff scientist and investigator with the Stanford Institute for Materials and Energy Sciences (SIMES).

"Now that we've found this optimal way to prepare the material," she said, "there's potential for scaling it up for industrial production, and for using this same approach to manipulate other perovskite phases."

A search for stability

Perovskites get their name from a natural mineral with the same atomic structure. In this case the scientists studied a lead halide perovskite that's a combination of iodine, lead and cesium.

One phase of this material, known as the yellow phase, does not have a true perovskite structure and can't be used in solar cells. However, scientists discovered a while back that if you process it in certain ways, it changes to a black perovskite phase that's extremely efficient at converting sunlight to electricity. "This has made it highly sought after and the focus of a lot of research," said Stanford Professor and study co-author Wendy Mao.

Unfortunately, these black phases are also structurally unstable and tend to quickly slump back into the useless configuration. Plus, they only operate with high efficiency at high temperatures, Mao said, and researchers will have to overcome both of those problems before they can be used in practical devices.

There had been previous attempts to stabilize the black phases with chemistry, strain or temperature, but only in a moisture-free environment that doesn't reflect the real-world conditions that solar cells operate in. This study combined both pressure and temperature in a more realistic working environment.

Pressure and heat do the trick

Working with colleagues in the Stanford research groups of Mao and Professor Hemamala Karunadasa, Lin and postdoctoral researcher Feng Ke designed a setup where yellow phase crystals were squeezed between the tips of diamonds in what's known as a diamond anvil cell. With the pressure still on, the crystals were heated to 450 degrees Celsius and then cooled down.

Under the right combination of pressure and temperature, the crystals turned from yellow to black and stayed in the black phase after the pressure was released, the scientists said. They were resistant to deterioration from moist air and remained stable and efficient at room temperature for 10 to 30 days or more.

Examination with X-rays and other techniques confirmed the shift in the material's crystal structure, and calculations by SIMES theorists Chunjing Jia and Thomas Devereaux provided insight into how the pressure changed the structure and preserved the black phase.

The pressure needed to turn the crystals black and keep them that way was roughly 1,000 to 6,000 times atmospheric pressure, Lin said - about a tenth of the pressures routinely used in the synthetic diamond industry. So one of the goals for further research will be to transfer what the researchers have learned from their diamond anvil cell experiments to industry and scale up the process to bring it within the realm of manufacturing.

Credit: 
DOE/SLAC National Accelerator Laboratory

COVID-19 infection in immunodeficient patient cured by infusing convalescent plasma

image: Under FDA emergency-use authorization, doctors successfully resolved COVID-19 in a seriously ill, immunodeficient woman using convalescent plasma with a very high neutralizing antibody titer from a recovered COVID-19 patient. However, further study suggested that use of convalescent plasma may not be warranted in many cases, for two reasons: 1) titer levels are too low in many convalescent plasmas, and 2) high endogenous neutralizing antibody titers are already present in many COVID-19 patients prior to infusion.

Image: 
UAB

BIRMINGHAM, Ala. - A 72-year-old woman was hospitalized with severe COVID-19 disease, 33 days after the onset of symptoms. She was suffering a prolonged deteriorating illness, with severe pneumonia and a high risk of death, and she was unable to mount her own immune defense against the SARS-CoV-2 virus because of chronic lymphocytic leukemia, which compromises normal immunoglobulin production.

But when physicians at the University of Alabama at Birmingham recommended a single intravenous infusion of convalescent blood plasma from her son-in-law -- who had recovered from COVID-19 disease -- a remarkable, beneficial change followed. Her physician, Randall Davis, M.D., professor in the UAB Department of Medicine, says she showed prompt and profound improvement within 48 hours.

Her 104-degree F fever rapidly dropped. In three days, the virus was no longer detectable in her respiratory swabs. And four days after the infusion, she was discharged from the hospital.

However, this single case appears to be an outlier, as shown through collaborative research at UAB, the University of Pennsylvania and several other institutions. In their study, reported in Cell Reports Medicine, researchers show that the woman's recovery was due to an extremely high virus-neutralizing titer in the son-in-law's donated plasma that she received. They found that this titer was higher than titers they measured in 64 other remnant convalescent plasmas collected by two blood banks.

Only 37 percent of the convalescent plasmas from the first blood bank had neutralizing antibody titers above 250, the lower cut-off value allowed by the Food and Drug Administration's emergency use authorization for convalescent plasma, which allows an unapproved medical product to be used in an emergency to treat life-threatening disease. In the plasmas from the second blood bank, only 47 percent exceeded the neutralizing antibody titer cut-off of 250. Thus, many of these plasmas were inadequate for transfusion.

While eight convalescent plasmas from the second blood bank exceeded a neutralizing antibody titer of 1,000, they were far beneath the extremely high neutralizing antibody titer of 5,720 in the son-in-law's plasma that was given to his immunodeficient mother-in-law with COVID-19.

The researchers also analyzed plasma titers in 17 other patients besides the 72-year-old woman, both before and after they were given convalescent plasma for treatment of COVID-19. Before infusion of plasma, 53 percent of these patients already had neutralizing antibody values greater than 250, and seven of the patients had titers greater than 3,000.

For the 16 patients the researchers were able to analyze, the infusion of convalescent plasma had no significant impact on their preexisting neutralizing antibody titers, and many of the recipients had endogenous neutralizing antibody responses that far exceeded those of the administered convalescent plasma units.

In contrast, the infusion of 218 milliliters of the son-in-law's convalescent plasma into the index COVID-19 patient, an amount equal to about one cup, resulted in an obvious rise in her neutralizing antibody titers that persisted four days after infusion.

"Our results have important implications for how convalescent plasma therapy is being used now and how it may be improved," Davis and colleagues reported. "The low neutralizing titers in most convalescent plasma donors raise concern.

"Convalescing individuals with truly high-titer neutralizing antibodies are rare, which underscores the need for a concerted effort to identify them but also poses the question of whether there are ample numbers of suitable convalescent plasma donors," they wrote. "The generally low neutralizing antibody titers in most donors, as well as high titer baseline neutralizing antibodies in many recipients, highlight the importance of first testing the convalescent plasmas, and also the recipients. Doing so should optimize the clinical benefit and reduce the effort spent when convalescent plasma therapy is not appropriate."

Credit: 
University of Alabama at Birmingham

Adaptive optics with cascading corrective elements

image: Cascading optofluidic phase modulators for performance enhancement in refractive adaptive optics, doi 10.1117/1.AP.2.6.066005

Image: 
SPIE

Microscopy is the workhorse of contemporary life science research, enabling morphological and chemical inspection of living tissue with ever-increasing spatial and temporal resolution. Even though modern microscopes are genuine marvels of engineering, minute deviations from ideal imaging conditions will still lead to optical aberrations that rapidly degrade imaging quality. A mismatch between the refractive indices of the sample and its immersion medium, deviations in the thickness of sample holders or cover glasses, the effects of aging on the instrument--such deviations can manifest themselves in the form of spherical aberration and focusing errors. Also, particularly for deep tissue imaging, an essential tool in neurobiology research, an inhomogeneous refractive index of the sample and its complex surface shape can lead to additional higher order aberrations.

Adaptive optics microscopy

Adaptive optics (AO), an image correction technique first used in astronomical telescopes to compensate for the effects of atmospheric turbulence, is the state-of-the-art method for dynamically correcting sample- and system-induced aberrations in a microscopy system. A typical AO system features an active, shapeshifting optical element that can reproduce the inverse of the wavefront error present in the system. This element commonly takes the form of either a deformable mirror or a liquid crystal spatial light modulator, and its limitations define the quality of achievable aberration correction and thus the widespread applicability of AO microscopy.

As reported in Advanced Photonics, researchers from the University of Freiburg, Germany, have made a significant advance in AO microscopy through the demonstration of a new AO module comprising two deformable phase plates (DPPs). In contrast to deformable mirrors, the DPP system is a wavefront modulator operating in transmission, enabling direct AO integration with existing microscopes. In this AO configuration, similar to hi-fidelity loudspeakers with separate woofer and tweeter units, one of the optical modulators is optimized for low-spatial frequency aberrations, while the second is used for high-frequency correction.

Cascading modulation

A major challenge for an AO system with multiple phase modulators is how to place them at optically equivalent (conjugate) positions, which often requires multiple additional optical components to relay the image until it reaches the detector. Configuring even two modulators in an AO system is therefore very challenging. Since the DPPs operate in transmission, however, they can simply be stacked one directly behind the other, placing both modulators in essentially the same optical plane without bulky relay optics.

To demonstrate its performance, the team integrated their new AO system into a custom-built fluorescence microscope, where sample-induced aberrations are iteratively estimated without a wavefront sensor. Imaging experiments on synthetic samples demonstrated that the new AO system not only doubles the aberration correction range, but also greatly improves correction quality. The work demonstrates that more advanced aberration correction schemes, such as multi-conjugate adaptive optics, can be implemented as easily and with new and more advanced control methods.
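For illustration, the Python sketch below shows a generic sensorless-AO loop of the kind alluded to here: correction modes are adjusted one at a time to maximise an image-quality metric. The metric, the fake camera model and the mode count are placeholders, not the authors' estimation scheme.

import numpy as np

def image_metric(image):
    # Simple sharpness metric: sum of squared intensities, which favours
    # a concentrated, well-focused signal.
    return float(np.sum(image.astype(float) ** 2))

def acquire_image(coeffs):
    # Placeholder for applying 'coeffs' to the deformable phase plates and
    # grabbing a camera frame; here a toy optimum stands in for the sample.
    target = np.array([0.3, -0.1, 0.05])
    error = np.sum((np.asarray(coeffs) - target) ** 2)
    return np.full((8, 8), np.exp(-error))

def sensorless_correction(n_modes=3, trial_amps=(-0.2, 0.0, 0.2), sweeps=3):
    coeffs = np.zeros(n_modes)
    for _ in range(sweeps):          # repeat sweeps to refine the estimate
        for m in range(n_modes):     # optimise one correction mode at a time
            best_amp, best_score = coeffs[m], -np.inf
            for amp in trial_amps:
                trial = coeffs.copy()
                trial[m] += amp
                score = image_metric(acquire_image(trial))
                if score > best_score:
                    best_amp, best_score = trial[m], score
            coeffs[m] = best_amp
    return coeffs

print(sensorless_correction())  # coefficients move toward the toy optimum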

Credit: 
SPIE--International Society for Optics and Photonics