
Children develop PTSD when they 'overthink' their trauma

Children are more likely to suffer Post Traumatic Stress Disorder (PTSD) if they think their reaction to traumatic events is not 'normal' - according to new research from the University of East Anglia.

While most children recover well after a traumatic event, some go on to develop PTSD that may stay with them for months, years, or even into adulthood.

A new study, published today, reveals that children begin down this route when they have trouble processing their trauma and perceive their symptoms as being a sign that something is seriously wrong.

Lead researcher Prof Richard Meiser-Stedman, from UEA's Norwich Medical School, said: "Symptoms of PTSD can be a common reaction to trauma in children and teenagers. These can include distressing symptoms like intrusive memories, nightmares and flashbacks. Health professionals steer away from diagnosing it in the first month after a trauma because, rather than being a disorder, it's a completely normal response.

"Many children who experience a severe traumatic stress response initially can go on to make a natural recovery without any professional support. But a minority go on to have persistent PTSD, which can carry on for much longer.

"We wanted to find out more about why some children have significant traumatic stress symptoms in the days and weeks after a trauma and while others do not, and importantly - why some recover well without treatment, while others go on to experience more persistent problems."

The research team worked with over 200 children aged between eight and 17 who had attended a hospital emergency department following a one-off traumatic incident. These included events such as car crashes, assaults, dog attacks and medical emergencies.

These young people were interviewed and assessed for PTSD between two and four weeks following their trauma, and again after two months.

The research team split the children's reactions into three groups - a 'resilient' group who did not develop clinically significant traumatic stress symptoms at either time point, a 'recovery' group who initially displayed symptoms but showed none at the two-month follow-up, and a 'persistent' group who had significant symptoms at both time points.

The team also examined whether social support and talking about the trauma with friends or family may be protective against persistent problems after two months. They also took into account factors including other life stressors and whether the child was experiencing on-going pain.

Prof Meiser-Stedman said: "We found that PTSD symptoms are fairly common early on - for example between two and four weeks following a trauma. These initial reactions are driven by high levels of fear and confusion during the trauma.

"But the majority of children and young people recovered naturally without any intervention.

"Interestingly the severity of physical injuries did not predict PTSD, nor did other life stressors, the amount of social support they could rely on, or self-blame.

"The young people who didn't recover well, and who were heading down a chronic PTSD track two months after their trauma, were much more likely to be thinking negatively about their trauma and their reactions - they were ruminating about what happened to them.

"They perceived their symptoms as being a sign that something was seriously and permanently wrong with them, they didn't trust other people as much, and they thought they couldn't cope.

"In many cases, more deliberate attempts to process the trauma - for example, trying to think it through or talk it through with friends and family - were actually associated with worse PTSD. The children who didn't recover well were those that reported spending a lot of time trying to make sense of their trauma. While some efforts to make sense of trauma might make sense, it seems that it is also possible for children to get 'stuck' and spend too long focusing on what happened and why.

"The young people who recovered well on the other hand seemed to be less bothered by their reactions, and paid them less attention."

Credit: 
University of East Anglia

New type of mobile tracking links shoppers' physical movements, buying choices

Improvements in the precision of mobile technologies make it possible for advertisers to go beyond using static location and contextual information about consumers to increase the effectiveness of mobile advertising based on customers' location. A new study used a targeting strategy that tracks where, when, and for how long consumers are in a shopping mall to determine how shoppers' physical movements affect their economic choices. The study found that targeting potential customers in this way can significantly improve advertising via mobile phones.

The study, by researchers at Carnegie Mellon University, New York University, and Pennsylvania State University, appears in the journal Management Science.

"Our results can help advertisers improve the design and effectiveness of their mobile marketing strategies," says Beibei Li, assistant professor of information systems and management at Carnegie Mellon University's Heinz College of Information Systems and Public Policy, who coauthored the study.

The study took place in June 2014 at an Asian shopping mall with more than 300 stores and more than 100,000 daily visitors. Consumers were asked if they wanted to enjoy free Wi-Fi, and if they did, completed a form with their age, gender, income range, and type of credit card and phone.

Researchers tracked 83,370 unique responses over 14 days. Participants were randomly assigned to one of four groups: those who did not receive any ads via their mobile phone, those sent an ad from a randomly selected store, those sent an ad based on their current location, and those sent ads based on trajectory-based targeting. Researchers monitored the participants, obtaining detailed information on the shoppers' trajectories--where they were, when, and for how long--as well as detailed behavioral data that is recorded and updated regularly from many mobile devices.

Customers who purchased an item from a store in the mall were asked to fill out another form, which included similar questions as well as information on the amount spent and whether the purchase was related to a coupon the customer received via his or her mobile phone. A short follow-up survey was conducted via phone.

The study found that trajectory-based targeting can lead customers to use offers sent via mobile phone more frequently and more rapidly than more conventional forms of mobile targeting. In addition, trajectory-based targeting led to higher customer satisfaction among participants.

Trajectory-based mobile targeting also increased total revenues from the stores that were associated with the promotion, as well as overall revenue for the shopping mall. It was less effective in raising overall mall revenues on weekends, and less effective for shoppers who were exploring products across a range of categories instead of considering buying something from just one category.

The study also found that trajectory-based targeting is especially effective in attracting high-income and male shoppers.

"Mobile ads that are based on customers' trajectories can be designed to influence consumers' shopping patterns," explains Anindya Ghose, professor of business at New York University, who coauthored the study. "This suggests that this type of targeting can be used not only to boost the efficiency of customers' current shopping behavior but also to nudge them toward changing their shopping patterns, which will generate additional revenue for businesses."

Credit: 
Carnegie Mellon University

Scientists argue for more comprehensive studies of Cascade volcanoes

image: Mount Hood

Image: 
Oregon State University

CORVALLIS, Ore. - The string of volcanoes in the Cascades Arc, ranging from California's Mt. Lassen in the south to Washington's Mt. Baker in the north, has been studied by geologists and volcanologists for over a century. Spurred on by spectacular events such as the eruptions of Mount Lassen in 1915 and Mount St. Helens in 1980, scientists have studied most of the Cascade volcanoes in detail, seeking to work out where the magma that erupts comes from and what future eruptions might look like.

However, mysteries still remain about why nearby volcanoes often have radically different histories of eruption or erupt different types of magma. Now scientists would like to find out why - both for the Cascades and for other volcanic ranges.

In a perspective essay published today (March 22) in Nature Communications, scientists argue for more "synthesis" research looking at the big picture of volcanology to complement myriad research efforts looking at single volcanoes.

"The study of volcanoes is fascinating in detail, and it has largely been focused on research into individual volcanoes rather than the bigger picture," said Adam Kent, a volcano expert at Oregon State University and a co-author on the essay. "We now have the insight and data to go beyond looking at just Mount St. Helens and other well-known volcanoes. We can take a step back and ask why is St. Helens different from Mount Adams, why is that different from Mount Hood?"

The study takes a novel approach to this topic. "One way to do this is to consider the heat it took to create each of the volcanoes in the Cascades Arc, for example, and also compare this to the local seismic wave speeds and heat flow within the crust," Kent said. "Linking these diverse data sources together this way not only gives us a better glimpse into the past, but also offers some guidance on what we might expect in the future."

The need for studying volcanoes more thoroughly is simple, noted Christy Till of Arizona State University, lead author of the Nature Communications essay.

Worldwide, almost a billion people live in areas at risk from volcanic eruptions, 90 percent of whom live in the so-called Pacific Ring of Fire.

The subduction of the Juan de Fuca tectonic plate beneath the North American plate is the ultimate driver for the formation of the Cascade Range, as well as many of the earthquakes the Northwest has experienced. Subduction results in deep melting of the Earth's mantle, and the magma then heads upward towards the crust and surface, eventually reaching the surface to produce volcanoes.

But there are differences among the volcanoes, the researchers note, including in the north and south of the Cascade Range.

"The volcanoes in the north stand out because they stand alone," Kent said. "In the south, you also have recognizable peaks like the Three Sisters and Mount Jefferson, but you also many thousands of smaller volcanoes like Lava Butte and those in the McKenzie Pass area in between. Our work suggests that, together with the larger volcanoes, these small centers require almost twice the amount of magma being input into the crust in the southern part of the Cascade Range."

Why is that important?

"If you live around a volcano, you have to be prepared for hazards and the hazards are different with each different type of volcano," Kent said. "The northern Cascades are likely to have eruptions in the future, but we know where they'll probably be - at the larger stratovolcanoes like Mount Rainier, Mount Baker and Glacier Peak. In the south the larger volcanoes might also have eruptions, but then we have these large fields of smaller - so called 'monogenetic' volcanoes. For these it is harder to pinpoint where future eruptions will occur."

The field of volcanology has progressed quite a bit, the researchers acknowledge, and the need now exists to integrate some of the methodology of individual detailed studies to give a more comprehensive look at the entire volcanic system. The past is the best informer of the future.

"If you look at the geology of a volcano, you can tell what kind of eruption is most likely to happen," Kent said. "Mount Hood, for example, is known to have had quite small eruptions in the past, and the impact of these is mostly quite local. Crater Lake, on the other hand, spread ash across much of the contiguous United States.

"What we would like to know is why one volcano turns out to be a Mount Hood while another develops into a Crater Lake, with a very different history of eruptions. This requires us to think about the data that we have in new ways."

The 1980 eruption of Mt. St. Helens was a wake-up call to the threat of volcanoes in the continental United States, and though noteworthy, its eruption was relatively minor. The amount of magma involved in the eruption was estimated at 1 cubic kilometer (enough to fill about 400,000 Olympic swimming pools), whereas the eruption of Mt. Mazama 6,000 years ago that created Crater Lake involved 50 cubic kilometers, or 50 times as much.
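For readers who want to check those figures, the pool comparison and the 50-fold ratio are simple arithmetic; the short sketch below (assuming the standard roughly 2,500-cubic-metre volume of a 50 m Olympic pool) reproduces them.

```python
# Back-of-the-envelope check of the magma volumes quoted above.
KM3_IN_M3 = 1_000_000_000        # 1 cubic kilometre = 1e9 cubic metres
OLYMPIC_POOL_M3 = 2_500          # assumed 50 m x 25 m x 2 m pool volume

st_helens_km3 = 1                # 1980 Mount St. Helens eruption (approx.)
mazama_km3 = 50                  # Crater Lake-forming Mount Mazama eruption

pools = st_helens_km3 * KM3_IN_M3 / OLYMPIC_POOL_M3
print(f"St. Helens magma ~ {pools:,.0f} Olympic pools")        # ~400,000
print(f"Mazama / St. Helens ratio: {mazama_km3 / st_helens_km3:.0f}x")
```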

The researchers say the process of building and tearing down volcanoes continues today, though it is difficult to observe on a day-to-day basis.

"If you could watch a time-lapse camera over millions of years, you would see volcanoes building up slowly, and then eroding fairly quickly," said Kent, who is in OSU's College of Earth, Ocean, and Atmospheric Sciences. "Sometimes, both are happening at once."

Which of the Cascades is most likely to erupt? The smart money is on Mount St. Helens, because of its recent activity, but many of the volcanoes are still considered active.

"I can tell you unequivocally that Mount Hood will erupt in the future," Kent said. "I just can't tell you when."

For the record, Kent said the odds of Mt. Hood erupting in the next 30 to 50 years are less than 5 percent.

Credit: 
Oregon State University

Females live longer when they have help raising offspring

Female birds age more slowly and live longer when they have help raising their offspring, according to new research from the University of Sheffield.

Researchers studied the relationship between ageing and offspring rearing patterns in the Seychelles warbler, and found that females who had assistance from other female helpers benefitted from a longer, healthier lifespan.

The findings help explain why social species, such as humans, which live in groups and cooperate to raise offspring, often have longer lifespans.

The study was led by researchers at the University of East Anglia (UEA) and the University of Groningen in the Netherlands, in collaboration with the universities of Sheffield, Leeds and Wageningen, and with Nature Seychelles.

Professor Terry Burke, from the University of Sheffield's Department of Animal and Plant Sciences, said: "It is well understood that one of the benefits of having relatives' help to raise offspring is that this improves the survival of the young. We have now shown that as parents age they decline in their ability to care for their offspring, but having helpers compensates for this effect, allowing the parents to continue to reproduce successfully into old age. This result helps to answer the question of why some animals assist others to reproduce, instead of raising their own offspring."

Professor David S Richardson, from UEA's School of Biological Sciences, said: "There is huge variation in lifespan between different species, and also between individuals within a species. But we know very little about what causes one individual to live a long healthy life, and another to die young. Or indeed, why individuals in one species live much longer than individuals in another similar species.

"Finding out more about what causes biological ageing is really important. And, until now, there has been very little known about the relationship between sociality and ageing within species."

Many species have cooperative breeding systems - in which offspring are cared for not only by their parents, but also by other adult members of the group called 'helpers'. These helpers are often - but not always - grown-up offspring from previous years.

The research team used more than 15 years of data on the breeding patterns of Seychelles warblers living on the small island of Cousin, in the Seychelles, to study associations between cooperative care giving and ageing.

As well as studying how quickly individuals' chances of dying increased as they grow older, the team also used the length of the birds' telomeres as a measure of their condition. Telomeres are found at the end of chromosomes and act as protective caps to stop genes close to the end of the chromosome being damaged - like the hard plastic ends of a bootlace.

Professor Richardson said: "Our previous work has shown that telomere length can be a good indicator of an individual's biological condition relative to its actual age - a measure of an individual's biological age so to speak. So we can use it to measure how quickly different birds are ageing.

"In the Seychelles warbler the majority of helpers are female - and they assist with incubating the eggs and providing food for the chicks. This means that the parents don't need to do as much work when they have help.

"We found that older dominant females really benefit from having female helpers - they lose less of their telomeres and are less likely to die in the near future. This shows they are ageing slower than females without helpers. Interestingly, these older female mothers were also more likely to have female helpers.

"Meanwhile the survival of elderly birds who were not assisted by helpers declined rapidly with age.

"The birds only need one female helper to show the effect of delayed ageing, and indeed most only have either one or no helpers. Very few may have two or three helpers, but there were not enough of those to determine whether there would be a greater benefit in having more helpers."

Dr Martijn Hammers, from the University of Groningen, said: "Our results suggest that for the older mothers, there are real benefits to cooperative breeding. Biologically speaking they stay 'younger' for longer, and they are more likely to live longer.

"These findings may help to explain why social species often have longer lifespans.

"What we don't know yet is why some older individuals have helpers, which enable them to live longer, and some don't despite the obvious benefits. Further research is needed to confirm the causality of the associations we have found."

Credit: 
University of Sheffield

Brain region discovered that only processes spoken, not written words

image: This is an MRI brain scan from a patient in the study.

Image: 
Northwestern University

Patients in a new Northwestern Medicine study were able to comprehend words that were written but not said aloud. They could write the names of things they saw but not verbalize them.

Even though these patients could hear and speak perfectly fine, a disease had crept into a portion of their brain that kept them from processing auditory words while still allowing them to process visual ones. Patients in the study had primary progressive aphasia (PPA), a rare type of dementia that destroys language and currently has no treatment.

The study, published March 21 in the journal Cognitive and Behavioral Neurology, allowed the scientists to identify a previously little-studied area in the left brain that seems specialized to process auditory words.

If a patient in the study saw the word "hippopotamus" written on a piece of paper, they could pick out a picture of a hippopotamus from a set of flashcards. But when that patient heard someone say "hippopotamus," they could not point to the picture of the animal.

"They had trouble naming it aloud but did not have trouble with visual cues," said senior author Sandra Weintraub, professor of psychiatry and behavioral sciences and neurology at Northwestern University Feinberg School of Medicine. "We always think of these degenerative diseases as causing widespread impairment, but in early stages, we're learning that neurodegenerative disease can be selective with which areas of the brain it attacks."

For most patients with PPA, communicating can be difficult because it disrupts both the auditory and visual processes in the brain.

"It's typically very frustrating for patients with PPA and their families," said Weintraub, also a member of Northwestern's Mesulam Center for Cognitive Neurology and Alzheimer's Disease. "The person looks fine, they're not limping and yet they're a different person. It means having to re-adjust to this person and learning new ways to communicate."

Remarkably, all four patients in this study could still communicate with others through writing and reading because of a specific type of brain pathology, TDP-43 Type A.

"It doesn't happen that often that you just get an impairment in one area," Weintraub said, explaining that the brain is compartmentalized so that different networks share the job of seemingly easy tasks, such as reading a word and being able to say it aloud. "The fact that only the auditory words were impaired in these patients and their visual words were untouched leads us to believe we've identified a new area of the brain where raw sound information is transformed into auditory word images."

The findings are preliminary because of the small sample size but the scientists hope they will prompt more testing of this type of impairment in future PPA patients, and help design therapies for PPA patients that focus on written communication over oral communication.

While 30 percent of PPA cases are caused by molecular changes in the brain due to Alzheimer's Disease, the most common cause of this dementia, especially in people under 60 years old, is frontotemporal lobar degeneration (FTLD). The patients in this study had FTLD-TDP Type A, which is very rare. The fact that this rare neurodegenerative disease is associated with a unique clinical disorder of language is a novel finding.

The study followed patients longitudinally and examined their brains postmortem. Weintraub stressed the importance of people participating in longitudinal brain studies while they're alive and donating their brain to science after they die so the science community can continue learning more about how to keep brains healthy.

"We know so much about the heart, liver, kidneys, eyes and other organs but we know so little about the brain in comparison," Weintraub said.

Credit: 
Northwestern University

Uncovering the superconducting phosphine: P2H4 and P4H6

image: The high pressure phase diagrams of PH3 at room temperature and low temperature.

Image: 
©Science China Press

Searching for high-Tc superconductors has been a hot topic in physics since superconducting mercury was first reported more than a century ago. Dense hydrogen was predicted to metallize and become a superconductor at high pressure and room temperature. However, this has proven very challenging, and no widely accepted experimental confirmation has been reported yet. In 2004, Ashcroft predicted that hydrogen-dominant hydrides could become high-Tc superconductors at high pressure, owing to 'chemical precompression'. Later, Drozdov et al. observed a superconducting transition in H2S at 203 K and 155 GPa, which broke the record for the highest Tc. Very recently, LaH10 was reported to show superconducting behavior at ~260 K. Motivated by these works, extensive investigations of hydride systems have been reported.

PH3, a typical hydrogen-rich hydride, has attracted a great deal of research interest because of its superconducting transition discovered at high pressure. However, structural information was not provided, and the origin of the superconducting transition remains puzzling. Although a series of theoretical works suggested possible structures, the PH3 phase under compression has remained unknown and no relevant experimental studies have been reported.

In a recent research article published in National Science Review, scientists from the Center for High Pressure Science and Technology Advanced Research; the School of Physics and Electronic Engineering, Jiangsu Normal University; the Key Laboratory of Carbon Materials of Zhejiang Province, College of Chemistry and Materials Engineering, Wenzhou University; and the Shanghai Institute of Applied Physics, Chinese Academy of Sciences, present their results on the stoichiometric evolution of PH3 under high pressure. It was found that PH3 is stable below 11.7 GPa and then starts to dehydrogenate through two dimerization processes at room temperature and pressures up to 25 GPa. Two resulting phosphorus hydrides, P2H4 and P4H6, were verified experimentally and can be recovered to ambient pressure. Under further compression above 35 GPa, P4H6 decomposed directly into elemental phosphorus. Low temperature greatly hinders polymerization/decomposition under high pressure, allowing P4H6 to be retained up to at least 205 GPa. "Our findings suggested that P4H6 might be responsible for superconductivity at high pressures," said Dr. Lin Wang, the corresponding author of the article.

To determine the possible structure of P4H6 at high pressure, structural searches were further performed. Theoretical calculations revealed two stable structures: one with space group Cmcm (below 182 GPa) and one with space group C2/m (above 182 GPa). Phonon dispersion calculations for the two structures show no imaginary frequencies, verifying their dynamical stability. The superconducting Tc of the C2/m structure at 200 GPa was estimated to be 67 K. "All of these findings confirmed P4H6 might be the corresponding superconductor, which is helpful for shedding light on the superconducting mechanism," Dr. Wang added.

Credit: 
Science China Press

Computer program developed to find 'leakage' in quantum computers

Quantum computers are designed to process information using quantum bits, and promise huge speedups in scientific computing and codebreaking

Current prototype devices are publicly accessible but highly error prone: information can 'leak' into unwanted states

Computer program designed and run by University of Warwick physicists can tell when a quantum computer is 'leaking'

Results will inform the development of future quantum computers and error correction techniques

A new computer program that spots when information in a quantum computer is escaping to unwanted states will give users of this promising technology the ability to check its reliability without any technical knowledge for the first time.

Researchers from the University of Warwick's Department of Physics have developed a quantum computer program to detect the presence of 'leakage', where information being processed by a quantum computer escapes from the states of 0 and 1.

Their method is presented in a paper published today (19 March) in the journal Physical Review A, and includes experimental data from its application on a publicly accessible machine, which shows that undesirable states are affecting certain computations.

Quantum computing harnesses the unusual properties of quantum physics to process information in a wholly different way to conventional computers. Taking advantage of the behaviour of quantum systems, such as existing in multiple different states at the same time, this radical form of computing is designed to process data in all of those states simultaneously, lending it a huge advantage over conventional computing.

Conventional computers use combinations of 0s and 1s to encode information, but quantum computers can exploit quantum states that are both 0 and 1 at the same time. However, the hardware that encodes that information may sometimes encode it incorrectly in another state, a problem known as 'leakage'. Even a minuscule leakage, accumulating over many millions of hardware components, can cause miscalculations and potentially serious errors, nullifying any quantum advantage over conventional computers. As part of a much wider set of errors, leakage is playing its part in preventing quantum computers from being scaled up towards commercial and industrial application.

Armed with the knowledge of how much quantum leakage is occurring, computer engineers will be better able to build systems that mitigate against it and programmers can develop new error-correction techniques to take account of it.

Dr Animesh Datta, Associate Professor of Physics, said: "Commercial interest in quantum computing is growing so we wanted to ask how we can say for certain that these machines are doing what they are supposed to do.

"Quantum computers are ideally made of qubits, but as it turns out in real devices some of the time they are not qubits at all - but in fact are qutrits (three state) or ququarts (four state systems). Such a problem can corrupt every subsequent step of your computing operation.

"Most quantum computing hardware platforms suffer from this issue - even conventional computer drives experience magnetic leakage, for example. We need quantum computer engineers to reduce leakage as much as possible through design, but we also need to allow quantum computer users to perform simple diagnostic tests for it.

"If quantum computers are to enter common usage, it's important that a user with no idea of how a quantum computer works can check that it is functioning correctly without requiring technical knowledge, or if they are accessing that computer remotely."

The researchers applied their method using the IBM Q Experience quantum devices, through IBM's publicly accessible cloud service. They used a technique called dimension witnessing: by repeatedly applying the same operation on the IBM Q platform, they obtained a dataset of results that could not be explained by a single quantum bit, but only by a more complicated, higher-dimensional quantum system. They calculated that the probability of this conclusion arising from mere chance is less than 0.05%.
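To illustrate the underlying idea, here is a toy numerical sketch only, not the Warwick team's actual dimension-witnessing analysis; the rotation angle and leakage strength below are invented for demonstration. Repeatedly applying one gate to a true qubit yields measurement statistics with a single oscillation frequency, whereas a device that leaks into a third level produces statistics that no qubit model can reproduce.

```python
# Toy illustration (not the published protocol): compare repeated-gate
# statistics for an ideal qubit and for a "leaky" three-level system.
import numpy as np
from scipy.linalg import expm

theta = 0.4        # assumed qubit rotation angle per gate
epsilon = 0.15     # hypothetical unwanted coupling to the third level

# Generator of an ideal qubit rotation, embedded in a 3-level space
H_ideal = np.zeros((3, 3), dtype=complex)
H_ideal[0, 1] = H_ideal[1, 0] = theta / 2

# Same rotation, but with a small leakage coupling between levels 1 and 2
H_leaky = H_ideal.copy()
H_leaky[1, 2] = H_leaky[2, 1] = epsilon

U_ideal = expm(-1j * H_ideal)   # one application of the ideal gate
U_leaky = expm(-1j * H_leaky)   # one application of the leaky gate

def p0_sequence(U, n_max=20):
    """Probability of measuring state 0 after n repeated applications of U."""
    psi = np.array([1, 0, 0], dtype=complex)   # start in state 0
    probs = []
    for _ in range(n_max):
        psi = U @ psi
        probs.append(abs(psi[0]) ** 2)
    return np.array(probs)

ideal = p0_sequence(U_ideal)
leaky = p0_sequence(U_leaky)

# For a true qubit, P0(n) oscillates at a single frequency. The leaky
# sequence mixes several frequencies -- the kind of signature a
# dimension witness is designed to detect.
for n, (p_i, p_l) in enumerate(zip(ideal, leaky), start=1):
    print(f"n={n:2d}  ideal qubit P0={p_i:.3f}  leaky qutrit P0={p_l:.3f}")
```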

While conventional computers use binary digits, or 0s and 1s, to encode information in transistors, quantum computers use subatomic particles or superconducting circuits known as transmons to encode that information as a qubit. This means that it is in a superposition of both 0 and 1 at the same time, allowing users to compute on different sequences of the same qubits simultaneously. As the number of qubits increases, the number of processes also increases exponentially. Certain kinds of problems, like those found in codebreaking (which relies on factoring large integers) and in chemistry (such as simulating complicated molecules), are particularly suited to exploiting this property.

Transmons (and other quantum computer hardware) can exist in a huge number of states: 0, 1, 2, 3, 4 and so on. An ideal quantum computer only uses states 0 and 1, as well as superpositions of these, otherwise errors will emerge in the quantum computation.

Dr George Knee, whose work was funded by a Research Fellowship from the Royal Commission for the Exhibition of 1851, said: "It is quite something to be able to make this conclusion at a distance of several thousand miles, with very limited access to the IBM chip itself. Although our program only made use of the permitted 'single qubit' instructions, the dimension witnessing approach was able to show that unwanted states were being accessed in the transmon circuit components. I see this as a win for any user who wants to investigate the advertised properties of a quantum machine without the need to refer to hardware-specific details."

Credit: 
University of Warwick

Parkinson's treatment delivers a power-up to brain cell 'batteries'

Scientists have gained clues into how a Parkinson's disease treatment, called deep brain stimulation, helps tackle symptoms.

The early-stage study, by researchers at Imperial College London, suggests the treatment boosts the number and strength of brain cell 'batteries' called mitochondria. These batteries in turn provide power to brain cells, which may help reduce problems with movement and tremors.

Deep brain stimulation is a treatment used for late-stage Parkinson's disease that involves surgically implanting thin wires, called electrodes, into the brain. These wires deliver small electric pulses into the head, which helps reduce slow movement, tremor and stiffness.

However, scientists have been unsure how the treatment, which is given to around 300 patients a year, tackles Parkinson's symptoms.

Dr Kambiz Alavian, senior author of the study from the Department of Medicine, Imperial College London, said: "Deep brain stimulation has been used successfully to treat Parkinson's for over 20 years, and is often offered to patients once medication no longer controls their symptoms.

"But despite the success of the treatment, we still don't know exactly how delivering electric pulses to brain cells creates these beneficial effects. Our results, despite being at an early-stage, suggest the electric pulses boost batteries in the brain cells. This potentially opens avenues for exploring how to replicate this cell power-up with non-surgical treatments, without the need for implanting electrodes in the brain."

Parkinson's disease affects around 127,000 people in the UK, and causes the progressive loss of brain cells in an area called the substantia nigra. This leads to a reduction in a brain chemical called dopamine, which is crucial for controlling movement. As a result, the condition triggers symptoms such as tremor and slow movement.

The initial causes of the condition are still unknown, but recent studies suggest brain cells in the substantia nigra of the patients have fewer mitochondria - tiny energy-producing structures that keep cells alive.

In the latest trial, published in the FASEB Journal, scientists investigated brain cells from three deceased patients with Parkinson's disease who had received deep brain stimulation (DBS), four deceased patients who had Parkinson's disease but did not receive DBS, and three deceased individuals who did not have Parkinson's.

All brains came from the Parkinson's UK Brain Bank, at Imperial College London.

The team found the brain cells of people who had received deep brain stimulation had a higher number of mitochondria, compared to patients who didn't receive the treatment. The mitochondria in the DBS patients were also bigger than those in patients who didn't receive treatment, suggesting they may produce more energy.

The scientists highlight the fact only a small number of brain samples were used for this study, but now hope to start larger investigations.

"These type of studies are difficult to perform, as they can only be carried out after a patient has passed away. Without the Parkinson's UK Brain Bank - and ultimately the people affected by Parkinson's disease who choose to donate their brain after death, we wouldn't be able to perform important studies such as these.

"We now hope to carry out larger studies to explore new treatments that may preserve brain cell mitochondria. The ultimate goal would be to keep cells powered-up for longer, and Parkinson's symptoms at bay."

Credit: 
Imperial College London

Chatterpies, haggisters and ninuts could help children love conservation

Weaving stories and intriguing names into children's education about the natural world could help to engage them with species' conservation messages, new research shows.

A team at the University of Birmingham carried out a study to explore the potential of species' cultural heritage for inspiring the conservationists of the future. Focusing on magpies, one of the UK's most easily recognised birds, the researchers presented schoolchildren with information about the birds, and then asked them questions about their attitudes to magpies and the conservation of the species.

Around 400 10- and 11-year-olds participated in the survey, which took place across a number of different schools in Milton Keynes, Buckinghamshire - a town typical of expanding urban areas in industrialised countries. Divided into four groups, the children were given either only cultural information about the birds, only scientific information, or both. A control group was given no additional information at all.

The children were then asked to fill in a questionnaire about the birds, in particular whether they thought it was important to protect magpies, and the reasons for doing so - for example, because it's the right thing to do, or so that more can be learned about the species, or because of their cultural heritage.

Nigel Hopper, of the University of Birmingham's School of Biosciences, is lead author on the paper. He explained: "Magpies feature strikingly in folk stories, myths and rhyme - think, 'One for sorrow, two for joy', and so on. They're often portrayed as sinister creatures, bringing with them bad omens, or as cheeky thieves with an attraction to shiny objects. They also have dozens of quirky names attached to them. We wanted to see if using some of this wealth of cultural information could help magpies steal the hearts and minds of pupils and persuade them to engage with species' conservation."

The survey results showed that the students who were given only cultural information valued that information and regarded it as a reason to protect magpies. Children given only scientific information had less regard for cultural information and were less likely to agree that magpies should be protected on account of their cultural heritage. This suggests a diluting effect of scientific information on appreciation for cultural heritage information.

"Most people are not natural-born scientists," says Hopper. Our results suggest that using species' cultural heritage to first engage people's imaginations could be an effective way of ensuring a captive audience for important scientific messages around species' conservation. And because adults love stories as much as children do, species' cultural heritage has the potential to inspire a conservation ethos that lasts a lifetime."

Dr Jim Reynolds, a senior author also at the University of Birmingham, added: "We have questioned for a long time the optimum age at which to engage with the general public about conservation issues. Our study reveals that children even as young as 10 or 11 years old can assimilate quite complex information and use it to express strong personal opinions.

"We now wonder whether children even younger might already be holding strong conservation values. Our research indicates that the form of communicated information may be crucial in translating personal interests and motivations into tangible and powerful conservation benefits. Get it right and the rewards for biodiversity conservation could be enormous."

Credit: 
University of Birmingham

'Goldilocks' stars may be 'just right' for finding habitable worlds

image: The artist's concept depicts NASA's Kepler mission's smallest habitable zone planet. Seen in the foreground is Kepler-62f, a super-Earth-size planet in the habitable zone of a star smaller and cooler than the sun, located about 1,200 light-years from Earth in the constellation Lyra.

Kepler-62f orbits its host star every 267 days and is roughly 40 percent larger than Earth in size. The size of Kepler-62f is known, but its mass and composition are not. However, based on previous exoplanet discoveries of similar size that are rocky, scientists are able to determine its mass by association.

Much like our solar system, Kepler-62 is home to two habitable zone worlds. The small shining object seen to the right of Kepler-62f is Kepler-62e. Orbiting on the inner edge of the habitable zone, Kepler-62e is roughly 60 percent larger than Earth.

Image: 
NASA Ames/JPL-Caltech/Tim Pyle

Scientists looking for signs of life beyond our solar system face major challenges, one of which is that there are hundreds of billions of stars in our galaxy alone to consider. To narrow the search, they must figure out: What kinds of stars are most likely to host habitable planets?

A new study finds a particular class of stars called K stars, which are dimmer than the Sun but brighter than the faintest stars, may be particularly promising targets for searching for signs of life.

Why? First, K stars live a very long time -- 17 billion to 70 billion years, compared to 10 billion years for the Sun -- giving plenty of time for life to evolve. Also, K stars have less extreme activity in their youth than the universe's dimmest stars, called M stars or "red dwarfs."

M stars do offer some advantages in the search for habitable planets. They are the most common star type in the galaxy, comprising about 75 percent of all the stars in the universe. They are also frugal with their fuel, and could shine on for over a trillion years. One example of an M star, TRAPPIST-1, is known to host seven Earth-size rocky planets.

But the turbulent youth of M stars presents problems for potential life. Stellar flares - explosive releases of magnetic energy - are much more frequent and energetic from young M stars than young Sun-like stars. M stars are also much brighter when they are young, for up to a billion years after they form, with energy that could boil off oceans on any planets that might someday be in the habitable zone.

"I like to think that K stars are in a 'sweet spot' between Sun-analog stars and M stars," said Giada Arney of NASA's Goddard Space Flight Center in Greenbelt, Maryland.

Arney wanted to find out what biosignatures, or signs of life, might look like on a hypothetical planet orbiting a K star. Her analysis is published in the Astrophysical Journal Letters.

Scientists consider the simultaneous presence of oxygen and methane in a planet's atmosphere to be a strong biosignature because these gases like to react with each other, destroying each other. So, if you see them present in an atmosphere together, that implies something is producing them both quickly, quite possibly life, according to Arney.

However, because planets around other stars (exoplanets) are so remote, there need to be significant amounts of oxygen and methane in an exoplanet's atmosphere for the signature to be seen by observatories at Earth. Arney's analysis found that the oxygen-methane biosignature is likely to be stronger around a K star than around a Sun-like star.

Arney used a computer model that simulates the chemistry and temperature of a planetary atmosphere, and how that atmosphere responds to different host stars. These synthetic atmospheres were then run through a model that simulates the planet's spectrum to show what it might look like to future telescopes.

"When you put the planet around a K star, the oxygen does not destroy the methane as rapidly, so more of it can build up in the atmosphere," said Arney. "This is because the K star's ultraviolet light does not generate highly reactive oxygen gases that destroy methane as readily as a Sun-like star."

This stronger oxygen-methane signal has also been predicted for planets around M stars, but their high activity levels might make M stars unable to host habitable worlds. K stars can offer the advantage of a higher probability of simultaneous oxygen-methane detection compared to Sun-like stars without the disadvantages that come along with an M star host.

Additionally, exoplanets around K stars will be easier to see than those around Sun-like stars simply because K stars are dimmer. "The Sun is 10 billion times brighter than an Earthlike planet around it, so that's a lot of light you have to suppress if you want to see an orbiting planet. A K star might be 'only' a billion times brighter than an Earth around it," said Arney.

Arney's research also includes discussion of which of the nearby K stars may be the best targets for future observations. Since we don't have the ability to travel to planets around other stars due to their enormous distances from us, we are limited to analyzing the light from these planets to search for a signal that life might be present. By separating this light into its component colors, or spectrum, scientists can identify the constituents of a planet's atmosphere, since different compounds emit and absorb distinct colors of light.

"I find that certain nearby K stars like 61 Cyg A/B, Epsilon Indi, Groombridge 1618, and HD 156026 may be particularly good targets for future biosignature searches," said Arney.

Credit: 
NASA/Goddard Space Flight Center

Bullying bosses negatively impact employee performance and behavior

Employees bullied by their bosses are more likely to report unfairness and work stress, and consequently become less committed to their jobs or even retaliate, according to a Portland State University study.

The findings, published recently in the Journal of Management, highlight the consequences of abusive supervision, which is becoming increasingly common in workplaces, said Liu-Qin Yang, the study's co-author and an associate professor of industrial-organizational psychology in PSU's College of Liberal Arts and Sciences.

Yang and her co-authors reviewed 427 studies and quantitatively aggregated the results to better understand why and how bullying bosses can decrease "organizational citizenship behavior" -- or the voluntary extras you do that aren't part of your job responsibilities -- and increase "counterproductive work behavior." Examples of such behaviors include sabotage at work, coming into work late, taking longer-than-allowed breaks, doing tasks incorrectly or withholding effort, all of which can affect your team and coworkers.

The researchers attribute the negative work behaviors to either perceptions of injustice or work stress.

With perceptions of injustice, employees bullied by their boss see the treatment as unfair relative to the effort they've put into their jobs. In response, they're more likely to purposely withhold from the unpaid extras that help the organization, like helping coworkers with problems or attending meetings that are not mandatory. They're also more likely to engage in counterproductive work behavior such as taking longer breaks or coming in late without notice, Yang said.

Having an abusive boss can also lead to work stress, which reduces an employee's ability to control negative behaviors or contribute to the organization in a positive way.

The researchers found that fairness (or the lack thereof) accounted more for the link between abusive supervision and organizational citizenship behavior, while work stress led to more counterproductive work behavior.

"Stress is sometimes uncontrollable. You don't sleep well, so you come in late or take a longer break, lash out at your coworkers or disobey instructions," Yang said. "But justice is more rational. Something isn't fair, so you're purposely not going to help other people or when the boss asks if anyone can come in on a Saturday to work, you don't volunteer."

Yang and her co-authors recommend that organizations take measures to reduce or curb abusive supervision. Among their suggestions:

Launch regular training programs to help supervisors learn and adopt more effective interpersonal and management skills when interacting with their employees

Implement fair policies and procedures to reduce employees' perceptions of injustice in the organization

Ensure employees have sufficient resources to perform their job, such as by offering stress management training

Credit: 
Portland State University

Do all networks obey the scale-free law? Maybe not

As Benjamin Franklin once joked, death and taxes are universal. Scale-free networks may not be, at least according to a new study from CU Boulder.

The research challenges a popular, two-decade-old theory that networks of all kinds, from Facebook and Twitter to the interactions of genes in yeast cells, follow a common architecture that mathematicians call "scale-free."

Such networks fit into a larger category of networks that are dominated by a few hubs with many more connections than the vast majority of nodes--think Twitter where for every Justin Bieber (105 million followers) out there, you can find thousands of users with just a handful of fans.

In research published this week in the journal Nature Communications, CU Boulder's Anna Broido and Aaron Clauset set out to put that theory to the test. They used computational tools to analyze a huge dataset of more than 900 networks, with examples from the realms of biology, transportation, technology and more.

Their results suggest that death and taxes may not have much competition, at least in networks. Based on Broido and Clauset's analysis, close to 50 percent of real networks didn't meet even the most liberal definition of what makes a network scale-free.

Those findings matter, Broido said, because the shape of a network determines a lot about its properties, including how susceptible it is to targeted attacks or disease outbreaks.

"It's important to be careful and precise in defining things like what it means to be a scale-free network," said Broido, a graduate student in the Department of Applied Mathematics.

Clauset, an associate professor in the Department of Computer Science and BioFrontiers Institute, agrees.

"The idea of scale-free networks has been a unifying but controversial theme in network theory for nearly 20 years," he said. "Resolving the controversy has been difficult because we lacked good tools and broad data. What we've found now is that there is little evidence for classically scale-free networks except in a few specific places. Most networks don't look scale-free at all."

Deciding whether or not a network is "scale-free," however, can be tricky. Many types of networks look similar from a distance.

In scale-free networks, however, the pattern of connections coming into and out of nodes follows a precise mathematical form called a power-law distribution.
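To make the power-law idea concrete, here is a minimal sketch, not the authors' statistical pipeline (which uses more careful discrete fitting and goodness-of-fit testing): in a scale-free network the probability that a node has degree k falls off roughly as k to the power of minus alpha, and the exponent can be estimated by maximum likelihood.

```python
# Minimal sketch: draw power-law-distributed "degrees" and recover the
# exponent alpha with a simple continuous maximum-likelihood estimator.
# (Real degree data are discrete; published methods handle that and add
# goodness-of-fit tests, which are omitted here.)
import numpy as np

rng = np.random.default_rng(0)

def sample_power_law(alpha, k_min, size):
    """Inverse-transform sampling from P(k) ~ k**(-alpha), k >= k_min."""
    u = rng.random(size)
    return k_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def fit_alpha(values, k_min):
    """Continuous MLE: alpha_hat = 1 + n / sum(ln(k_i / k_min))."""
    k = values[values >= k_min]
    return 1.0 + len(k) / np.sum(np.log(k / k_min))

degrees = sample_power_law(alpha=2.5, k_min=1.0, size=10_000)
print(f"estimated alpha ~ {fit_alpha(degrees, k_min=1.0):.2f}")   # close to 2.5
```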

To take such networks out of the realm of speculation, Clauset and Broido turned to the Index of Complex Networks (ICON). This archive, which was assembled by Clauset's research group at CU Boulder, lists data on thousands of networks from every scientific domain. They include the social links between Star Wars characters, interactions among yeast proteins, friendships on Facebook and Twitter, airplane travel and more.

Their findings were stark. The researchers calculated that only about 4 percent of the networks they studied met the strictest criteria for being scale free. These special networks included some types of protein networks in cells and certain kinds of technological networks.

Far from being a let-down, Clauset sees these null findings in a positive light: If scale-free isn't the norm, then scientists are free to explore new and more accurate structures for the networks people encounter every day.

"The diversity of real networks presents a mystery," he said. "What are the common shapes of the networks? How do different kinds of networks assemble and maintain their structure over time? I'm excited that our findings open up room to explore new ideas."

Credit: 
University of Colorado at Boulder

2018's biggest volcanic eruption of sulfur dioxide

image: The natural-color image above was acquired on July 27, 2018, by the Visible Infrared Imaging Radiometer Suite (VIIRS) on Suomi NPP.

Image: 
Image by Lauren Dauphin, NASA Earth Observatory.

The Manaro Voui volcano on the island of Ambae in the nation of Vanuatu in the South Pacific Ocean made the 2018 record books. A NASA-NOAA satellite confirmed Manaro Voui had the largest eruption of sulfur dioxide that year.

The volcano injected 400,000 tons of sulfur dioxide into the upper troposphere and stratosphere during its most active phase in July, and a total of 600,000 tons in 2018. That's three times the amount released from all combined worldwide eruptions in 2017.

During a series of eruptions at Ambae in 2018, volcanic ash also blackened the sky, buried crops and destroyed homes, and acid rain turned the rainwater, the island's main source of drinking water, cloudy and "metallic, like sour lemon juice," said New Zealand volcanologist Brad Scott. Over the course of the year, the island's entire population of 11,000 was forced to evacuate.

At the Ambae volcano's peak eruption in July, measurements showed the results of a powerful burst of energy that pushed gas and ash to the upper part of the troposphere and into the stratosphere, at an altitude of 10.5 miles. Sulfur dioxide is short-lived in the atmosphere, but once it penetrates into the stratosphere, where it combines with water vapor to convert to sulfuric acid aerosols, it can last much longer -- for weeks, months or even years, depending on the altitude and latitude of injection, said Simon Carn, professor of volcanology at Michigan Tech.

In extreme cases, like the 1991 eruption of Mount Pinatubo in the Philippines, these tiny aerosol particles can scatter so much sunlight that they cool the Earth's surface below.

The map above shows stratospheric sulfur dioxide concentrations on July 28, 2018, as detected by OMPS on the Suomi-NPP satellite. Ambae (also known as Aoba) was near the peak of its sulfur emissions at the time. For perspective, emissions from Hawaii's Kilauea and the Sierra Negra volcano in the Galapagos are shown on the same day. The plot below shows the July-August spike in emissions from Ambae.

"With the Kilauea and Galapagos eruptions, you had continuous emissions of sulfur dioxide over time, but the Ambae eruption was more explosive," said Simon Carn, professor of volcanology at Michigan Tech. "You can see a giant pulse in late July, and then it disperses."

The OMPS nadir mapper instruments on the Suomi-NPP and NOAA-20 satellites contain hyperspectral ultraviolet sensors, which map volcanic clouds and measure sulfur dioxide emissions by observing reflected sunlight. Sulfur dioxide (SO2) and other gases like ozone each have their own spectral absorption signature, their unique fingerprint. OMPS measures these signatures, which are then converted, using complicated algorithms, into the number of SO2 gas molecules in an atmospheric column.
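The full retrieval is complex, but the core physics is differential absorption: SO2 removes reflected sunlight at its characteristic ultraviolet wavelengths, and the depth of that absorption grows with the number of molecules in the column. The toy Beer-Lambert calculation below is illustrative only; the cross-section and intensity values are hypothetical, and the real OMPS algorithms also account for scattering, interfering gases and viewing geometry.

```python
import math

# Toy Beer-Lambert retrieval: I = I0 * exp(-sigma * N), solved for N,
# the number of absorbing molecules per square centimetre of column.
sigma = 1.0e-19   # hypothetical SO2 absorption cross-section, cm^2 per molecule
I0 = 1.00         # reflected sunlight with no SO2 (relative units, assumed)
I = 0.85          # observed intensity at an SO2 absorption wavelength (assumed)

column = math.log(I0 / I) / sigma          # molecules per cm^2
print(f"toy SO2 column ~ {column:.2e} molecules/cm^2")
```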

"Once we know the SO2 amount, we put it on a map and monitor where that cloud moves," said Nickolay Krotkov, a research scientist at NASA Goddard's Atmospheric Chemistry and Dynamics Laboratory.

These maps, which are produced within three hours of the satellite's overpass, are used at volcanic ash advisory centers to predict the movement of volcanic clouds and reroute aircraft, when needed.

Mount Pinatubo's violent eruption injected about 15 million tons of sulfur dioxide into the stratosphere. The resulting sulfuric acid aerosols remained in the stratosphere for about two years, and cooled the Earth's surface by a range of 1 to 2 degrees Fahrenheit.

This Ambae eruption was too small to cause any such cooling. "We think to have a measurable climate impact, the eruption needs to produce at least 5 to 10 million tons of SO2," Carn said.

Still, scientists are trying to understand the collective impact of volcanoes like Ambae and others on the climate. Stratospheric aerosols and other volcanic gases emitted by volcanoes like Ambae can alter the delicate balance of the chemical composition of the stratosphere. And while none of the smaller eruptions have had measurable climate effects on their own, they may collectively impact the climate by sustaining the stratospheric aerosol layer.

"Without these eruptions, the stratospheric layer would be much, much smaller," Krotkov said.

Credit: 
NASA/Goddard Space Flight Center

Early use of antibiotics in elderly patients with UTIs associated with reduced risk of sepsis

Prescribing antibiotics immediately for elderly patients with urinary tract infections is linked with a reduced risk of sepsis and death, compared with patients who receive antibiotics in the days following diagnosis, or none at all.

These are the latest findings from researchers at Imperial College London and Public Health England, published in the BMJ.

The research team say the results provide further evidence to help GPs make clinical decisions about when to prescribe antibiotics immediately for a urinary tract infection (UTI) and when to defer treatment to see if symptoms improve on their own, to avoid overuse of antibiotics.

In the research, funded by the National Institute for Health Research, the team looked at records from 157,264 patients over the age of 65 across England who had been diagnosed by their GP with a suspected or confirmed UTI. Patients had been prescribed antibiotics immediately (87 per cent of cases studied in the research), had antibiotics delayed by up to 7 days (6 per cent of cases), or received no antibiotics at all (7 per cent of cases).

Of the patients who received antibiotics immediately, 0.2 per cent developed sepsis within the following 60 days. After taking into account available information about differences in age, gender, pre-existing illness and other personal characteristics, the results revealed that compared with patients who received antibiotics immediately, patients who had their antibiotic prescription delayed or received no antibiotics at all were up to eight times more likely to develop sepsis.

The research also revealed that 1.6 per cent of patients who received antibiotics immediately died in the following 60 days. Over the same period, the risk of death was slightly higher (16 per cent greater) among patients who had their antibiotic prescription delayed, while patients who received no antibiotics had more than double the risk.

The researchers estimated that, on average, for every 37 patients exposed to no antibiotics and for every 51 patients exposed to deferred antibiotics, one case of sepsis would occur that would not have been seen with immediate antibiotics.
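
Figures like "one case per 37 patients" are a number needed to harm: the reciprocal of the difference in absolute sepsis risk between groups. The short sketch below works through that arithmetic; only the 0.2 per cent figure comes from the article, and the risks assumed for the other two groups are illustrative values chosen to land near the reported numbers, not the study's actual results.

```python
# Illustrative "number needed to harm" (NNH) arithmetic: the reciprocal of
# the absolute risk difference between an exposed and a reference group.
# Only the 0.2% figure is from the article; the other risks are assumptions.

def number_needed_to_harm(risk_exposed, risk_reference):
    """Patients in the exposed group per one extra adverse event."""
    return 1.0 / (risk_exposed - risk_reference)

risk_immediate = 0.002        # 0.2% developed sepsis with immediate antibiotics
risk_no_antibiotics = 0.029   # assumed ~2.9%, for illustration only
risk_deferred = 0.0216        # assumed ~2.2%, for illustration only

print(round(number_needed_to_harm(risk_no_antibiotics, risk_immediate)))  # ~37
print(round(number_needed_to_harm(risk_deferred, risk_immediate)))        # ~51
```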

They also found that the rate of hospital admissions roughly doubled (27 per cent) in patients with either no or deferred antibiotic prescriptions, compared with those receiving immediate prescriptions (15 per cent).

Older men, especially those aged over 85 years, and those living in more deprived areas were found to be most at risk.

The researchers stress this study only shows delayed antibiotics are associated with an increased risk of sepsis and death, rather than causing it directly. They add that patients may also have had other health conditions that the researchers weren't able to account for, which may have contributed to their increased risk of sepsis or death.

Lead author Dr Myriam Gharbi, from Imperial's School of Public Health, said: "Current national guidelines for GPs recommend they should ask patients about the severity of their symptoms, discuss possible self-care, such as drinking plenty of water to avoid dehydration and taking paracetamol or ibuprofen for pain relief, and consider a back-up antibiotic prescription to be used if symptoms worsen or have not improved after 48 hours. This is to avoid antibiotic overuse, as sometimes UTIs can get better without medication. However, our research suggests antibiotics should not be delayed in elderly patients."

UTIs are common in the elderly and may trigger symptoms such as pain when urinating, or needing to use the loo more often. UTIs are most commonly caused by E. coli bacteria, and if not treated the bacteria can trigger blood poisoning.

However, doctors are increasingly concerned about the rise in antibiotic resistance - caused by antibiotics being overprescribed. UTIs are the second most common diagnosis for which antibiotics are prescribed in the UK.

Therefore, to help clarify when antibiotics should be prescribed to the elderly with UTIs, the research team studied data from 157,264 patients aged 65 or above who were diagnosed with a UTI or suspected UTI between 2007 and 2015. The data came from the Clinical Practice Research Datalink, which holds anonymised patient records from GP practices linked to hospital data, allowing the same patients to be tracked between the two settings. The average age of the patients in the study was 77.

Professor Paul Aylin, senior author of the research from the NIHR Health Protection Unit at Imperial, said: "Although antibiotic prescribing must be controlled to help combat the increasing problem of antibiotic resistance, our study suggests early use of antibiotics in elderly patients with UTIs is the safest approach."

Professor Alan Johnson from Public Health England who collaborated on the research said: "Antibiotic resistance is a major threat to public health that is being driven by the overuse of antibiotics. Current recommendations suggest healthcare professionals take a number of different factors into account when deciding whether to prescribe antibiotics immediately or consider deferring antibiotics for patients with a suspected urinary tract infection. This study highlights the importance of taking age into account when making clinical decisions about antibiotic prescribing in order to reduce the risk of complications. This work will help doctors target antibiotic use more effectively and improve patient wellbeing."

Credit: 
Imperial College London

'Silent-type' cells play greater role in brain behavior than previously thought

Brain cells recorded as among the least electrically active during a specific task may be the most important to doing it right.

Results of new experiments in rodents, led by neuroscientists at NYU School of Medicine, challenge the assumption in brain research that the most active brain cells, or neurons, involved in any complex activity are also the most important in controlling that behavior.

For the study, published in detail in the journal eLife online Feb. 26, the researchers monitored brain cell activity with probes in two regions of the cerebral cortex, the outer portion of the mammalian brain known to control how tasks are carried out in response to what is heard and seen in the environment.

Among the study's key findings was that, of nearly 200 monitored brain cells or sets of brain cells in rats, 60 percent appeared at first glance to be relatively quiet as the rats, based on their training, successfully pushed a button with their noses to get food in response to a certain sound. However, computer analysis showed that these least-active cortical neurons "fired" at the same time as more active brain cells whenever the rodents correctly pushed the button in response to the right sound, the researchers say.

Moreover, when these least-active neurons were not in sync, there was a greater likelihood that the rodents would err during the exercise. This suggests, say the study authors, that those relatively quiet cells were essential to success. The researchers described this coordinated firing, which varied by just thousandths of a second, as a form of "consensus building" among distinct groups of neurons that adds importance to a message by acting in unison.

"Our study offers firm evidence that some neurons presumed to be the least involved in controlling a particular behavior may actually be among the most important in 'building consensus' among other neurons to carry out complicated tasks," says study senior investigator and neuroscientist Robert Froemke, PhD.

Specifically, researchers monitored individual cells and groups of up to eight brain cells in rats as they performed the task. Animals were exposed to several different sounds, and researchers recorded neural activity when the task was carried out correctly, as well as when mistakes were made.

The complex data were then analyzed using a computer algorithm developed at NYU specifically designed to detect patterns among the electrical recordings, or what researchers described as "spike trains" of brain cells actively involved in carrying out tasks.
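
To give a sense of what such pattern detection involves, the Python sketch below measures one simple form of synchrony: how often spikes from one neuron fall within a few milliseconds of spikes from another. This is a generic coincidence measure under assumed spike times, not the NYU team's algorithm.

```python
# Minimal sketch of spike-train synchrony: count how often neuron A's spikes
# have a neuron-B spike within a small time window. Illustrative only; not
# the analysis algorithm described in the study.
import numpy as np

def coincidence_fraction(spikes_a, spikes_b, window=0.005):
    """Fraction of A's spikes with a B spike within `window` seconds."""
    spikes_b = np.sort(np.asarray(spikes_b))
    hits = 0
    for t in spikes_a:
        idx = np.searchsorted(spikes_b, t)
        gaps = []
        if idx < len(spikes_b):
            gaps.append(abs(spikes_b[idx] - t))      # nearest B spike at or after t
        if idx > 0:
            gaps.append(abs(spikes_b[idx - 1] - t))  # nearest B spike before t
        if gaps and min(gaps) <= window:
            hits += 1
    return hits / len(spikes_a)

# Example: two spike trains (in seconds) that mostly co-fire within 5 ms
a = [0.010, 0.052, 0.130, 0.300]
b = [0.012, 0.050, 0.200, 0.302]
print(coincidence_fraction(a, b))  # 0.75
```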

"If further experiments confirm our findings, these 'silent-type' neurons might be the most responsible cells doing much of the hard work in the mammalian brain," says Froemke, an associate professor at NYU Langone Health and its Skirball Institute of Biomolecular Medicine.

Froemke, who is also a faculty scholar at the Howard Hughes Medical Institute, says the team's latest findings could have clinical significance in the future for people living with brain disorders, such as epilepsy, who use electrical implants to stop epileptic seizures. Instead of focusing on single brain cells, future electrical devices could focus on networks or groups of neurons thought to be involved in the activity.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine