Tech

Your favorite music can send your brain into a pleasure overload

We all know that moment when we're in the car, at a concert or even sitting on the sofa and one of our favorite songs comes on. It's the one with that really good chord in it, flooding your system with pleasurable emotions and joyful memories, making your hair stand on end, and even sending a shiver or "chill" down your spine. About half of people get chills when listening to music. Neuroscientists based in France have now used EEG to link chills to multiple brain regions involved in activating reward and pleasure systems. The results are published in Frontiers in Neuroscience.

Thibault Chabin and colleagues at the Université de Bourgogne Franche-Comté in Besançon EEG-scanned the brains of 18 French participants who regularly experience chills when listening to their favorite musical pieces. In a questionnaire, they were asked to indicate when they experienced chills, and rate their degree of pleasure from them.

"Participants of our study were able to precisely indicate "chill-producing" moments in the songs, but most musical chills occurred in many parts of the extracts and not only in the predicted moments," says Chabin.

When the participants experienced a chill, Chabin saw specific electrical activity in the orbitofrontal cortex (a region involved in emotional processing), the supplementary motor area (a cortical region involved in movement control) and the right temporal lobe (a region on the right side of the brain involved in auditory processing and musical appreciation). These regions work together to process music, trigger the brain's reward systems, and release dopamine -- a "feel-good" hormone and neurotransmitter. Combined with the pleasurable anticipation of your favorite part of the song, this produces the tingly chill you experience -- a physiological response thought to indicate greater cortical connectivity.

"The fact that we can measure this phenomenon with EEG brings opportunities for study in other contexts, in scenarios that are more natural and within groups," Chabin comments. "This represents a good perspective for musical emotion research."

EEG is a non-invasive technique with high temporal resolution that records the electrical activity of the brain using sensors placed across the surface of the scalp. When experiencing musical chills, low-frequency electrical signals called "theta activity" -- a type of activity associated with successful memory performance in the context of high rewards and musical appreciation -- either increase or decrease in the brain regions involved in musical processing.
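For readers curious how a frequency-band measure such as theta activity is typically extracted from an EEG recording, here is a minimal sketch. It is purely illustrative: the conventional 4-8 Hz band limits, the sampling rate and the synthetic signal are assumptions, not parameters reported by the study.

```python
# Illustrative only: band-pass filtering one EEG channel to the theta band.
# The 4-8 Hz limits, 256 Hz sampling rate and synthetic data are assumptions,
# not values taken from the study described above.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 seconds of synthetic "EEG"
signal = np.random.randn(t.size)

# 4th-order Butterworth band-pass for the conventional theta band (4-8 Hz)
b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)
theta = filtfilt(b, a, signal)

# Theta "power" over time: squared amplitude of the filtered trace
theta_power = theta ** 2
print("mean theta power:", theta_power.mean())
```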

"Contrary to heavy neuroimaging techniques such as PET scan or fMRI, classic EEG can be transported outside of the lab into naturalistic scenarios," says Chabin. "What is most intriguing is that music seems to have no biological benefit to us. However, the implication of dopamine and of the reward system in processing of musical pleasure suggests an ancestral function for music."

This ancestral function may lie in the period of time we spend in anticipation of the "chill-inducing" part of the music. As we wait, our brains are busy predicting the future and release dopamine. Evolutionarily speaking, being able to predict what will happen next is essential for survival.

Why should we continue to study chills?

"We want to measure how cerebral and physiological activities of multiple participants are coupled in natural, social musical settings," Chabin says. "Musical pleasure is a very interesting phenomenon that deserves to be investigated further, in order to understand why music is rewarding and unlock why music is essential in human lives."

How the study was done:

The study was carried out on 18 healthy participants - 11 female and 7 male. Participants were recruited through posters on the campus and university hospital. They had a mean age of 40 years, were sensitive to musical reward, and frequently experienced chills. They had a range of musical abilities.

A high-density EEG scan was conducted as participants listened to 15 minutes of music made up of 90-second excerpts of their most enjoyable pieces. While listening, participants were told to rate their subjectively felt pleasure and indicate when they felt "chills". In total, 305 chills were reported, each lasting, on average, 8.75 seconds. The chills were accompanied by increased brain activity in regions previously linked to musical pleasure in PET and fMRI studies.

Credit: 
Frontiers

New protein nanobioreactor designed to improve sustainable bioenergy production

image: Illustration of a carboxysome and enzymes.

Image: 
Professor Luning Liu

Researchers at the University of Liverpool have unlocked new possibilities for the future development of sustainable, clean bioenergy. The study, published in Nature Communications, shows how bacterial protein 'cages' can be reprogrammed as nanoscale bioreactors for hydrogen production.

The carboxysome is a specialised bacterial organelle that encapsulates the essential CO2-fixing enzyme Rubisco within a virus-like protein shell. The natural architecture, semi-permeability and catalytic enhancement of carboxysomes have inspired the rational design and engineering of new nanomaterials that incorporate different enzymes into the shell for enhanced catalytic performance.

The first step in the study involved researchers installing specific genetic elements into the industrial bacterium E. coli to produce empty carboxysome shells. They further identified a small 'linker' - called an encapsulation peptide - capable of directing external proteins into the shell.

The extreme oxygen sensitivity of hydrogenases (enzymes that catalyse the generation and conversion of hydrogen) is a long-standing issue for hydrogen production in bacteria, so the team developed methods to incorporate catalytically active hydrogenases into the empty shell.

Project lead Professor Luning Liu, Professor of Microbial Bioenergetics and Bioengineering at the Institute of Systems, Molecular and Integrative Biology, said: "Our newly designed bioreactor is ideal for oxygen-sensitive enzymes, and marks an important step towards being able to develop and produce a bio-factory for hydrogen production."

In collaboration with Professor Andy Cooper in the Materials Innovation Factory (MIF) at the University, the researchers then tested the hydrogen-production activities of the bacterial cells and the biochemically isolated nanobioreactors. The nanobioreactor achieved a ~550% improvement in hydrogen-production efficiency and greater oxygen tolerance compared with the enzymes without shell encapsulation.

"The next step for our research is answering how we can further stabilise the encapsulation system and improve yields," said Professor Liu. "We are also excited that this technical platform opens the door for us, in future studies, to create a diverse range of synthetic factories to encase various enzymes and molecules for customised functions."

First author, PhD student Tianpei Li, said: "Due to climate change, there is a pressing need to reduce the emission of carbon dioxide from burning fossil fuels. Our study paves the way for engineering carboxysome shell-based nanoreactors to recruit specific enzymes and opens the door for new possibilities for developing sustainable, clean bioenergy."

Credit: 
University of Liverpool

Solar cells of the future

Organic solar cells are cheaper to produce and more flexible than their counterparts made of crystalline silicon, but do not offer the same level of efficiency or stability. A group of researchers led by Prof. Christoph Brabec, Director of the Institute of Materials for Electronics and Energy Technology (i-MEET) at the Chair of Materials Science and Engineering at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), has been working on improving these properties for several years. In his doctoral research, Andrej Classen, a young researcher at FAU, demonstrated that efficiency can be increased using luminescent acceptor molecules. His work has now been published in the journal Nature Energy.

The sun can supply radiation energy of around 1000 watts per square metre on a clear day at European latitudes. Conventional monocrystalline silicon solar cells convert up to a fifth of this energy into electricity, which means they have an efficiency of around 20 percent. Prof. Brabec's working group has held the world record for organic photovoltaic module efficiency, 12.6 percent, since September 2019. The multi-cell module developed at Energie Campus Nürnberg (EnCN) has a surface area of 26 cm². 'If we can achieve over 20% in the laboratory, we could possibly achieve 15% in practice and become real competition for silicon solar cells,' says Prof. Brabec.
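As a rough worked example of what those figures imply, using only the numbers quoted above (illustrative arithmetic, not a calculation reported by the researchers):

\[
1000\,\mathrm{W/m^2} \times 0.20 \approx 200\,\mathrm{W/m^2} \quad \text{(silicon cell)}, \qquad 1000\,\mathrm{W/m^2} \times 0.126 \times 0.0026\,\mathrm{m^2} \approx 0.33\,\mathrm{W} \quad \text{(the 26 cm² organic module)}.
\]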

Flexible application and high energy efficiency during manufacturing

The advantages of organic solar cells are obvious - they are thin and flexible like foil and can be adapted to fit various substrates. The wavelength at which the sunlight is absorbed can be 'adjusted' via the macromolecules used. An office window coated with organic solar cells that absorbs the red and infrared spectrum would not only screen out thermal radiation, but also generate electricity at the same time. One criterion that is becoming increasingly important in view of climate change is the operation period after which a solar cell generates more energy than was required to manufacture it. This so-called energy payback time is heavily dependent on the technology used and the location of the photovoltaic (PV) system. According to the latest calculations of the Fraunhofer Institute for Solar Energy Systems (ISE), the energy payback time of PV modules made of silicon in Switzerland is around 2.5 to 2.8 years. However, this time is reduced to only a few months for organic solar cells, according to Dr. Thomas Heumüller, research associate at Prof. Brabec's Chair.

Loss of performance for charge separation

Compared with a 'traditional' silicon solar cell, its organic equivalent has a definite disadvantage: Sunlight does not immediately produce free charges for the flow of current, but rather so-called excitons in which the positive and negative charges are still bound. 'An acceptor that only attracts the negative charge is required in order to trigger charge separation, which in turn produces free charges with which electricity can be generated,' explains Dr. Heumüller. A certain driving force is required to separate the charges. This driving force depends on the molecular structure of the polymers used. Since certain molecules from the fullerene class of materials have a high driving force, they have been the preferred choice of electron acceptors in organic solar cells up to now. In the meantime, however, scientists have discovered that a high driving force has a detrimental effect on the voltage. This means that the output of the solar cell decreases, in accordance with the formula that applies to direct current - power equals voltage times current.
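For reference, the direct-current relation mentioned above is simply

\[
P = U \cdot I,
\]

with the standard symbols \(P\) for power, \(U\) for voltage and \(I\) for current (our notation, not the researchers'). For a given current, any voltage lost to an unnecessarily high driving force therefore translates directly into lost output power.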

Andrej Classen wanted to find out how low the driving force can be while still achieving complete charge separation of the exciton. To do so, he compared combinations of four donor and five acceptor polymers that have already proven their potential for use in organic solar cells. Classen used them to produce 20 solar cells under exactly the same conditions, with driving forces ranging from almost zero to 0.6 electronvolts.

Increase in performance with certain molecules

The measurement results provided the proof for a theory already assumed in research - a 'Boltzmann equilibrium' between excitons and separated charges, the so-called charge transfer (CT) states. 'The closer the driving force gets to zero, the more the equilibrium shifts towards the excitons,' says Dr. Larry Lüer, a specialist in photophysics in Brabec's working group. This means that future research should concentrate on preventing the exciton from decaying, which means increasing its excitation 'lifetime'. Up to now, research has only focused on the lifetime of the CT state. Excitons can decay by emitting light (luminescence) or heat. By skilfully modifying the polymers, the scientists were able to reduce heat production to a minimum, retaining the luminescence as far as possible. 'The efficiency of solar cells can therefore be increased using highly luminescent acceptor molecules,' predicts Andrej Classen.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

The importance of good neighbors in catalysis

image: Neighbourly collaboration for catalysis. First, a number of nanoparticles of copper are isolated in a gas-filled nanotube. Researchers then use light to measure how they affect each other in the process by which oxygen and carbon monoxide become carbon dioxide. The long-term goal of the research is to find a resource-efficient "neighbourhood collaboration" where as many particles as possible are catalytically active at the same time.

Image: 
David Albinsson/Chalmers University of Technology

Are you affected by your neighbours? So are nanoparticles in catalysts. New research from Chalmers University of Technology, Sweden, published in the journals Science Advances and Nature Communications, reveals how the nearest neighbours determine how well nanoparticles work in a catalyst.

"The long-term goal of the research is to be able to identify 'super-particles', to contribute to more efficient catalysts in the future. To utilise the resources better than today, we also want as many particles as possible to be actively participating in the catalytic reaction at the same time," says research leader Christoph Langhammer at the Department of Physics at Chalmers University of Technology.

Imagine a large group of neighbours gathered together to clean a communal courtyard. They set about their work, each contributing to the group effort. The only problem is that not everyone is equally active. While some work hard and efficiently, others stroll around, chatting and drinking coffee. If you only looked at the end result, it would be difficult to know who worked the most, and who simply relaxed. To determine that, you would need to monitor each person throughout the day. The same applies to the activity of metallic nanoparticles in a catalyst.

The hunt for more effective catalysts through neighbourly cooperation

Inside a catalyst, many particles together determine how effective the reactions are. Some of the particles in the crowd are active, while others are inactive. But the particles are often hidden within different 'pores', much like in a sponge, and are therefore difficult to study.

To be able to see what is really happening inside a catalyst pore, the researchers from Chalmers University of Technology isolated a handful of copper particles in a transparent glass nanotube. When several are gathered together in the small gas-filled pipe, it becomes possible to study which particles do what, and when, in real conditions.

What happens in the tube is that the particles come into contact with an inflowing gas mixture of oxygen and carbon monoxide. When these substances react with each other on the surface of the copper particles, carbon dioxide is formed. It is the same reaction that happens when exhaust gases are purified in a car's catalytic converter, except that there, particles of platinum, palladium and rhodium, rather than copper, are often used to break down toxic carbon monoxide. But these metals are expensive and scarce, so researchers are looking for more resource-efficient alternatives.

"Copper can be an interesting candidate for oxidising carbon monoxide. The challenge is that copper has a tendency to change itself during the reaction, and we need to be able to measure what oxidation state a copper particle has when it is most active inside the catalyst. With our nanoreactor, which mimics a pore inside a real catalyst, this will now be possible," says David Albinsson, Postdoctoral researcher at the Department of Physics at Chalmers and first author of two scientific articles recently published in Science Advances and Nature Communications.

Anyone who has seen an old copper rooftop or statue will recognise how the reddish-brown metal soon turns green after contact with the air and pollutants. A similar thing happens with the copper particles in the catalysts. It is therefore important to get them to work together in an effective way.

"What we have shown now is that the oxidation state of a particle can be dynamically affected by its nearest neighbours during the reaction. The hope therefore is that eventually we can save resources with the help of optimised neighbourly cooperation in a catalyst," says Christoph Langhammer, Professor at the Department of Physics at Chalmers.

Credit: 
Chalmers University of Technology

Secrets behind "Game of Thrones" unveiled by data science and network theory

image: The social network at the end of the first book "A Game of Thrones". Blue nodes represent male characters, red are female characters and transparent grey are characters who are killed by the end of the first book.

Image: 
University of Cambridge

What are the secrets behind one of the most successful fantasy series of all time? How has a story as complex as "Game of Thrones" enthralled the world and how does it compare to other narratives?

Researchers from five universities across the UK and Ireland came together to unravel "A Song of Ice and Fire", the books on which the TV series is based.

In a paper that has just been published by the Proceedings of the National Academy of Sciences of the USA, a team of physicists, mathematicians and psychologists from Coventry, Warwick, Limerick, Cambridge and Oxford universities have used data science and network theory to analyse the acclaimed book series by George R.R. Martin.

The study shows the way the interactions between the characters are arranged is similar to how humans maintain relationships and interact in the real world. Moreover, although important characters are famously killed off at random as the story is told, the underlying chronology is not at all so unpredictable.

The team found that, despite over 2,000 named characters in "A Song of Ice and Fire" and over 41,000 interactions between them, at chapter-by-chapter level these numbers average out to match what we can handle in real life. Even the most predominant characters - those who tell the story - average out to have only 150 others to keep track of. This is the same number that the average human brain has evolved to deal with.
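The bookkeeping behind that observation is standard network analysis. The sketch below, which uses a tiny invented interaction list rather than the study's data set, shows how a character's "number of others to keep track of" corresponds to their degree in a co-occurrence network; the networkx library and the character pairs are assumptions for illustration only.

```python
# Minimal sketch of the network bookkeeping described above, using a tiny
# invented interaction list -- NOT the data set analysed in the paper.
import networkx as nx

# Each pair means the two characters interact (e.g. co-occur) somewhere in the text.
interactions = [
    ("Jon", "Arya"), ("Jon", "Sansa"), ("Arya", "Sansa"),
    ("Tyrion", "Jon"), ("Tyrion", "Cersei"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# A character's degree = how many distinct others they must "keep track of";
# the study reports this averages out to roughly 150 for point-of-view characters.
for character, degree in G.degree():
    print(character, degree)

print("average degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
```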

While matching mathematical motifs might have been expected to lead to a rather narrow script, the author, George R. R. Martin, keeps the tale bubbling by making deaths appear random as the story unfolds. But, as the team show, when the chronological sequence is reconstructed the deaths are not random at all: rather, they reflect how common events are spread out for non-violent human activities in the real world.

'Game of Thrones' has invited all sorts of comparisons to history and myth, and the marriage of science and humanities in this paper opens new avenues to comparative literary studies. It shows, for example, that it is more akin to the Icelandic sagas than to mythological stories such as England's Beowulf or Ireland's Táin Bó Cúailnge. The trick in Game of Thrones, it seems, is to mix realism and unpredictability in a cognitively engaging manner.

Thomas Gessey-Jones, from the University of Cambridge, commented: "The methods developed in the paper excitingly allow us to test in a quantitative manner many of the observations made by readers of the series, such as the books' famous habit of seemingly killing off characters at random."

Professor Colm Connaughton, from the University of Warwick, observed: "People largely make sense of the world through narratives, but we have no scientific understanding of what makes complex narratives relatable and comprehensible. The ideas underpinning this paper are steps towards answering this question."

Professor Ralph Kenna, from Coventry University, said: "This kind of study opens up exciting new possibilities for examining the structure and design of epics in all sorts of contexts; impact of related work includes outcry over misappropriation of mythology in Ireland and flaws in the processes that led to it."

Professor Robin Dunbar, from the University of Oxford, observed: "This study offers convincing evidence that good writers work very carefully within the psychological limits of the reader."

Dr Pádraig MacCarron, from the University of Limerick, commented: "These books are known for unexpected twists, often in terms of the death of a major character. It is interesting to see how the author arranges the chapters in an order that makes this appear even more random than it would be if told chronologically."

Dr Joseph Yose, from Coventry University said: "I am excited to see the use of network analysis grow in the future, and hopefully, combined with machine learning, we will be able to predict what an upcoming series may look like."

Credit: 
University of Warwick

New insight into how brain neurons influence choices

image: By studying animals choosing between two drink options, researchers at Washington University School of Medicine in St. Louis have discovered that the activity of certain neurons in the brain leads directly to the choice of one option over another. The findings could lead to better understanding of how decision-making goes wrong in conditions such as addiction and depression.

Image: 
Mike Worful

When you are faced with a choice -- say, whether to have ice cream or chocolate cake for dessert -- sets of brain cells just above your eyes fire as you weigh your options. Animal studies have shown that each option activates a distinct set of neurons in the brain. The more enticing the offer, the faster the corresponding neurons fire.

Now, a study in monkeys by researchers at Washington University School of Medicine in St. Louis has shown that the activity of these neurons encodes the value of the options and determines the final decision. In the experiments, researchers let animals choose between different juice flavors. By changing the neurons' activity, the researchers changed how appealing the monkeys found each option, leading the animals to make different choices. The study is published Nov. 2 in the journal Nature.

A detailed understanding of how options are valued and choices are made in the brain will help explain how decision-making goes wrong in people with conditions such as addiction, eating disorders, depression and schizophrenia.

"In a number of mental and neuropsychiatric disorders, patients consistently make poor choices, but we don't understand exactly why," said senior author Camillo Padoa-Schioppa, PhD, a professor of neuroscience, of economics and of biomedical engineering. "Now we have located one critical piece of this puzzle. As we shed light on the neural mechanisms underlying choices, we'll gain a deeper understanding of these disorders."

In the 18th century, economists Daniel Bernoulli, Adam Smith and Jeremy Bentham suggested that people choose among options by computing the subjective value of each offer, taking into consideration factors such as quantity, quality, cost and the probability of actually receiving the promised offer. Once computed, values would be compared to make a decision. It took nearly three centuries to find the first concrete evidence of such calculations and comparisons in the brain. In 2006, Padoa-Schioppa and John Assad, PhD, a professor of neurobiology at Harvard Medical School, published a groundbreaking paper in Nature describing the discovery of neurons that encode the subjective value of offered and chosen goods. The neurons were found in the orbitofrontal cortex, an area of the brain just above the eyes involved in goal-directed behavior.

At the time, though, they were unable to demonstrate that the values encoded in the brain led directly to choosing one option over another.

"We found neurons encoding subjective values, but value signals can guide all sorts of behaviors, not just choice," Padoa-Schioppa said. "They can guide learning, emotion, perceptual attention, and aspects of motor control. We needed to show that value signals in a particular brain region guide choices."

To examine the connection between values encoded by neurons and choice behavior, researchers performed two experiments. The study was conducted by first authors Sébastien Ballesta, PhD, then a postdoctoral researcher, and Weikang Shi, a graduate student, with the help of Katherine Conen, PhD, then a graduate student, who designed one of the experiments. Ballesta is now an associate professor at the University of Strasbourg in Strasbourg, France; Conen is now at Brown University.

In one experiment, the researchers repeatedly presented monkeys with two drinks and recorded the animals' selections. The drinks were offered in varying amounts and included lemonade, grape juice, cherry juice, peach juice, fruit punch, apple juice, cranberry juice, peppermint tea, kiwi punch, watermelon juice and salted water. The monkeys often preferred one flavor over another, but they also liked to get more rather than less, so their decisions were not always easy. Each monkey indicated its choice by glancing toward it, and the chosen drink was delivered.

Then, the researchers placed tiny electrodes in each monkey's orbitofrontal cortex. The electrodes painlessly stimulate the neurons that represent the value of each option. When the researchers delivered a low current through the electrodes while a monkey was offered two drinks, neurons dedicated to both options began to fire faster. From the perspective of the monkey, this meant that both options became more appealing but, because of the way values are encoded in the brain, the appeal of one option increased more than that of the other. The upshot is that low-level stimulation made the animal more likely to choose one particular option, in a predictable way.

In another experiment, the monkeys saw first one option, then the other, before they made a choice. Delivering a higher current while the monkey was considering one option disrupted the computation of value taking place at that time, making the monkey more likely to choose whichever option was not disrupted. This result indicates that values computed in the orbitofrontal cortex are a necessary part of making a choice.

"When it comes to this kind of choices, the monkey brain and the human brain appear very similar," Padoa-Schioppa said. "We think that this same neural circuit underlies all sorts of choices people make, such as between different dishes on a restaurant menu, financial investments, or candidates in an election. Even major life decisions like which career to choose or whom to marry probably utilize this circuit. Every time a choice is based on subjective preferences, this neural circuit is responsible for it."

Credit: 
Washington University School of Medicine

Palm oil certification brings mixed outcomes to neighbouring communities

Research led by the University of Kent's Durrell Institute of Conservation and Ecology (DICE) has found that Indonesian communities living near oil palm plantations are impacted in different ways, both positive and negative, during plantation development and certification.

Based on a counterfactual analysis of government poverty data from villages across Indonesia, the study published in Nature Sustainability explored the socio-economic and socio-ecological impacts of oil palm cultivation, and subsequent certification, on rural communities. Plantations were certified sustainable via the Roundtable on Sustainable Palm Oil (RSPO), which requires its members to adhere to social and environmental principles and criteria that aim to ensure the production of palm oil improves outcomes for neighbouring communities and the environment.

The study, led by Dr Truly Santika, Dr Matthew Struebig and Professor Erik Meijaard, demonstrates that the timing of certification matters. The assessment tracked poverty changes in more than 36,000 villages over 18 years, providing the most detailed assessment to date of the impacts of oil palm and certification on people.

Indonesia is the world's largest palm oil producer but the expansion of the sector has not been uniform across the country. Sumatra was developed first and is now dominated by mature oil palm. Before oil palm, rubber plantations were widespread, so villages had adapted relatively well to the plantation sector. The researchers found this led to socio-economic impacts from oil palm certification being positive in Sumatra.

Kalimantan, on the other hand, experienced a more recent oil palm boom, which resulted in a very rapid change in land cover and a deterioration of social-ecological conditions in this part of Borneo. In contrast with Sumatra, communities in Kalimantan, whose livelihoods were much more dependent on forests, saw fewer poverty improvements from oil palm, a pattern that continued when certification came in.

Compared with non-certified villages, overall measures of well-being declined by 11% on average across the country in communities that relied on subsistence-based livelihoods before certification. This decline was driven mainly by the fall in socio-ecological indicators, for example, via a significant increase in the prevalence of conflicts, low-wage agricultural labourers, and water and air pollution.

In the early stages of oil palm development, the hoped-for social benefits from RSPO certification did not materialise in much of Indonesia. However, these benefits tended to emerge in later stages of development, once the enhanced environmental and social management associated with certified plantations had had more time to improve people's lives.

Dr Struebig said: 'Our research provides companies, the RSPO, and other certification schemes with useful insights into how the oil palm industry can contribute to people's living standards. We now know that the potential benefits of oil palm production can take time to be fully experienced in Indonesia, so it will be interesting to see how further improvements to certification standards will be experienced by people in years to come.'

Dr Santika said: 'Consumers want to make informed decisions when purchasing products with palm oil, and the more research-led information that can be shared with them, the better.'

Credit: 
University of Kent

Room temperature conversion of CO2 to CO: A new way to synthesize hydrocarbons

image: Illustration of a novel room-temperature process to remove carbon dioxide (CO2) by converting the molecule into carbon monoxide (CO). Instead of using heat, the nanoscale method relies on the energy from surface plasmons (violet hue) that are excited when a beam of electrons (vertical beam) strikes aluminum nanoparticles resting on graphite, a crystalline form of carbon. In the presence of the graphite, aided by the energy derived from the plasmons, carbon dioxide molecules (black dot bonded to two red dots) are converted to carbon monoxide (black dot bonded to one red dot). The hole under the violet sphere represents the graphite etched away during the chemical reaction CO2 + C = 2CO.

Image: 
NIST

Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have demonstrated a room-temperature method that could significantly reduce carbon dioxide levels in fossil-fuel power plant exhaust, one of the main sources of carbon emissions in the atmosphere.

Although the researchers demonstrated this method in a small-scale, highly controlled environment with dimensions of just nanometers (billionths of a meter), they have already come up with concepts for scaling up the method and making it practical for real-world applications.

In addition to offering a potential new way of mitigating the effects of climate change, the chemical process employed by the scientists also could reduce costs and energy requirements for producing liquid hydrocarbons and other chemicals used by industry. That's because the method's byproducts include the building blocks for synthesizing methane, ethanol and other carbon-based compounds used in industrial processing.

The team tapped a novel energy source from the nanoworld to trigger a run-of-the-mill chemical reaction that eliminates carbon dioxide. In this reaction, solid carbon latches onto one of the oxygen atoms in carbon dioxide gas, reducing it to carbon monoxide. The conversion normally requires significant amounts of energy in the form of high heat -- a temperature of at least 700 degrees Celsius, hot enough to melt aluminum at normal atmospheric pressure.

Instead of heat, the team relied on the energy harvested from traveling waves of electrons, known as localized surface plasmons (LSPs), which surf on individual aluminum nanoparticles. The team triggered the LSP oscillations by exciting the nanoparticles with an electron beam that had an adjustable diameter. A narrow beam, about a nanometer in diameter, bombarded individual aluminum nanoparticles while a beam about a thousand times wider generated LSPs among a large set of the nanoparticles.

In the team's experiment, the aluminum nanoparticles were deposited on a layer of graphite, a form of carbon. This allowed the nanoparticles to transfer the LSP energy to the graphite. In the presence of carbon dioxide gas, which the team injected into the system, the graphite served the role of plucking individual oxygen atoms from carbon dioxide, reducing it to carbon monoxide. The aluminum nanoparticles were kept at room temperature. In this way, the team accomplished a major feat: getting rid of the carbon dioxide without the need for a source of high heat.

Previous methods of removing carbon dioxide have had limited success because the techniques have required high temperature or pressure, employed costly precious metals, or had poor efficiency. In contrast, the LSP method not only saves energy but uses aluminum, a cheap and abundant metal.

Although the LSP reaction generates a poisonous gas -- carbon monoxide -- the gas readily combines with hydrogen to produce essential hydrocarbon compounds, such as methane and ethanol, that are often used in industry, said NIST researcher Renu Sharma.

She and her colleagues, including scientists from the University of Maryland in College Park and DENSsolutions, in Delft, the Netherlands, reported their findings in Nature Materials.

"We showed for the first time that this carbon dioxide reaction, which otherwise will only happen at 700 degrees C or higher, can be triggered using LSPs at room temperature," said researcher Canhui Wang of NIST and the University of Maryland.

The researchers chose an electron beam to excite the LSPs because the beam can also be used to image structures in the system as small as a few billionths of a meter. This enabled the team to estimate how much carbon dioxide had been removed. They studied the system using a transmission electron microscope (TEM).

Because both the concentration of carbon dioxide and the reaction volume of the experiment were so small, the team had to take special steps to directly measure the amount of carbon monoxide generated. They did so by coupling a specially modified gas cell holder from the TEM to a gas chromatograph mass spectrometer, allowing the team to measure parts-per-million concentrations of carbon dioxide.

Sharma and her colleagues also used the images produced by the electron beam to measure the amount of graphite that was etched away during the experiment, a proxy for how much carbon dioxide had been taken away. They found that the ratio of carbon monoxide to carbon dioxide measured at the outlet of the gas cell holder increased linearly with the amount of carbon removed by etching.

Imaging with the electron beam also confirmed that most of the carbon etching -- a proxy for carbon dioxide reduction -- occurred near the aluminum nanoparticles. Additional studies revealed that when the aluminum nanoparticles were absent from the experiment, only about one-seventh as much carbon was etched.

Limited by the size of the electron beam, the team's experimental system was small, only about 15 to 20 nanometers across (the size of a small virus).

To scale up the system so that it could remove carbon dioxide from the exhaust of a commercial power plant, a light beam may be a better choice than an electron beam to excite the LSPs, Wang said. Sharma proposes that a transparent enclosure containing loosely packed carbon and aluminum nanoparticles could be placed over the smokestack of a power plant. An array of light beams impinging upon the grid would activate the LSPs. When the exhaust passes through the device, the light-activated LSPs in the nanoparticles would provide the energy to remove carbon dioxide.

The aluminum nanoparticles, which are commercially available, should be evenly distributed to maximize contact with the carbon source and the incoming carbon dioxide, the team noted.

The new work also suggests that LSPs generated by plasmonic nanoparticles offer a way to drive a slew of other chemical reactions that now require a large infusion of energy, allowing them to proceed at ordinary temperatures and pressures.

"Carbon dioxide reduction is a big deal, but it would be an even bigger deal, saving enormous amounts of energy, if we can start to do many chemical reactions at room temperature that now require heating," Sharma said.

Credit: 
National Institute of Standards and Technology (NIST)

An underwater navigation system powered by sound

GPS isn't waterproof. The navigation system depends on radio waves, which break down rapidly in liquids, including seawater. To track undersea objects like drones or whales, researchers rely on acoustic signaling. But devices that generate and send sound usually require batteries -- bulky, short-lived batteries that need regular changing. Could we do without them?

MIT researchers think so. They've built a battery-free pinpointing system dubbed Underwater Backscatter Localization (UBL). Rather than emitting its own acoustic signals, UBL reflects modulated signals from its environment. That provides researchers with positioning information, at net-zero energy. Though the technology is still developing, UBL could someday become a key tool for marine conservationists, climate scientists, and the U.S. Navy.

These advances are described in a paper being presented this week at the Association for Computing Machinery's Hot Topics in Networks workshop, by members of the Media Lab's Signal Kinetics group. Research Scientist Reza Ghaffarivardavagh led the paper, along with co-authors Sayed Saad Afzal, Osvy Rodriguez, and Fadel Adib, who leads the group and is the Doherty Chair of Ocean Utilization as well as an associate professor in the MIT Media Lab and the MIT Department of Electrical Engineering and Computer Science.

"Power-hungry"

It's nearly impossible to escape GPS' grasp on modern life. The technology, which relies on satellite-transmitted radio signals, is used in shipping, navigation, targeted advertising, and more. Since its introduction in the 1970s and '80s, GPS has changed the world. But it hasn't changed the ocean. If you had to hide from GPS, your best bet would be underwater.

Because radio waves quickly deteriorate as they move through water, subsea communications often depend on acoustic signals instead. Sound waves travel faster and further underwater than through air, making them an efficient way to send data. But there's a drawback.

"Sound is power-hungry," says Adib. For tracking devices that produce acoustic signals, "their batteries can drain very quickly." That makes it hard to precisely track objects or animals for a long time-span -- changing a battery is no simple task when it's attached to a migrating whale. So, the team sought a battery-free way to use sound.

Good vibrations

Adib's group turned to a unique resource they'd previously used for low-power acoustic signaling: piezoelectric materials. These materials generate their own electric charge in response to mechanical stress, like getting pinged by vibrating soundwaves. Piezoelectric sensors can then use that charge to selectively reflect some soundwaves back into their environment. A receiver translates that sequence of reflections, called backscatter, into a pattern of 1s (for soundwaves reflected) and 0s (for soundwaves not reflected). The resulting binary code can carry information about ocean temperature or salinity.

In principle, the same technology could provide location information. An observation unit could emit a soundwave, then clock how long it takes that soundwave to reflect off the piezoelectric sensor and return to the observation unit. The elapsed time could be used to calculate the distance between the observer and the piezoelectric sensor. But in practice, timing such backscatter is complicated, because the ocean can be an echo chamber.
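At its core, this is a time-of-flight calculation. The sketch below shows the basic arithmetic under simplifying assumptions (a nominal sound speed of roughly 1,500 m/s in seawater and a single direct echo path); none of the values are taken from the paper, and the real system must also contend with the reflections described next.

```python
# Time-of-flight ranging in its simplest form (illustrative assumptions only;
# the real UBL system must also deal with multipath reflections and phase).
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical nominal value

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the backscatter node, assuming a single direct echo path."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2.0

# Example: an echo that returns after 0.4 milliseconds
print(distance_from_round_trip(0.4e-3), "metres")  # -> 0.3 metres
```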

The sound waves don't just travel directly between the observation unit and sensor. They also careen between the surface and seabed, returning to the unit at different times. "You start running into all of these reflections," says Adib. "That makes it complicated to compute the location." Accounting for reflections is an even greater challenge in shallow water -- the short distance between seabed and surface means the confounding rebound signals are stronger.

The researchers overcame the reflection issue with "frequency hopping." Rather than sending acoustic signals at a single frequency, the observation unit sends a sequence of signals across a range of frequencies. Each frequency has a different wavelength, so the reflected sound waves return to the observation unit at different phases. By combining information about timing and phase, the observer can pinpoint the distance to the tracking device. Frequency hopping was successful in the researchers' deep-water simulations, but they needed an additional safeguard to cut through the reverberating noise of shallow water.

Where echoes run rampant between the surface and seabed, the researchers had to slow the flow of information. They reduced the bitrate, essentially waiting longer between each signal sent out by the observation unit. That allowed the echoes of each bit to die down before potentially interfering with the next bit. Whereas a bitrate of 2,000 bits/second sufficed in simulations of deep water, the researchers had to dial it down to 100 bits/second in shallow water to obtain a clear signal reflection from the tracker. But a slow bitrate didn't solve everything.

To track moving objects, the researchers actually had to boost the bitrate. One thousand bits/second was too slow to pinpoint a simulated object moving through deep water at 30 centimeters/second. "By the time you get enough information to localize the object, it has already moved from its position," explains Afzal. At a speedy 10,000 bits/second, they were able to track the object through deep water.
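A back-of-the-envelope calculation makes the trade-off concrete. Only the 30 centimeters/second object speed and the two bitrates come from the article; the number of bits needed per position fix is a hypothetical placeholder.

```python
# Back-of-the-envelope: how far does the object drift while one position fix
# is being collected? The 100-bit fix length is a hypothetical placeholder.
BITS_PER_FIX = 100          # assumed, NOT from the paper
OBJECT_SPEED = 0.30         # m/s (30 cm/s, as in the deep-water simulation)

for bitrate in (1_000, 10_000):         # bits/second, as discussed above
    fix_time = BITS_PER_FIX / bitrate   # seconds to gather one fix
    drift = OBJECT_SPEED * fix_time     # metres moved during that time
    print(f"{bitrate} bits/s -> {drift * 100:.1f} cm of drift per fix")
```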

Efficient exploration

Adib's team is working to improve the UBL technology, in part by solving challenges like the conflict between low bitrate required in shallow water and the high bitrate needed to track movement. They're working out the kinks through tests in the Charles River. "We did most of the experiments last winter," says Rodriguez. That included some days with ice on the river. "It was not very pleasant."

Conditions aside, the tests provided a proof-of-concept in a challenging shallow-water environment. UBL estimated the distance between a transmitter and a backscatter node at ranges up to nearly half a meter. The team is working to increase UBL's range in the field, and they hope to test the system with their collaborators at the Woods Hole Oceanographic Institution on Cape Cod.

They hope UBL can help fuel a boom in ocean exploration. Ghaffarivardavagh notes that scientists have better maps of the moon's surface than of the ocean floor. "Why can't we send out unmanned underwater vehicles on a mission to explore the ocean? The answer is: We will lose them," he says.

UBL could one day help autonomous vehicles stay found underwater, without spending precious battery power. The technology could also help subsea robots work more precisely, and provide information about climate change impacts in the ocean. "There are so many applications," says Adib. "We're hoping to understand the ocean at scale. It's a long-term vision, but that's what we're working toward and what we're excited about."

This work was supported, in part, by the Office of Naval Research.

Credit: 
Massachusetts Institute of Technology

Age is a primary determinant of melanoma treatment resistance, two studies find

image: Ashani Weeraratna, Ph.D.

Image: 
Christopher Hartlove/Johns Hopkins Medicine

Age may cause identical cancer cells with the same mutations to behave differently. In animal and laboratory models of melanoma cells, age was a primary factor in treatment response.

Cancer has long been known to be a disease of aging, with 60% of cases and 70% of deaths occurring in people over age 65. The new findings by Johns Hopkins Kimmel Cancer Center and Johns Hopkins Bloomberg School of Public Health researchers, published Sept. 4 in Cancer Discovery and Oct. 23 in Clinical Cancer Research, reveal new mechanisms common in aging that contribute to melanoma spread and resistance to treatment.

Melanoma is an aggressive type of skin cancer affecting about 100,000 Americans annually.

In the laboratory, study senior author Ashani Weeraratna, Ph.D., study first author Gretchen Alicea, Ph.D., and collaborators combined fibroblasts -- cells that generate connective tissue and allow the skin to recover from injury -- from people age 25 to 35 or 55 to 65 with lab-created artificial skin and melanoma cells. The cells with the aged fibroblasts consistently upregulated a fatty acid transporter known as FATP2 and increased the uptake of fatty acids from the microenvironment in and around the tumor. When exposed to anti-cancer drugs, the melanoma cells cultured with aged fibroblasts resisted cell death, but this rarely occurred in the cells cultured with young fibroblasts, the researchers reported.

"Taking up a lot of fat protects melanoma cells during therapy," explains Weeraratna, the E.V. McCollum Professor and chair of the Department of Biochemistry and Molecular Biology at the Bloomberg School of Public Health, a Bloomberg Distinguished Professor (cancer biology), a professor of oncology, and co-director of the cancer invasion and metastasis program at the Kimmel Cancer Center.

The findings build on a 2016 study in Nature, which reported on a mouse model of melanoma in which the cancer-related genes BRAF, PTEN and CDKN2A were altered so that tumors grow readily. The study found that cancers grew better in the skin of young mice, age 6 to 8 weeks, but metastasized from the skin to the lung more easily in older mice, age 1 year or older. Using drugs to inhibit the BRAF oncogene is a targeted therapy approach used to treat melanoma in the clinic. In the 2016 study, Weeraratna's laboratory showed that targeting the BRAF oncogene was less effective in reducing tumor growth in aged mice, and a study of patient responses to the BRAF inhibitor corroborated the laboratory findings, showing that complete responses occurred most frequently in patients under age 55.

In the current study, Weeraratna and collaborators used newer-generation inhibitors of this pathway, including drugs targeted at two arms of the BRAF pathway, and they assessed the impact of simultaneously depleting FATP2.

In older mice, the BRAF-targeted therapy alone worked initially, reducing tumor volume, but tumors came back in 10 to 15 days, the researchers reported. However, when they added an FATP2 inhibitor to the targeted therapy, the tumors went away and did not come back during the 60-day period they were monitored.

"Age was the clear driver," says Alicea. "In young models, melanoma cells responded to targeted therapy initially, and targeting FATP2 had no further impact. In aged models, melanoma cells did not respond to targeted therapy until we depleted FATP2, and then the response was dramatic. When FATP2 was depleted, in all of the aged models, tumors regressed in size completely, and did not start to grow back for over two months, a significant amount of time in a mouse experiment."

This is a critically important finding, Weeraratna says, because most animal models used in cancer research are young mice, which could mask age-related factors.

"Because we are using mouse models in which the genetic components are identical, these studies point to the critical involvement of the normal surrounding cells and tell us that it is more than genes that are driving the cancer," says Weeraratna.

Weeraratna says the next step is development of an FATP2 inhibitor that, once proved to be effective, could be given in combination with targeted therapies to improve treatment responses, particularly for older patients. Alicea points out that, although less common, FATP2-related treatment resistance can occur in patients under age 55 too.

In the Clinical Cancer Research study, Weeraratna, Mitchell Fane, Ph.D., and colleagues evaluated patient responses to the anti-angiogenesis drug Avastin. Angiogenesis refers to the growth of the blood vessels that nourish tumors and transport cancer cells to other parts of the body to seed metastasis. Anti-angiogenesis drugs, like Avastin, work by cutting cancers off from this blood supply.

Using data from a prior United Kingdom study of 1,343 patients with melanoma who received Avastin after surgery, the researchers went back and broke down responses to the drug by age.

Angiogenesis increases with age, so Weeraratna expected to see a greater benefit for older patients than younger patients, but the opposite was true. Patients age 45 or older had virtually no benefit from Avastin and, conversely, patients under age 45 had longer progression-free survival.

Avastin blocks VEGF, a protein that promotes angiogenesis, so Weeraratna and the team studied age-stratified melanoma samples from the Cancer Genome Atlas database to see what role VEGF was playing among younger and older patients and to find clues to explain why Avastin did not work for older patients. The researchers found that the expression of both VEGF and its associated receptors was significantly decreased among aged patients. Instead, they found that another protein, sFRP2, was driving angiogenesis in patients over age 55.

Their findings were backed up by studies on mouse models. When the researchers gave mice an anti-VEGF antibody, it reduced the growth of new blood vessels by almost 50%, but when the sFRP2 protein was simultaneously administered, the anti-VEGF antibody had no effect, confirming that sFRP2 was another driver of angiogenesis.

"Older patients have more angiogenesis, which helps cancer spread, but it is driven by sFRP2, not VEGF," says Fane, a postdoctoral fellow at the Johns Hopkins Bloomberg School of Public Health and co-lead author of the study along with Brett Ecker, M.D., and Amanpreet Kaur, Ph.D. The researchers plan to study antibodies that block sFRP2 as a potential treatment for patients over age 55 who do not respond to Avastin.

Both studies, Weeraratna says, make it clear that age is a parameter that must be considered when developing experiments and clinical trials.

"Cancer treatment is not one-size-fits-all," she says. "Our research shows that younger patients can have very different responses to treatment than older patients. Recognizing that the age of a patient can affect response to treatment is critical to providing the best care for all patients."

Weeraratna is also exploring how age influences treatment resistance among patients with pancreas cancer.

Credit: 
Johns Hopkins Medicine

NUS researchers invent flexible and highly reliable sensor

image: Flexible TRACE sensor patches can be placed on the skin to measure blood flow in superficial arteries.

Image: 
National University of Singapore

Real-time health monitoring and the sensing abilities of robots require soft electronics, but a challenge of using such materials lies in their reliability. Unlike rigid devices, elastic and pliable materials deliver less repeatable performance. This history-dependent variation in response is known as hysteresis.

Guided by the theory of contact mechanics, a team of researchers from the National University of Singapore (NUS) came up with a new sensor material that has significantly less hysteresis, enabling more accurate wearable health technology and robotic sensing.

The research team, led by Assistant Professor Benjamin Tee from the Institute for Health Innovation & Technology at NUS, published their results in the prestigious journal Proceedings of the National Academy of Sciences on 28 September 2020.

High sensitivity, low hysteresis pressure sensor

When soft materials are used as compressive sensors, they usually face severe hysteresis issues. The soft sensor's material properties can change in between repeated touches, which affects the reliability of the data. This makes it challenging to get accurate readouts every time, limiting the sensors' possible applications.
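Hysteresis in a pressure sensor is commonly quantified by comparing the output recorded while pressure increases with the output recorded while it decreases. The sketch below shows one common way to express that gap as a percentage of full-scale output; the response curves are synthetic examples, not TRACE measurements.

```python
# One common way to quantify sensor hysteresis: the largest gap between the
# loading and unloading response curves, as a fraction of full-scale output.
# The curves below are synthetic examples, not data from the TRACE sensor.
import numpy as np

pressure = np.linspace(0.0, 1.0, 50)    # normalised applied pressure
loading = pressure ** 1.2               # output while pressure rises
unloading = pressure ** 0.8             # output while pressure falls

full_scale = loading.max()
hysteresis_pct = 100.0 * np.max(np.abs(unloading - loading)) / full_scale
print(f"hysteresis: {hysteresis_pct:.1f}% of full-scale output")
```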

The NUS team's breakthrough is the invention of a material which has high sensitivity, but with an almost hysteresis-free performance. They developed a process to crack metal thin films into desirable ring-shaped patterns on a flexible material called polydimethylsiloxane (PDMS).

The team integrated this metal/PDMS film with electrodes and substrates for a piezoresistive sensor and characterised its performance. They conducted repeated mechanical testing, and verified that their design innovation improved sensor performance. Their invention, named Tactile Resistive Annularly Cracked E-Skin, or TRACE, is five times better than conventional soft materials.

"With our unique design, we were able to achieve significantly improved accuracy and reliability. The TRACE sensor could potentially could be used in robotics to perceive surface texture or in wearable health technology devices, for example to measure blood flow in superficial arteries for health monitoring applications" said Asst Prof Tee, who is also from the NUS Department of Materials Science and Engineering.

Next steps

The next step for the NUS team is to further improve the conformability of their material for different wearable applications, and to develop artificial intelligence (AI) applications based on the sensors.

"Our long-term goal is to predict cardiovascular health in the form of a tiny smart patch that is placed on human skin. This TRACE sensor is a step forward towards that reality because the data it can capture for pulse velocities is more accurate, and can also be equipped with machine learning algorithms to predict surface textures more accurately," explained Asst Prof Tee.

Other applications the NUS team aims to develop include uses in prosthetics, where having a reliable skin interface allows for a more intelligent response.

Credit: 
National University of Singapore

It's not if, but how people use social media that impacts their well-being

image: A new study from UBC Okanagan examines how using social media impacts happiness.

Image: 
UBC Okanagan

New research from UBC Okanagan indicates what's most important for overall happiness is how a person uses social media.

Derrick Wirtz, an associate professor of teaching in psychology at the Irving K. Barber Faculty of Arts and Social Sciences, took a close look at how people use three major social platforms--Facebook, Twitter and Instagram--and how that use can impact a person's overall well-being.

"Social network sites are an integral part of everyday life for many people around the world," says Wirtz. "Every day, billions of people interact with social media. Yet the widespread use of social network sites stands in sharp contrast to a comparatively small body of research on how this use impacts a person's happiness."

Even before COVID-19 and self-isolation became standard practice, Wirtz says social media has transformed how we interact with others. Face-to-face, in-person contact is now matched or exceeded by online social interactions as the primary way people connect. While most people gain happiness from interacting with others face-to-face, Wirtz notes that some come away from using social media with a feeling of negativity--for a variety of different reasons.

One issue is social comparison. Participants in Wirtz's study said the more they compared themselves to others while using social media, the less happy they felt.

"Viewing images and updates that selectively portray others positively may lead social media users to underestimate how much others actually experience negative emotions and lead people to conclude that their own life--with its mix of positive and negative feelings--is, by comparison, not as good," he says.

Wirtz notes that viewing other people's posts and images while not interacting with them lends itself to comparison without the mood-boosting benefits that ordinarily follow social contact, undermining well-being and reducing self-esteem. "Passive use, scrolling through others' posts and updates, involves little person-to-person reciprocal interaction while providing ample opportunity for upward comparison."

As part of his research, study participants were asked about four specific functions of Facebook--checking a news feed, messaging, catching up on world news and posting status or picture updates. The most frequently used function was passively checking one's news feed. Participants primarily used Facebook without directly connecting with other users, and the negative effects on subjective well-being were consistent with this form of use.

During COVID-19, Wirtz notes people naturally turn to social media to reduce feelings of social isolation. Yet, his research (conducted before the pandemic) found that although people used social media more when they were lonely, time spent on social media only increased feelings of loneliness for participants in the study. "Today, the necessity of seeing and hearing friends and family only through social media due to COVID-19 might serve as a reminder of missed opportunities to spend time together."

The more people used any of these three social media sites, the more negative they reported feeling afterwards. "The three social network sites examined--Facebook, Twitter and Instagram--yielded remarkably convergent findings," he says. "The more respondents had recently used these sites, either in aggregate or individually, the more negative affect they reported when they responded to our randomly timed surveys over a 10-day period."
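
To make the survey design concrete, the sketch below is purely illustrative, with made-up numbers; it is not Wirtz's dataset or statistical model, which involves more sophisticated repeated-measures analysis. It simply pairs each randomly timed survey response with a measure of recent social media use and checks whether heavier recent use goes with higher self-reported negative affect:

```python
import numpy as np

# Hypothetical experience-sampling records: each row is one randomly timed
# survey response (participant id, minutes of recent social media use,
# self-reported negative affect on a 1-5 scale). Illustrative numbers only.
records = np.array([
    # id, minutes, negative_affect
    [1, 10, 1.5],
    [1, 45, 2.5],
    [1, 90, 3.0],
    [2,  5, 2.0],
    [2, 60, 3.5],
    [2, 30, 2.5],
])

usage = records[:, 1]
affect = records[:, 2]

# Simple Pearson correlation between recent use and negative affect;
# a positive value corresponds to the pattern reported in the study.
r = np.corrcoef(usage, affect)[0, 1]
print(f"correlation between recent use and negative affect: r = {r:.2f}")
```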

Wirtz's study also included offline interactions with others, either face-to-face or by phone. Comparing offline communication with online communication, he was able to demonstrate that offline social interaction had precisely the opposite effect of using social media, strongly enhancing emotional well-being.

But all is not lost, Wirtz says, as this research also reveals how people can use social media positively, something more important than ever during COVID-19. He suggests people avoid passively scrolling and resist comparing themselves to other social media users. He also says people should use social media sites to enable direct interactions and social connectedness--for example, talking online synchronously or arranging time spent with others in-person, when possible and with proper precautions.

"If we all remember to do that, the negative impact of social media use could be reduced--and social networks sites could even have the potential to improve our well-being and happiness," he adds. "In other words, we need to remember how we use social media has the potential to shape the effects on our day-to-day happiness."

Credit: 
University of British Columbia Okanagan campus

Ultrapotent COVID-19 vaccine candidate designed via computer

image: Artist's depiction of an ultrapotent COVID-19 vaccine candidate in which 60 pieces of a coronavirus protein (red) decorate nanoparticles (blue and white). The vaccine candidate was designed using methods developed at the UW Medicine Institute for Protein Design. The molecular structure of the vaccine roughly mimics that of a virus, which may account for its enhanced ability to provoke an immune response.

Image: 
Ian Haydon/ UW Medicine Institute for Protein Design

An innovative nanoparticle vaccine candidate for the pandemic coronavirus produces virus-neutralizing antibodies in mice at levels ten times greater than those seen in people who have recovered from COVID-19. Designed by scientists at the University of Washington School of Medicine in Seattle, the vaccine candidate has been transferred to two companies for clinical development.

Compared to vaccination with the soluble SARS-CoV-2 Spike protein, which is what many leading COVID-19 vaccine candidates are based on, the new nanoparticle vaccine produced ten times more neutralizing antibodies in mice, even at a six-fold lower vaccine dose. The data also show a strong B-cell response after immunization, which can be critical for immune memory and a durable vaccine effect. When administered to a single nonhuman primate, the nanoparticle vaccine produced neutralizing antibodies targeting multiple different sites on the Spike protein. Researchers say this may ensure protection against mutated strains of the virus, should they arise. The Spike protein is part of the coronavirus infectivity machinery.

The findings are published in Cell. The lead authors of this paper are Alexandra Walls, a research scientist in the laboratory of David Veesler, who is an associate professor of biochemistry at the UW School of Medicine; and Brooke Fiala, a research scientist in the laboratory of Neil King, who is an assistant professor of biochemistry at the UW School of Medicine.

The vaccine candidate was developed using structure-based vaccine design techniques invented at UW Medicine. It is a self-assembling protein nanoparticle that displays 60 copies of the SARS-CoV-2 Spike protein's receptor-binding domain in a highly immunogenic array. The molecular structure of the vaccine roughly mimics that of a virus, which may account for its enhanced ability to provoke an immune response.

"We hope that our nanoparticle platform may help fight this pandemic that is causing so much damage to our world," said King, inventor of the computational vaccine design technology at the Institute for Protein Design at UW Medicine. "The potency, stability, and manufacturability of this vaccine candidate differentiate it from many others under investigation."

Hundreds of candidate vaccines for COVID-19 are in development around the world. Many require large doses, complex manufacturing, and cold-chain shipping and storage. An ultrapotent vaccine that is safe, effective at low doses, simple to produce and stable outside of a freezer could enable vaccination against COVID-19 on a global scale.

"I am delighted that our studies of antibody responses to coronaviruses led to the design of this promising vaccine candidate," said Veesler, who spearheaded the concept of a multivalent receptor-binding domain-based vaccine.

Credit: 
University of Washington School of Medicine/UW Medicine

Just like us - Neanderthal children grew and were weaned similarly to us

image: Presumably a Neanderthal child lost this tooth 40,000 to 70,000 years ago when his or her permanent teeth came in

Image: 
ERC project SUCCESS, University of Bologna, Italy

FRANKFURT/KENT/BOLOGNA/FERRARA. Teeth grow and register information in the form of growth lines, akin to tree rings, which can be read through histological techniques. By combining this information with chemical data obtained with a laser mass spectrometer, in particular strontium concentrations, the scientists were able to show that these Neanderthals introduced solid food into their children's diet at around 5-6 months of age.
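
To illustrate the general logic (a deliberately simplified sketch with invented numbers; the `weaning_onset_age` helper and its threshold rule are assumptions, not the authors' published method, which also relies on time-resolved isotope measurements), strontium levels measured along dated enamel growth lines can be screened for the point at which they rise above the nursing-only baseline, a rough proxy for the first introduction of solid food:

```python
import numpy as np

def weaning_onset_age(age_days, sr_ca, baseline_window=5, threshold=1.5):
    """Rough proxy for the introduction of solid food.

    age_days: age assigned to each enamel sampling spot via growth-line counts
    sr_ca:    strontium/calcium ratio measured at each spot
    Returns the first age at which Sr/Ca exceeds `threshold` times the mean
    of the earliest `baseline_window` (nursing-only) measurements.
    """
    baseline = np.mean(sr_ca[:baseline_window])
    above = np.nonzero(sr_ca > threshold * baseline)[0]
    return age_days[above[0]] if above.size else None

# Illustrative profile only: Sr/Ca roughly doubles around 5-6 months (~165 days).
age_days = np.arange(0, 300, 15)
sr_ca = np.where(age_days < 165, 0.10, 0.22) \
        + np.random.default_rng(0).normal(0, 0.005, age_days.size)

print(f"estimated onset of solid food: day {weaning_onset_age(age_days, sr_ca)}")
```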

Not cultural but physiological

Alessia Nava (University of Kent, UK), co-first author of the work, says: "The beginning of weaning relates to physiology rather than to cultural factors. In modern humans, in fact, the first introduction of solid food occurs at around 6 months of age, when the child needs a more energetic food supply, and this is shared by very different cultures and societies. Now we know that Neanderthals, too, started to wean their children at the same age as modern humans do."

"In particular, compared to other primates" says Federico Lugli (University of Bologna), co-first author of the work "it is highly conceivable that the high energy demand of the growing human brain triggers the early introduction of solid foods in child diet".

Neanderthals are our closest cousins within the human evolutionary tree. However, their pace of growth and early life metabolic constraints are still highly debated within the scientific literature.

Stefano Benazzi (University of Bologna), co-senior author, says: "This work's results imply similar energy demands during early infancy and a close pace of growth between Homo sapiens and Neanderthals. Taken together, these factors possibly suggest that Neanderthal newborns were of similar weight to modern human neonates, pointing to a likely similar gestational history and early-life ontogeny, and potentially shorter inter-birth interval".

Home, sweet home

Beyond early diet and growth, the scientists also collected data on the regional mobility of these Neanderthals using time-resolved strontium isotope analyses.

"They were less mobile than previously suggested by other scholars" says Wolfgang Müller (Goethe University Frankfurt), co-senior author "the strontium isotope signature registered in their teeth indicates in fact that they have spent most of the time close to their home: this reflects a very modern mental template and a likely thoughtful use of local resources".

"Despite the general cooling during the period of interest, Northeastern Italy has almost always been a place rich in food, ecological variability and caves, ultimately explaining survival of Neanderthals in this region till about 45,000 years ago" says Marco Peresani (University of Ferrara), co-senior author and responsible for findings from archaeological excavations at sites of De Nadale and Fumane.

This research adds a new piece to the puzzling picture of Neanderthals, a human species so close to us yet still so enigmatic. Specifically, the researchers exclude the possibility that the small Neanderthal population size, inferred in earlier genetic analyses, was driven by differences in weaning age, suggesting that other biocultural factors led to their demise. This will be further investigated within the framework of the ERC project SUCCESS ('The Earliest Migration of Homo sapiens in Southern Europe - Understanding the biocultural processes that define our uniqueness'), led by Stefano Benazzi at the University of Bologna.

Credit: 
Goethe University Frankfurt

Early impact of COVID-19 on scientists revealed in global survey of 25,000

The initial impact of the coronavirus pandemic on the scientific community has been revealed in one of the largest academic surveys ever conducted. Open access academic publisher Frontiers surveyed more than 25,000 members of its scientific research community from 152 countries between May and June this year to assess the initial impact of the virus on them and their work.

Highlights from The Academic Response to COVID-19, released by Frontiers this week, include researchers' views on:

The political response

Mitigating future disasters

The impact on funding

Keeping the science going

Commenting on the report, Kamila Markram, Frontiers' CEO and co-founder, said: "A pulse check of how COVID-19 has manifested itself across the research community is crucial if we are to ensure that scientific discovery continues unabated. Scientists are under extraordinary pressure to deliver answers and a lack of precedent and preparation, combined with severe political and social pressures, has made this an incredibly challenging time for them. Along with the disruption faced by most of the world's population - lockdown, remote working, isolation and anxiety - many researchers have felt an added pressure to understand, cure and mitigate the virus."

Survey respondents come from Frontiers' academic community of authors, editors and reviewers, representing diverse countries, roles, and areas of research.

Perceptions of the political response

Researchers are divided over their perceptions of the political response. Respondents in the US, Brazil, Chile and the UK showed significantly higher levels of dissatisfaction with policy makers' use of scientific advice during the pandemic, while those in New Zealand, China and Greece were the most satisfied.

"While we do not know what advice was given and if it was used, this data suggests more comfort in those countries that are coping well - those who took early lockdown decisions, have had similar previous experience, for example with SARS, and who recognised science as key to pandemic management decision making." - Prof. Sir Peter Gluckman - chair, International Network for Government Science Advice.

Preparing for future disasters

The coronavirus pandemic is the biggest global emergency of recent times. But what about the months, years, and decades ahead? Concerns about future pandemics (28%) and climate change (21%) topped the list of future disasters that can be mitigated with the help of science, according to the respondents.

Kamila Markram said: "We were not prepared for COVID-19, despite the warnings of a pandemic, and are only now beginning to fully grasp the cost of our complacency. The consequences of continuing to fail to respond to future threats, particularly the climate emergency, will be far worse and potentially irreversible if we do not act now. Researchers have warned policymakers for 30 years that we are damaging the natural environment at an unsustainable rate. Yet, deforestation, air and water pollution, intensive agriculture, and general environmental degradation continue to get worse. We must think about how we can fundamentally change our relationship with the natural world. The one positive that we can take from this pandemic is that it might be the catalyst for such change and instill a greater sense of urgency and responsibility."

Future of funding unclear

Findings reveal that COVID-19 has created a sense of uncertainty in the research community around funding. Almost half (47%) of those surveyed believe less funding will be available in the future as a result of COVID-19, signaling a potentially lasting impact on the scientific research landscape.

Kamila Markram said: "The impact of COVID-19 is manifesting itself across the funding landscape. While it is critical that collectively, we do everything we can right now to combat the virus, we must also recognize that diverting or the 'covidization' of funding away from other fields is not a sustainable solution. The environment, for example, is an area we simply cannot afford to neglect. Doing so will have potentially irreversible consequences. We have to adopt a more holistic, interdisciplinary approach to problem solving."

The science must go on

Findings also reveal that scientific research itself has, for the most part, been able to continue despite the disruption of COVID-19. When asked what they had been working on during the pandemic, 74% of respondents said they had been writing papers, 57% had continued with their research, and 42% had been teaching virtually.

Kamila Markram said: "It is encouraging that despite the massive disruption the first wave of coronavirus caused that the vast majority researchers said were able to continue to work. It gives us hope that the academic community will remain resilient to new waves of COVID-19, like those currently sweeping through Europe, and come together to find the solutions we urgently need to live healthy lives on a healthy planet."

Credit: 
Frontiers