Tech

Powerful new synthetic vaccines to combat epidemics

image: ADDomer: Two views, from different angles, of the synthetic multiepitope display scaffold for next-generation vaccines

Image: 
University of Bristol

A new type of vaccine that can be stored at warmer temperatures, removing the need for refrigeration, has been developed for mosquito-borne virus Chikungunya in a major advance in vaccine technology. The findings, published in Science Advances today [Wednesday 25 September], reveal exceptionally promising results for the Chikungunya vaccine candidate, which has been engineered using a synthetic protein scaffold that could revolutionise the way vaccines are designed, produced and stored.

Infectious diseases continue to plague populations worldwide. Among the means at our disposal to counter this threat, vaccination has proven to be exceptionally powerful: smallpox has been eradicated, and measles, polio and tetanus have been largely constrained by vaccination. However, severe challenges to human health persist, evidenced by epidemics caused by Ebola, Zika and others. The burden is particularly heavy in developing countries, which often lack adequate infrastructure and resources to prevent or manage outbreaks, bringing disruption and damage to affected communities and massive economic losses.

A recent example is Chikungunya, a virus transmitted by the bite of an infected mosquito. The disease causes crippling headache, vomiting and swelling of the limbs, and can lead to death. Even when the fever ends abruptly, chronic symptoms such as intense joint pain, insomnia and extreme prostration can remain. Formerly confined to sub-Saharan Africa, Chikungunya has recently spread worldwide as its mosquito host leaves its natural habitat due to deforestation and climate change, with recent outbreaks in the USA and Europe causing alarm.

Researchers from the University of Bristol and the French National Centre for Scientific Research (CNRS) in Grenoble, France, teamed up with computer technology giant Oracle to find a way to make vaccines that are thermostable (able to withstand warm temperatures), can be designed quickly and are easily produced.

"We were working with a protein that forms a multimeric particle resembling a virus but is completely safe, because it has no genetic material inside, said Pascal Fender, expert virologist at CNRS. "Completely by chance, we discovered that this particle was incredibly stable even after months, without refrigeration."

"This particle has a very flexible, exposed surface that can be easily engineered, added Imre Berger, Director of the Max Planck-Bristol Centre for Minimal Biology in Bristol. "We figured that we could insert small, harmless bits of Chikungunya to generate a virus-like mimic we could potentially use as a vaccine."

To validate their design, the scientists employed cryo-electron microscopy, a powerful new technique recently installed in Bristol's state-of-the-art microscopy facility headed by Christiane Schaffitzel, co-author of the study. Cryo-EM yields very large data sets from which the structure of a sample can be determined at near-atomic resolution, a process that requires massively parallel computing.

Enabled by Oracle's high-performance cloud infrastructure, the team developed a novel computational approach to create an accurate digital model of the synthetic vaccine. University of Bristol IT specialists Christopher Woods and Matt Williams, together with colleagues at Oracle, implemented software packages seamlessly on the cloud in this pioneering effort. Christopher explained: "We were able to process the large data sets obtained by the microscope on the cloud in a fraction of the time and at much lower cost than previously thought possible."

"Researchers have had a long tradition of building and installing their own super computers on-premises, but cloud computing is allowing them to run large data sets in record time, with fast connectivity and low latency. This is helping them crunch data and make scientific breakthroughs much faster. Going forward, technologies like machine learning and cloud computing will play a significant part in the scientific world, and we are delighted we could help the researchers with this important discovery," added Phil Bates, leading cloud architect at Oracle.

The particles the scientists designed yielded exceptionally promising results in animal studies, soundly setting the stage for a future vaccine to combat Chikungunya disease.

"We were thoroughly delighted," continued Imre Berger. "Viruses are waiting to strike, and we need to have the tools ready to tackle this global threat. Our vaccine candidate is easy to manufacture, extremely stable and elicits a powerful immune response. It can be stored and transported without refrigeration to countries and patients where it is most needed. Intriguingly, we can now rapidly engineer similar vaccines to combat many other infectious diseases just as well."

"It really ticks a lot of boxes," concluded Fred Garzoni, founder of Imophoron Ltd, a Bristol biotech start-up developing new vaccines derived from the present work. "Many challenges in the industry require innovative solutions, to bring powerful new vaccines to patients. Matching cutting-edge synthetic biology with cloud computing turned out to be a winner."

Credit: 
University of Bristol

Minimum pricing policy appears to have cut spending on alcohol in Scotland

The introduction of minimum unit pricing (MUP) in Scotland appears to have been successful in reducing the amount of alcohol purchased and, by inference, consumption by households, finds a study published by The BMJ today.

The effects were greatest in households who bought the most alcohol, suggesting that the policy "has achieved its ambition to make relatively cheap and strong alcohol less affordable, which in turn should positively impact public health over time," say the researchers.

In May 2018, Scotland became the first country in the world to introduce a national minimum pricing policy, setting a minimum price of 50p ($0.62; €0.56) per unit, below which alcohol cannot be sold.

In the UK, one unit contains 10ml/8g of alcohol and is about equal to half a pint of ordinary-strength beer or cider, or a small pub measure (25ml) of spirits. A small (125ml) glass of wine contains around 1.5 units (15ml/12g) of alcohol.
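For readers who want to check the arithmetic, the standard UK conversion is units = volume (ml) x ABV (%) / 1000, with one unit corresponding to 10ml (8g) of pure alcohol. The short Python sketch below applies that formula to the drinks mentioned above; the drink strengths are typical values assumed for illustration, not figures taken from the study.

# Illustrative only: the standard UK formula for alcohol units,
# units = volume (ml) x ABV (%) / 1000, with 1 unit = 10 ml = 8 g of pure alcohol.
# The drink strengths are typical values assumed for this example,
# not figures taken from the BMJ study.

GRAMS_PER_UNIT = 8.0  # 1 UK unit = 10 ml = 8 g of ethanol

def uk_units(volume_ml: float, abv_percent: float) -> float:
    """Return UK alcohol units for a drink of given volume and strength."""
    return volume_ml * abv_percent / 1000.0

drinks = {
    "half pint of ordinary-strength beer (284 ml, 3.5% ABV)": (284, 3.5),
    "single pub measure of spirits (25 ml, 40% ABV)": (25, 40),
    "small glass of wine (125 ml, 12% ABV)": (125, 12),
}

for name, (volume, abv) in drinks.items():
    units = uk_units(volume, abv)
    print(f"{name}: {units:.1f} units ({units * GRAMS_PER_UNIT:.0f} g alcohol)")
# The wine example gives 1.5 units (12 g), matching the figures quoted above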

To test whether the policy is already having an effect, researchers led by Peter Anderson at Newcastle University set out to assess the impact of MUP on alcohol purchases in Scotland in the eight months immediately after implementation.

Their findings are based on shopping data for 2015-18 from 5,325 Scottish households, compared with 54,807 English households as controls, and 10,040 households in northern England to control for potential cross border effects.

After adjusting for number of adults in each household, the introduction of MUP was followed by a price increase of 0.64p per gram (5.1p per UK unit; 7.9%) and a reduction of 9.5 g (1.2 UK units; 7.6%) in weekly 'off-trade' (shop) purchases of alcohol per adult per household.
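The per-gram and per-unit figures above are two ways of expressing the same change; a quick cross-check using the 8g-per-unit conversion (values as reported in the study, conversion shown only for illustration):

# Cross-check of the reported MUP figures using 1 UK unit = 8 g of alcohol.
GRAMS_PER_UNIT = 8.0
price_rise_pence_per_gram = 0.64
purchase_drop_grams_per_week = 9.5

print(price_rise_pence_per_gram * GRAMS_PER_UNIT)      # ~5.1p per unit
print(purchase_drop_grams_per_week / GRAMS_PER_UNIT)   # ~1.2 units per adult per week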

Reductions were most notable for beer, spirits, and cider, including the own-brand spirits and high strength ciders that the policy sought to target.

What's more, the price increases were greatest in households that bought the largest amount of alcohol (just under £3 per adult per week) and among the lower income groups, supporting the idea that MUP effectively targets those most at risk of harm from alcohol with a minimal impact on household budgets.

This is an observational study, and as such, can't establish cause, and analyses were restricted to off-trade sales. But the authors point out that heavier drinkers are more likely to buy alcohol from shops than 'on-trade' in bars and restaurants.

They also acknowledge that heavy drinkers, particularly male drinkers or those with no fixed address, are likely to be under-represented in their study, and they did not include any health outcome data.

Nevertheless, they say their analyses indicate that MUP "is an effective policy option to reduce alcohol purchases, particularly affecting higher purchasers, and with no evidence of a significant differential negative impact on expenditure by lower income groups."

"Our data supports the introduction of MUP as an effective policy option in other jurisdictions," they conclude.

This view is supported by public health experts in a linked editorial, who say the rest of the UK should follow Scotland's lead.

John Mooney from the University of Sunderland and Eric Carlin from the Royal College of Physicians of Edinburgh point out that, in an age when complex public health issues such as harm from alcohol require whole system approaches, "no single policy lever should be seen as a panacea, and MUP is still regarded in Scotland as one component of the overall strategy."

Nevertheless, they say the observed reductions of up to 7.6% in purchases were more than double previous modelling estimates, indicating that real health benefits could be substantially greater.

"Surely it is time to follow Scotland's lead and implement MUP across the rest of the UK," they write. "Action is especially pressing for those regions, such as north east England, with comparable levels of harm from alcohol."

Credit: 
BMJ Group

Ditch the delicate wash cycle to save our seas

Delicate wash cycles in washing machines found to release more plastic microfibres than other cycles.

New research led by Newcastle University has shown that it is the volume of water used during the wash cycle, rather than the spinning action of the washing machine, which is the key factor in the release of plastic microfibres from clothes.

Millions of plastic microfibres are shed every time we wash clothes that contain materials such as nylon, polyester and acrylic.

Because these fibres are so small, they drain out of our washing machines and can ultimately enter the marine environment.

Once in the ocean, they are ingested by the animals living there, and two years ago Newcastle University scientists showed for the first time that these fibres have reached the deepest parts of our ocean.

Working with Procter & Gamble in Newcastle, the team measured the release of plastic microfibres from polyester clothing for a range of cycles and water volumes.

Counting the fibres released, the team found the higher the volume of water the more fibres released, regardless of the speed and abrasive forces of the washing machine.

In fact, they found that on average, 800,000 more fibres were released in a delicate wash than in a standard cycle.

Publishing their findings today in the academic journal Environmental Science and Technology, PhD student Max Kelly, who led the research, explained:

"Counterintuitively, we discovered that 'delicate' cycles release more plastic microfibres into the water, and then the environment, than standard cycles.

"Previous research has suggested the speed the drum spins at, the number of times it changes spinning direction during a cycle and the length of pauses in the cycle - all known as the machine agitation - is the most important factor in the amount of microfibre released.

"But we have shown here that even at reduced levels of agitation, microfibre release is still greatest with higher water-volume-to-fabric ratios.

"This is because the high volume of water used in a delicate cycle which is supposed to protect sensitive clothing from damage actually 'plucks' away more fibres from the material."

Plastic pollution in our oceans

Plastic pollution is one of the biggest challenges facing society today and understanding the key sources is an important process to help reduce our impact on the environment.

Laundry has been recognised as a major contributor of microplastics, but until now precisely measuring the release of these fibres has been difficult, because it is almost impossible to accurately simulate in a lab setting what really happens in people's machines.

Using a tergotometer - a benchtop device comprising eight 1,000 mL washing vessels that simulate full-scale domestic washing - the team were able to carry out tests under different conditions, making changes to water volume, spin speed, temperature and time.

A DigiEye camera - a digital colour imaging system - was then used to accurately count the microfibres released.

To test whether the observations made using the tergotometers were reflective of full-size domestic washing machines, the team then tested the fabrics on a delicate wash cycle using identical washing machines in the test centre at Procter and Gamble (P&G).

The team showed that previous recommendations by groups to move towards high water volumes and low levels of agitation as a way of reducing the amount of microfibre released were actually making the problem worse.

Neil Lant, Research Fellow at P&G and co-author on the study, said:

"The appliance industry has started to introduce microfibre filters in some new washing machines and the textile industry is looking to reduce the fibre shedding levels of new clothing.

"We hope that the issue will ultimately be solved by such actions, and our work on the mechanistic causes will help in the development of these solutions."

Max Kelly adds:

"Reducing the amount of plastic pollution is everyone's responsibility and often it's the small changes that make a huge difference.

"By avoiding high water-volume-to-fabric washes such as the delicate cycles and ensuring full wash loads then we can all do our bit to help reduce the amount of these synthetic fibres being released into the environment.

"Hopefully, these findings may also be used by manufacturers to influence the design of future washing machines and reduce our plastic footprint. Over time these changes could also see a global reduction in the amount of energy and water required to wash our clothes."

Credit: 
Newcastle University

Cancer: The origin of genetic mutations

image: Fluorescence microscopy images of the division of a normal cell (left row) and a cell with replication stress (middle row). The mitotic spindle is in green, the chromosomes in red. The right row details the errors of the cell under replication stress: a three-pole spindle (left) and the loss of a chromosome (right).

Image: 
© UNIGE -- Patrick Meraldi

When a cell divides into two daughter cells, it must replicate its DNA according to a very specific scenario. In the presence of some disruptive elements, however, cancer cells are unable to perform this operation optimally; replication then takes place more slowly and less efficiently. This phenomenon is called "replication stress". While replication stress is known to be linked to the increase in genetic mutations (another phenomenon typical of cancer cells), the exact mechanism at work remained unknown until now. By deciphering how replication stress induces the loss or gain of whole chromosomes in the daughters of cancer cells, and even by reversing it in diseased cells, researchers at the University of Geneva (UNIGE) provide new knowledge that could ultimately lead to better diagnosis and possibly better treatment of cancer. The results are published in the journal Nature Communications.

During a normal life cycle, the cell grows and, when all the "building blocks" necessary for DNA replication are ready, it replicates the chromosomes, which contain its genetic information. Once DNA replication is complete, the cell enters mitosis, a term that refers to the steps governing cell division. A mitotic spindle is then created, in which the two replicated DNA strands are separated so that the two daughter cells inherit an identical number of chromosomes. "To ensure the correct distribution of chromosomes, the mitotic spindle has two poles", says Patrick Meraldi, professor in the Department of Cell Physiology and Metabolism and coordinator of the Translational Research Centre in Onco-haematology (CRTOH) at the UNIGE Faculty of Medicine. "This bipolarization is essential for the genomic stability of both daughter cells."

Most of the time, replication stress is due to certain molecules that become harmful when produced in excess. For example, the protein cyclin E, which is involved in regulating DNA replication, promotes the development of cancers when overexpressed. Indeed, under its influence, cancer cells tend to start replication too early, before they have all the components necessary for DNA synthesis, and this is where the errors appear.

How to create and remove replication stress

To decipher this phenomenon, the researchers artificially induced replication stress in healthy human cells with a product that slows DNA replication, and thus prevents the process from proceeding normally. "We have observed that this stress causes a malformation of the mitotic spindle which, instead of having two poles, has three or four", explains Therese Wilhelm, a researcher in Professor Meraldi's team and co-first author of this work. "The cell is generally able to remove these supernumerary poles, but not fast enough to avoid erroneous connections between the chromosomes and the mitotic spindle." In the end, these erroneous connections promote a poor distribution of chromosomes, leading to the loss or gain of one or more chromosomes. This genetic instability thus allows the rapid anarchic evolution of cancer cells.

The scientists then successfully corrected the effects of replication stress in diseased cells by providing them with the missing components they needed for replication. "Not only have we established the link between replication stress and chromosomal errors, but we have been able to correct it, showing that this phenomenon, present in all cancer and even precancerous cells, is controllable", reports Anna-Maria Olziersky, a researcher in Professor Meraldi's team and co-first author.

Could therapies exploit this phenomenon?

Through a series of experiments targeting this mechanism, the researchers demonstrated that cells with an abnormal mitotic spindle are particularly sensitive to paclitaxel, a chemotherapeutic drug that acts on the mitotic spindle and is used to treat breast cancer. "This shows that, in principle, it is possible to specifically target these cells without affecting healthy cells", explains Patrick Meraldi. "The idea is not to correct the error, but rather to block the cell at this stage to prevent it from removing the additional poles, which automatically leads to its rapid death without causing damage to the still healthy neighbouring cells."

Credit: 
Université de Genève

Web tool prioritizes health risks for postmenopausal women

A web-based calculator that helps middle-aged women predict their risks of conditions that become more likely with age has been developed by public health, medical and computer science experts from throughout the U.S. and Saudi Arabia.

Led by physician John Robbins of UC Davis Health, the team's risk-prediction calculator is unique in that it accounts for multiple health conditions at once, rather than one at a time. It also identifies the changing probability of those conditions over time.

"It gives women and their physicians a sense of what to focus on," Robbins said. "Most are concerned about breast cancer and, of course, they absolutely should be. But if your history and lifestyle indicate that your greatest risk is heart disease, that should be your number one concern."

Based on WHI data

The calculator is based on data from the Women's Health Initiative (WHI), a long-term study of more than 160,000 diverse U.S. women aged 50 to 79. WHI's comprehensive demographic, lifestyle, medical history and health outcomes information has supported groundbreaking studies focused on improving care for postmenopausal women.

Robbins previously used the data to study how genetically derived ancestry affects disease risk. He also was principal investigator for the UC Davis WHI site.

A comparison of likely risks

The result of his current WHI-based study is an interactive, web-based calculator that, after a woman answers about 35 to 50 questions on her current and past health and family history, shows her probability of experiencing a heart attack, stroke, hip fracture, or breast, lung or colorectal cancer within 5, 10 or 15 years.

"The risk of one disease is always relative to the risks of another, and our tool accounts for those competing risks," Robbins said. "The goal is to help women stop worrying too much about health risks that aren't likely to be factors for them and then address the ones that are."

Credit: 
University of California - Davis Health

Epilepsy: Seizures not forecastable as expected

image: The scientists recorded subjects' brain waves using up to 70 implanted electrodes.

Image: 
© Photo: Gregor Gast/UKB

Epileptic seizures can probably not be predicted by changes in brain wave patterns that were previously assumed to be characteristic precursors. This is the conclusion reached by scientists from the University of Bonn in a recent study. The results are now published in the journal "Chaos: An Interdisciplinary Journal of Nonlinear Science".

During an epileptic seizure, large nerve cell clusters in the brain discharge simultaneously. The consequences are dramatic muscle spasms and loss of consciousness, which can be life-threatening. Many researchers assume that the brain has crossed a so-called "tipping point", which almost inevitably leads to a seizure.

The lead-up to this tipping point is supposedly heralded by characteristic changes in brain waves - so says a common hypothesis. According to this theory, nerve cell networks reproduce their own activity when close to this point: The brain waves they produce are very similar to previous ones. At the same time, they react to disturbances with much stronger discharges than normal. Additionally, it takes longer for their activity to normalize. "We call this 'critical slowing down', CSL for short," explains Prof. Dr. Klaus Lehnertz from the Department of Epileptology at the University Hospital Bonn.
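The press release does not detail how such signatures are quantified, but critical slowing down is commonly measured by tracking the lag-1 autocorrelation and variance of a signal in a sliding window, both of which rise as a system recovers ever more slowly from perturbations. The Python sketch below illustrates that generic indicator on a toy signal; the window length, the simulated process and all parameters are assumptions for illustration, not the Bonn group's analysis pipeline.

# Generic sketch of "critical slowing down" indicators, not the Bonn group's
# actual method: rising lag-1 autocorrelation (and variance) in a sliding
# window is the classic signature of a system that recovers ever more slowly.
import numpy as np

def sliding_csl_indicators(signal, window, step=1):
    """Return lag-1 autocorrelation and variance for each sliding window."""
    autocorrs, variances = [], []
    for start in range(0, len(signal) - window, step):
        w = signal[start:start + window]
        w = w - w.mean()
        denom = np.dot(w, w)
        autocorrs.append(np.dot(w[:-1], w[1:]) / denom if denom > 0 else 0.0)
        variances.append(w.var())
    return np.array(autocorrs), np.array(variances)

# Toy example: an AR(1) process whose coefficient drifts towards 1,
# mimicking a system that slows down as it approaches a tipping point.
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
for t in range(1, n):
    phi = 0.2 + 0.7 * t / n            # coefficient drifts from 0.2 to 0.9
    x[t] = phi * x[t - 1] + rng.normal()

ac, var = sliding_csl_indicators(x, window=500, step=100)
print(f"lag-1 autocorrelation: {ac[0]:.2f} (early) -> {ac[-1]:.2f} (late)")
print(f"variance:              {var[0]:.2f} (early) -> {var[-1]:.2f} (late)")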

Together with his former colleague Theresa Wilkat and his doctoral student Thorsten Rings, the physicist searched for such CSL events. For this purpose, the researchers analyzed brain wave recordings of 28 subjects with epilepsies that could not be treated with medication. Measurements were taken using electrodes implanted at various sites in the subjects' brains. "This is for diagnostic purposes, for example, to identify the site from which the seizures originate," explains Lehnertz.

Unsuitable as an early warning system

The subjects had up to 70 sensors each in their brains. The scientists analyzed each individual EEG curve recorded by the sensors using sophisticated statistical methods. "We not only considered the hours before an attack, but also looked at a period of up to two weeks," Wilkat explains.

The result was disappointing: "Although we found a number of CSL events, these usually occurred completely independently of a seizure," emphasizes Lehnertz. "Only in two subjects were we able to observe a weak relationship with subsequent seizures." His conclusion: "Critical slowing down" is not suitable as an early warning sign, even if this is claimed in the literature again and again.

He considers it more promising not to look at individual sites in the brain, but to understand these as parts of a network that influence each other. The cause of a seizure is most likely not the activity of a single nerve cell cluster that gets out of control. "Instead, there are feedback and amplification effects that, as a whole, lead to this massive temporary brain malfunction," he emphasizes. Understanding these processes will also allow better forecasting techniques to be developed.

Epileptic seizures usually strike like a bolt from the blue, which significantly impacts the daily lives of those affected. For example, sufferers are not allowed to drive a car or carry out certain activities with a high risk of injury. Epileptologists, physicists and mathematicians have therefore been trying to predict the dangerous malfunctions of the brain for more than three decades - so far with mixed success: There certainly are systems that can detect seizure precursors (using indicators other than "critical slowing down"), but at present they work only for about half of the subjects and are not particularly reliable. They cannot recognize every precursor of a seizure and are also prone to false alarms.

However, this is not the only reason why scientists around the globe are looking for more reliable indicators in order to be able to warn subjects in good time. They also hope to be able to prevent an attack in advance through appropriate interventions.

Credit: 
University of Bonn

Light work for superconductors

image: Visualizations of electron energies as the experiment ran.

Image: 
© 2019 Suzuki et al.

For the first time, researchers have successfully used laser pulses to excite an iron-based compound into a superconducting state, meaning it conducted electricity without resistance. The iron compound is a known superconductor at ultralow temperatures, but this method enables superconduction at higher temperatures. It is hoped this kind of research could greatly improve power efficiency in electrical equipment and electronic devices.

"Put simply, we demonstrated that under the right conditions, light can induce a state of superconductivity in an iron compound. So it has no resistance to an electric current," explained Project Researcher Takeshi Suzuki from the Institute for Solid State Physics at the University of Tokyo. "In the past it may even have been called alchemy, but in reality we understand the physical processes that instantly changed a normal metal into a superconductor. These are exciting times for physics."

Superconduction is a hot topic in solid state physics, or rather a very, very cold one. As Suzuki explained, superconduction is when a material, frequently an electrical conductor, carries an electric current but does not add to the resistance of the circuit. If this can be realized, it would mean devices and infrastructure based on such principles could be extremely power efficient. In other words, it could one day save you money on your electricity bill -- imagine that.

However, at present there is a catch, which is why you don't already see superconductor-based televisions and vacuum cleaners in the stores. Materials such as the iron selenide (FeSe) the researchers investigated only superconduct when they are far below the freezing point of water. In fact, at ambient pressure, FeSe usually superconducts at around 10 degrees above absolute zero, or around minus 263 degrees Celsius, scarcely warmer than the cold, dark depths of space.

There is a way to coax FeSe into superconduction at slightly less forbidding temperatures of up to around minus 223 degrees Celsius, but this requires enormous pressures to be applied to the sample, around six gigapascals or 59,000 times standard atmosphere at sea level. That would prove impractical for the implementation of superconduction into useful devices. This then presents a challenge to physicists, albeit one that serves to motivate them as they strive to one day be the first to present a room-temperature superconductor to the world.

"Every material in our daily lives has its own character. Foam is soft, rubber is flexible, glass is transparent and a superconductor has a unique trait that current can flow smoothly with no resistance. This is a character we would all like to meet," said graduate student Mari Watanabe, also from the Institute for Solid State Physics. "With a high-energy, ultrafast laser, we successfully observed an emergent photo-excited phenomenon - superconduction - at the warmer temperature of minus 258 degrees Celsius, which would ordinarily require high pressures or other impractical compromises."

This research is the latest in a long line of steps from the discovery of superconduction to the long-awaited day when a room-temperature superconductor may become possible. And as with many emerging fields of study within physics, there may be applications that have not yet been envisaged. One possible use of this idea of photo-excitation is to achieve high-speed switching components for computation which would also produce little heat, thus maximizing efficiency.

"Next, we will search for more favorable conditions for light-induced superconductivity by using a different kind of light, and eventually achieve room-temperature superconductivity," concluded Suzuki. "Superconductivity can dramatically reduce waste heat and energy if it can be used in everyday life at room temperature. We are keen to study superconductivity in order to solve the energy problem, which is one of the most serious problems in the world right now."

Credit: 
University of Tokyo

Tractor overturn prediction using a bouncing ball model could save the lives of farmers

In 2016, 417 farmers and farm workers died from a work-related injury in the United States and 312 in Japan, according to the Centers for Disease Control and Prevention's 2018 agricultural safety report. Overturning tractors are the leading cause of death for farmers around the world.

In order to reduce the rate of overturned tractors, researchers in Japan have developed a model for understanding the conditions that lead to a tractor overturning from an unlikely source: They based their model on one used to understand the unpredictability of a bouncing ball.

The study was published in the print issue of Biosystems Engineering in June.

"As most farm tractors don't have suspension systems, violent vibrations can occur in the tractor and the wheels sometimes depart from the ground in what is called a 'bouncing' phenomenon," said Kenshi Sakai, professor of environmental and agricultural engineering in the Institute of Agricultural Science at Tokyo University of Agriculture and Technology (TUAT). "Standard models of farm tractors have not accounted for this bouncing process."

Bouncing becomes particularly serious when tractors run on steep slopes, such as from a paddy field to a farm road, according to Sakai. If the road slopes more than 18 degrees, the risk of the wheels leaving the road increases significantly. If the front wheels leave the road, the tractor bounces and can cause the back wheels to leave the road as well, leading to overturning.

By using the same dynamics modeling system to understand how a ball bounces up from a surface and returns to a surface, the researchers examined how a tractor might respond to various bouncing scenarios through computer simulations. They also ran simulations based on several real accidents that occurred in Japan.

The researchers found that bouncing tractors behave in the same way as bouncing balls: in a nonlinear manner. According to Sakai, the unpredictability must be accounted for in behavior dynamics modelling to predict how a vehicle might respond and, eventually, incorporate a solution to compensate for the bouncing to prevent overturning.
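The press release does not give the equations of the TUAT model, but the bouncing-ball dynamics it builds on can be illustrated with the textbook system of a ball hitting a vibrating surface and rebounding with a coefficient of restitution, whose impact times and rebound speeds quickly become hard to predict. The Python sketch below simulates that generic system; the parameters are arbitrary and it is not the tractor model itself.

# Textbook bouncing-ball dynamics (a ball dropped onto a sinusoidally vibrating
# surface, rebounding with a coefficient of restitution), shown only to
# illustrate the nonlinear impact behaviour that the tractor model builds on.
# This is NOT the TUAT tractor model; all parameters are arbitrary.
import math

G = 9.81                 # gravity, m/s^2
E = 0.85                 # coefficient of restitution (energy lost per impact)
A, OMEGA = 0.01, 30.0    # surface vibration amplitude (m) and frequency (rad/s)

def surface_height(t):
    return A * math.sin(OMEGA * t)

def simulate(t, y, v, n_impacts=10, dt=1e-5):
    """Step the ball forward and record the time and speed of each impact."""
    impacts = []
    while len(impacts) < n_impacts:
        t += dt
        v -= G * dt
        y += v * dt
        if y <= surface_height(t):                  # contact with the surface
            surface_v = A * OMEGA * math.cos(OMEGA * t)
            v = surface_v - E * (v - surface_v)     # restitution relative to the moving surface
            y = surface_height(t)
            impacts.append((t, v))
    return impacts

for t_imp, v_imp in simulate(t=0.0, y=0.2, v=0.0):
    print(f"impact at t = {t_imp:.3f} s, rebound speed = {v_imp:.3f} m/s")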

"To describe such serious, violent and unexpected dynamics behavior occurring in farm tractors and vehicles, it is essential to implement the bouncing process in the model," Sakai said. "Our proposed model could be a new paradigm in farm tractor modelling."

"We are expanding the proposed tractor bouncing model to describe most of the typical cases of overturning accidents," Sakai said. "Our ultimate goal is to develop a farm tractor driving simulator which can realize such serious accidents in virtual space for training and education to reduce the fatal accidents caused by farm tractors.

Credit: 
Tokyo University of Agriculture and Technology

Researchers home in on extremely rare nuclear process

image: Part of the EXO-200 underground detector, which searched for a hypothetical nuclear decay that could reveal how neutrinos acquire their incredibly small mass.

Image: 
EXO-200 Collaboration

A hypothetical nuclear process known as neutrinoless double beta decay ought to be among the least likely events in the universe. Now the international EXO-200 collaboration, which includes researchers from the Department of Energy's SLAC National Accelerator Laboratory, has determined just how unlikely it is: In a given volume of a certain xenon isotope, it would take more than 35 trillion trillion years for half of its nuclei to decay through this process - an eternity compared to the age of the universe, which is "only" 13 billion years old.

If discovered, neutrinoless double beta decay would prove that neutrinos - highly abundant elementary particles with extremely small mass - are their own antiparticles. That information would help researchers determine how heavy neutrinos actually are and how they acquire their mass. Although the EXO-200 experiment did not observe the decay, its complete data set, published on the arXiv repository and accepted for publication in Physical Review Letters, defined some of the strongest limits yet for the decay's half-life and for the mass neutrinos may have.

EXO-200 operated at the Waste Isolation Pilot Plant (WIPP) in New Mexico from 2011 to 2018. Within its first months of operations, it discovered another rare process: the two-neutrino double beta decay of the same xenon isotope. EXO-200 was an important precursor for next-generation experiments, such as the proposed nEXO, that would have a much better chance of discovering the neutrinoless decay.

Credit: 
DOE/SLAC National Accelerator Laboratory

Aerosols from coniferous forests no longer cool the climate as much

Emissions of greenhouse gases have a warming effect on the climate, whereas small airborne particles in the atmosphere, aerosols, act as a cooling mechanism. That is the received wisdom, in any case. However, new research from Lund University in Sweden can now show that the tiniest aerosols are increasing at the expense of normal-sized and slightly larger aerosols - and it is only the latter that have a cooling effect.

The air is full of small airborne particles - aerosols. Some are naturally produced, while others are caused by humankind's combustion of fuel. Some are harmful to our health, while others reflect sunlight.

One of the important natural sources of aerosols is the fragrant terpenes released by coniferous forests. For example, the boreal coniferous forest, the taiga, which stretches like a ribbon around the whole world, accounts for 14 per cent of the world's vegetation cover and is thus the world's largest coherent land ecosystem.

Through chemical reactions with ozone in the atmosphere, the terpenes are transformed into highly oxygenated organic molecules which stick to aerosol particles that are already in the air. This leads to more cloud droplets, as each cloud droplet is formed through water vapour condensing on a sufficiently large aerosol particle. More cloud droplets lead to denser clouds and reduced insolation.

However, the new study published in Nature Communications shows that this "coniferous forest effect" has diminished due to industrialisation.

Emissions of ammonia from agriculture and sulphur dioxide from fossil fuels change the rules of the game: the terpenes as well as other organic molecules are instead divided into many more, but smaller, aerosol particles. As the diameter of very tiny aerosols is smaller than the wavelength of light, the particles are unable to reflect light.

Although sulphur dioxide and ammonia are gases, they generate new particles via chemical reactions in the atmosphere.

"Paradoxically, a larger number of aerosol particles can lead to the cooling effect from the organic molecules released from the forests being reduced or even eliminated", says Pontus Roldin, researcher in nuclear physics at Lund University in Sweden and first author of the article.

Together with an international research team he developed a model that for the first time reveals the process behind new particle formation of these aerosols.

"The heavily oxidised organic molecules have a significant cooling effect on the climate. With a warmer climate it's expected that forests will release more terpenes and thus create more cooling organic aerosols. However, the extent of that effect also depends on the emission volumes of sulphur dioxide and ammonia in the future. It's very clear, though, that this increase in organic aerosols cannot by any means compensate for the warming of the climate caused by our emissions of greenhouse gases," says Pontus Roldin.

This study can help to reduce uncertainty surrounding aerosol particles' effect on clouds and the climate.

There has already been a considerable reduction of sulphur dioxide emissions in Europe and the USA since the 1980s, and steps in the right direction have now also been noted in China. However, those reductions have been driven mainly by the acidification problems observed in lakes and forests and by concerns about air pollution.

"Relatively simple technical solutions are required to reduce sulphur dioxide, for example, cleaning of exhaust gases from ships and coal-fired power plants etc. It's much harder to reduce ammonia, as it's released directly from animals and when soil is fertilised", says Pontus Roldin.

It is estimated that global meat production will rise considerably in the future as prosperity increases in poorer countries, mainly in Asia. The consequences of these changes are not yet known, but estimating them requires detailed models like the one that has now been developed.

In the next few years, Pontus Roldin will work within a research project that will contribute knowledge to next generation climate models, such as EC-Earth.

"We already know that the forest is a significant carbon sink. However, other factors, such as the cooling effect of aerosols, types of vegetation and emissions, affect the climate. Hopefully, our results can contribute to a more complete understanding of how forests and climate interact", concludes Pontus Roldin.

Credit: 
Lund University

Mosquito eye inspires artificial compound lens (video)

image: Researchers have developed compound lenses inspired by the mosquito eye.

Image: 
American Chemical Society

Anyone who's tried to swat a pesky mosquito knows how quickly the insects can evade a hand or fly swatter. The pests' compound eyes, which provide a wide field of view, are largely responsible for these lightning-fast actions. Now, researchers reporting in ACS Applied Materials & Interfaces have developed compound lenses inspired by the mosquito eye that could someday find applications in autonomous vehicles, robots or medical devices.

Compound eyes, found in most arthropods, consist of many microscopic lenses organized on a curved array. Each tiny lens captures an individual image, and the mosquito's brain integrates all of the images to achieve peripheral vision without head or eye movement. The simplicity and multifunctionality of compound eyes make them good candidates for miniaturized vision systems, which could be used by drones or robots to rapidly image their surroundings. Joelle Frechette and colleagues wanted to develop a liquid manufacturing process to make compound lenses with most of the features of the mosquito eye.

To make each microlens, the researchers used a capillary microfluidic device to produce oil droplets surrounded by silica nanoparticles. Then, they organized many of these microlenses into a closely packed array around a larger oil droplet. They polymerized the structure with ultraviolet light to yield a compound lens with a viewing angle of 149 degrees, similar to that of the mosquito eye. The silica nanoparticles coating each microlens had antifogging properties, reminiscent of nanostructures on mosquito eyes that allow the insect organs to function in humid environments. The researchers could move, deform and relocate the fluid lenses, allowing them to create arrays of compound lenses with even greater viewing capabilities.

Credit: 
American Chemical Society

Kids in poor, urban schools learn just as much as others

COLUMBUS, Ohio -- Schools serving disadvantaged and minority children teach as much to their students as those serving more advantaged kids, according to a new nationwide study.

The results may seem surprising, given that student test scores are normally higher in suburban and wealthier school districts than they are in urban districts serving mostly disadvantaged and minority children.

But those test scores speak more to what happens outside the classroom than to how schools themselves are performing, said Douglas Downey, lead author of the new study and professor of sociology at The Ohio State University.

"We found that if you look at how much students are learning during the school year, the difference between schools serving mostly advantaged students and those serving mostly disadvantaged students is essentially zero," Downey said.

"Test scores at one point in time are not a fair way to evaluate the impact of schools."

Downey conducted the study with David Quinn of the University of Southern California and Melissa Alcaraz, a doctoral student in sociology at Ohio State. Their study was published online recently in the journal Sociology of Education and will appear in a future print edition.

Many school districts have moved away from evaluating schools by test scores, instead using a "growth" or "value-added" measure to see how much students learn over a calendar year.

While these growth models are considered by the researchers to be a big improvement over using test scores at one point in time, they still don't account for the summers, during which kids from advantaged areas don't backtrack in their learning the way children from disadvantaged areas often do.

This "summer loss" for disadvantaged students isn't surprising, given the difficulties they face with issues like family instability and food insecurity, Downey said.

"What is remarkable is not what happens in summer, but what happens when these disadvantaged students go back to school: The learning gap essentially disappears. They tend to learn at the same rate as those from the wealthier, suburban schools," he said.

"That is shocking to a lot of people who just assume that schools in disadvantaged areas are not as good."

Downey and his colleagues used data from the Early Childhood Longitudinal Study - Kindergarten Cohort 2010-2011, which involved more than 17,000 students in 230 schools around the country. This study used a subsample of about 3,000 of the children who participated.

Children took reading tests at the beginning and end of kindergarten and near completion of their first and second grades.

That allowed the researchers to calculate how much children learned during three school periods and compare that to what happened during the summers.

This approach is similar to how new drugs are sometimes tested in medical research, Downey explained. In drug trials, researchers compare how patients fare while they are taking a drug to when they are not.

"In our case, we think of schools as the treatment and the summers as the control period when the students aren't receiving treatment," he said.

The results showed that children in schools serving disadvantaged students, on average, saw their reading scores rise about as much during the school year as did those in more advantaged schools.

That doesn't mean all schools were equally good, Downey said. But the findings showed that the "good" schools weren't all concentrated in the wealthier areas and the "bad" schools in the poor areas.

Downey said there are limitations to this study, most importantly that the data doesn't allow researchers to see what happens to students in later grades.

But this isn't the first time Downey and his colleagues have found that schools in advantaged and disadvantaged areas produce similar learning. A 2008 study, also published in the Sociology of Education, found similar results, but with less comprehensive data than this new research.

Downey said he has been somewhat surprised that the 2008 study and this new research haven't engaged education researchers more.

"The field has not responded as energetically as I expected. I think our findings undermine a lot of social science assumptions about what role schools play in promoting disadvantage," he said.

Instead of being "engines of inequality" - as some have argued - this new research suggests schools are neutral or even slightly compensate for inequality elsewhere.

Disadvantaged kids start with poorer home environments and neighborhoods and begin school behind students who come from wealthier backgrounds, Downey said.

"But when they go to school they stop losing ground. That doesn't agree with the traditional story about how schools supposedly add to inequality," he said.

"We are probably better off putting more energy toward addressing the larger social inequalities that are producing these large gaps in learning before kids even enter school."

Downey emphasized that the results don't mean that school districts don't need to invest in disadvantaged schools.

"As it stands, schools mostly prevent inequality from increasing while children are in school," he said.

"With more investments, it may be possible to create schools that play a more active role in reducing inequality."

Credit: 
Ohio State University

Machine learning finds new metamaterial designs for energy harvesting

image: An illustration of a dielectric metamaterial with infrared light shining on it.

Image: 
Willie Padilla, Duke University

DURHAM, N.C. -- Electrical engineers at Duke University have harnessed the power of machine learning to design dielectric (non-metal) metamaterials that absorb and emit specific frequencies of terahertz radiation. The design technique changed what could have been more than 2,000 years of calculation into 23 hours, clearing the way for the design of new, sustainable types of thermal energy harvesters and lighting.

The study was published online on September 16 in the journal Optics Express.

Metamaterials are synthetic materials composed of many individual engineered features, which together produce properties not found in nature through their structure rather than their chemistry. In this case, the terahertz metamaterial is built up from a two-by-two grid of silicon cylinders resembling a short, square Lego.

Adjusting the height, radius and spacing of each of the four cylinders changes the frequencies of light the metamaterial interacts with.

Calculating these interactions for an identical set of cylinders is a straightforward process that can be done by commercial software. But working out the inverse problem of which geometries will produce a desired set of properties is a much more difficult proposition.

Because each cylinder creates an electromagnetic field that extends beyond its physical boundaries, they interact with one another in an unpredictable, nonlinear way.

"If you try to build a desired response by combining the properties of each individual cylinder, you're going to get a forest of peaks that is not simply a sum of their parts," said Willie Padilla, professor of electrical and computer engineering at Duke. "It's a huge geometrical parameter space and you're completely blind -- there's no indication of which way to go."

One way to find the correct combination would be to simulate every possible geometry and choose the best result. But even for a simple dielectric metamaterial where each of the four cylinders can have only 13 different radii and heights, there are 815.7 million possible geometries. Even on the best computers available to the researchers, it would take more than 2,000 years to simulate them all.
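The 815.7 million figure follows directly from the combinatorics: 13 possible radii and 13 possible heights give 169 configurations per cylinder, and four independently adjustable cylinders give 169 to the fourth power, as the short calculation below confirms.

# Where the "815.7 million" figure comes from: each of the four cylinders can
# take 13 radii and 13 heights independently.
radius_options = height_options = 13
per_cylinder = radius_options * height_options    # 169 combinations per cylinder
total_geometries = per_cylinder ** 4              # four interacting cylinders
print(total_geometries)                           # 815730721, i.e. ~815.7 million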

To speed up the process, Padilla and his graduate student Christian Nadell turned to machine learning expert Jordan Malof, assistant research professor of electrical and computer engineering at Duke, and Ph.D. student Bohao Huang.

Malof and Huang created a type of machine learning model called a neural network that can effectively perform simulations orders of magnitude faster than the original simulation software. The network takes 24 inputs -- the height, radius and radius-to-height ratio of each cylinder -- assigns random weights and biases throughout its calculations, and spits out a prediction of what the metamaterial's frequency response spectrum will look like.

First, however, the neural network must be "trained" to make accurate predictions.

"The initial predictions won't look anything like the actual correct answer," said Malof. "But like a human, the network can gradually learn to make correct predictions by simply observing the commercial simulator. The network adjusts its weights and biases each time it makes a mistake and does this repeatedly until it produces the correct answer every time."

To maximize the accuracy of the machine learning algorithm, the researchers trained it with 18,000 individual simulations of the metamaterial's geometry. While this may sound like a large number, it actually represents just 0.0022 percent of all the possible configurations. After training, the neural network can produce highly accurate predictions in just a fraction of a second.
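The press release does not describe the network's architecture, but the general shape of such a surrogate model (a small fully connected network mapping the 24 geometric inputs to a sampled frequency-response spectrum) can be sketched as follows. The layer sizes, the 300-point spectrum, the random stand-in training data and the training loop are illustrative assumptions, not the Duke team's code.

# Illustrative surrogate model only: a small fully connected network mapping
# the 24 geometric parameters to a sampled frequency-response spectrum.
# Layer sizes, the 300-point spectrum and the random stand-in data are
# assumptions for this sketch, not the architecture or data used at Duke.
import torch
import torch.nn as nn

N_INPUTS, N_SPECTRUM = 24, 300   # 24 geometry parameters -> spectrum samples

surrogate = nn.Sequential(
    nn.Linear(N_INPUTS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_SPECTRUM),
)

# Stand-in for the 18,000 simulated geometries and their spectra (in the real
# workflow these training pairs come from the commercial electromagnetic solver).
geometries = torch.rand(18_000, N_INPUTS)
spectra = torch.rand(18_000, N_SPECTRUM)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):                        # a few passes for illustration
    for i in range(0, len(geometries), 256):   # mini-batches of 256 samples
        x, y = geometries[i:i + 256], spectra[i:i + 256]
        optimizer.zero_grad()
        loss = loss_fn(surrogate(x), y)
        loss.backward()
        optimizer.step()

# Once trained, a single forward pass predicts a spectrum in a fraction of a second.
predicted_spectrum = surrogate(torch.rand(1, N_INPUTS))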

Even with this success in hand, however, it still only solved the forward problem of producing the frequency response of a given geometry, which they could already do. To solve the inverse problem of matching a geometry to a given frequency response, the researchers returned to brute strength.

Because the machine learning algorithm is nearly a million times faster than the modeling software used to train it, the researchers simply let it solve every single one of the 815.7 million possible permutations. The machine learning algorithm did it in only 23 hours rather than thousands of years.

After that, a search algorithm could match any given desired frequency response to the library of possibilities created by the neural network.

"We're not necessarily experts on that, but Google does it every day," said Padilla. "A simple search tree algorithm can go through 40 million graphs per second."

The researchers then tested their new system to make sure it worked. Nadell hand drew several frequency response graphs and asked the algorithm to pick the metamaterial setup that would best produce each one. He then ran the answers produced through the commercial simulation software to see if they matched up well.

They did.

With the ability to design dielectric metamaterials in this way, Padilla and Nadell are working to engineer a new type of thermophotovoltaic device, which creates electricity from heat sources. Such devices work much like solar panels, except they absorb specific frequencies of infrared light instead of visible light.

Current technologies radiate infrared light in a much wider frequency range than can be absorbed by the infrared solar cell, which wastes energy. A carefully engineered metamaterial tuned to that specific frequency, however, can emit infrared light in a much narrower band.

"Metal-based metamaterials are much easier to tune to these frequencies, but when metal heats up to the temperatures required in these types of devices, they tend to melt," said Padilla. "You need a dielectric metamaterial that can withstand the heat. And now that we have the machine learning piece, it looks like this is indeed achievable."

Credit: 
Duke University

Portable electronics: a stretchable and flexible biofuel cell that runs on sweat

image: Study of the biofuel cell's mechanical and electrochemical resistance under 20% stretching in 2D directions.

Image: 
Xiaohong Chen, Département de chimie moléculaire (CNRS/Université Grenoble Alpes)

A unique new flexible and stretchable device, worn against the skin and capable of producing electrical energy by transforming the compounds present in sweat, was recently developed and patented by CNRS researchers from l'Université Grenoble Alpes and the University of San Diego (USA). This cell is already capable of continuously lighting an LED, opening new avenues for the development of wearable electronics powered by autonomous and environmentally friendly biodevices. This research was published in Advanced Functional Materials on September 25, 2019.

The potential uses for wearable electronic devices continue to increase, especially for medical and athletic monitoring. Such devices require the development of a reliable and efficient energy source that can easily be integrated into the human body. Using "biofuels" present in human organic liquids has long been a promising avenue.

Scientists from the Département de chimie moléculaire (CNRS/Université Grenoble Alpes), who specialize in bioelectrochemistry, decided to collaborate with an American team from the University of San Diego in California, who are experts in nanomachines, biosensors, and nanobioelectronics. Together they developed a flexible conductive material consisting of carbon nanotubes, crosslinked polymers, and enzymes joined by stretchable connectors that are directly printed onto the material through screen-printing.

The biofuel cell, which follows deformations in the skin, produces electrical energy through the reduction of oxygen and the oxidation of the lactate present in perspiration. Once applied to the arm, it uses a voltage booster to continuously power an LED. It is relatively simple and inexpensive to produce, with the primary cost being the production of the enzymes that transform the compounds found in sweat. The researchers are now seeking to amplify the voltage provided by the biofuel cell in order to power larger portable devices.

Credit: 
CNRS

Personalized wellness: Can science keep up with tech innovations and consumer demands?

image: Personalized Wellness: Can Science Keep Up with Tech Innovations and Consumer Demands?

Image: 
Padilla

CHICAGO (September 25, 2019) - The desire for personalization is permeating all aspects of life, and there is nothing more personal than decisions about health and wellness. As consumers increasingly seek products and services tailored to the individual level, personalized wellness can include everything from genetics-driven diet plans to digital disease management. Business opportunities abound in this ever-growing industry, but it is a high-stakes field: building a robust body of evidence to inform health and wellness products and services is non-negotiable. How can the field evolve so the science keeps pace with the new technology?

FoodMinds, a food and nutrition affairs company, addressed these complex topics in a new paper published in the peer-reviewed journal Nutrition Today. Titled "Personalized Wellness Past and Future: Will the Science and Technology Co-Evolve?", the paper captures the state of the field, posing thought-provoking questions about the science supporting the personalized wellness space and offering a look at the future of the sector.

"There is significant potential for companies and commodities to impact public health through product development, collaborations and integration with existing systems and services. We expect drastic growth in the next 3-5 years, including more AI-driven offerings as well as new modes of use, such as embeddables and ingestibles," said Ashley Desrosiers, M.S., R.D., vice president and Personalized Wellness Team lead. "FoodMinds can be a thought partner on the best approach for staking a claim in this evolving space."

FoodMinds partners with companies to help credibly define their position in target markets and support the co-evolution of personalized wellness science and technology. For companies developing or evolving their strategic approach to personalized wellness, FoodMinds delivers data-driven insights, research pipeline development, strategic communication programs, health professional and influencer engagement strategies, and regulatory landscape monitoring and analysis.

Credit: 
FoodMinds LLC