Tech

Ditching the car for walking or biking just one day a week cuts carbon footprint

Swapping the car for walking, cycling and e-biking even just one day a week makes a significant impact on personal carbon emissions in cities.

'Active transport' - cycling, e-biking or walking - can help tackle the climate crisis, according to a new study led by the University of Oxford's Transport Studies Unit and including researchers from Imperial's Centre for Environmental Policy as part of the EU-funded project PASTA: Physical Activity Through Sustainable Transport Approaches.

Meeting greenhouse gas emissions reduction targets requires a significant move away from motorised transport. The team found that shifting to active transport could save as much as a quarter of personal carbon dioxide (CO2) emissions from transport.

Published in the journal Global Environmental Change, this is the first study of the carbon-reducing impact of city-based lifestyle changes, and reveals that increases in active mobility significantly lower carbon footprints, even in European cities that already have a high incidence of walking and cycling.

Co-author Dr Audrey de Nazelle, from the Centre for Environmental Policy at Imperial, said: "Our findings suggest that, even if not all car trips could be substituted by bicycle trips, the potential for decreasing emissions is huge.

"This is one more piece of evidence on the multiple benefits of active travel, alongside our previous studies showing cycling is the best way to get around cities for both physical and mental health, and that promoting cycling helps tackle obesity. This should encourage different sectors to work together to create desirable futures from multiple health, environmental and social perspectives."

Small swaps, big impact

The study followed nearly 2,000 people in seven European cities (Antwerp, Belgium; Barcelona, Spain; London, UK; Orebro, Sweden; Rome, Italy; Vienna, Austria; Zurich, Switzerland), collecting data on daily travel behaviour and journey purpose, as well as information on home and work or study locations, access to public transport, and socio-economic factors.

The team performed statistical modelling of the data to assess how changes in active mobility, the 'main mode' of daily travel, and cycling frequency influenced mobility-related CO2 emissions over time and space.

Lead researcher Dr Christian Brand, from the University of Oxford, said: "We found that those who switch just one trip per day from car driving to cycling reduce their carbon footprint by about 0.5 tonnes over a year, representing a substantial share of average per capita CO2 emissions.

"If just 10% of the population were to change travel behaviour, the emissions savings would be around 4% of lifecycle CO2 emissions from all car travel."

The largest benefits from shifts from car to active travel were for business travel, followed by social and leisure trips, and commuting to work or place of study. These results also showed that those who already cycled had 84% lower CO2 emissions from all daily travel than non-cyclists.

Doing something about climate change

For the cities in this study, average per capita (per person) CO2 emissions per year from transport (excluding international flights and shipping) ranged from 1.8 tonnes in the UK to 2.7 tonnes in Austria. According to the Global Carbon Atlas, average per capita CO2 emissions from all activities were eight tonnes per year in the UK.
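
As a rough consistency check (illustrative arithmetic only, not a re-analysis of the study data), the 0.5-tonne annual saving quoted above works out to roughly a fifth to a quarter of those per-capita transport emissions, and to a daily car trip of a few kilometres if one assumes a typical emission factor of around 0.2 kg of CO2 per kilometre - an assumed figure, not one taken from the study:

    # Illustrative arithmetic only, using figures quoted in the article plus one
    # assumed emission factor (0.2 kg CO2/km); not a re-analysis of the study.
    saving = 0.5                       # tonnes CO2 saved per person per year
    transport_per_capita = (1.8, 2.7)  # tonnes CO2/year, range across study countries

    for total in transport_per_capita:
        print(f"{saving / total:.0%} of {total} t of annual transport emissions")
    # -> roughly 19-28%, in line with 'as much as a quarter' quoted earlier

    per_trip_kg = saving * 1000 / 365            # ~1.4 kg CO2 avoided per daily trip
    print(f"equivalent to a car trip of ~{per_trip_kg / 0.2:.0f} km per day")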

Dr Brand said: "A typical response to the climate crisis is to 'do something', such as planting more trees, or switching to electric vehicles. While these are important and effective, they are neither sufficient nor fast enough to meet our ambitious climate targets.

"Doing more of a good thing combined with doing less of a bad thing - and doing it now - is much more compliant with a 'net zero' pathway and preserving our planet's and our own futures. Switching from car to active mobility is one thing to do, which would make a real difference, and we show here how good this can be in cities."

Multiple benefits

The team say this will not only be good for the climate, but also for reducing social inequalities and improving public health and quality of urban life in a post-COVID-19 world.

Dr de Nazelle said: "To improve active travel take-up, cities across the world will need to increase investment in high-quality infrastructure for pedestrians and cyclists and incorporate policy and planning concepts that require a fairly radical rethink of our cities.

"This is in turn likely to reduce inequalities, because the concepts involve mixing different population groups rather than maintaining the model of residential zoning by socioeconomic status currently used."

Credit: 
Imperial College London

The quantum advantage: a novel demonstration

Is a quantum machine really more efficient than a conventional machine for performing calculations? Demonstrating this 'advantage' experimentally is particularly complex and a major research challenge around the world. Scientists from the CNRS, the University of Edinburgh (Scotland) and QC Ware Corp. (France and USA) have just proved that a quantum machine can perform a given verification task in seconds when the same exercise would take a time equivalent to the age of the universe for a conventional computer. For this demonstration, they combined a complex interactive algorithm that solves a certain type of mathematical problem with limited information and a simple experimental photonics system that can be made in all advanced photonics laboratories. Their work was published on 8 February 2021 in Nature Communications.

Credit: 
CNRS

'Multiplying' light could be key to ultra-powerful optical computers

image: Schematic of light pulse interactions as the proposed optical computer solves higher order binary optimisation problems. The light phases coming from several light pulses combine to change the phases of each light pulse until the solution is found.

Image: 
Gleb Berloff

An important class of challenging computational problems, with applications in graph theory, neural networks, artificial intelligence and error-correcting codes, can be solved by multiplying light signals, according to researchers from the University of Cambridge and Skolkovo Institute of Science and Technology in Russia.

In a paper published in the journal Physical Review Letters, they propose a new type of computation that could revolutionise analogue computing by dramatically reducing the number of light signals needed while simplifying the search for the best mathematical solutions, allowing for ultra-fast optical computers.

Optical or photonic computing uses photons produced by lasers or diodes for computation, as opposed to classical computers which use electrons. Since photons are massless and can travel faster than electrons, an optical computer would be superfast, energy-efficient and able to process information simultaneously through multiple temporal or spatial optical channels.

The computing element in an optical computer - an alternative to the ones and zeroes of a digital computer - is represented by the continuous phase of the light signal, and the computation is normally achieved by adding two light waves coming from two different sources and then projecting the result onto '0' or '1' states.

However, real life presents highly nonlinear problems, where multiple unknowns simultaneously change the values of other unknowns while interacting multiplicatively. In this case, the traditional approach to optical computing that combines light waves in a linear manner fails.

Now, Professor Natalia Berloff from Cambridge's Department of Applied Mathematics and Theoretical Physics and PhD student Nikita Stroev from Skolkovo Institute of Science and Technology have found that optical systems can combine light by multiplying the wave functions describing the light waves instead of adding them, and may represent a different type of connection between the light waves.

They illustrated this phenomenon with quasi-particles called polaritons - which are half-light and half-matter - while extending the idea to a larger class of optical systems such as light pulses in a fibre. Tiny pulses or blobs of coherent, superfast-moving polaritons can be created in space and overlap with one another in a nonlinear way, due to the matter component of polaritons.

"We found the key ingredient is how you couple the pulses with each other," said Stroev. "If you get the coupling and light intensity right, the light multiplies, affecting the phases of the individual pulses, giving away the answer to the problem. This makes it possible to use light to solve nonlinear problems."

The multiplication of the wave functions to determine the phase of the light signal in each element of these optical systems comes from the nonlinearity that occurs naturally or is externally introduced into the system.

"What came as a surprise is that there is no need to project the continuous light phases onto '0' and '1' states necessary for solving problems in binary variables," said Stroev. "Instead, the system tends to bring about these states at the end of its search for the minimum energy configuration. This is the property that comes from multiplying the light signals. On the contrary, previous optical machines require resonant excitation that fixes the phases to binary values externally."

The authors have also suggested and implemented a way to guide the system trajectories towards the solution by temporarily changing the coupling strengths of the signals.

"We should start identifying different classes of problems that can be solved directly by a dedicated physical processor," said Berloff. "Higher-order binary optimisation problems are one such class, and optical systems can be made very efficient in solving them."

There are still many challenges to be met before optical computing can demonstrate its superiority in solving hard problems in comparison with modern electronic computers: noise reduction, error correction, improved scalability and guiding the system to the true best solution are among them.

"Changing our framework to directly address different types of problems may bring optical computing machines closer to solving real-world problems that cannot be solved by classical computers," said Berloff.

Credit: 
University of Cambridge

Machine learning could aid mental health diagnoses

A way of using machine learning to more accurately identify patients with a mix of psychotic and depressive symptoms has been developed by researchers at the University of Birmingham.

Patients with depression or psychosis rarely experience symptoms of purely one or the other illness. Historically, this has meant that mental health clinicians give a diagnosis of a 'primary' illness, but with secondary symptoms. Making an accurate diagnosis is a big challenge for clinicians and diagnoses often do not accurately reflect the complexity of individual experience or indeed neurobiology.

Clinicians diagnosing psychosis, for example, would frequently regard depression as a secondary illness, with implications for treatment decisions which focus more on psychosis symptoms (e.g. hallucinations or delusions).

A team at the University of Birmingham's Institute for Mental Health and Centre for Human Brain Health, working with researchers from the PRONIA consortium wanted to explore the possibility of using machine learning to create highly accurate models of 'pure' forms of both illnesses and to use these to investigate the diagnostic accuracy of a cohort of patients with mixed symptoms. Their results are published in Schizophrenia Bulletin.

"The majority of patients have co-morbidities, so people with psychosis also have depressive symptoms and vice versa", explains lead author Paris Alexandros Lalousis. "That presents a big challenge for clinicians in terms of diagnosing and then delivering treatments that are designed for patients without co-morbidity. It's not that patients are misdiagnosed, but the current diagnostic categories we have do not accurately reflect the clinical and neurobiological reality".

The researchers examined questionnaire responses, detailed clinical interviews and data from structural magnetic resonance imaging from a cohort of 300 patients taking part in the PRONIA study, a European Union-funded cohort study taking place across seven European research centres.

Within this cohort, the researchers identified small subgroups of patients who could be classified as suffering either from psychosis without any symptoms of depression, or from depression without any psychotic symptoms.

Using this data, the team identified machine learning models of 'pure' depression and 'pure' psychosis. They were then able to use machine learning methods to apply these models to patients with symptoms of both illnesses. The aim was to build a highly accurate disease profile for each patient and test that against their diagnosis to see how accurate it was.
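
The overall shape of such a workflow can be sketched schematically in Python. The example below uses synthetic data and a simple logistic-regression classifier purely for illustration; it is not the PRONIA pipeline, and the features, sample sizes and model choice are placeholders rather than details from the paper.

    # Schematic only: fit a classifier on 'pure' depression vs 'pure' psychosis
    # cases, then score patients with mixed symptoms along that dimension.
    # Synthetic data and placeholder feature counts; not the PRONIA pipeline.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_features = 20                    # stand-ins for clinical/imaging measures

    # synthetic 'pure' groups: label 0 = pure depression, 1 = pure psychosis
    X_pure = np.vstack([rng.normal(-0.5, 1.0, (60, n_features)),
                        rng.normal(+0.5, 1.0, (60, n_features))])
    y_pure = np.array([0] * 60 + [1] * 60)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_pure, y_pure)

    # patients with mixed (comorbid) symptoms, scored on the learned dimension
    X_mixed = rng.normal(0.0, 1.0, (10, n_features))
    p_psychosis = model.predict_proba(X_mixed)[:, 1]
    print("psychosis-vs-depression scores:", np.round(p_psychosis, 2))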

The team found that, while patients with depression as a primary illness were more likely to be diagnosed accurately, patients with psychosis with depression had symptoms which most frequently tended towards the depression dimension. This may indicate that depression plays a greater part in the illness than had previously been thought.

Mr Lalousis added: "There is a pressing need for better treatments for psychosis and depression, conditions which constitute a major mental health challenge worldwide. Our study highlights the need for clinicians to understand better the complex neurobiology of these conditions, and the role of 'co-morbid' symptoms; in particular considering carefully the role that depression is playing in the illness".

"In this study we have shown how using sophisticated machine learning algorithms which take into account clinical, neurocognitive, and neurobiological factors can aid our understanding of the complexity of mental illness. In the future, we think machine learning could become a critical tool for accurate diagnosis. We have a real opportunity to develop data-driven diagnostic methods - this is an area in which mental health is keeping pace with physical health and it's really important that we keep up that momentum."

Credit: 
University of Birmingham

Halt cell recycling to treat cancer

image: Jun-Lin Guan, PhD, Francis Brunning Professor and Chair of UC's Department of Cancer Biology.

Image: 
University of Cincinnati

Recycling cans and bottles is a good practice. It helps keep the planet clean.

The same is true for recycling within cells in the body. Each cell has a way of cleaning out waste in order to regenerate newer, healthier cells. This "cell recycling" is called autophagy.

Targeting and changing this process has been linked to helping control or diminish certain cancers. Now, University of Cincinnati researchers have shown that completely halting this process in a very aggressive form of breast cancer may improve outcomes for patients one day.

These results are published in the Feb. 8 print edition of the journal Developmental Cell.

"Autophagy is sort of like cell cannibalism," says corresponding author Jun-Lin Guan, PhD, Francis Brunning Professor and Chair of UC's Department of Cancer Biology. "They eat the nasty components of themselves and come out strong and undamaged; however, we do not want cancer cells doing this to create stronger, healthier versions of themselves. Previous studies found that disabling this process slowed down the growth of another type of breast cancer, but it was unknown whether blocking autophagy could be beneficial for a particularly aggressive type of breast cancer, known as HER2-positive breast cancer."

This type of breast cancer grows rapidly, and while there are effective treatments, unfortunately, these particular cancer cells find a way to become resistant to therapy, leading to relapse and a higher death rate in patients.

Researchers in this study used animal models to show that blocking autophagy eliminated the development and growth of this type of breast cancer "even to a greater extent than our previous studies in other types of breast cancer," says Guan, also a member of the UC Cancer Center.

He adds that researchers also uncovered that by blocking this activity, they were able to affect other activities and mechanisms within the cancer cells, completely changing their roles and reactions.

"It altered trafficking patterns of the HER2 protein after it is produced by the cancer cells," he continues. "Instead of being put in its 'normal' location on the cell surface to cause cancer development, it is incorporated into some small fluid-filled pouches, known as vesicles, and secreted out of the tumor cells."

Guan says these findings are particularly important as they show a completely different way to potentially treat this type of breast cancer and may work as a combination therapy with current treatments to prevent resistance and relapse.

"It would be harder for the cancer cells to develop ways to evade two different ways to be blocked," he adds. "Future clinical studies will be needed to validate the treatment in human patients. Also, the HER2 protein plays a role in several other cancers including lung, gastric [stomach] and prostate cancers, so future studies will need to examine whether this new mechanism may also be beneficial in treating those cancers as well.

"This study really shows the value of basic research in beating cancer in the future. Breakthroughs, like this one, are sometimes made from curiosity-driven research that result in surprising findings that could one day help people."

Lead author on the study Mingang Hao, PhD, who is a postdoctoral fellow in Guan's lab, says he was handling two separate cancer research projects at the same time, but this study inspired findings for the other, which also involved vesicles or "bubbles" in cancer spread.

"Cancer research has so many intricate twists and turns, but so much of it can be interconnected, even in tiny ways," Hao says. "Working with the teams at UC has shown me some really innovative ways to tackle this disease, and I'm able to apply things I'm learning in one lab to research in another, to ultimately help find solutions for this terrible disease."

Co-author Kevin Turner, MD, a resident in the Department of Surgery at UC, says his work with this science helps him understand more about cancer development and spread to better treat patients.

"As a surgical resident planning to pursue a career in surgical oncology, having the opportunity to work in a science lab with Dr. Guan and his team has allowed me to develop a deeper understanding of the workings of a disease I have seen in my patients," he says. "I hope to continue studies on this as we work toward clinical trials and applying it in patients."

Credit: 
University of Cincinnati

A magnetic twist to graphene

image: Schematic of a valley-spiral in magnetically encapsulated twisted bilayer graphene.

Image: 
Jose Lado

Electrons in materials have a property known as 'spin', which is responsible for a variety of properties, the most well-known of which is magnetism. Permanent magnets, like the ones used for refrigerator doors, have all the spins in their electrons aligned in the same direction. Scientists refer to this behaviour as ferromagnetism, and the research field of trying to manipulate spin as spintronics.

Down in the quantum world, spins can arrange in more exotic ways, giving rise to frustrated states and entangled magnets. Interestingly, a property similar to spin, known as "the valley," appears in graphene materials. This unique feature has given rise to the field of valleytronics, which aims to exploit the valley property for emergent physics and information processing, very much like spintronics relies on pure spin physics.

'Valleytronics would potentially allow encoding information in the quantum valley degree of freedom, similar to how electronics do it with charge and spintronics with the spin,' explains Professor Jose Lado from Aalto's Department of Applied Physics, one of the authors of the work. 'What's more, valleytronic devices would offer a dramatic increase in processing speeds in comparison with electronics, and much higher stability towards magnetic field noise in comparison with spintronic devices.'

Structures made of rotated, ultra-thin materials provide a rich solid-state platform for designing novel devices. In particular, slightly twisted graphene layers have recently been shown to have exciting unconventional properties that can ultimately lead to a new family of materials for quantum technologies. These unconventional states, which are already being explored, depend on electrical charge or spin. The open question is whether the valley can also lead to its own family of exciting states.

Making materials for valleytronics

For this goal, it turns out that conventional ferromagnets play a vital role, pushing graphene to the realms of valley physics. In a recent work, Ph.D. student Tobias Wolf, together with Profs. Oded Zilberberg and Gianni Blatter at ETH Zurich, and Prof. Jose Lado at Aalto University, showed a new direction for correlated physics in magnetic van der Waals materials.

The team showed that sandwiching two slightly rotated layers of graphene between layers of a ferromagnetic insulator provides a unique setting for new electronic states. The combination of ferromagnets, graphene's twist engineering and relativistic effects forces the "valley" property to dominate the electrons' behaviour in the material. In particular, the researchers showed how these valley-only states can be tuned electrically, providing a materials platform in which valley-only states can be generated. Building on top of the recent breakthrough in spintronics and van der Waals materials, valley physics in magnetic twisted van der Waals multilayers opens the door to the new realm of correlated twisted valleytronics.

'Demonstrating these states represents the starting point towards new exotic entangled valley states,' said Professor Lado. 'Ultimately, engineering these valley states can allow realizing quantum entangled valley liquids and fractional quantum valley Hall states. These two exotic states of matter have not been found in nature yet, and would open exciting possibilities towards a potentially new graphene-based platform for topological quantum computing.'

Credit: 
Aalto University

'Magnetic graphene' forms a new kind of magnetism

image: Illustration of the magnetic structure of iron thiophosphate (FePS3), a two-dimensional material which undergoes a transition from an insulator to a metal when compressed.

Image: 
University of Cambridge

Researchers have identified a new form of magnetism in so-called magnetic graphene, which could point the way toward understanding superconductivity in this unusual type of material.

The researchers, led by the University of Cambridge, were able to control the conductivity and magnetism of iron thiophosphate (FePS3), a two-dimensional material which undergoes a transition from an insulator to a metal when compressed. This class of magnetic materials offers new routes to understanding the physics of new magnetic states and superconductivity.

Using new high-pressure techniques, the researchers have shown what happens to magnetic graphene during the transition from insulator to conductor and into its unconventional metallic state, realised only under ultra-high pressure conditions. When the material becomes metallic, it remains magnetic, which is contrary to previous results and provides clues as to how the electrical conduction in the metallic phase works. The newly discovered high-pressure magnetic phase likely forms a precursor to superconductivity, so understanding its mechanisms is vital.

Their results, published in the journal Physical Review X, also suggest a way that new materials could be engineered to have combined conduction and magnetic properties, which could be useful in the development of new technologies such as spintronics, which could transform the way in which computers process information.

Properties of matter can alter dramatically with changing dimensionality. For example, graphene, carbon nanotubes, graphite and diamond are all made of carbon atoms, but have very different properties due to their different structure and dimensionality.

"But imagine if you were also able to change all of these properties by adding magnetism," said first author Dr Matthew Coak, who is jointly based at Cambridge's Cavendish Laboratory and the University of Warwick. "A material which could be mechanically flexible and form a new kind of circuit to store information and perform computation. This is why these materials are so interesting, and because they drastically change their properties when put under pressure so we can control their behaviour."

In a previous study by Sebastian Haines of Cambridge's Cavendish Laboratory and the Department of Earth Sciences, researchers established that the material becomes a metal at high pressure, and outlined how the crystal structure and arrangement of atoms in the layers of this 2D material change through the transition.

"The missing piece has remained however, the magnetism," said Coak. "With no experimental techniques able to probe the signatures of magnetism in this material at pressures this high, our international team had to develop and test our own new techniques to make it possible."

The researchers used new techniques to measure the magnetic structure up to record-breaking high pressures, using specially designed diamond anvils and neutrons to act as the probe of magnetism. They were then able to follow the evolution of the magnetism into the metallic state.

"To our surprise, we found that the magnetism survives and is in some ways strengthened," co-author Dr Siddharth Saxena, group leader at the Cavendish Laboratory. "This is unexpected, as the newly-freely-roaming electrons in a newly conducting material can no longer be locked to their parent iron atoms, generating magnetic moments there - unless the conduction is coming from an unexpected source."

In their previous paper, the researchers showed these electrons were 'frozen' in a sense. But when they made them flow or move, they started interacting more and more. The magnetism survives, but gets modified into new forms, giving rise to new quantum properties in a new type of magnetic metal.

How a material behaves, whether conductor or insulator, is mostly based on how the electrons, or charge, move around. However, the 'spin' of the electrons has been shown to be the source of magnetism. Spin makes electrons behave a bit like tiny bar magnets and point a certain way. Magnetism from the arrangement of electron spins is used in most memory devices: harnessing and controlling it is important for developing new technologies such as spintronics, which could transform the way in which computers process information.

"The combination of the two, the charge and the spin, is key to how this material behaves," said co-author Dr David Jarvis from the Institut Laue-Langevin, France, who carried out this work as the basis of his PhD studies at the Cavendish Laboratory. "Finding this sort of quantum multi-functionality is another leap forward in the study of these materials."

"We don't know exactly what's happening at the quantum level, but at the same time, we can manipulate it," said Saxena. "It's like those famous 'unknown unknowns': we've opened up a new door to properties of quantum information, but we don't yet know what those properties might be."

There are more potential chemical compounds to synthesise than could ever be fully explored and characterised. But by carefully selecting and tuning materials with special properties, it is possible to show the way towards the creation of compounds and systems without having to apply huge amounts of pressure.

Additionally, gaining fundamental understanding of phenomena such as low-dimensional magnetism and superconductivity allows researchers to make the next leaps in materials science and engineering, with particular potential in energy efficiency, generation and storage.

As for the case of magnetic graphene, the researchers next plan to continue the search for superconductivity within this unique material. "Now that we have some idea what happens to this material at high pressure, we can make some predictions about what might happen if we try to tune its properties through adding free electrons by compressing it further," said Coak.

"The thing we're chasing is superconductivity," said Saxena. "If we can find a type of superconductivity that's related to magnetism in a two-dimensional material, it could give us a shot at solving a problem that's gone back decades."

Credit: 
University of Cambridge

Neural origins of alcoholism identified by British and Chinese researchers

image: The Medial Orbitofrontal Cortex - Dorsal Periaqueductal Gray top-down regulations are linked to impulsive and compulsive drinking.

Image: 
University of Warwick

A pathway in the brain where alcohol addiction first develops has been identified by a team of British and Chinese researchers in a new study

Could lead to more effective interventions when tackling compulsive and impulsive drinking

More than 3 million deaths every year are related to alcohol use globally, according to the World Health Organisation

The physical origin of alcohol addiction has been located in a network of the human brain that regulates our response to danger, according to a team of British and Chinese researchers, co-led by the University of Warwick, the University of Cambridge, and Fudan University in Shanghai.

The medial orbitofrontal cortex (mOFC) at the front of the brain senses an unpleasant or emergency situation, and then sends this information to the dorsal periaqueductal gray (dPAG) at the brain's core, the latter area processing whether we need to escape the situation.

A person is at greater risk of developing alcohol use disorders when this information pathway is imbalanced in the following two ways:

Alcohol inhibits the dPAG (the area of the brain that processes adverse situations), so that the brain cannot respond to negative signals, or the need to escape from danger -- leading a person to only feel the benefits of drinking alcohol, and not its harmful side effects. This is a possible cause of compulsive drinking.

A person with alcohol addiction will also generally have an over-excited dPAG, making them feel that they are in an adverse or unpleasant situation they wish to escape, and they will urgently turn to alcohol to do so. This is the cause of impulsive drinking.

Professor Jianfeng Feng, from the Department of Computer Science at the University of Warwick and who also teaches at Fudan University, comments:

"I was invited to comment on a previous study on mice for the similar purpose: to locate the possible origins of alcohol abuse. It is exciting that we can replicate these murine models in humans, and, of course, go a step further to identify a dual-pathway model that links alcohol abuse to a tendency to exhibit impulsive behaviour."

Professor Trevor Robbins from the Dept of Psychology at the University of Cambridge comments: "It is remarkable that these neural systems in the mouse concerned with responding to threat and punishment have been shown to be relevant to our understanding of the factors leading to alcohol abuse in adolescents."

Dr Tianye Jia from the Institute of Science and Technology for Brain-inspired Intelligence at Fudan University, also affiliated with King's College London, comments:

"We have found that the same neural top-down regulation could malfunction in two completely different ways, yet leading to similar alcohol abuse behaviour."

Published in the journal Science Advances, the research is led by an international collaboration, co-led by Dr Tianye Jia from Fudan University, Professor Jianfeng Feng from the University of Warwick and Fudan University, and Professor Trevor Robbins from the University of Cambridge and Fudan University.

The research team had noticed that previous rodent models showed that the mPFC and dPAG brain areas could underlie precursors of alcohol dependence.

They then analysed MRI brain scans from the IMAGEN dataset -- a group of 2000 individuals from the UK, Germany, France and Ireland who take part in scientific research to advance knowledge of how biological, psychological and environmental factors during adolescence may influence brain development and mental health.

The participants undertook task-based functional MRI scans, and when they did not receive rewards in the tasks (which produced negative feelings of punishment), regulation between the mOFC and dPAG was more strongly inhibited in participants who had exhibited alcohol abuse.

Equally, in a resting state, participants who demonstrated a more overexcited regulation pathway between the mOFC and dPAG (leading to feelings of urgently needing to escape a situation) also had increased levels of alcohol abuse.

Alcohol use disorder (AUD) is one of the most common and severe mental illnesses. According to a WHO report in 2018, more than 3 million deaths every year are related to alcohol use worldwide, and harmful alcohol use contributes to 5.1% of the global burden of disease. Understanding how alcohol addiction forms in the human brain could lead to more effective interventions to tackle the global problem of alcohol abuse.

Credit: 
University of Warwick

AI researchers ask: What's going on inside the black box?

image: Researchers can train artificial brain-like neural networks to classify images, such as cat pictures. Using a series of manipulated images, the scientists can figure out what part of the image--say the whiskers--is used to identify it as a cat. However, when the same technology is applied to DNA, researchers are not certain what parts of the sequence are important to the neural net. This unknown decision process is known as a "black box".

Image: 
Ben Wigler/CSHL, 2021

Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo and collaborator Matt Ploenzke reported a way to train machines to predict the function of DNA sequences. They used "neural nets", a type of artificial intelligence (AI) typically used to classify images. Teaching the neural net to predict the function of short stretches of DNA allowed it to work up to deciphering larger patterns. The researchers hope to analyze more complex DNA sequences that regulate gene activity critical to development and disease.

Machine-learning researchers can train a brain-like "neural net" computer to recognize objects, such as cats or airplanes, by showing it many images of each. Testing the success of training requires showing the machine a new picture of a cat or an airplane and seeing if it classifies it correctly. But, when researchers apply this technology to analyzing DNA patterns, they have a problem. Humans can't recognize the patterns, so they may not be able to tell if the computer identifies the right thing. Neural nets learn and make decisions independently of their human programmers. Researchers refer to this hidden process as a "black box". It is hard to trust the machine's outputs if we don't know what is happening in the box.

Koo and his team fed DNA (genomic) sequences into a specific kind of neural network called a convolutional neural network (CNN), which resembles how animal brains process images. Koo says:

"It can be quite easy to interpret these neural networks because they'll just point to, let's say, whiskers of a cat. And so that's why it's a cat versus an airplane. In genomics, it's not so straightforward because genomic sequences aren't in a form where humans really understand any of the patterns that these neural networks point to."

Koo's research, reported in the journal Nature Machine Intelligence, introduced a new method to teach important DNA patterns to one layer of his CNN. This allowed his neural network to build on the data to identify more complex patterns. Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.
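
For readers unfamiliar with the setup, the sketch below shows what a convolutional network over DNA sequences typically looks like: bases are one-hot encoded and first-layer filters scan the sequence much as image filters scan pictures, so they can act as motif detectors. It is a generic, minimal PyTorch example, not the published model; the filter sizes and output head are arbitrary choices for illustration.

    # Minimal, generic sketch (not the published model): a 1D convolutional
    # network over one-hot encoded DNA, whose first-layer filters can act as
    # motif detectors scanning along the sequence.
    import torch
    import torch.nn as nn

    BASES = "ACGT"

    def one_hot(seq):
        """Encode a DNA string as a (4, length) float tensor."""
        x = torch.zeros(4, len(seq))
        for i, b in enumerate(seq):
            x[BASES.index(b), i] = 1.0
        return x

    class DnaCNN(nn.Module):
        def __init__(self, n_filters=32, motif_len=12):
            super().__init__()
            self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len, padding="same")
            self.head = nn.Sequential(
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),   # strongest match of each filter anywhere in the sequence
                nn.Flatten(),
                nn.Linear(n_filters, 1),   # e.g. a predicted regulatory activity score
            )

        def forward(self, x):              # x: (batch, 4, sequence length)
            return self.head(self.conv(x))

    model = DnaCNN()
    batch = torch.stack([one_hot("ACGT" * 50) for _ in range(8)])  # (8, 4, 200)
    print(model(batch).shape)              # torch.Size([8, 1])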

But Koo has a larger purpose in mind for the field of artificial intelligence. There are two ways to improve a neural net: interpretability and robustness. Interpretability refers to the ability of humans to decipher why machines give a certain prediction. The ability to produce an answer even with mistakes in the data is called robustness. Usually, researchers focus on one or the other. Koo says:

"What my research is trying to do is bridge these two together because I don't think they're separate entities. I think that we get better interpretability if our models are more robust."

Koo hopes that if a machine can find robust and interpretable DNA patterns related to gene regulation, it will help geneticists understand how mutations affect cancer and other diseases.

Credit: 
Cold Spring Harbor Laboratory

Synchronization of brain hemispheres changes what we hear

How come we don't hear everything twice? After all, our ears sit on opposite sides of our head and most sounds do not reach both our ears at exactly the same time. "While this helps us determine which direction sounds are coming from, it also means that our brain has to combine the information from both ears. Otherwise, we would hear an echo," explains Basil Preisig of the Department of Psychology at the University of Zurich.

In addition, input from the right ear reaches the left brain hemisphere first, while input from the left ear reaches the right brain hemisphere first. The two hemispheres perform different tasks during speech processing: The left side is responsible for distinguishing phonemes and syllables, whereas the right side recognizes the speech prosody and rhythm. Although each hemisphere receives the information at a different time and processes different features of speech, the brain integrates what it hears into a unified speech sound.

Brain waves establish connection

The exact mechanism behind this integration process was not known until now. In earlier studies, however, Preisig had found indications that measurable oscillations elicited by the brain - known as gamma waves - played a role. Now he has managed to demonstrate that the process of integrating what we hear is directly linked to synchronization by gamma waves. Neurolinguists from UZH worked on the project alongside researchers from the Netherlands and France.

Processing ambiguous information

The study, which took place at the Donders Center for Cognitive Neuroimaging in Nijmegen, the Netherlands, involved 28 healthy subjects who had to repeatedly solve a listening task: An ambiguous syllable (a speech sound between ga and da) was played in their right ear while a click containing a fragment of the syllables da or ga was played unnoticed in the left ear. Depending on what was played in their left ear, the participants heard either ga or da and then had to report what sound they had heard. During the process, the researchers were tracking activity in both hemispheres of the brain using functional magnetic resonance imaging (fMRI).

Electric stimulation impairs synchronization

During the experiments, the researchers disrupted the natural activity pattern of gamma waves by stimulating both hemispheres of the brain with electrodes attached to the head. This manipulation affected participants' ability to correctly identify the syllable they heard. The fMRI analysis showed that there were also changes in the activity of the neural connections between the right and the left brain hemispheres: The strength of the connection changed depending on whether the rhythm of the gamma waves was influenced by electric stimulation in the two brain hemispheres synchronously or asynchronously. This disruption also impaired the integration process. Thus, synchronization of the gamma waves seems to serve to balance the different inputs from the two hemispheres of the brain, providing a unified auditory impression.

Possible therapy for tinnitus

"Our results suggest that gamma wave-mediated synchronization between different brain areas is a fundamental mechanism for neural integration," says Preisig. "Moreover, this research shows for the first time, using human hearing as an example, that the connection between the two hemispheres of the brain can be successfully modulated by electric stimulation," adds Alexis Hervais-Adelman, head of neurolinguistics at the UZH Department of Psychology, who was also involved in the study.

These findings could thus also find clinical application in the near future. "Previous studies show that disturbances in the connection between the two hemispheres of the brain are associated with auditory phantom perceptions such as tinnitus and auditory verbal hallucinations," Preisig adds. "Thus, electric brain stimulation may present a promising avenue for the development of therapeutic interventions."

Credit: 
University of Zurich

Combined bark beetle outbreaks and wildfire spell uncertain future for forests

image: A forest in the San Juan range of the Rocky Mountains, with dead Engelmann spruce trees alongside live aspen trees.

Image: 
Robert Andrus

Bark beetle outbreaks and wildfire alone are not a death sentence for Colorado's beloved forests--but when combined, their toll may become more permanent, shows new research from the University of Colorado Boulder.

It finds that when wildfire follows a severe spruce beetle outbreak in the Rocky Mountains, Engelmann spruce trees are unable to recover and grow back, while aspen tree roots survive underground. The study, published last month in Ecosphere, is one of the first to document the effects of bark beetle kill on high elevation forests' recovery from wildfire.

"The fact that Aspen is regenerating prolifically after wildfire is not a surprise," said Robert Andrus, who conducted this research while working on his PhD in physical geography at CU Boulder. "The surprising piece here is that after beetle kill and then wildfire, there aren't really any spruce regenerating."

Andrus' previous research found that bark beetle outbreaks are not a death sentence to Colorado forests--even after overlapping outbreaks with different kinds of beetles--and that spruce bark beetle infestations do not affect fire severity.

This new research, conducted in the San Juan range of the Rocky Mountains, shows that subalpine forests that have not been attacked by bark beetles will likely recover after wildfire. But for forests that suffer from a severe bark beetle outbreak followed by wildfire within about five years, conifers cannot mount a comeback. While these subalpine forests can often take a century to recover from fire, this research on short-term recovery is a good predictor of longer-term trends.

"This combination, the spruce beetle outbreak and the fire, can alter the trajectory of the forest to dominance by aspen," said Andrus, who is now a postdoctoral researcher at Washington State University.

For those worried about the future of Rocky Mountain forests farther north, more research is needed on areas burned in the 2020 East Troublesome Fire to understand how the mountain pine beetle outbreak prior to that fire will affect forest recovery, according to Andrus.

The next generation

Each bark beetle species specializes in attacking--and usually killing--a specific host tree species or closely related species. Several species of bark beetle are native to Colorado and usually exist at low abundances, killing only dying or weakened trees. But as the climate becomes hotter and drier, their populations can explode, causing outbreaks which kill large numbers of even the healthiest trees.

These large, healthy Engelmann spruce and subalpine fir trees are the ones that produce the most seeds. When bark beetles kill these trees and then fire sweeps in, the researchers found there simply aren't enough seeds being produced in the burned areas to regenerate the forest.

Aspens, however, regrow from their root systems. While all three of these higher elevation trees have thin bark and die when exposed to fire, with their regenerative roots underground, aspens can bounce back where conifers cannot.

The researchers focused specifically on areas of forest affected by spruce bark beetle outbreaks, which attack Engelmann spruce, where fires such as Papoose, West Fork and Little Sands burned in 2012 and 2013 in Rio Grande National Forest. They found that for forests that suffer from a severe bark beetle outbreak followed by wildfire within about five years, Engelmann spruce and subalpine fir trees failed to recover in 74% of the 45 sites sampled.

This information will help inform land managers and policy makers about the implications for high elevation forest recovery following a combination of stressors and events.

And it's more important information than ever. Not only do bark beetle outbreaks leave behind swaths of dead, dry trees--and fewer trees to produce seeds--but the climate is getting hotter and droughts are becoming more frequent, promoting larger fires.

"Bark beetle outbreaks have been killing lots and lots of trees throughout the western United States. And especially at higher elevation forests, what drives bark beetle outbreaks and what drives fire are similar conditions: generally warmer and drier conditions," said Andrus.

But there is good news: The aspens that may come to dominate these forests can anchor their recovery, and keep forests from transitioning into grasslands.

"Where the aspen are regenerating, we expect to see a forest in those areas," said Andrus.

Credit: 
University of Colorado at Boulder

New method developed for 'up-sizing' mini organs used in medical research

image: 3D projection of a multi-organoid aggregate

Image: 
University of Cambridge

A team of engineers and scientists has developed a method of 'multiplying' organoids: miniature collections of cells that mimic the behaviour of various organs and are promising tools for the study of human biology and disease.

The researchers, from the University of Cambridge, used their method to culture and grow a 'mini-airway', the first time that a tube-shaped organoid has been developed without the need for any external support.

Using a mould made of a specialised polymer, the researchers were able to guide the size and shape of the mini-airway, grown from adult mouse stem cells, and then remove it from the mould when it reached the point where it could support itself.

Whereas the organoids currently used in medical research are at the microscopic scale, the method developed by the Cambridge team could make it possible to grow life-sized versions of organs. Their results are reported in the journal Advanced Science.

Organoids are tiny, three-dimensional cell assemblies that mimic the cell arrangement of fully-grown organs. They can be a useful way to study human biology and how it can go wrong in various diseases, and possibly how to develop personalised or regenerative treatments. However, assembling them into larger organ structures remains a challenge.

Other research teams have experimented with 3D printing techniques to develop larger mini-organs, but these often require an external support structure.

"Mini-organs are very small and highly fragile," said Dr Yan Yan Shery Huang from Cambridge's Department of Engineering, who co-led the research. "In order to scale them up, which would increase their usefulness in medical research, we need to find the right conditions to help the cells self-organise."

Huang and her colleagues have proposed a new organoid engineering approach called Multi-Organoid Patterning and Fusion (MOrPF) to grow a miniature version of a mouse airway using stem cells. Using this technique, the scientists achieved faster assembly of organoids into airway tubes with uninterrupted passageways. The mini-airways grown using the MOrPF technique showed potential for scaling up to match living organ structures in size and shape, and retained their shape even in the absence of an external support.

The MOrPF technique involves several steps. First, a polymer mould - like a miniature version of a cake or jelly mould - is used to shape a cluster of many small organoids. The cluster is released from the mould after one day, and then grown for a further two weeks. The cluster becomes one single tubular structure, covered by an outer layer of airway cells. The moulding process is just long enough for the outer layer of the cells to form an envelope around the entire cluster. During the two weeks of further growth, the inner walls gradually disappear, leading to a hollow tubular structure.

"Gradual maturation of the cells is really important," said Dr Joo-Hyeon Lee from Cambridge's Wellcome - MRC Cambridge Stem Cell Institute, who co-led the research. "The cells need to be well-organised before we can release them so that the structures don't collapse."

The organoid cluster can be thought of like soap bubbles, initially packed together to conform to the shape of the mould. In order to fuse into a single gigantic bubble from the cluster of compressed bubbles, the inner walls need to be broken down. In the MOrPF process, the fused organoid clusters are released from the mould to grow in floating, scaffold-free conditions, so that the cells forming the inner walls of the fused cluster can be taken out of the cluster. The mould can be made into different sizes or shapes, so that the researchers can pre-determine the shape of the finished mini-organ.

"The interesting thing is, if you think about the soap bubbles, the resulting big bubble is always spherical, but the special mechanical properties of the cell membrane of organoids make the resulting fused shape preserve the shape of the mould," said co-author Professor Eugene Terentjev from Cambridge's Cavendish Laboratory.

The team say their method closely approximated the natural process of organ tube formation in some animal species. They are hopeful that their technique will help create biomimetic organs to facilitate medical research.

The researchers first plan to use their method to build a three-dimensional 'organ on a chip', which enables real-time continuous monitoring of cells, and could be used to develop new treatments for disease while reducing the number of animals used in research. Eventually, the technique could also be used with stem cells taken from a patient, in order to develop personalised treatments in future.

Credit: 
University of Cambridge

Geisinger researchers find AI can predict death risk

DANVILLE, Pa. - Researchers at Geisinger have found that a computer algorithm developed using echocardiogram videos of the heart can predict mortality within a year.

The algorithm--an example of what is known as machine learning, or artificial intelligence (AI)--outperformed other clinically used predictors, including pooled cohort equations and the Seattle Heart Failure score. The results of the study were published in Nature Biomedical Engineering.

"We were excited to find that machine learning can leverage unstructured datasets such as medical images and videos to improve on a wide range of clinical prediction models," said Chris Haggerty, Ph.D., co-senior author and assistant professor in the Department of Translational Data Science and Informatics at Geisinger.

Imaging is critical to treatment decisions in most medical specialties and has become one of the most data-rich components of the electronic health record (EHR). For example, a single ultrasound of the heart yields approximately 3,000 images, and cardiologists have limited time to interpret these images within the context of numerous other diagnostic data. This creates a substantial opportunity to leverage technology, such as machine learning, to manage and analyze this data and ultimately provide intelligent computer assistance to physicians.

For their study, the research team used specialized computational hardware to train the machine learning model on 812,278 echocardiogram videos collected from 34,362 Geisinger patients over the last ten years. The study compared the results of the model to cardiologists' predictions based on multiple surveys. A subsequent survey showed that when assisted by the model, cardiologists' prediction accuracy improved by 13 percent. With nearly 50 million images, the study draws on one of the largest medical image datasets ever published.
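
Those headline figures are mutually consistent, as a quick back-of-envelope check using only the numbers quoted in this article shows:

    # Quick consistency check using only figures quoted in this article.
    videos, patients, images = 812_278, 34_362, 50_000_000
    print(f"~{videos / patients:.0f} echocardiogram videos per patient")  # ~24
    print(f"~{images / videos:.0f} frames per video on average")          # ~62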

"Our goal is to develop computer algorithms to improve patient care," said Alvaro Ulloa Cerna, Ph.D., author and senior data scientist in the Department of Translational Data Science and Informatics at Geisinger. "In this case, we're excited that our algorithm was able to help cardiologists improve their predictions about patients, since decisions about treatment and interventions are based on these types of clinical predictions."

Credit: 
Geisinger Health System

"Prediabetes" diagnosis less useful in older patients

Older adults who are classified as having "prediabetes" due to moderately elevated measures of blood sugar usually don't go on to develop full-blown diabetes, according to a study led by researchers at Johns Hopkins Bloomberg School of Public Health.

Doctors still consider prediabetes a useful indicator of future diabetes risk in young and middle-aged adults. However, the study, which followed nearly 3,500 older adults, of median age 76, for about six and a half years, suggests that prediabetes is not a useful marker of diabetes risk in people of more advanced age.

The results were published February 8 in JAMA Internal Medicine.

"Our results suggest that for older adults with blood sugar levels in the prediabetes range, few will actually develop diabetes," says study senior author Elizabeth Selvin, PhD, professor in the Department of Epidemiology at the Bloomberg School. "The category of prediabetes doesn't seem to be helping us identify high-risk people. Doctors instead should focus on healthy lifestyle changes and important disease risk factors such as smoking, high blood pressure, and high cholesterol."

Type 2 diabetes leads to a chronically excess blood level of glucose, which stresses organs including the kidneys, weakens the immune system, and damages blood vessels, promoting heart disease and stroke among other conditions. The prevalence of diagnosed type 2 diabetes in the United States has gone from less than one percent in the 1950s to more than 7 percent today--and researchers believe that the actual figure now, including undiagnosed diabetes, is over 12 percent. This sharp increase is due to the aging U.S. population and increased rates of overweight and obesity.

Doctors have used the concept of prediabetes--involving blood glucose levels that are higher than normal but not yet in the diabetic range--as an indicator of elevated diabetes risk in younger and middle-aged people. However, the utility of the concept in older adults--especially those 70 and older--has been less clear.

"It's very common for older adults to have at least mildly elevated blood glucose levels, but how likely they are to progress to diabetes has been an unresolved question," Selvin says.

To get a better picture of how older adults with prediabetes fare, Selvin and colleagues turned to the Atherosclerosis Risk in Communities Study. This large epidemiological cohort project, funded by the U.S. National Heart, Lung, and Blood Institute and including both Black and white participants, has been running at four U.S. medical centers, including Johns Hopkins, since 1987. For their prediabetes analysis, the researchers selected 3,412 ARIC study participants who had attended a follow-up visit during 2011-13--a time when the participants were between 71 and 90 years old--and did not have any history of diabetes. The researchers then looked at how measures of the participants' blood glucose levels had changed at the next follow-up visit during 2016-17.

As expected, the researchers found that "prediabetes," defined according to two different blood-test measures, was very common among the participants at the 2011-13 visit. Those with prediabetes, defined by moderately high blood levels of glucose following overnight fasting (the impaired fasting glucose test, or IFG), represented 59 percent of the initial sample, and those with prediabetes defined with a different blood test for glycated hemoglobin (HbA1c), represented 44 percent of the initial sample.

However, the results showed that only small numbers of the participants who had prediabetes in 2011-13 had developed diabetes by the time of the 2016-17 visit--8 percent of the IFG-defined prediabetics, and 9 percent of the HbA1c-defined prediabetics.

By contrast, 44 percent of the IFG group and 13 percent of the HbA1c group had improved enough by the 2016-17 visit that their test results were back in the normal range. Moreover, 16 and 19 percent of these two groups had died of other causes by the 2016-17 visit.
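
Translating those percentages into approximate headcounts, shown here for the IFG-defined group, makes clear how few participants progressed (rounded figures derived only from the numbers reported above):

    # Approximate headcounts for the IFG-defined prediabetes group, rounded from
    # the percentages reported above (illustrative arithmetic only).
    cohort = 3412
    ifg_group = round(cohort * 0.59)                                # ~2013 participants
    print("IFG-defined prediabetes:", ifg_group)
    print("  progressed to diabetes:", round(ifg_group * 0.08))    # ~161
    print("  back in the normal range:", round(ifg_group * 0.44))  # ~886
    print("  died of other causes:", round(ifg_group * 0.16))      # ~322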

The results show that older adults with prediabetes, over intervals like the one in the study, are more likely to have lower blood sugar levels--or to die for other reasons--than to progress to diabetes.

"It appears that in older adults, 'prediabetes' is just not a robust diagnosis," Selvin says.

"Our findings support a focus on lifestyle improvements, including exercise and diet when feasible and safe, for older adults with prediabetes," says Mary Rooney, PhD, a postdoctoral fellow at the Bloomberg School and the paper's first author. "This approach has broad benefits for patients."

Selvin and her colleagues recommend that for older adults, physicians should focus their screening efforts on risk factors, such as hypertension, that are more useful in predicting illness and mortality in this population.

Credit: 
Johns Hopkins Bloomberg School of Public Health

Radiative cooling and solar heating from one system, no electricity needed

image: The new tech provides both radiative cooling and solar heating, all in one system and without using electricity or fuel.

Image: 
University at Buffalo

BUFFALO, N.Y. -- Passive cooling, like the shade a tree provides, has been around forever.

Recently, researchers have been exploring how to turbocharge a passive cooling technique -- known as radiative or sky cooling -- with sun-blocking nanomaterials that emit heat away from building rooftops. While progress has been made, this eco-friendly technology isn't commonplace because researchers have struggled to maximize the materials' cooling capabilities.

New research led by University at Buffalo engineers makes significant progress in this area.

A study published Feb. 8 in the journal Cell Reports Physical Science describes a uniquely designed radiative cooling system that:

Lowered the temperature inside a test system in an outdoor environment under direct sunlight by more than 12 degrees Celsius (22 degrees Fahrenheit).

Lowered the temperature of the test box in a laboratory, meant to simulate the night, by more than 14 degrees Celsius (25 degrees Fahrenheit).

Simultaneously captured enough solar power to heat water to about 60 degrees Celsius (140 degrees Fahrenheit).

While the system tested was only 70 centimeters (27.5 inches) squared, it could eventually be scaled up to cover rooftops, engineers say, with the goal of reducing society's reliance on fossil fuels for cooling and heating. It also could aid communities with limited access to electricity.

"There is a great need for heating and cooling in our daily life, especially cooling in the warming world," says the study's lead author Qiaoqiang Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences.

The research team includes Zongfu Yu, PhD, University of Wisconsin-Madison; Boon Ooi, PhD, King Abdullah University of Science and Technology (KAUST) in Saudi Arabia; and members of Gan's lab at UB, and Ooi's lab at KAUST.

System design and materials key to success

The system consists of what are essentially two mirrors, made of 10 extremely thin layers of silver and silicon dioxide, which are placed in a V-shape.

These mirrors absorb incoming sunlight, turning solar power from visible and near-infrared waves into heat. The mirrors also reflect mid-infrared waves from an "emitter" (a vertical box in between the two mirrors), which then bounces the heat they carry into the sky.

"Since the thermal emission from both surfaces of the central thermal emitter is reflected to the sky, the local cooling power density on this emitter is doubled, resulting in a record high temperature reduction," says Gan.

"Most radiative cooling systems scatter the solar energy, which limits the system's cooling capabilities," Gan says. "Even with a perfect spectral selection, the upper limit for the cooling power with an ambient temperature of 25 degrees Celsius is about 160 watts per square meter. In contrast, the solar energy of about 1000 watts per square meter on top of those systems was simply wasted."

Spinoff company aims to commercialize technology

Gan co-founded a spinoff company, Sunny Clean Water LLC, which is seeking partners to commercialize this technology.

"One of the key innovations of our system is the ability to separate and retain the solar heating and radiative cooling at different components in a single system," says co-first author Lyu Zhou, a PhD candidate in electrical engineering in the School of Engineering and Applied Sciences. "During the night, radiative cooling is easy because we don't have solar input, so thermal emissions just go out and we realize radiative cooling easily. But daytime cooling is a challenge because the sun is shining. In this situation, you need to find strategies to separate solar heating from the cooling area."

The work builds upon previous research Gan's lab led that involved creating a cone-shaped system for electricity-free cooling in crowded cities to adapt to climate change.

"The new double-sided architecture realized a record local cooling power density beyond 280 watts per square meter. Under standard atmospheric pressure with no vacuum thermal isolation, we realized a temperature reduction of 14.5 degrees Celsius below the ambient temperature in a laboratory environment, and over 12 degrees Celsius in an outdoor test using a simple experimental system," says the other co-first author, Haomin Song, PhD, a research assistant professor of electrical engineering in the School of Engineering and Applied Sciences.

"Importantly, our system does not simply waste the solar input energy. Instead, the solar energy is absorbed by the solar spectral selective mirrors, and it can be used for solar water heating, which is widely used as an energy efficient device in developing countries," says Gan. "It can retain both the solar heating and radiative cooling effects in a single system with no need of electricity. It's really sort of a 'magic' system of ice and fir."

The research team will continue to investigate ways to improve the technology, including examining how to capture enough solar power to boil water, making it suitable for drinking.

Credit: 
University at Buffalo