Tech

Winners and losers of energy transition

The European Green Deal aims to drastically reduce greenhouse gas emissions in the electricity sector, which could have substantial economic and social impacts across Central European regions. Some regions might benefit more than others from new employment opportunities and from reduced air pollution, while others face threats to employment from phasing out coal and nuclear power plants. Such a transition to renewable electricity thus risks creating new regional winners and losers. In a study published in Nature Communications, scientists from the University of Geneva (UNIGE) quantify regional impacts associated with Central European electricity targets, as well as regional inequalities in costs, jobs, greenhouse gas and particulate matter emissions, and land use. The study demonstrates that a uniform and fair distribution of technologies between the 650 regions included in the analysis is possible, with an acceptable trade-off on cost increase.

The future success of reducing greenhouse gas emissions will depend on the public acceptance of available technologies and their costs, but also on their impacts on society and regions. "Certain technologies are likely to impact some regions more than others. Wind turbines, for example, are preferentially located in windy regions near ocean coasts where they can generate most electricity over the year. Although wind turbines could provide new jobs in those regions, they can also degrade the landscape and negatively impact the public acceptance of new installations," says Jan-Philipp Sasse, a doctoral student at the Institute of Environmental Sciences of the Faculty of Science at UNIGE. Central European regions vary significantly in their ability to generate renewable electricity, as well as in the associated costs. A focus solely on cost efficiency can therefore lead to inequalities between regions. Conversely, setting only social objectives would make electricity less affordable.

Zoom on regions

In order to find the right compromise, UNIGE's research group on renewable energy systems modelled 100 technically and economically feasible scenarios that could achieve the Central European electricity sector targets by 2035. From these scenarios, the researchers highlight trade-offs between the aims of cost-efficiency, renewable electricity, and regional equality.

To address these regional aspects, the model uses a very high spatial resolution. "The size of regions included in the study is the equivalent of a Swiss canton. This is the largest modelling study that accounts for equity considerations at such high spatial detail," says Jan-Philipp Sasse. In concrete terms, the study focuses on 650 regions in six Central European countries: Switzerland, France, Poland, Austria, Germany and Denmark. It considers all major technologies to generate electricity, such as solar, wind, biomass and hydropower, and assesses their regional impacts on costs, employment, emissions and land use. More than 100 scenarios were modelled for the study. "We selected the three most extreme scenarios to highlight trade-offs in improving different objectives," explains Jan-Philipp Sasse.
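The trade-off the study quantifies between cost efficiency and regional equality can be pictured with a deliberately tiny example. The Python sketch below is not the UNIGE model, which covers 650 regions and many technologies: it uses two hypothetical regions, invented costs, and scipy's linear-programming solver to minimise total cost with and without a constraint forcing an even regional split, then reports the resulting cost increase.

# Toy illustration of the cost-vs-equality trade-off (hypothetical numbers,
# not the UNIGE model). Two regions must jointly supply 100 GWh of renewable
# electricity; region A is windier and therefore cheaper per unit.
from scipy.optimize import linprog

cost = [40, 70]               # generation cost per GWh in regions A and B (assumed units)
demand = 100                  # GWh to be supplied in total
cap = [(0, 100), (0, 100)]    # per-region generation limits in GWh (assumed)

# Cost-minimisation scenario: meet demand at least cost.
res_cost = linprog(c=cost, A_eq=[[1, 1]], b_eq=[demand], bounds=cap)

# Regional-equality scenario: additionally force both regions to contribute equally.
res_equal = linprog(c=cost, A_eq=[[1, 1], [1, -1]], b_eq=[demand, 0], bounds=cap)

print("cost-optimal split:", res_cost.x, "total cost:", res_cost.fun)
print("equal split:       ", res_equal.x, "total cost:", res_equal.fun)
print("cost increase for equality: %.1f%%"
      % (100 * (res_equal.fun - res_cost.fun) / res_cost.fun))

In this toy setting the equality constraint raises total cost by roughly a third of the cost-optimal value; the study's contribution is to show what the analogous premium looks like at realistic scale, across many regions, technologies and objectives.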

Regional equality at affordable cost

The study shows that these three scenarios require very different implementation pathways and lead to different regional impacts. The cost-minimisation scenario concentrates costs and jobs in only a few regions, and therefore encourages regional inequalities. The regional equality scenario encourages a more even distribution of costs and jobs, as well as lower emissions, but would have a negative impact on land use. The scenario maximising renewable electricity generation has the most severe consequences in terms of costs and land use. "No objective can therefore be achieved without impacting the others, and compromises must be made to make the transition," adds the researcher. The study demonstrates that a viable compromise is possible to ensure that the costs and benefits of the renewable transition are shared evenly across regions, without favouring or prejudicing any one region. "Rising costs cannot be avoided, but might still be acceptable if environmental and social goals can be improved in an equitable way," says Jan-Philipp Sasse.

Justice and equity

Central European countries have set up ambitious strategies and plans to reduce greenhouse gas emissions within the next 15 years. "For example," says Jan-Philipp Sasse, "by 2035, Switzerland aims to produce roughly four times as much electricity from non-hydro-renewable energy sources as it does today. France, for its part, would like to reduce its nuclear production by 50%. That's huge!" Energy system models are popular tools for policy-makers to navigate the transition by quantifying the technical feasibility and cost effectiveness of green energy strategies. However, most models fail to provide a more holistic picture of the impacts and inequalities that are associated with the transition. "Politicians and decision-makers know that without equity and justice, citizens will not adhere to these objectives, which risk becoming obsolete," he adds. This study now provides them with a powerful tool that integrates regional equality and social objectives.

Credit: 
Université de Genève

Statins may reduce cancer risk through mechanisms separate to cholesterol

Cholesterol-lowering drugs called statins may reduce cancer risk in humans through a pathway unrelated to cholesterol, says a study published today in eLife.

Statins reduce levels of LDL-cholesterol, the so-called 'bad' cholesterol, by inhibiting an enzyme called HMG-CoA-reductase (HMGCR). Clinical trials have previously provided convincing evidence that statins reduce the risk of heart attacks and other cardiovascular diseases. But evidence for the potential of statins to reduce the risk of cancer is less clear.

"Previous laboratory studies have suggested that lipids including cholesterol play a role in the development of cancer, and that statins inhibit cancer development," explains lead author Paul Carter, Cardiology Academic Clinical Fellow at the Department of Health and Primary Care, University of Cambridge, UK. "However, no trials have been designed to assess the role of statins for cancer prevention in clinical practice. We decided to assess the potential effect of statin therapy on cancer risk using evidence from human genetics."

To do this, Carter and the team studied genetic variants that mimic the effect of statins using a technique known as Mendelian randomization in UK Biobank, a large study of UK residents that tracks the diagnosis and treatment of many serious illnesses. Mendelian randomization assesses associations between genetically predicted levels of a risk factor and a disease outcome, in order to predict the extent to which that risk factor causes the outcome. For example, it can compare the risk of cancer in patients who inherit a genetic predisposition to high or low levels of cholesterol, in order to predict whether lowering cholesterol levels will reduce the risk of cancer. This study is the first Mendelian randomization analysis of lipid subtypes for a range of cancers across the human body.
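As a rough illustration of that logic (not the authors' actual pipeline, and with invented numbers), a Mendelian randomization "ratio" estimate divides each variant's association with the outcome by its association with the exposure, and the per-variant ratios are then combined by inverse-variance weighting:

# Minimal sketch of a Mendelian randomization ratio (Wald) estimate with
# inverse-variance weighting. Betas and standard errors are hypothetical,
# not values from the UK Biobank analysis.
import numpy as np

# Per-variant associations: effect on LDL-cholesterol (exposure) and on
# cancer risk (outcome, log odds ratio), with standard errors of the latter.
beta_exposure = np.array([0.12, 0.08, 0.15])
beta_outcome  = np.array([-0.010, -0.007, -0.014])
se_outcome    = np.array([0.004, 0.005, 0.006])

# Wald ratio per variant: predicted outcome change per unit change in exposure.
ratio = beta_outcome / beta_exposure
ratio_se = se_outcome / np.abs(beta_exposure)   # first-order approximation

# Inverse-variance-weighted (IVW) summary across variants.
w = 1.0 / ratio_se**2
ivw = np.sum(w * ratio) / np.sum(w)
ivw_se = np.sqrt(1.0 / np.sum(w))
print(f"IVW estimate: {ivw:.3f} (SE {ivw_se:.3f}) log-odds per unit exposure")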

The team obtained associations of lipid-related genetic variants with the risk of overall cancer and 22 cancer types for 367,703 individuals in UK Biobank. In total, 75,037 of these individuals had a cancer event.

Their analysis revealed that variants in the HMGCR gene region, which represent proxies for statin treatment, were associated with overall cancer risk, suggesting that statins could lower overall cancer risk. Interestingly, variants in gene regions that represent other cholesterol-lowering treatments that work differently to statins were not associated with cancer risk, and genetically predicted LDL-cholesterol was not associated with overall cancer risk.

"Taken together, these results suggest that inhibiting HMGCR with statins may help reduce cancer risk though non-lipid lowering mechanisms, and that this role may apply across cancer sites," Carter says. "This effect may operate through other properties of statins, including dampening down inflammation or reducing other chemicals produced by the same cellular machinery which synthesises cholesterol."

Despite the large sample size of more than 360,000 participants and the broad set of outcomes analysed in this study, the team adds that there are a number of limitations to this work. For example, for many cancer types, there were not enough outcome events in the analysis to rule out the possibility of moderate causal effects.

"While there is evidence to support our assumption that genetic variants in relevant gene regions can be used as proxies for pharmacological interventions, our findings should be considered with caution until they are confirmed in clinical trials. However, our work highlights that the effectiveness of statins must be urgently evaluated by large clinical trials for potential use in cancer prevention," says senior author Stephen Burgess, Group Leader at the Medical Research Council Biostatistics Unit, part of the University of Cambridge. "While statins do have some adverse effects, our findings further weight the balance in favour of these drugs reducing the risk of major disease."

Credit: 
eLife

UCF researchers are working on tech so machines can thermally 'breathe'

image: UCF mechanical and aerospace engineering researchers Khan Rabbi and Shawn Putnam are developing new ways to cool machines and electronics. Rabbi is a doctoral candidate in the department, and Putnam is an associate professor.

Image: 
Karen Norum, University of Central Florida Office of Research

ORLANDO, Oct. 13, 2020 - In the era of electric cars, machine learning and ultra-efficient vehicles for space travel, computers and hardware are operating faster and more efficiently. But this increase in power comes with a trade-off: They get superhot.

To counter this, University of Central Florida researchers are developing a way for large machines to "breathe" in and out cooling blasts of water to keep their systems from overheating.

The findings are detailed in a recent study in the journal Physical Review Fluids.

The process is much like how humans and some animals breathe in air to cool their bodies down, except in this case, the machines would be breathing in cool blasts of water, says Khan Rabbi, a doctoral candidate in UCF's Department of Mechanical and Aerospace Engineering and lead author of the study.

"Our technique used a pulsed water-jet to cool a hot titanium surface," Rabbi says. "The more water we pumped out of the spray jet nozzles, the greater the amount of heat that transferred between the solid titanium surface and the water droplets, thus cooling the titanium. Fundamentally, an idea of optimum jet-pulsation needs to be generated to ensure maximum heat transfer performance."

"It is essentially like exhaling the heat from the surface," he says.

The water is emitted from small water-jet nozzles, about 10 times the thickness of a human hair, that douse the hot surface of a large electronic system. The water is then collected in a storage chamber, where it can be pumped out and circulated again to repeat the cooling process. The storage chamber in their study held about 10 ounces of water.

Using high-speed, infrared thermal imaging, the researchers were able to find the optimum amount of water for maximum cooling performance.
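A much-simplified way to see how an "optimum" setting might be identified (the actual analysis in Physical Review Fluids is far more detailed) is to estimate a convective heat-transfer coefficient, h = q / (A * (T_surface - T_water)), for each tested jet setting and pick the one that maximises it. Every number in the sketch below is hypothetical:

# Hypothetical sketch: pick the jet setting with the highest estimated
# convective heat-transfer coefficient. Values are illustrative only,
# not measurements from the UCF study.

area = 1.0e-4            # heated titanium area, m^2 (assumed)
t_water = 25.0           # coolant temperature, deg C (assumed)

# (pulse frequency in Hz, measured heat flux in W, surface temperature in deg C)
trials = [
    (10, 18.0, 92.0),
    (20, 26.0, 80.0),
    (40, 31.0, 71.0),
    (80, 30.0, 74.0),    # past the optimum: more water, but less effective transfer
]

def h_coefficient(q_watts, t_surface):
    """Convective heat-transfer coefficient in W/(m^2*K)."""
    return q_watts / (area * (t_surface - t_water))

best = max(trials, key=lambda trial: h_coefficient(trial[1], trial[2]))
for freq, q, ts in trials:
    print(f"{freq:3d} Hz: h = {h_coefficient(q, ts):8.0f} W/(m^2*K)")
print(f"best pulse frequency under these assumed numbers: {best[0]} Hz")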

Rabbi says everyday applications for the system could include cooling large electronics, space vehicles, batteries in electric vehicles and gas turbines.

Shawn Putnam, an associate professor in UCF's Department of Mechanical and Aerospace Engineering and study co-author, says that this research is part of an effort to explore different techniques to efficiently cool hot devices and surfaces.

"Most likely, the most versatile and efficient cooling technology will take advantage of several different cooling mechanisms, where pulsed jet cooling is expected to be one of these key contributors," Putnam says.

The researcher says there are multiple ways to cool hot hardware, but water-jet cooling is a preferred method because it can be adjusted to different directions, has good heat-transfer ability, and uses minimum amounts of water or liquid coolant.

However, it has its drawbacks, namely over- or under-watering, which results in flooding or dry hotspots. The UCF method overcomes this problem by offering a system that is tunable to hardware needs, so that only the amount of water needed is applied, and in the right spot.

The technology is needed since once device temperatures surpass a threshold value, for example, 194 degrees Fahrenheit, the device's performance decreases, Rabbi says.

"For this reason, we need better cooling technologies in place to keep the device temperature well within the maximum temperature for optimum operation," he says. "We believe this study will provide engineers, scientists and researchers a unique understanding to develop future generation liquid cooling systems."

Credit: 
University of Central Florida

Computer model uses virus 'appearance' to better predict winter flu strains

Combining genetic and experimental data into models about the influenza virus can help predict more accurately which strains will be most common during the next winter, says a study published recently in eLife.

The models could make the design of flu vaccines more accurate, providing fuller protection against a virus that causes around half a million deaths each year globally.

Vaccines are the best protection we have against the flu. But the virus changes its appearance to our immune system every year, requiring researchers to update the vaccine to match. Since a new vaccine takes almost a year to make, flu researchers must predict which flu viruses look the most like the viruses of the future.

The gold-standard ways of studying influenza involve laboratory experiments looking at a key molecule that coats the virus called haemagglutinin. But these methods are labour-intensive and take a long time. Researchers have focused instead on using computers to predict how the flu virus will evolve from the genetic sequence of haemagglutinin alone, but these data only give part of the picture.

"The influenza research community has long recognised the importance of taking into account physical characteristics of the flu virus, such as how haemagglutinin changes over time, as well as genetic information," explains lead author John Huddleston, a PhD student in the Bedford Lab at Fred Hutchinson Cancer Research Center and Molecular and Cell Biology Program at the University of Washington, Seattle, US. "We wanted to see whether combining genetic sequence-only models of influenza evolution with other high-quality experimental measurements could improve the forecasting of the new strains of flu that will emerge one year down the line."

Huddleston and the team looked at different components of virus 'fitness' - that is, how likely the virus is to thrive and continue to evolve. These included how similar the antigens of the virus are to previously circulating strains (antigens being the components of the virus that trigger an immune response). They also measured how many mutations the virus has accumulated, and whether they are beneficial or harmful.

Using 25 years of historical flu data, the team made forecasts one year into the future from all available flu seasons. Each forecast predicted what the future virus population would look like using the virus' genetic code, the experimental data, or both. They compared the predicted and real future populations of flu to find out which data types were more helpful for predicting the virus' evolution.
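The style of forecast being compared here can be caricatured with a simple fitness model in which each strain's future frequency is its current frequency scaled by the exponential of an estimated fitness, itself a weighted sum of predictors such as antigenic novelty and mutational load. The sketch below is a toy version built on that assumption, with invented predictor values and weights, not the published implementation:

# Toy strain-frequency forecast: x_i(t+1) proportional to x_i(t) * exp(f_i),
# with fitness f_i a weighted sum of predictors. Weights and predictor
# values are invented for illustration.
import numpy as np

strains = ["A", "B", "C"]
freq_now = np.array([0.5, 0.3, 0.2])            # current frequencies

# Hypothetical predictors per strain.
antigenic_novelty = np.array([0.1, 0.8, 0.4])   # e.g. distance from past strains
mutational_load   = np.array([0.6, 0.2, 0.3])   # likely harmful mutations

# Hypothetical fitness coefficients (learned from past seasons in a real model).
beta_novelty, beta_load = 2.0, -1.5
fitness = beta_novelty * antigenic_novelty + beta_load * mutational_load

freq_future = freq_now * np.exp(fitness)
freq_future /= freq_future.sum()                # renormalise to a distribution

for s, f0, f1 in zip(strains, freq_now, freq_future):
    print(f"strain {s}: {f0:.2f} -> {f1:.2f}")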

They found that the forecasts that combined experimental measures of the virus' appearance with changes in its genetic code were more accurate than forecasts that used the genetic code alone. Models were most informative if they included experimental data on how flu antigens changed over time, the presence of likely harmful mutations, and how rapidly the flu population had grown in the past six months. "Genetic sequence alone could not accurately predict future flu strains - and therefore should not take the place of traditional experiments that measure the virus' appearance," Huddleston says.

"Our results highlight the importance of experimental measurements to quantify the effects of changes to virus' genetic code and provide a foundation for attempts to forecast evolutionary systems," concludes senior author Trevor Bedford, Principal Investigator at the Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, Washington. "We hope the open-source forecasting tools we have developed can immediately provide better forecasts of flu populations, leading to improved vaccines and ultimately fewer illnesses and deaths from flu."

Credit: 
eLife

Scientists develop detector for investigating the sun

image: Photo. Device prototype: (1) the body of the detector consisting of scintillation disks, (2) fiber optics in a protective coating, (3) control boards for managing offset voltage and data acquisition -- developed at the Institute for Nuclear Research of the Russian Academy of Sciences, (4) prototype frame and stand for ground-based observations.

Image: 
Egor Stadnichuk et al./Journal of Instrumentation

Researchers from MIPT have developed a prototype detector of solar particles. The device is capable of picking up protons at kinetic energies between 10 and 100 megaelectronvolts, and electrons at 1-10 MeV. This covers most of the high-energy particle flux coming from the sun. The new detector can improve radiation protection for astronauts and spaceships, as well as advance our understanding of solar flares. The research findings are reported in the Journal of Instrumentation.

As energy gets converted from one form to another in the active regions of the solar atmosphere, streams of particles -- or cosmic rays -- are born with energies roughly between 0.01 and 1,000 MeV. Most of these particles are electrons and protons, but nuclei from helium to iron are also observed, albeit in far smaller numbers.

The current consensus is that the particle flux has two principal components. First, there are the narrow streams of electrons in brief flares lasting from tens of minutes to several hours. And then there are the flares with broad shockwaves, which last up to several days and mostly contain protons, with some occasional heavier nuclei.

Despite the vast arrays of data supplied by solar orbiters, some fundamental questions remain unresolved. Scientists do not yet understand the specific mechanisms behind particle acceleration in the shorter- and longer-duration solar flares. It is also unclear what role magnetic reconnection plays as particles accelerate and leave the solar corona, or how and where the initial particle populations originate before accelerating on shock waves. To answer these questions, researchers require particle detectors of a novel type, which would also underlie new spaceship safety protocols that recognize the initial wave of electrons as an early warning of the impending proton radiation hazard.

A recent study by a team of physicists from MIPT and elsewhere reports the creation of a prototype detector of high-energy particles. The device consists of multiple polystyrene disks, connected to photodetectors. As a particle passes through polystyrene, it loses some of its kinetic energy and emits light, which is registered by a silicon photodetector as a signal for subsequent computer analysis.

The project's principal investigator Alexander Nozik from the Nuclear Physics Methods Laboratory at MIPT said: "The concept of plastic scintillation detectors is not new, and such devices are ubiquitous in Earth-based experiments. What enabled the notable results we achieved is using a segmented detector along with our own mathematical reconstruction methods."

Part of the paper in the Journal of Instrumentation deals with optimizing the detector segment geometry. The dilemma is that while larger disks mean more particles analyzed at any given time, this comes at the cost of instrument weight, making its delivery into orbit more expensive. Disk resolution also drops as the diameter increases. As for the thickness, thinner disks determine proton and electron energies with more precision, yet a large number of thin disks also necessitates more photodetectors and bulkier electronics.

The team relied on computer modeling to optimize the parameters of the device, eventually assembling a prototype that is small enough to be delivered into space. The cylinder-shaped device has a diameter of 3 centimeters and is 8 centimeters tall. The detector consists of 20 separate polystyrene disks, enabling an acceptable accuracy of over 5%. The sensor has two modes of operation: It registers single particles in a flux that does not exceed 100,000 particles per second, switching to an integrated mode under more intense radiation. The second mode makes use of a special technique for analyzing particle distribution data, which was developed by the authors of the study and does not require much computing power.
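The two operating modes can be pictured as a simple rate-based switch: below the quoted threshold each particle's energy is reconstructed, for example by summing the light recorded along the stack of disks, while above it only the integrated deposited energy is tracked. The sketch below encodes just that switching logic with made-up numbers; the reconstruction described in the Journal of Instrumentation paper is considerably more sophisticated:

# Sketch of the two operating modes with invented numbers; not the MIPT
# reconstruction algorithm. Each event is a list of energy deposits (MeV)
# in the 20 polystyrene disks.
RATE_LIMIT = 100_000      # particles per second, threshold quoted in the article

def process(events, rate_hz):
    if rate_hz <= RATE_LIMIT:
        # Single-particle mode: reconstruct each particle's energy by summing
        # the deposits along the disk stack.
        return [sum(deposits) for deposits in events]
    # Integrated mode: only the total deposited energy is tracked, and the
    # particle distribution is inferred statistically afterwards.
    return sum(sum(deposits) for deposits in events)

# Two toy events, each depositing energy in the first few disks.
events = [[4.0, 3.5, 2.0, 0.5] + [0.0] * 16,
          [6.0, 5.0, 4.0, 3.0, 1.0] + [0.0] * 15]

print(process(events, rate_hz=5_000))      # per-particle energies (MeV)
print(process(events, rate_hz=500_000))    # total deposited energy (MeV)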

"Our device has performed really well in lab tests," said study co-author Egor Stadnichuk of the MIPT Nuclear Physics Methods Laboratory. "The next step is developing new electronics that would be suitable for detector operation in space. We are also going to adapt the detector's configuration to the constraints imposed by the spaceship. That means making the device smaller and lighter, and incorporating lateral shielding. There are also plans to introduce a finer segmentation of the detector. This would enable precise measurements of electron spectra at about 1 MeV."

Credit: 
Moscow Institute of Physics and Technology

Without the North American monsoon, reining in wildfires gets harder

The North American monsoon has dictated the length of wildfire season for centuries in the U.S.-Mexico border region, according to new University of Arizona research that can inform land management amid global climate change.

But this year was anything but normal. The 2020 monsoon season was the second-driest on record, and many high-profile wildfires swept across the Sonoran Desert and surrounding sky islands. Putting an end to severe fires may only become harder as climate change makes monsoon storms less frequent and more extreme, say the authors of a new study published in the International Journal of Wildland Fire.

The U.S. may be able to learn from Mexico's wildfire management strategy, the researchers say.

"These large fire years are the result of many factors, but fire weather and seasonal climate loom very large in the picture. In the case of (Tucson's) Bighorn Fire (this summer), for example, we had a combination of unusually hot weather, low humidity and strong winds. When the monsoon is delayed, that means the fire season lasts longer, giving fires more time to burn," said study co-author Don Falk, professor in the School of Natural Resources and the Environment.

"It's well-known that what really gets us through May and June in Arizona is winter rainfall and snowpack. It adds moisture to the system," said lead study author Alexis Arizpe, who was a UArizona research specialist and graduate student in the Laboratory of Tree-Ring Research when he did the research. "The start of the fire season is controlled by winter rain, but the monsoon controls the end of it. The physical rainfall limits fire with deceased temperatures, increased humidity and soil moisture."

Arizpe, who is now a technician at the Gregor Mendel Institute of Molecular Plant Biology in Vienna, Austria, and his colleagues analyzed patterns of widespread fire years going back more than 400 years using tree-ring samples extracted from sky island mountain ranges - isolated mountaintops that are ecologically distinct from the surrounding valleys - in southern Arizona and northern Mexico.

"Realistically, there's only one factor that could give you simultaneous large fires in multiple sky islands, and that's climate," Falk said. "We now understand that the sky island bioregion has a distinctive 'monsoon fire regime' different from anywhere else."

Connie Woodhouse, a Regents Professor in the School of Geography, Development and Environment, and Tom Swetnam, Regents Professor Emeritus of Dendrochronology, were also co-authors on the study.

Researchers have learned about the monsoon's role in regulating wildfires from records of past fires, but they didn't have the ability to assess it over long periods, Falk said. That recently changed when former Laboratory of Tree-Ring Research graduate student Daniel Griffin, now on the faculty at the University of Minnesota, compiled the first-ever tree-ring record that teased out monsoon rainfall from winter rainfall.

When the team sorted out the interactions between winter and monsoon rainfall and wildfires, interesting relationships emerged.

"When it's wet in both seasons, you never get a big fire," Falk said. "When it's wet in winter and dry in the monsoon, you see fire occasionally. When it's dry in winter but a good monsoon, every now and then you get some big fires. But the real action is when it's dry in both seasons. Think of it as how much time fires have to burn."

Also, heavy winter rainfall promotes fuel buildup. When a wet winter is followed by an especially dry year, there's a lot of built-up fuel, such as dry grass, that's primed for lightning strikes or stray sparks.

Another pattern emerged in the data: Large, high-severity fires burn more often in the U.S. than in Mexico. This is mostly due to land management differences, Arizpe said. In many areas of Mexico, traditional land management practices continue. This includes seasonal grazing combined with local prescribed burning to renew grasslands, allowing low-severity fires to burn naturally as they have for centuries.

In contrast, for the last 100 years, the U.S. Forest Service has focused heavily on fire suppression, meaning fires are snuffed out as soon as possible. As a result, fuels accumulate, feeding more severe fires later, Falk said. Currently, nearly half of the U.S. Forest Service's budget funds fire suppression. Because of this suppression, the U.S. hadn't experienced many destructive wildfires until droughts of the 21st century produced ample fuel for some of the largest wildfires the region has ever seen.

"Ironically, our investment is paying off in the form of gigantic fires that threaten our forests," Falk said.

Using the tree-ring record, the team found that for centuries, low-intensity wildfires scarred but didn't kill trees about once every 10 years. This natural process is healthy for the regional ecosystem. Small fires that stay low to the ground clear up dead foliage on the forest floor and turn it into nutritious ash from which new plants can grow.

Larger, more destructive fires cause public outcry that triggers a cycle that exacerbates the problem, Falk said. Fire suppression tactics allow fuels to accumulate. More fuel means more highly destructive fires and more public outcry. As a result, policy makers pour more money into suppression to protect human assets like towns and power lines.

Such large, destructive fires do much more than scar trees. High-severity fires can leave large landscapes with damaged soils and few living trees. It can take decades or even centuries for the ecosystem to recover, which can be detrimental to native species. In the meantime, erosion and landslides may occur.

Researchers predict that the region will only get hotter and drier as climate change progresses, resulting in more frequent and devastating wildfires, said Christopher Castro, associate professor of hydrology and atmospheric sciences, who was not involved with the research.

Castro uses climate models to forecast future monsoons. His research shows that monsoon storms will become more intense but less frequent. This could also possibly delay the end of the wildfire season.

"This is all bad from a wildfire perspective," he said.

Mexico, on the other hand, has largely let fires run their course naturally. As a result, it has relatively more natural and less destructive wildfire patterns than the U.S., although large fires have occurred there as well, Falk said.

"Looking forward, we have to accept the new reality that fire seasons will be longer and more severe in the future. This is simply the new world that we have created for ourselves and for nature by propelling climate change so rapidly," Falk said. "Managers will have a massive challenge on their hands dealing with this new reality, but certainly in the short run we need to provide them with the means to manage forests more proactively, including forest thinning, prescribed burning and other measures to reduce fuels."

Credit: 
University of Arizona

Magnitude comparison distinguishes small earthquakes from explosions in US west

By comparing two magnitude measurements for seismic events recorded locally, researchers can tell whether the event was a small earthquake or a single-fire buried chemical explosion.

The findings, published in the Bulletin of the Seismological Society of America, give seismologists one more tool to monitor nuclear explosions, particularly low-yield explosions that are detected using seismic stations that are 150 kilometers (about 93 miles) or less from the explosion site.

Seismologists use a variety of methods to distinguish earthquakes from explosions, such as analyzing the ratio of P waves (which compress rock in the same direction as a wave's movement) to S waves (which move rock perpendicular to the wave direction). However, methods like the P/S-wave ratio do not work as well for events of magnitude 3 or smaller, making it essential to develop other discrimination techniques, said University of Utah seismologist Keith Koper. Scientists have debated, for instance, whether a small seismic event that took place on 12 May 2010 in North Korea was a natural earthquake or a low-yield nuclear explosion.

The new study looks at the difference between local magnitude (ML) and coda duration magnitude (MC) measurements. Local magnitude, sometimes referred to as Richter magnitude, estimates magnitude based on the maximum amplitude of seismic waves detected. Coda duration magnitude is based on the duration of a seismic wave train and the resulting length of the seismogram it produces.

Koper and his students stumbled across the potential usefulness of this comparison in one of his graduate seminars about four years ago, as the students practiced programming and comparing different types of magnitudes. "It turned out that when you looked at these magnitude differences, there was a pattern," he said. "All these earthquakes in Utah that are associated with coal mining have a bigger coda magnitude, with seismograms longer than normal."

Compared to naturally occurring earthquakes, seismic events caused by human activity tend to have a larger MC than ML, the researchers concluded in a 2016 paper. Very shallow seismic events have a larger MC than deeper buried events, they found, while noting that most human activities that would induce earthquakes take place at shallow depths in the crust, compared to the deeper origins of natural earthquakes.
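In code, the discriminant amounts to computing both magnitudes for an event and examining the difference. The sketch below uses example calibrations from the wider literature (Hutton and Boore's local-magnitude distance correction and a classical coda-duration formula) and a placeholder decision threshold; the published studies use their own regional calibrations:

# Illustrative ML and MC calculations with example calibrations from the
# literature; the Utah studies use regional calibrations, and the
# classification threshold here is purely a placeholder.
import math

def local_magnitude(amplitude_mm, dist_km):
    """Richter-style local magnitude from peak Wood-Anderson amplitude (mm),
    using the Hutton & Boore (1987) distance correction."""
    return (math.log10(amplitude_mm)
            + 1.110 * math.log10(dist_km / 100.0)
            + 0.00189 * (dist_km - 100.0) + 3.0)

def coda_magnitude(coda_seconds, dist_km):
    """Coda duration magnitude using a classical calibration
    (Lee et al., 1972, central California)."""
    return 2.0 * math.log10(coda_seconds) + 0.0035 * dist_km - 0.87

def looks_explosion_like(amplitude_mm, coda_seconds, dist_km, threshold=0.5):
    """Flag events whose coda magnitude exceeds the local magnitude by more
    than a placeholder threshold, the pattern reported for shallow,
    human-caused sources."""
    ml = local_magnitude(amplitude_mm, dist_km)
    mc = coda_magnitude(coda_seconds, dist_km)
    return (mc - ml) > threshold, ml, mc

flag, ml, mc = looks_explosion_like(amplitude_mm=0.05, coda_seconds=60, dist_km=80)
print(f"ML={ml:.2f}, MC={mc:.2f}, explosion-like: {flag}")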

The findings suggested that ML-MC difference could be useful in detecting nuclear explosions at a local level, but the multiple detonations in a coal mining operation, scattered in space and time, produce a different seismic signature than the compact single shot of a nuclear explosion.

To further test the discrimination method, the researchers searched for "explosions that were better proxies, compact, and not your typical industrial explosions," Koper said.

In the BSSA study, Koper and colleagues applied the ML-MC difference to three experiments in the U.S. West that recorded data on local networks from buried single-fire explosions as well as natural earthquakes: the 2010 Bighorn Arch Seismic Experiment (BASE) in northern Wyoming, the imaging Magma Under St. Helens (iMUSH) experiment in Washington State from 2014 to 2016, and the Phase I explosions of the Source Physics Experiment (SPE) in Nevada from 2011 to 2016.

The method was able to successfully separate explosions from natural earthquakes in the data from all three sites, the researchers found, confirming that it would be potentially useful for identifying small underground nuclear explosions in places that are only covered by a local seismic network.

Beyond explosion seismology, the method might also help identify and analyze other earthquakes that have shallow sources, including some earthquakes induced by human activities such as oil and gas recovery, Koper said.

Credit: 
Seismological Society of America

Cameras that can learn

image: A Convolutional Neural Network (CNN) on the SCAMP-5D vision system classifying hand gestures at 8,200 frames per second

Image: 
University of Bristol, 2020

Intelligent cameras could be one step closer thanks to a research collaboration between the Universities of Bristol and Manchester, which has developed cameras that can learn and understand what they are seeing.

Roboticists and artificial intelligence (AI) researchers know there is a problem in how current systems sense and process the world. Currently they are still combining sensors, like digital cameras that are designed for recording images, with computing devices like graphics processing units (GPUs) designed to accelerate graphics for video games.

This means AI systems perceive the world only after recording and transmitting visual information between sensors and processors. But many things that can be seen are often irrelevant for the task at hand, such as the detail of leaves on roadside trees as an autonomous car passes by. At the moment, however, all this information is captured by sensors in meticulous detail and sent through the system, clogging it with irrelevant data, consuming power and taking processing time. A different approach is necessary to enable efficient vision for intelligent machines.

Two papers from the Bristol and Manchester collaboration have shown how sensing and learning can be combined to create novel cameras for AI systems.

Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol and principal investigator (PI), commented: "To create efficient perceptual systems we need to push the boundaries beyond the ways we have been following so far.

"We can borrow inspiration from the way natural systems process the visual world - we do not perceive everything - our eyes and our brains work together to make sense of the world and in some cases, the eyes themselves do processing to help the brain reduce what is not relevant."

This is demonstrated by the way the frog's eye has detectors that spot fly-like objects, directly at the point where the images are sensed.

The papers, one led by Dr Laurie Bose and the other by Yanan Liu at Bristol, have revealed two refinements towards this goal: implementing Convolutional Neural Networks (CNNs), a form of AI algorithm for enabling visual understanding, directly on the image plane. The CNNs the team has developed can classify frames thousands of times per second, without ever having to record these images or send them down the processing pipeline. The researchers demonstrated the approach by classifying handwritten numbers, hand gestures and even plankton.

The research suggests a future with intelligent dedicated AI cameras - visual systems that can simply send high-level information to the rest of the system, such as the type of object or event taking place in front of the camera. This approach would make systems far more efficient and secure as no images need be recorded.

The work has been made possible thanks to the SCAMP architecture developed by Piotr Dudek, Professor of Circuits and Systems and PI from the University of Manchester, and his team. SCAMP is a camera-processor chip that the team describes as a Pixel Processor Array (PPA). A PPA has a processor embedded in every pixel, and these processors can communicate with each other to compute in a truly parallel form. This is ideal for CNNs and vision algorithms.

Professor Dudek said: "Integration of sensing, processing and memory at the pixel level is not only enabling high-performance, low-latency systems, but also promises low-power, highly efficient hardware.

"SCAMP devices can be implemented with footprints similar to current camera sensors, but with the ability to have a general-purpose massively parallel processor right at the point of image capture."

Dr Tom Richardson, Senior Lecturer in Flight Mechanics at the University of Bristol and a member of the project, has been integrating the SCAMP architecture with lightweight drones.

He explained: "What is so exciting about these cameras is not only the newly emerging machine learning capability, but the speed at which they run and the lightweight configuration.

"They are absolutely ideal for high speed, highly agile aerial platforms that can literally learn on the fly!"

The research, funded by the Engineering and Physical Sciences Research Council (EPSRC), has shown that it is important to question the assumptions that are out there when AI systems are designed. And things that are often taken for granted, such as cameras, can and should be improved towards the goal of more efficient intelligent machines.

Credit: 
University of Bristol

Technique to recover lost single-cell RNA-sequencing information

image: MIT researchers have greatly boosted the amount of information that can be obtained using Seq-Well, a technique for rapidly sequencing RNA from single cells. This advance should enable scientists to learn much more about the critical genes that are expressed in each cell, and help them to discover subtle differences between healthy and diseased cells for designing new preventions and cures. This image illustrates the improved resolution, right, using the new technique.

Image: 
MIT

CAMBRIDGE, MA -- Sequencing RNA from individual cells can reveal a great deal of information about what those cells are doing in the body. MIT researchers have now greatly boosted the amount of information gleaned from each of those cells, by modifying the commonly used Seq-Well technique.

With their new approach, the MIT team could extract 10 times as much information from each cell in a sample. This increase should enable scientists to learn much more about the genes that are expressed in each cell, and help them to discover subtle but critical differences between healthy and dysfunctional cells.

"It's become clear that these technologies have transformative potential for understanding complex biological systems. If we look across a range of different datasets, we can really understand the landscape of health and disease, and that can give us information as to what therapeutic strategies we might employ," says Alex K. Shalek, an associate professor of chemistry, a core member of the Institute for Medical Engineering and Science (IMES), and an extramural member of the Koch Institute for Integrative Cancer Research at MIT. He is also a member of the Ragon Institute of MGH, MIT and Harvard and an institute member of the Broad Institute.

In a study appearing this week in Immunity, the research team demonstrated the power of this technique by analyzing approximately 40,000 cells from patients with five different skin diseases. Their analysis of immune cells and other cell types revealed many differences between the five diseases, as well as some common features.

"This is by no means an exhaustive compendium, but it's a first step toward understanding the spectrum of inflammatory phenotypes, not just within immune cells, but also within other skin cell types," says Travis Hughes, an MD/PhD student in the Harvard-MIT Program in Health Sciences and Technology and one of the lead authors of the paper.

Shalek and J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering and a member of the Koch Institute and Ragon Institute, are the senior authors of the study. MIT graduate student Marc Wadsworth and former postdoc Todd Gierahn are co-lead authors of the paper with Hughes.

Recapturing information

A few years ago, Shalek, Love, and their colleagues developed a method called Seq-Well, which can rapidly sequence RNA from many single cells at once. This technique, like other high-throughput approaches, doesn't pick up as much information per cell as some slower, more expensive methods for sequencing RNA. In their current study, the researchers set out to recapture some of the information that the original version was missing.

"If you really want to resolve features that distinguish diseases, you need a higher level of resolution than what's been possible," Love says. "If you think of cells as packets of information, being able to measure that information more faithfully gives much better insights into what cell populations you might want to target for drug treatments, or, from a diagnostic standpoint, which ones you should monitor."

To try to recover that additional information, the researchers focused on one step where they knew that data was being lost. In that step, cDNA molecules, which are copies of the RNA transcripts from each cell, are amplified through a process called polymerase chain reaction (PCR). This amplification is necessary to get enough copies of the DNA for sequencing. Not all cDNA was getting amplified, however. To boost the number of molecules that made it past this step, the researchers changed how they tagged the cDNA with a second "primer" sequence, making it easier for PCR enzymes to amplify these molecules.

Using this technique, the researchers showed they could generate much more information per cell. They saw a fivefold increase in the number of genes that could be detected, and a tenfold increase in the number of RNA transcripts recovered per cell. This extra information about important genes, such as those encoding cytokines, receptors found on cell surfaces, and transcription factors, allows the researchers to identify subtle differences between cells.

"We were able to vastly improve the amount of per cell information content with a really simple molecular biology trick, which was easy to incorporate into the existing workflow," Hughes says.

Signatures of disease

Using this technique, the researchers analyzed 19 patient skin biopsies, representing five different skin diseases -- psoriasis, acne, leprosy, alopecia areata (an autoimmune disease that causes hair loss), and granuloma annulare (a chronic degenerative skin disorder). They uncovered some similarities between disorders -- for example, similar populations of inflammatory T cells appeared active in both leprosy and granuloma annulare.

They also uncovered some features that were unique to a particular disease. In cells from several psoriasis patients, they found that cells called keratinocytes express genes that allow them to proliferate and drive the inflammation seen in that disease.

The data generated in this study should also offer a valuable resource to other researchers who want to delve deeper into the biological differences between the cell types studied.

"You never know what you're going to want to use these datasets for, but there's a tremendous opportunity in having measured everything," Shalek says. "In the future, when we need to repurpose them and think about particular surface receptors, ligands, proteases, or other genes, we will have all that information at our fingertips."

The technique could also be applied to many other diseases and cell types, the researchers say. They have begun using it to study cancer and infectious diseases such as tuberculosis, malaria, HIV, and Ebola, and they are also using it to analyze immune cells involved in food allergies. They have also made the new technique available to other researchers who want to use it or adapt the underlying approach for their own single-cell studies.

Credit: 
Massachusetts Institute of Technology

New method uses noise to make spectrometers more accurate

Optical spectrometers are instruments with a wide variety of uses. By measuring the intensity of light across different wavelengths, they can be used to image tissues or measure the chemical composition of everything from a distant galaxy to a leaf. Now researchers at the UC Davis Department of Biomedical Engineering have come up with a new, rapid method for characterizing and calibrating spectrometers, based on how they respond to "noise."

image: Rendering of a prism and spectrum. Optical spectroscopy splits light and measures the intensity of different wavelengths. It is a powerful technique across a wide range of applications. UC Davis engineers Aaron Kho and Vivek Srinivasan have now found a new way to characterize and cross-calibrate spectroscopy instruments using excess "noise" in a light signal.

Image: 
Getty Images

Spectral resolution measures how well a spectrometer can distinguish light of different wavelengths. It's also important to be able to calibrate the spectrometer so that different instruments will give reliably consistent results. Current methods for characterizing and calibrating spectrometers are relatively slow and cumbersome. For example, to measure how the spectrometer responds to different wavelengths, you would shine multiple lasers of different wavelengths on it.

Noise is usually seen as being a nuisance that confuses measurements. But graduate student Aaron Kho, working with Vivek Srinivasan, associate professor in biomedical engineering and ophthalmology, realized that the excess noise in broadband, multiwavelength light could also serve a useful purpose and replace all those individual lasers.

"The spectrometer's response to noise can be used to infer the spectrometer's response to a real signal," Srinivasan said. That's because the excess noise gives each channel of the spectrum a unique signature.

Faster, more accurate calibration

Instead of using many single-wavelength lasers to measure the spectrometer's response at each wavelength, the new approach uses only the noise fluctuations that are naturally present in a light source with many wavelengths. In this way, it's possible to assess the spectrometer's performance in just a few seconds. The team also showed that they could use a similar approach to cross-calibrate two different spectrometers.
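One way to picture how noise can characterise a spectrometer (a synthetic toy, not the UC Davis implementation) is to simulate repeated spectra in which broadband excess noise is blurred by the instrument's line-spread function plus a little uncorrelated detector noise, and then measure how strongly neighbouring channels fluctuate together: the width of that correlation tracks the spectral resolution.

# Toy demonstration that channel-to-channel noise correlations carry the
# spectrometer's line-spread width. Purely synthetic; not the UC Davis method.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_frames, true_fwhm = 256, 2000, 4.0   # FWHM in channels (assumed)

# Gaussian line-spread function that blurs the underlying spectral fluctuations.
sigma = true_fwhm / 2.3548
kernel = np.exp(-0.5 * (np.arange(-15, 16) / sigma) ** 2)
kernel /= kernel.sum()

# Each frame: broadband excess noise blurred by the instrument response,
# plus uncorrelated detector noise.
excess = rng.normal(size=(n_frames, n_channels))
frames = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"),
                             1, excess)
frames += 0.05 * rng.normal(size=frames.shape)

# Average correlation between channels separated by a given lag.
fluct = frames - frames.mean(axis=0)

def corr_at_lag(lag):
    if lag == 0:
        a = b = fluct
    else:
        a, b = fluct[:, :-lag], fluct[:, lag:]
    num = np.sum(a * b, axis=0)
    den = np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    return np.mean(num / den)

lags = np.arange(0, 10)
profile = np.array([corr_at_lag(l) for l in lags])

# The autocorrelation of a Gaussian response is sqrt(2) wider than the response.
corr_fwhm = 2.0 * np.interp(0.5, profile[::-1], lags[::-1])
est_fwhm = corr_fwhm / np.sqrt(2.0)
print(f"estimated FWHM from noise correlations: {est_fwhm:.1f} channels")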

Kho and Srinivasan used the excess noise method in Optical Coherence Tomography (OCT), a technique for imaging living eye tissue. By increasing the resolution of OCT, they were able to discover a new layer in the mouse retina.

The excess noise technique has similarities to laser speckle, Kho said. Speckle - granular patterns formed when lasers are reflected off surfaces - was originally seen as a nuisance but turns out to be useful in imaging, by providing additional information such as blood flow.

"Similarly, we found that excess noise can be useful too," he said.

These new approaches for characterization and cross-calibration will improve the rigor and reproducibility of data in the many fields that use spectrometers, Srinivasan said, and the insight that excess noise can be useful could lead to the discovery of other applications.

Credit: 
University of California - Davis

Mathematical tools predict if wave-energy devices stay afloat in the ocean

Ocean waves represent an abundant source of renewable energy. But to best use this natural resource, wave-energy converters need to be capable of physically handling ocean waves of different strengths without capsizing.

Texas A&M University researchers have developed analytical tools that can help characterize the movements of floating but anchored wave-energy devices. Unlike complicated simulations that are expensive and time-consuming, they said their technique is fast, yet accurate enough to estimate if wave-energy devices will turn over in an ever-changing ocean environment.

"Wave-energy converters need to take advantage of large wave motions to make electricity. But when a big storm comes, you don't want big wave, wind and current motions to destroy these devices," said Dr. Jeffrey Falzarano, professor in the Department of Ocean Engineering. "We have developed much simpler analytical tools to judge the performance of these devices in a dynamic ocean environment without necessitating massive amounts of simulations or physical model tests that take a lot of time to run and are cost-prohibitive."

The mathematical tools were described online in July in the journal Ships and Offshore Structures.

Wave-energy devices function in two modes. In "normal mode," they convert the energy of ocean waves into electricity. This mode therefore largely determines whether the design of the wave-energy device is economically efficient. However, in "survival" mode, when incident waves cause large motions in the wave-energy devices, their performance is largely determined by a system of moorings that anchor the devices to a location at the bottom of the body of water.

Moorings can be of several types, including wharfs and anchor buoys, and can be arranged in different configurations. In addition, there are considerable variations in the shape of wave-energy devices, making the prediction of whether the device will capsize nontrivial.

"Ships come in a variety of shapes and sizes; tankers, for example, are very different from fishing vessels or other military ships. These different geometries affect the ship's motion in the water," said Falzarano. "Similarly, the shape of wave-energy devices can be quite diverse."

For the analysis, Hao Wang, Falzarano's graduate student, used a cylindrical wave-energy device. This generic shape allowed the researchers to simplify the prediction problem and extend their analysis to other wave-energy converters of similar shape. He also considered three mooring configurations.

Wang used two analytical methods, the Markov and Melnikov approaches, to predict the risks of turning over under random excitation. More specifically, using information from the wave-energy device's geometry, the configuration of the mooring system and the properties of the incoming waves, the methods yield a graph containing an envelope-like region. Intuitively, if the waves are really big, like during a storm, and the floating vessel escapes this envelope, it will likely turn over.

The researchers noted that although the analytical models were completely different, they yielded almost the same results, validating their merit and accuracy. They also said that their mathematical approach can be applied to assess the performance of other floating devices, such as floating wind turbines.

"The platform for a floating wind turbine is the same as the one for wave-energy devices, and so floating turbines can also pitchpole or turnover if the waves are very high," said Falzarano. "My group has been leaders in developing methods for predicting ship stability. We're now looking at applying those approaches to renewable, floating energy devices."

Credit: 
Texas A&M University

Controlling the speed of enzyme motors brings biomedical applications of nanorobots closer

image: a) Trajectory of an enzyme-powered nanomotor prepared with lipase in a closed conformation and without controlled orientation during immobilization on the silicon nanoparticle surface. b) Trajectory of an enzyme-powered nanomotor prepared with lipase in an open conformation and with controlled orientation during immobilization on the silicon nanoparticle surface. The central panel shows a scanning electron microscopy image of nanomotors like those used in the experiment.

Image: 
CNIC/ IBEC

A study by scientists at the Centro Nacional de Investigaciones Cardiovasculares (CNIC), the Universidad Complutense (UCM), Universidad de Girona (UdG), and the Institute for Bioengineering of Catalonia (IBEC), working together with other international centers, has overcome one of the key hurdles to the use of nanorobots powered by lipases, enzymes that play essential roles in digestion by breaking down fats in foods so that they can be absorbed.

The study was coordinated by Marco Filice of the CNIC Microscopy and Dynamic Imaging Unit--part of the ReDIB Infraestructura Científico Técnica Singular (ICTS)--and professor at the Faculty of Pharmacy (UCM), and by ICREA Research Professor Samuel Sánchez of the IBEC. The article, published in the journal Angewandte Chemie International Edition, describes a tool for modulating motors powered by enzymes, broadening their potential biomedical and environmental applications.

Microorganisms are able to swim through complex environments, respond to their surroundings, and organize themselves autonomously. Inspired by these abilities, over the past 20 years scientists have managed to artificially replicate these tiny swimmers, first at the macro-micro scale and then at the nano scale, finding applications in environmental remediation and biomedicine.

"The speed, load-bearing capacity, and ease of surface functionalization of micro and nanomotors has seen recent research advances convert these devices into promising instruments for solving many biomedical problems. However, a key challenge to the wider use of these nanorobots is choosing an appropriate motor to propel them," explained Sánchez.

Over the past 5 years, the IBEC group has pioneered the use of enzymes to generate the propulsive force for nanomotors. "Bio-catalytic nanomotors use biological enzymes to convert chemical energy into mechanical force, and this approach has sparked great interest in the field, with urease, catalase, and glucose oxidase among the most frequent choices to power these tiny engines," said Sánchez.

The CNIC group is a leader in the structural manipulation and immobilization of lipase enzymes on the surface of different nanomaterials. Lipases make excellent nanomotor components because their catalytic mechanism involves major conformational changes between an open, active form and a closed, inactive one.

"In this project, we investigated the effect of modulating the catalytic activity of lipase enzymes to propel silicon-based nanoparticles," explained Filice.

In addition to the 3-dimensional conformation of the enzyme, the team also investigated how controlling the orientation of the enzyme during its immobilization on the nanomotor surface affects its catalytic activity and therefore the propulsion of the nanorobots.

The researchers chemically modified the surface of silicon nanoparticles to generate three specific combinations of lipase conformations and orientations during immobilization: 1) open conformation plus controlled orientation; 2) closed conformation plus uncontrolled orientation; 3) a situation intermediate between 1 and 2.

The team analyzed the three types of nanorobot with spectroscopic techniques, assays to assess catalytic parameters related to enzyme activity, Dynamic Molecular simulations (performed by Professor Silvia Osuna's team at UdG), and direct tracking of individual nanomotor trajectories by microscopy techniques. "The results demonstrate that combining an open enzyme conformation with a specific orientation on the nanomotor is critical to achieving controlled propulsion."

Credit: 
Centro Nacional de Investigaciones Cardiovasculares Carlos III (F.S.P.)

Smartphone data helps predict schizophrenia relapses

ITHACA, N.Y. - Passive data from smartphones - including movement, ambient sound and sleep patterns - can help predict episodes of schizophrenic relapse, according to new Cornell Tech research.

Two papers from the lab of Tanzeem Choudhury, professor of integrated health and technology at Cornell Tech, examined how smartphone data can predict patients' own self-assessments of their condition, as well as changes in their behavior patterns in the 30 days leading to a relapse.

Early prediction of schizophrenic relapses - potentially dangerous episodes which may involve hallucinations, fears of harm, depression or withdrawal - could prevent hospitalizations, in addition to providing clinicians and patients with valuable information that could improve and personalize their care.

"The goal of this work was to predict digital indicators that are early warning signs of relapse, but these symptoms or changes can be very, very different from one individual to another," said Dan Adler, doctoral student at Cornell Tech and first author of "Predicting Early Warning Signs of Psychotic Relapse From Passive Sensing Data: An Approach Using Encoder-Decoder Neural Networks," which published in the Journal of Medical Internet Research mHealth and uHealth.

"We tried to create an approach where we could tell a clinician: Not only is this participant experiencing unusual behavior, these are the specific things that are different in this particular patient," Adler said. "If we can predict when someone's symptoms are going to change before relapse, we can get them early treatment and possibly prevent an inpatient visit."

The researchers collected smartphone data from 60 participants over one year, 18 of whom experienced relapse during that time. They used encoder-decoder neural networks - a kind of machine learning that is good at learning complex features amid highly irregular data - to detect behavior patterns such as sleep, number of missed calls, and the duration and frequency of conversations.
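The papers describe encoder-decoder neural networks trained on passive sensing data, but the release does not give the architecture. The sketch below only illustrates the general idea with a simple per-day feature autoencoder in PyTorch: reconstruction error relative to a participant's own baseline serves as a behavioral anomaly score. The feature set, layer sizes and threshold are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

# Illustrative daily features per participant-day: sleep duration, missed calls,
# conversation count, conversation duration, ambient-sound level, distance travelled.
N_FEATURES = 6

class BehaviorAutoencoder(nn.Module):
    """Encoder-decoder over daily passive-sensing features; high reconstruction
    error on a day is treated as a behavioral anomaly for that participant."""
    def __init__(self, n_features=N_FEATURES, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, days, epochs=200, lr=1e-3):
    """Train on a participant's stable-period days (rows = days, cols = features)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(days), days)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, days):
    """Per-day reconstruction error; days well above the participant's own
    baseline error distribution can be flagged as anomalous."""
    with torch.no_grad():
        return ((model(days) - days) ** 2).mean(dim=1)

# Usage sketch on synthetic, standardized features
stable_days = torch.randn(60, N_FEATURES)          # baseline behavior
model = train(BehaviorAutoencoder(), stable_days)
scores = anomaly_scores(model, torch.randn(30, N_FEATURES))
threshold = anomaly_scores(model, stable_days).quantile(0.95)
print("anomalous days:", int((scores > threshold).sum()))
```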

The method found a median 108% increase in behavior anomalies in the 30 days leading up to relapses, compared with behavior during days of relative health.

The paper used data collected in collaboration with the University of Washington, Dartmouth College and Northwell Health System. Based on the same data set, another paper - "Using Behavioral Rhythms and Multi-Task Learning to Predict Fine-Grained Symptoms of Schizophrenia," which published in Scientific Reports - used machine learning to better understand and predict symptoms from changes in behavioral rhythms passively detected by smart devices.
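The release does not specify the Scientific Reports model either, but multi-task learning typically means a shared representation feeding one prediction head per symptom. A minimal, hypothetical PyTorch sketch follows; the symptom names, feature dimension and loss weighting are invented for illustration.

```python
import torch
import torch.nn as nn

class MultiTaskSymptomModel(nn.Module):
    """Shared encoder over behavioral-rhythm features with one output head per
    symptom (names here are illustrative, not the study's labels)."""
    def __init__(self, n_features, symptom_names, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, 1)
                                    for name in symptom_names})
    def forward(self, x):
        h = self.shared(x)
        return {name: head(h).squeeze(-1) for name, head in self.heads.items()}

# Joint training minimizes the sum of per-symptom losses, so the shared layers
# learn rhythm features that are informative across symptoms.
symptoms = ["hearing_voices", "depressed", "withdrawn"]
model = MultiTaskSymptomModel(n_features=10, symptom_names=symptoms)
x = torch.randn(64, 10)                              # behavioral-rhythm features
targets = {s: torch.rand(64) for s in symptoms}      # self-reported scores (0-1)
preds = model(x)
loss = sum(nn.functional.mse_loss(preds[s], targets[s]) for s in symptoms)
loss.backward()
```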

"We wanted to provide some actionable steps or clinically interpretable features so we can either tell the patient to take some actions or tell the clinician to suggest some early interventions," said Cornell Tech doctoral student Vincent Tseng, the Scientific Reports paper's co-first author.

Credit: 
Cornell University

Sleep health dictates success of practicing mindfulness

image: Diagrams indicate the correlation between sleep health and mindfulness.

Image: 
University of South Florida

Sleeping an extra 29 minutes each night can be the key to improving mindfulness, a critical resource that has benefits for daily well-being and work performance. Mindfulness is achieved by purposefully bringing an individual's awareness and attention to experiences occurring in the present moment without forming an opinion. Unlike previous studies, new research published in Sleep Health looked at how multiple dimensions of nightly sleep impact daily mindfulness, rather than just focusing on sleep quality or duration.

The study, led by the University of South Florida, found that better sleep improves next-day mindfulness, which in turn reduces sleepiness during the day. The research focused on nurses, the largest group of healthcare professionals, whose need for optimal sleep and mindful attention is particularly high. Sleep problems are common in this population due to long shifts, lack of situational control and close proximity to life-threatening health conditions. Optimal sleep health and mindful attention are particularly important as nurses work the frontline of the COVID-19 pandemic.

"One can be awake and alert, but not necessarily mindful. Similarly, one can be tired or in low arousal but still can be mindful," said lead author Soomi Lee, assistant professor of aging studies at USF. "Mindful attention is beyond being just being awake. It indicates attentional control and self-regulation that facilitates sensitivity and adaptive adjustment to environmental and internal cues, which are essential when providing mindful care to patients and effectively dealing with stressful situations."

Lee and her colleagues from USF and Moffitt Cancer Center followed 61 nurses for two weeks and examined multiple characteristics of sleep health. They found that nurses' mindful attention was greater than usual after nights with greater sleep sufficiency, better sleep quality, lower sleep efficiency and longer sleep duration (roughly an extra half-hour). Daily mindful attention contributed to less same-day sleepiness. Those with greater mindful attention were also 66% less likely to experience symptoms of insomnia during the two-week study period.

The researchers came to these conclusions using a variety of tools to measure how mindful participants were at different moments of the day and how their mental states were affected by sleep. Participants were prompted to answer daily mindfulness and sleepiness questions three times a day for two weeks using the smartphone application RealLife Exp. Daily mindfulness was measured with the Mindful Attention Awareness Scale, which includes items such as "I was doing something automatically, without being aware of what I was doing" and "I was finding it difficult to stay focused on what was happening." Participants also wore an Actiwatch Spectrum device for the same two weeks, which measured wrist movement to quantify sleep and wake patterns.
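The release does not describe the statistical model, but daily-diary designs like this are often analysed by relating each person's nightly sleep to their own next-day outcome. Below is a minimal, hypothetical sketch with made-up numbers: within-person centering is used so that stable differences between nurses do not drive the sleep-mindfulness correlation.

```python
import pandas as pd

# Illustrative daily records per nurse: nightly sleep duration from the Actiwatch
# and the next day's mean Mindful Attention Awareness Scale (MAAS) score from the app.
data = pd.DataFrame({
    "nurse_id":      [1, 1, 1, 2, 2, 2],
    "sleep_hours":   [6.1, 7.0, 6.5, 5.8, 6.4, 7.2],
    "next_day_maas": [3.9, 4.6, 4.2, 3.5, 4.0, 4.7],   # higher = more mindful
})

# Within-person association: correlate each nurse's deviations from her own means,
# so between-person differences (e.g., habitually short sleepers) are removed.
centered = data.groupby("nurse_id")[["sleep_hours", "next_day_maas"]].transform(
    lambda s: s - s.mean()
)
print(centered["sleep_hours"].corr(centered["next_day_maas"]))
```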

Findings from this study provide insight into developing a behavioral health intervention strategy for a broader array of healthcare workers who need better sleep and mindful attention. Given the association between mindful attention and better patient care, improving sleep in this population may provide important benefits to patient health outcomes as well.

Credit: 
University of South Florida

Foreign election interference: A global response

image: Election Law Journal provides global, interdisciplinary coverage of election law, policy, and administration.

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, October 13, 2020—The increasing threat of foreign interference in elections has driven six nations to take similar approaches to combating this pervasive threat. A review of the details of their responses brings out valuable differences and insights, presented in a forthcoming special issue of the peer-reviewed Election Law Journal.

The special issue reveals a common set of solutions developed by Canada, the United Kingdom, the Netherlands, Northern Ireland, Australia, and New Zealand. These solutions coalesce around the same general set of ideas: “better educating citizens about the perils of cyber speech, increasing transparency about who is promoting online communications, building better barriers to exclude foreign funding of electoral communications, and trying to remove the most egregiously false statements from political discourse.”

Exploring the details of how each nation put these efforts into practice makes for fascinating reading and offers important lessons.

“The United States experienced significant interference in our 2016 presidential election and Russia seems just as active in their efforts to influence our presidential election this year. We have much to learn from the experiences of other nations that have attempted to address this growing problem,” states Election Law Journal Editor-in-Chief David Canon, University of Wisconsin.

About the Journal
Election Law Journal is an authoritative peer-reviewed journal published quarterly online with open access options and in print that provides global, interdisciplinary coverage of election law, policy, and administration. Led by Editor-in-Chief David Canon, University of Wisconsin, the Journal covers the field of election law for practicing attorneys, election administrators, political professionals, legal scholars, and social scientists, and covers election design and reform on the federal, state, and local levels. Complete tables of contents and a sample issue are available on the Election Law Journal website.

About the Publisher
Mary Ann Liebert, Inc., publishers is known for establishing authoritative peer-reviewed journals in many promising areas of science and biomedical research and law. A complete list of the firm’s 90 journals, books, and newsmagazines is available on the Mary Ann Liebert, Inc., publishers website.

DOI: 10.1089/elj.2020.0683

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News