Tech

New research reveals privacy risks of home security cameras

An international study has used data from a major home Internet Protocol (IP) security camera provider to evaluate potential privacy risks for users.

IP home security cameras are Internet-connected security cameras that can be installed in people's homes and remotely monitored via the web. These cameras are growing in popularity and the global market is expected to reach $1.3 billion by 2023.

For the study, researchers from the Chinese Academy of Sciences and Queen Mary University of London tested whether an attacker could infer privacy-compromising information about a camera's owner simply by passively tracking the uploaded data, without inspecting any of the video content itself.

The findings, published at the IEEE International Conference on Computer Communications (6-9 July 2020), showed that the traffic generated by the cameras could be monitored by attackers and used to predict when a house is occupied or not.

The researchers even found that future activity in the house could be predicted from the camera's past traffic, which could leave users at greater risk of burglary by revealing when the house is unoccupied. They confirmed that attackers could detect when the camera was uploading motion, and could even distinguish between certain types of motion, such as sitting or running. All of this was done without inspecting the video content itself, simply by looking at the rate at which the cameras uploaded data via the Internet.
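
The release includes no code, but the attack it describes amounts to simple anomaly detection on the camera's upload-rate time series. Below is a minimal Python sketch of the general idea; the traffic trace, rates and threshold are all hypothetical and are not the researchers' actual method.

```python
import numpy as np

def detect_activity(bytes_per_sec, k=6.0):
    """Flag seconds whose upload rate spikes far above the camera's
    typical (idle) rate; motion-triggered uploads show up as bursts."""
    series = np.asarray(bytes_per_sec, dtype=float)
    baseline = np.median(series)                # robust estimate of the idle rate
    mad = np.median(np.abs(series - baseline))  # robust estimate of the spread
    return series > baseline + k * 1.4826 * (mad + 1e-9)

# Hypothetical traffic trace: one hour of per-second upload byte counts.
rng = np.random.default_rng(0)
rates = rng.poisson(2_000, size=3_600).astype(float)  # idle keep-alive traffic
rates[1_200:1_260] += 50_000                          # a motion-triggered upload burst
active = detect_activity(rates)
print(f"activity inferred during {active.sum()} of {active.size} seconds")
```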

Dr Gareth Tyson, Senior Lecturer at Queen Mary University of London, said: "Once considered a luxury item, these cameras are now commonplace in homes worldwide. As they become more ubiquitous, it is important to continue to study their activities and potential privacy risks. Whilst numerous studies have looked at online video streaming, such as YouTube and Netflix, to the best of our knowledge, this is the first study which looks in detail at video streaming traffic generated by these cameras and quantifies the risks associated with them. By understanding these risks, we can now look to propose ways to minimise the risks and protect user privacy."

Credit: 
Queen Mary University of London

Study indicates that Medicaid expansion has led to earlier cancer detection among individuals with low income

New research found that the likelihood of being diagnosed with advanced cancer decreased among individuals with low income after expansion of Medicaid coverage. The findings are published early online in CANCER, a peer-reviewed journal of the American Cancer Society (ACS).

The Affordable Care Act allowed states to expand Medicaid coverage to most adults in the United States with incomes up to 138% of the federal poverty level, and many states opted to do so starting in 2014. This led to increased enrollment in Medicaid, with most new enrollees reporting that they had previously been uninsured.

Providing insurance coverage to these individuals may lead to more consistent care, including a greater likelihood that people will be routinely screened for cancer. To examine whether Medicaid expansion has led to earlier cancer detection, Uriel Kim, PhD, a medical student and researcher at Case Western Reserve University School of Medicine's Center for Community Health Integration in Cleveland, Ohio, and his colleagues at the Case Comprehensive Cancer Center analyzed information pertaining to 12,760 individuals in Ohio aged 30 to 64 years who were diagnosed with invasive breast, cervical, colorectal, or lung cancer in 2011 to 2016 and were uninsured or had Medicaid insurance at the time of diagnosis. The investigators compared data before Medicaid expansion (2011 to 2013) and after Medicaid expansion (2014 to 2016), noting whether patients were diagnosed with early (non-metastatic) or advanced (metastatic) cancer.

The team found that individuals with low income diagnosed after Medicaid expansion had 15 percent lower odds of having metastatic cancer compared with those diagnosed before expansion. As a control, a separate analysis that focused on individuals with private insurance from high-income communities found nonsignificant pre/post-expansion changes in the odds of being diagnosed with metastatic cancer.
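
For readers unfamiliar with odds language: "15 percent lower odds" corresponds to an odds ratio of roughly 0.85. With symbolic counts (not the study's actual numbers), where a and b are post-expansion metastatic and non-metastatic diagnoses and c and d are the pre-expansion equivalents:

```latex
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc},
\qquad
\mathrm{OR} \approx 0.85 \;\Longleftrightarrow\; \text{15\% lower odds}.
```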

"Cancer stage is the strongest predictor of survival for patients. Long-standing disparities in mortality from screening-amenable cancers between high-income and low-income adults have been driven in large part by differences in metastatic cancer rates," said Kim. "Medicaid expansion under the Affordable Care Act was associated with a significant reduction in the likelihood of being diagnosed with deadly metastatic cancer among Americans with low income. These improvements represent substantial progress in closing a persistent gap in cancer survival between Americans with high and low income."

The study's senior author, Johnie Rose, MD, PhD, assistant professor at Case Western Reserve University School of Medicine's Center for Community Health Integration, added that the study highlights the important role that Medicaid expansion has played in increasing access to preventive services, which impact health beyond cancer. "This fact is particularly relevant in the era of the COVID-19 pandemic as tens of millions of people have lost their jobs, and record numbers are expected to rely on safety net programs like Medicaid," he said.

An accompanying editorial notes that the findings are consistent with those from other studies that have examined the impact of Medicaid expansion on cancer outcomes. "The totality of these findings suggests that health coverage through Medicaid is an important predictor of cancer burden, and--critically--of early cancer identification," the authors wrote.

Credit: 
Wiley

High-throughput sequencing tracks historical spread of grapevine viruses

Grapevine is infected by more than 90 viruses, and new viral species are discovered yearly as a result of high-throughput sequencing (HTS). Within the last decade, HTS has been used for virome identification (characterizing the full assemblage of viral genomes in a sample), which in turn helps plant pathologists with future research.

A group of scientists based in France used systematic data mining to gather information on two grapevine trichoviruses, grapevine Pinot gris virus (GPGV) and grapevine berry inner necrosis virus (GINV). They were able to capture new complete viral genomes from around the world, using their own HTS datasets and by tapping into worldwide HTS databases. This compiled information helped unravel the evolutionary history of these two emerging viruses, which pose a new threat to viticulture.

Their work showed that GPGV and GINV most likely originated in Asia. GPGV was introduced to Europe in the 1950s or 60s and from there spread to South and North America during the 1980s and 90s, as a result of increasing amounts of land being devoted to the wine industry. Interestingly, all intercontinental movements of GPGV seem to have occurred before the virus was identified and characterized in Italy in 2012.

They also discovered that while the two viruses are closely related, they vary greatly in their epidemiological success. The researchers hypothesized that this difference might be the result of quarantine and control measures applied to intercontinentally transferred material: such measures succeeded against the highly symptomatic GINV, preventing it from spreading, but have been ineffective against GPGV, which often displays heterogeneous symptoms on grapevine.

"This work is the first of its kind, redefining the worldwide evolutionary history of two viruses that infect plants," said Jean-Michel Hily, the paper's first author. "While previous work showed some important movements of plant viruses, those were always limited to the continent level, never described at the world scale as presented in our paper."

Hily also comments on the power of the combination of HTS and other modeling tools. "The program we used gave us some dates of viral movements between countries and continents that correlated impressively well with the history of grapevine around the world."

Credit: 
American Phytopathological Society

Double take: New study analyzes global, multiple-tailed lizards

image: An Australian barred wedgesnout skink lizard (Ctenotus schomburgkii) with two tails.

Image: 
Mr Damian Lettoof, Curtin University

Curtin research into abnormal regeneration events in lizards has led to the first published scientific review on the prevalence of lizards that have regenerated not just one but two, or even up to six, tails.

PhD Candidate Mr James Barr, from Curtin University's School of Molecular and Life Sciences, said that while multiple-tailed lizards are widely known to occur, documented events were generally limited to opportunistic, single observations of an individual in its natural environment.

"This limited available research about multiple-tailed lizards has made it difficult for biologists to fully understand their ecological importance, and our study helps to highlight this knowledge gap," Mr Barr said.

Many species of lizards have the ability to self-amputate a portion of their tail, an event known as caudal autotomy, as a defence mechanism when they are being attacked by a predator.

Most commonly the tail grows back as a single rod of cartilage, but Mr Barr explained that sometimes an anomaly occurs, resulting in the regeneration of more than just one tail.

"Sometimes following an incomplete autotomy event, when the lizard's original tail does not fully separate from its body, a secondary tail regenerates, resulting in the lizard having two separate tails," Mr Barr said.

"There have even been records of lizards re-generating up to six tails.

"Our study indicates that this phenomenon may actually be occurring more frequently in lizards than previously thought.

"We analysed the available two-tailed lizard data from more than 175 species across 22 families, from 63 different countries. Contrasting this data with all comparable lizard population numbers, our findings suggest an average of 2.75 per cent of all lizards within populations could have two tails or more at any one time.

"This is quite a surprisingly high number, and it really begins to make us wonder what ecological impacts this could have, especially noting that to the lizard, an extra tail represents a considerable increase in body mass to drag around."

Co-researcher Curtin University Associate Professor Bill Bateman explained that while there is a significant lack of studies to understand these potential ecological impacts, his team believes that having two tails might affect the overall fitness and life history for individual lizards, and their overall populations.

"Shedding a tail to escape a predator and then regenerating it seems like a good tactic; however, when this regeneration goes awry and results in multiple abnormal tails, this is likely to have an effect on the lizard.

"It could affect a range of things, such as their kinetic movements, restrictions they might have when trying to escape a predator, their anti-predation tactics, and socially speaking, how other lizards might react to them," Professor Bateman said.

"For example, could having two tails potentially affect their ability to find a mate, and therefore reduce opportunities for reproduction? Or on the contrary, could it potentially be of benefit?

"Behaviourally testing out these hypotheses would be an interesting and important future research direction, so biologists can learn more about the lifestyles of these multiple-tailed lizards."

Credit: 
Curtin University

Community science birding data does not yet capture global bird trends

image: Quechua woman looking for birds through a birdwatching telescope in montane rainforest, San Miguel Polylepis Forest, Cochabamba, Bolivia.

Image: 
Çağan Şekercioğlu.

Binoculars in hand, birders around the world contribute every day to a massive database of bird sightings. But while community science observations of birds can be useful data, they may not be enough to fill the data gaps in developing countries where professional bird surveys are insufficient or absent.

Ornithologists at the University of Utah say that community science bird data shows different trends in bird populations than professional bird surveys do, especially in developing countries. Researchers look for trends to know whether the number of individuals in a species is increasing, stable or decreasing--with the latter as a warning sign that the species is in trouble. Their results are published in Biological Conservation. More observations are needed, the researchers say, both by birders and professionals.

"We hope that this study will encourage birdwatchers to be more conscientious in their recording," says Monte Neate-Clegg, doctoral student and lead author of the study, "to think of these data not just as a personal record but as contributing to a wider cause."

Birding is a long tradition, but as paper guidebooks and life lists have given way to digital records and mobile apps, birders have become more connected, compiling their data into near real-time global snapshots of where and when birders are seeing species. For this study, the authors accessed data from eBird.

Developed by the Cornell Lab of Ornithology, eBird is the world's largest biodiversity-related community science project, the lab says, with more than 100 million bird sightings contributed each year. Birders submit sightings and checklists to eBird, which reaches out to birding experts when a sighting seems out of the ordinary.

U ornithologist Çağan Şekercioğlu is a world-class eBirder, currently ranked fifth in the world for spotting more than 8,000 bird species--more than 76% of all the species that eBirders have ever reported.

In 2018, former Şekercioğlu lab member JJ Horns found that eBird trend data matched the U.S. Breeding Bird Survey to within 0.4%. The results of the three-year project were encouraging--maybe eBird, they hoped, could serve to accurately fill in data for countries that didn't have the same level of governmental or professional surveys.

So, to compare eBird trends with worldwide trends, they turned to BirdLife International, an independent global partnership of conservation organizations.

"BirdLife amasses data and expert opinion across the world," Neate-Clegg says. Their methods for assessing bird populations and trends vary, though. "Some estimates are based on complete population counts or interpolated surveys," he says. "Most are indirectly assessed via changes in habitat or other impacts, such as hunting or wildlife trade."

Downloading and analyzing eBird data is not an Excel-scale task. The U's Center for High Performance Computing assisted in processing the data, which includes more than 800 million records. Using observations from the past 20 years, Neate-Clegg further filtered the data to focus on the best-quality observations and to match the list of species with those reported by BirdLife International. Calculating the trends in bird counts over time, Neate-Clegg rated them as increasing, decreasing or stable.
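
The paper's trend estimation is more sophisticated than this, but the core step can be sketched as fitting each species' yearly reporting rate and reading off the slope. A toy Python illustration; the species, numbers and thresholds below are all hypothetical:

```python
import numpy as np
from scipy.stats import linregress

def classify_trend(years, reporting_rate, alpha=0.05):
    """Label a species as increasing, decreasing or stable from the yearly
    fraction of checklists on which it was reported."""
    fit = linregress(years, reporting_rate)
    if fit.pvalue >= alpha:          # slope indistinguishable from zero
        return "stable"
    return "increasing" if fit.slope > 0 else "decreasing"

# Hypothetical species: reporting rate drifts down over 20 years of checklists.
rng = np.random.default_rng(1)
years = np.arange(2000, 2020)
rate = 0.30 - 0.004 * (years - 2000) + rng.normal(0.0, 0.01, years.size)
print(classify_trend(years, rate))   # expected: "decreasing"
```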

For the final list of 8,121 species, BirdLife listed 624 (7.7%) as increasing, 3,616 (44.5%) as stable and 3,881 (47.8%) as decreasing.  The eBird trends differed: 1,974 (24.3%) species were rated as increasing, 4,942 (60.9%) as stable, and 1,205 (14.8%) as decreasing. Only a little more than a third of the species displayed trends that agreed between the two data sources. Unfortunately, that's not much better than would have been expected by chance.

"This isn't particularly reassuring," Neate-Clegg says.

Part of the disagreement is due to the different experience of birdwatching in the tropics as compared to the U.S.

"Birdwatchers in the tropics tend to be more targeted in their approach," Neate-Clegg says, "meaningfully searching for particular species. This may mean that, although a species is declining, eBirders are still finding them reliably and so we do not detect that decline in the eBird data."

"In some cases," Şekercioğlu adds, "the rarer bird species can be seen more often by birders who may overlook the common species nearby that they have already seen before."

Some results of the study were encouraging, though.

As in the earlier study, Neate-Clegg's study shows that the rate of agreement with BirdLife trends for a species increases as the number of eBird checklists for that species increases. "This suggests that our accuracy will increase as more people gather data in the tropics," he says. The rate of agreement is also higher for species where population trends are directly estimated rather than indirectly inferred. "This suggests that we still need in situ population trend estimation by experts to validate eBird trends," he adds.

Neate-Clegg says that the results of this study are far from the end of the story. "It is really important that we carry out studies such as these to validate the use of eBird data," he says. "It would be great to get to the point where we can successfully leverage what will soon exceed 1 billion bird records to estimate population trends."

With a need for more quality data, Neate-Clegg encourages eBirders to include as much additional information in their checklists as possible. For example, he says, eBirders have the option of recording all species seen or counts of every species, as well as associated metadata such as the duration of the birdwatching period and the distance traveled.

"All of these data are important for maximizing the number of checklists we can use while controlling for variation in effort," he says.

Birding in many different places, and not just hotspots with high species numbers, is also important. "You should be birding everywhere you go," Şekercioğlu says, "which also has the personal satisfaction of being a pioneer as you are adding data from places with little or no bird data."

In other words, keep watching the skies. And the trees. And the wetlands. Birders' efforts do not go unnoticed. The researchers express their gratitude to the Cornell Lab of Ornithology, BirdLife International and the millions of birders who contribute to eBird and other community science efforts like iNaturalist. "The centuries-long symbiosis between birdwatchers and ornithologists is the best example of the collaboration of community scientists, professional scientists and conservationists," Şekercioğlu says.

Credit: 
University of Utah

Common inherited genetic variant identified as frequent cause of deafness in adults

A common inherited genetic variant is a frequent cause of deafness in adults, meaning that many thousands of people are potentially at risk, reveals research published online in the Journal of Medical Genetics.

Deafness in adults is known to be heritable but, unlike childhood deafness, its genetic causes largely remain a mystery, say the researchers, who suggest that their discovery makes this form of deafness an ideal candidate for gene therapy.

Deafness is one of the most prevalent disabilities worldwide and has a major impact on quality of life. So far, 118 genes have been associated with the heritable form. Variants in these genes explain a large proportion of congenital and childhood deafness, but not adult deafness.

This is despite the fact that between 30% and 70% of hearing loss in adults is thought to be inherited.

The researchers had already discovered the chromosomal region involved in hearing loss in one family, but not the gene involved.

To explore this further, they carried out gene sequencing in this family, in which hearing loss had occurred in one or both ears, as well as in 11 other families (200 people in all).

Each family member had a general ear, nose and throat check and their hearing was tested in both ears.

The genetic sequencing in the first family revealed a missing section of the RIPOR2 gene in 20 of the 23 family members with confirmed hearing loss.

But this genetic variant was also found in three other family members aged 23, 40, and 51, who didn't yet have any hearing loss.

This prompted the researchers to carry out gene sequencing, and the same medical and hearing examinations, in a further 11 families affected by hearing loss.

The identical genetic variant was found in 39 of 40 family members with confirmed hearing loss as well as in two people aged 49 and 50 who weren't affected by hearing loss.

What's more, the RIPOR2 genetic variant was found in a further 18 out of 22,952 randomly selected people for whom no information on hearing loss was available.

Four family members with hearing loss didn't have the RIPOR2 genetic variant. Their deafness might have been associated with heavy smoking or genetic abnormalities other than that in RIPOR2, suggest the researchers.

While the particular manifestations of this genetically induced hearing loss varied, as did the age at which hearing problems began, its prevalence suggests that the variant is common and highly penetrant, and that many thousands of people might be at risk of deafness as a result, explain the researchers.

Based on their findings the researchers estimate that in The Netherlands alone the RIPOR2 genetic variant is likely present "in more than 13,000 individuals who are therefore at risk of developing [hearing loss] or who have developed [hearing loss] already due to this variant."
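
The scale of that estimate is easy to reproduce: scaling the carrier frequency observed in the random sample up to a Dutch population of roughly 17 million (an assumption used here for illustration) gives

```latex
\frac{18}{22\,952} \approx 0.078\%,
\qquad
0.00078 \times 17\times 10^{6} \approx 13\,300 \ \text{carriers},
```

consistent with the authors' figure of more than 13,000 individuals.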

And they suggest that a further 30,000 people in northern Europe are likely to have this genetic variant and therefore be at risk of deafness.

"Because of the large number of subjects estimated to be at risk for [hearing loss] due to the c.1696_1707 del RIPOR2 variant, it is an attractive target for the development of a genetic therapy," they conclude.

Credit: 
BMJ Group

Solar assisted heating networks reduce environmental impact and energy consumption

image: Scientists from the SUSCAPE research group who participated in the study. From left to right: Dieter Boer, Laureano Jiménez, Manel Vallès and Mohamed Abokersh.

Image: 
URV

More than 40% of energy consumption in the European Union is by buildings, and 63% of this figure is due to residential dwellings. Furthermore, more than 75% of domestic energy consumption is related to heating and the production of domestic hot water. These data highlight the need to look for alternative sources of energy, in line with the EU's 2030 targets that seek a sustainable transition from fossil fuels to renewable energies and a reduction in greenhouse gas emissions. With this in mind, a group led by the URV has created a diagnostic tool that uses artificial intelligence to demonstrate the reduction in environmental impact and energy consumption offered by solar assisted heating networks. The results of the study have been published in the journal Applied Energy.

Solar assisted heating networks are installations that have centralized systems for producing hot water on a large scale. Thermal solar collectors capture solar energy to meet the demand for heating and domestic hot water in residential dwellings throughout a given neighbourhood. The principal difference between solar assisted heating networks and the thermal energy storage systems that can already be found in some homes is that the latter can only use the energy within a short period after it has been collected (in the following hours or, at most, the following seven days). In contrast, solar assisted heating networks store the excess thermal solar energy from the sunniest months in large, very well insulated underground tanks (up to 10,000 cubic metres) so that the heat can be used during the autumn and winter, when there is less solar energy.

After simulating plants in various climatic zones of Europe, the research group carried out a theoretical pilot study of these systems in residential areas of Madrid, applying the artificial-intelligence-based diagnostic tool to determine the optimum design for solar assisted heating networks with a working life of 40 years. The study looked at different sizes of urban districts, including communities of 10, 24, 50 and 100 buildings.

The tool found that the implementation of solar assisted heating networks could lead to a significantly reduced environmental impact compared with traditional forms of heat generation that use gas heaters and boilers. The reduction is even greater as the size of the district increases, reaching 89.3% for a community of 100 buildings. Moreover, the system has also demonstrated a clear economic benefit insofar as it reduces the total cost by up to 66%, depending on the size of the community. In terms of getting a return on the initial investment, the system is more effective the larger the community. For example, in a district of 100 buildings, the investment is recovered within 13.7 years. Regarding thermal performance, the diagnostic tool shows that 82% of the energy used came from the sun.

Despite this promising outlook, there are still technical, administrative and financial barriers that limit the roll-out of these systems. The principal installations are in northern countries that consume a large amount of energy for heating, such as Canada, Sweden, Denmark, Germany and Austria. In Catalonia, there are 60 of these district networks (such as the 22@ network in Barcelona), but they use biomass and other sources of energy rather than solar energy.

The study has been led by the researchers Dieter Boer, Manel Vallès and Mohamed Hany Abokersh (doctoral student on the Martí i Franquès COFUND programme) from the URV's Department of Mechanical Engineering, and Luisa F. Cabeza, from the University of Lleida. The study forms part of the MATCE project, coordinated by researchers from the URV, UdL and UB and funded by the Spanish Ministry of Economy and Competitiveness. It is also supported by the ICREA Academia programme, the European Union's Horizon 2020 Marie Skłodowska-Curie Actions, and the Spanish Network for the Storage of Thermal Energy.

Credit: 
Universitat Rovira i Virgili

New breakthrough in 'spintronics' could boost high speed data technology

image: New breakthrough in 'spintronics' could boost high speed data technology.

Image: 
University of Exeter

Scientists have made a pivotal breakthrough in the important, emerging field of spintronics - one that could lead to a new high-speed, energy-efficient data technology.

An international team of researchers, including the University of Exeter, has made a revolutionary discovery that has the potential to provide high-speed, low-power operation for some of the world's most widely used electronic devices.

While today's information technology relies on electronics that consume a huge amount of energy, the electrons within electric currents can also transfer a form of angular momentum called spin.

Spin-based electronics, or 'spintronics', which exploits spin current, has the potential to be not just significantly faster but also more energy efficient.

Scientists have recently discovered that some electrically insulating antiferromagnetic materials are exceptionally good conductors of pure spin current.

In the new research, scientists from Exeter, in collaboration with the Universities of Oxford and California, Berkeley, and the Advanced Light Source and Diamond Light Source facilities, have experimentally demonstrated that high-frequency alternating spin currents can be transmitted by, and sometimes amplified within, thin layers of antiferromagnetic NiO.

The results demonstrate that the spin current in thin NiO layers is mediated by evanescent spin waves, a mechanism akin to quantum mechanical tunnelling.
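
The release does not reproduce the quantitative model, but the signature of evanescent (tunnelling-like) transport is exponential attenuation with layer thickness: in the simplest picture, a spin current entering a NiO film of thickness d emerges as

```latex
j_s(d) \;\simeq\; j_s(0)\, e^{-d/\lambda},
```

where λ is the decay length of the evanescent spin wave, in direct analogy with a quantum particle tunnelling through a barrier; amplification, where observed, means the layer adds angular momentum to the transmitted current rather than absorbing it.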

The use of thin NiO layers for transfer and amplification of ac spin current at room temperature and gigahertz frequencies may lead to more efficient future wireless communication technology.

The research is published in Physical Review Letters.

Maciej Dabrowski, first author from the University of Exeter, said: "Confirmation of the evanescent spin wave mechanism shown by our experiment indicates that the transfer of angular momentum between the spins and the crystal lattice of an antiferromagnet can be realized in thin NiO films, and opens the door to the construction of nanoscale spin current amplifiers."

Credit: 
University of Exeter

A new way towards super-fast motion of vortices in superconductors discovered

image: Abrikosov lattice at moderate vortex velocities (left); ultra-fast moving Abrikosov-Josephson "vortex rivers" (right)

Image: 
© Oleksandr Dobrovolskiy, University of Vienna

Superconductivity is a physical phenomenon occurring at low temperatures in many materials which manifests itself through a vanishing electrical resistance and the expulsion of magnetic fields from the material's interior. Superconductors are already used for medical imaging, fast digital circuits or sensitive magnetometers and hold a great potential for further applications. However, the conductivity of the majority of technologically important superconductors is in fact not "super". In these so-called type II superconductors an external magnetic field penetrates the material in the form of quantized lines of magnetic flux. These flux lines are known as Abrikosov vortices, named after Alexei Abrikosov whose prediction brought him the Nobel prize in Physics in 2003. Already at moderately strong electric currents, the vortices begin to move and the superconductor can no longer carry the current without resistance.

In most superconductors, a low-resistive state is limited by vortex velocities of the order of 1 km/s setting the practical limits of use of superconductors in various applications. At the same time, such velocities are not high enough to address the rich physics generic to nonequilibrium collective systems. Now, an international team of scientists from the University of Vienna, the Goethe University Frankfurt, the Institute for Microstructures of RAS, the V. Karazin National University of Kharkiv, the B. Verkin Institute for Low Temperature Physics and Engineering of NAS has found a new superconducting system in which magnetic flux quanta can move at velocities of 10-15 km/s. The new superconductor exhibits a rare combination of properties - high structural uniformity, large critical current and fast relaxation of heated electrons. The combination of these properties ensures that the phenomenon of flux-flow instability - abrupt transition of a superconductor from the low-resistive to the normal conducting state - takes place at sufficiently large transport currents.
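
For context (standard flux-flow electrodynamics, not spelled out in the release): moving vortices generate an electric field proportional to their velocity, so the velocity can be read off from a transport measurement as

```latex
v \;=\; \frac{E}{B} \;=\; \frac{V}{B\,L},
```

where V is the voltage measured along a strip of length L between the voltage contacts and B is the applied perpendicular magnetic field.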

"In recent years, there have appeared experimental and theoretical works pointing to a remarkable issue; it has been argued that current-driven vortices can move even faster than the superconducting charge carriers.", says Oleksandr Dobrovolskiy, lead author of the recent publication in Nature Communications and head of the Superconductivity and Spintronics Laboratory at the University of Vienna. "However, these studies used locally non-uniform structures. Initially, we worked with high-quality clean films, but later it turned out that dirty superconductors are better candidate materials to support ultra-fast vortex dynamics. Though the intrinsic pinning in these is not necessarily as weak as in other amorphous superconductors, the fast relaxation of heated electrons becomes the dominating factor allowing for ultrafast vortex motion".

For their investigations the researchers fabricated a Nb-C superconductor by focused ion beam induced deposition in the group of Prof. Michael Huth at the Goethe University in Frankfurt am Main, Germany. Remarkably, in addition to ultra-fast vortex velocities in Nb-C, the direct-write nanofabrication technology allows one to fabricate complex-shaped nano-architectures and 3D fluxonic circuits with intricate interconnectivity which may find application in quantum information processing.

Challenges for investigations of ultra-fast vortex matter

"In order to reach the maximum current that a superconductor can carry, the so-called depairing current, one needs rather uniform samples over a macroscopic length scale which is partially owed to small defects in a material. Reaching the depairing current is not only a fundamental problem, but it is also important for applications; a micrometer-wide superconducting strip can be switched to a resistive state by a single near infrared or optical photon if the strip is biased by a current close to the depairing current value, as was predicted and confirmed in recent experiments. This approach opens perspectives for building large-area single photon detectors which could be used in e.g. confocal microscopy, free-space quantum cryptography, deep space optical communication," says Denis Vodolazov, Senior Researcher at the Institute for Microstructures of RAS, Russia.

The researchers successfully studied how fast vortices can move in dirty Nb-C superconducting strips having a critical current at zero magnetic field close to the depairing current. Their results indicate that the flux-flow instability starts near the edge where vortices enter the sample, because of the locally enhanced current density. This offers insights into the applicability of widely used flux-flow instability models and suggests Nb-C to be a good candidate material for fast single-photon detectors.

Credit: 
University of Vienna

Quantum physics: Realization of an anomalous Floquet topological system

An international team led by physicists from the Ludwig-Maximilians-Universität (LMU) in Munich has realized a novel, genuinely time-dependent topological system with ultracold atoms in periodically-driven optical honeycomb lattices.

Topological phases of matter have attracted a lot of interest due to their unique electronic properties that often result in exotic surface or boundary modes, whose existence is rooted in the non-trivial topological properties of the underlying system. In particular, the robustness of these properties makes them interesting for applications.

Periodic driving has emerged as an important technique to emulate the physics of undriven topological solid-state systems. The properties of driven topological systems, however, transcend those of their static counterparts. Using a Bose-Einstein condensate (BEC) of potassium-39 (39K) loaded into a periodically-modulated optical honeycomb lattice, the team was able to generate such a time-dependent topological system. For certain modulation parameters the system is in a so-called anomalous Floquet regime, where the Chern numbers of all bulk bands are equal to zero, while at the same time chiral edge modes exist in all quasienergy gaps. These non-trivial topological properties stem from the non-trivial winding of the quasienergy spectrum and cannot occur in undriven systems.

By combining energy gap and local Hall deflection measurements, the researchers experimentally determined the full set of topological invariants describing the time-dependent system for the first time, and revealed the existence of chiral edge modes even in a geometry with smooth boundaries. Due to its remarkable properties, especially in the presence of disorder, the anomalous Floquet phase promises the realization of interacting, periodically-driven systems that may support a many-body-localized bulk but thermalizing edge modes - an intriguing non-equilibrium many-body phase that may prove resilient to conventional Floquet heating.
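
For context (this formula is from Rudner et al., Phys. Rev. X 3, 031005 (2013), not from the release itself): the invariant counting chiral edge modes in a quasienergy gap of a driven system is a winding number of the time-evolution operator U(k, t) over momentum and time,

```latex
W[U] \;=\; \frac{1}{8\pi^{2}} \int \! dt\, dk_x\, dk_y \;
\mathrm{Tr}\!\left( U^{-1}\partial_t U
  \left[\, U^{-1}\partial_{k_x} U,\; U^{-1}\partial_{k_y} U \,\right] \right),
```

which can be nonzero even when the Chern numbers of all bulk bands vanish - the defining property of the anomalous Floquet regime.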

Credit: 
Ludwig-Maximilians-Universität München

New nano-engineering strategy shows potential for improved advanced energy storage

image: Novel material nanoarchitecture enables the development of new-generation high-energy batteries beyond Li-ion chemistry

Image: 
Supplied by University of Technology Sydney

The rapid development of renewable energy resources has triggered tremendous demands in large-scale, cost-efficient and high-energy-density stationary energy storage systems.

Lithium-ion batteries (LIBs) have many advantages, but there are much more abundant metallic elements available, such as sodium, potassium, zinc and aluminium.

These elements have similar chemistries to lithium and have recently been extensively investigated, including sodium-ion batteries (SIBs), potassium-ion batteries (PIBs), zinc-ion batteries (ZIBs), and aluminium-ion batteries (AIBs). Despite promising aspects relating to redox potential and energy density, the development of these beyond-LIB technologies has been impeded by the lack of suitable electrode materials.

New research led by Professor Guoxiu Wang from the University of Technology Sydney, and published in Nature Communications, describes a strategy using interface strain engineering in a 2D graphene nanomaterial to produce a new type of cathode. Strain engineering is the process of tuning a material's properties by altering its mechanical or structural attributes.

"Beyond-lithium-ion batteries are promising candidates for high-energy-density, low-cost and large-scale energy storage applications. However, the main challenge lies in the development of suitable electrode materials," " Professor Wang, Director of the UTS Centre for Clean Energy Technology, said.

"This research demonstrates a new type of zero-strain cathodes for reversible intercalation of beyond-Li+ ions (Na+, K+, Zn2+, Al3+) through interface strain engineering of a 2D multilayered VOPO4-graphene heterostructure.

When applied as cathodes in K+-ion batteries, we achieved a high specific capacity of 160 mA h g-1 and a large energy density of ~570 W h kg?1, presenting the best reported performance to date. Moreover, the as-prepared 2D multilayered heterostructure can also be extended as cathodes for high-performance Na+, Zn2+, and Al3+-ion batteries.
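
As a quick consistency check (this figure is not quoted in the release): gravimetric energy density is specific capacity multiplied by average cell voltage, and 160 mA h g-1 equals 160 A h kg-1, so the two quoted numbers imply an average discharge voltage of roughly

```latex
\bar{V} \;\approx\; \frac{570\ \mathrm{W\,h\,kg^{-1}}}{160\ \mathrm{A\,h\,kg^{-1}}}
\;\approx\; 3.6\ \mathrm{V}.
```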

The researchers say this work heralds a promising strategy to utilize strain engineering of 2D materials for advanced energy storage applications.

"The strategy of strain engineering could be extended to many other nanomaterials for rational design of electrode materials towards high energy storage applications beyond lithium-ion chemistry," Professor Wang said.

Credit: 
University of Technology Sydney

Towards lasers powerful enough to investigate a new kind of physics

image: The paper "Thin plate compression of a sub-petawatt Ti:Sa laser pulses" made the cover of the journal Applied Physics Letters, Volume 116, Issue 24 published June 15, 2020.

Image: 
AIP Publishing

In a paper that made the cover of the journal Applied Physics Letters, an international team of researchers has demonstrated an innovative technique for increasing the intensity of lasers. This approach, based on the compression of light pulses, would make it possible to reach a threshold intensity for a new type of physics that has never been explored before: quantum electrodynamics phenomena.

Researchers Jean-Claude Kieffer of the Institut national de la recherche scientifique (INRS), E. A. Khazanov of the Institute of Applied Physics of the Russian Academy of Sciences and, in France, Gérard Mourou, Professor Emeritus of the École Polytechnique, who was awarded the Nobel Prize in Physics in 2018, have chosen another route to achieve an intensity of around 10^23 W/cm^2. Rather than increasing the energy of the laser, they decrease the pulse duration to only a few femtoseconds. This would keep the system within a reasonable size and keep operating costs down.

To generate the shortest possible pulse, the researchers are exploiting the effects of non-linear optics. “A laser beam is sent through an extremely thin and perfectly homogeneous glass plate. The particular behaviour of the wave inside this solid medium broadens the spectrum and allows for a shorter pulse when it is recompressed at the exit of the plate,” explains Jean-Claude Kieffer, co-author of the study published online on 15 June 2020 in the journal Applied Physics Letters.

Working at the Advanced Laser Light Source (ALLS) facility at INRS, the researchers limited themselves to an energy of 3 joules for a 10-femtosecond pulse, or a peak power of 300 terawatts (3 x 10^14 W). They plan to repeat the experiment with an energy of 13 joules over 5 femtoseconds, a peak power approaching 3 petawatts (1 PW = 10^15 W). "We would be among the first in the world to achieve this level of power with a laser that has such short pulses," says Professor Kieffer.
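
These power figures follow directly from pulse energy divided by pulse duration:

```latex
P = \frac{E}{\tau}:
\qquad
\frac{3\ \mathrm{J}}{10\times 10^{-15}\ \mathrm{s}} = 3\times 10^{14}\ \mathrm{W} = 300\ \mathrm{TW},
\qquad
\frac{13\ \mathrm{J}}{5\times 10^{-15}\ \mathrm{s}} = 2.6\times 10^{15}\ \mathrm{W} \approx 2.6\ \mathrm{PW}.
```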

“If we achieve very short pulses, we enter relativistic problem classes. This is an extremely interesting direction that has the potential to take the scientific community to new horizons,” says Professor Kieffer. “It was a very nice piece of work solidifying the paramount potential of this technique,” concludes Gérard Mourou.

About the study:

“Thin plate compression of a sub-petawatt Ti:Sa laser pulses” by S. Yu. Mironov, S. Fourmaux, P. Lassonde, V. N. Ginzburg, S. Payeur, J.-C. Kieffer, E. A. Khazanov, and G. Mourou was published in June 2020 in the journal Applied Physics Letters. They have received financial support from the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation, the ministère de l’Économie, de la Science et de l’Innovation du Québec and the Ministry of Science and Higher Education of the Russian Federation.

DOI: 10.1063/5.0008544

Credit: 
Institut national de la recherche scientifique - INRS

Spawning fish and embryos most vulnerable to climate's warming waters

Spawning fish and embryos are far more vulnerable to Earth's warming waters than fish in other life stages, according to a new study, which uniquely relates fish physiological tolerance to temperature across the lifecycles of nearly 700 fish species. The results reveal a critical bottleneck in the lifecycle of fish and suggest that many ecologically and economically important fish species are threatened by climate warming more than studies based on adult fish thermal tolerance alone have shown.

Understanding an organism's physiological limits to temperature changes can provide a window into how some species will likely respond to Earth's changing climate. However, thermal tolerances often vary throughout an organism's life. Thus, the vulnerability of a species to climate change hinges on its most temperature-sensitive life stages. But due to a lack of experimental data, large-scale climate risk assessments often simplify or fail to account for the effects of these thermal bottlenecks across the entire lifecycle of many major animal groups, including fish. Whether our current climate mitigation targets are sufficient to sustain healthy fish populations remains poorly understood.

Flemming Dahlke and colleagues compiled published observational and experimental data to assess the life stage-specific thermal tolerances for 694 marine and freshwater fish from waters worldwide. Their large-scale meta-analysis found that spawning adults and embryos were much more susceptible to temperature changes than other life stages across fish species. Dahlke et al.'s findings suggest that, with unchecked warming, up to 60% of the fish species they evaluated will not be able to exist in the current geographic range of their most vulnerable life stages within a century - a sharp contrast to the 10% of species estimated if using only the thermal tolerance of adults.

"The minute thermal safety margins of spawning fish and embryos in the tropics suggest that there are limited fish species on Earth that can tolerate warmer or less oxygenated habitats. Intensified efforts to stabilize global warming are warranted more than ever," writes Jennifer Sunday in a related Perspective.

Credit: 
American Association for the Advancement of Science (AAAS)

Crystal structure discovered almost 200 years ago could hold key to solar cell revolution

image: Perovskite structure.

Image: 
John Labram, Oregon State University.

CORVALLIS, Ore. - Solar energy researchers at Oregon State University are shining their scientific spotlight on materials with a crystal structure discovered nearly two centuries ago.

Not all materials with the structure, known as perovskites, are semiconductors. But perovskites based on a metal and a halogen are, and they hold tremendous potential as photovoltaic cells that could be much less expensive to make than the silicon-based cells that have owned the market since its inception in the 1950s.

Enough potential, researchers say, to perhaps someday carve significantly into fossil fuels' share of the energy sector.

John Labram of the OSU College of Engineering is the corresponding author on two recent papers on perovskite stability, in Communications Physics and the Journal of Physical Chemistry Letters, and also contributed to a paper published today in Science.

The study in Science, led by researchers at the University of Oxford, revealed that a molecular additive - a salt based on the organic compound piperidine - greatly improves the longevity of perovskite solar cells.

The findings outlined in all three papers deepen the understanding of a promising semiconductor that stems from a long-ago discovery by a Russian mineralogist. In the Ural Mountains in 1839, Gustav Rose came upon an oxide of calcium and titanium with an intriguing crystal structure and named it in honor of Russian nobleman Lev Perovski.

Perovskite now refers to a range of materials that share the crystal lattice of the original. Interest in them began to accelerate in 2009 after a Japanese scientist, Tsutomu Miyasaka, discovered that some perovskites are effective absorbers of light.

"Because of their low cost, perovskite solar cells hold the potential to undercut fossil fuels and revolutionize the energy market," Labram said. "One poorly understood aspect of this new class of materials, however, is their stability under constant illumination, an issue which represents a barrier to commercialization."

Over the past two years, Labram's research group in the School of Electrical Engineering and Computer Science has built unique experimental apparatus to study changes in conductance of solar materials over time.

"Teaming up with the University of Oxford, we demonstrated that light-induced instability occurs over many hours, even in the absence of electrical contact," he said. "The findings help clarify similar results observed in solar cells and hold the key to improving the stability and commercial viability of perovskite solar cells."

Solar cell efficiency is defined by the percentage of power from sunlight hitting a cell that is converted to usable electrical power.
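
In symbols (this is just the definition restated):

```latex
\eta = \frac{P_{\text{out}}}{P_{\text{in}}},
```

so a 25%-efficient cell under the standard test irradiance of about 1000 W per square metre delivers roughly 250 W of electrical power per square metre of panel.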

Seven decades ago, Bell Labs developed the first practical solar cell. It had a modest, by today's standards, efficiency of 6% and was costly to make, but it found a niche in powering the satellites launched during the nascent days of the space race.

Over time, manufacturing costs decreased and efficiencies climbed, even though most cells have not changed very much - they still consist of two layers of nearly pure silicon doped with an additive. Absorbing light, they use the energy from it to create an electric current across the junction between them.

In 2012, one of Labram's collaborators, Henry Snaith of Oxford, made the breakthrough discovery that perovskites could be used as the main component in solar cells, rather than just as a sensitizer. This led to a storm of research activity and thousands of scientific papers being published each year on the subject. Eight years of research later, perovskite cells can now operate at 25% efficiency - making them, at least in the lab, on par with commercial silicon cells.

Perovskite cells can be inexpensively manufactured from commonly available industrial chemicals and metals and can be printed onto flexible films of plastic and mass produced. Silicon cells, conversely, are rigid and made from thinly sliced wafers of almost pure silicon in an expensive, high-temperature process.

One issue with perovskites is their tendency to be somewhat unstable when temperatures rise, and another is a vulnerability to moisture - a combination that can make the cells decompose. That's a problem for a product that needs to last two or three decades in open air.

"In general, to be able to sell a solar panel in the U.S. and Europe requires a 25-year warranty," Labram said. "What that means in reality is the solar cell should show no less than 80% of its original performance after 25 years. The current technology, silicon, is pretty good for that. But silicon has to be expensively produced in temperatures of greater than 2,000 degrees Celsius under controlled conditions, to form perfect, defect-free crystals, so they function properly."

Perovskites on the other hand are highly defect tolerant, Labram said.

"They can be dissolved in a solvent, then printed at close to room temperature," he said. "This means they could eventually be produced at a fraction of the cost of silicon, and hence undercut fossil fuels. However, for this to happen, they need to be certifiable with a 25-year warranty. This requires us to understand and improve the stability of these materials."

One path to the marketplace is a tandem cell made of both silicon and perovskites that could turn more of sunlight's spectrum into energy. Lab tests on tandem cells have produced efficiencies of 28%, and efficiencies in the mid-30s seem realistic, Labram said.

"Tandem cells might allow solar panel producers to offer a performance beyond anything silicon alone might achieve," he said. "The dual approach could help remove the barrier to perovskites entering the market, on the way to perovskites eventually acting as stand-alone cells."

Semi-transparent perovskite films may also one day be used on windows, or in greenhouses, converting part of the incoming sunlight to electricity while letting the rest pass through.

"When it comes to energy generation, cost is the most important factor," Labram said. "Silicon and perovskites now show roughly the same efficiency. In the long term, however, perovskite solar cells have the potential to be made at a fraction of the cost of silicon solar cells. And while history has shown us that political action on climate change is largely ineffective, if you can generate electricity from renewable sources at a lower cost than fossil fuels, all you have to do is to make the product, then the market will take care of the rest."

Credit: 
Oregon State University

Research reflects how AI sees through the looking glass

ITHACA, N.Y. - Things are different on the other side of the mirror.

Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.

Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards - findings with implications for training machine learning models and detecting faked images.

"The universe is not symmetrical. If you flip an image, there are differences," said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, "Visual Chirality," presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. "I'm intrigued by the discoveries you can make with new ways of gleaning information."

Zhiqiu Lin is the paper's first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.

Differentiating between original images and reflections is a surprisingly easy task for AI, Snavely said - a basic deep learning algorithm can quickly learn to classify whether an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train the algorithm. Many of the clues it picks up on are difficult for humans to notice.
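
The study's own models and training details are in the paper; the sketch below only shows the generic recipe such an experiment follows: randomly mirror images, label each by whether it was flipped, and train an off-the-shelf convolutional network. All particulars (the network choice, batch, learning rate) are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier: was this image horizontally flipped?
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: original / mirrored
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def make_batch(images: torch.Tensor):
    """Mirror a random half of the batch; the label records which were flipped."""
    flip = torch.rand(images.size(0)) < 0.5
    images = images.clone()
    images[flip] = torch.flip(images[flip], dims=[3])  # flip width axis (N, C, H, W)
    return images, flip.long()

# Hypothetical training step on a batch of 3x224x224 images.
images = torch.randn(16, 3, 224, 224)  # stand-in for real photos
inputs, labels = make_batch(images)
optimizer.zero_grad()
logits = model(inputs)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```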

For this study, the team developed technology to create a heat map that indicates the parts of the image that are of interest to the algorithm, to gain insight into how it makes these decisions.

They discovered, not surprisingly, that the most commonly used clue was text, which looks different backward in every written language. To learn more, they removed images with text from their data set, and found that the next set of characteristics the model focused on included wrist watches, shirt collars (buttons tend to be on the left side), faces and phones - which most people tend to carry in their right hands - as well as other factors revealing right-handedness.

The researchers were intrigued by the algorithm's tendency to focus on faces, which don't seem obviously asymmetrical. "In some ways, it left more questions than answers," Snavely said.

They then conducted another study focusing on faces and found that the heat map lit up on areas including hair part, eye gaze - most people, for reasons the researchers don't know, gaze to the left in portrait photos - and beards.

Snavely said he and his team members have no idea what information the algorithm is finding in beards, but they hypothesized that the way people comb or shave their faces could reveal handedness.

"It's a form of visual discovery," Snavely said. "If you can run machine learning at scale on millions and millions of images, maybe you can start to discover new facts about the world."

Each of these clues individually may be unreliable, but the algorithm can build greater confidence by combining multiple clues, the findings showed. The researchers also found that the algorithm uses low-level signals, stemming from the way cameras process images, to make its decisions.

Though more study is needed, the findings could impact the way machine learning models are trained. These models need vast numbers of images in order to learn how to classify and identify pictures, so computer scientists often use reflections of existing images to effectively double their datasets.

Examining how these reflected images differ from the originals could reveal information about possible biases in machine learning that might lead to inaccurate results, Snavely said.

"This leads to an open question for the computer vision community, which is, when is it OK to do this flipping to augment your dataset, and when is it not OK?" he said. "I'm hoping this will get people to think more about these questions and start to develop tools to understand how it's biasing the algorithm."

Understanding how reflection changes an image could also help use AI to identify images that have been faked or doctored - an issue of growing concern on the internet.

"This is perhaps a new tool or insight that can be used in the universe of image forensics, if you want to tell if something is real or not," Snavely said.

Credit: 
Cornell University