Tech

Molecular additives enhance mechanical properties of organic solar cell material

image: Showcasing research from Professor Ganesh Balasubramanian's laboratory (Group for Interfacial and Nanoengineering), Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, USA.

Despite recent advances in the power conversion efficiency of organic solar cells, insight into the processing-driven thermo-mechanical stability of bulk heterojunction active layers is still needed. Correlating the elasto-morphology with device performance requires a deeper understanding of the molecular-level physics, as presented in this research on the interplay between processing, thermodynamics and mechanical stability of typical photoactive layers in organic solar cells.

Image: 
Professor Ganesh Balasubramanian's laboratory (Group for Interfacial and Nanoengineering), Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, USA.

Organic solar cells are ideal for use in flexible electronics because of the inherently malleable nature of semiconducting polymers. Recent research on the interplay between processing, thermodynamics and mechanical stability of typical photoactive layers in organic solar cells is providing a deeper understanding of these high-potential materials.

Ganesh Balasubramanian, P.C. Rossin assistant professor of Mechanical Engineering & Mechanics at Lehigh University, and his graduate student Joydeep Munshi recently set out to understand how stable these materials are when deformed, and whether their promising properties can be realized under harsh loading conditions, when the solar cells may be subject to stretching and compression. Through computational experiments using the leadership-class computing resources of Frontera, the team demonstrated that adding small molecules to the semiconducting polymer blend enhances the performance and stability of the material used in organic solar cells. They predict this also holds for organic solar cell materials more generally.

The study is described in an article, "Elasto-morphology of P3HT:PCBM bulk heterojunction organic solar cells," featured on the back cover of Soft Matter. Additional authors include professors TeYu Chien of the University of Wyoming and Wei Chen of Northwestern University.

"Based on previous literature, we anticipated that variations in the materials processing parameters would influence the structure as well as the thermal and mechanical properties of these solar cells," says Balasubramanian. "However, the finding that presence of small molecular additives can augment the mechanical properties is new knowledge gained from this work."

The team demonstrated that, in addition to the solar-to-electrical power conversion efficiency, the mechanical stability and flexibility of typical organic solar cells are significantly impacted by the presence of molecular additives.

"This could prove crucial towards the commercialization of organic solar cells," says Balasubramanian.

The results were achieved by performing large-scale molecular simulations on the supercomputer Frontera, located at the Texas Advanced Computing Center (TACC) at the University of Texas at Austin and currently the world's fastest academic supercomputer. The simulations predicted the deformation mechanisms of the polymer blend under strain and examined the structure and morphology of the material upon loading. Balasubramanian's team has been among the first to utilize Frontera.
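
The article itself does not include code, but the kind of post-processing such deformation simulations call for, extracting an elastic modulus from a simulated stress-strain curve, can be sketched in a few lines (a minimal illustration with invented placeholder numbers, not the study's data or methods):

```python
import numpy as np

# Hypothetical stress-strain curve of the kind a tensile-deformation
# simulation produces (placeholder values, not the study's data).
rng = np.random.default_rng(1)
strain = np.linspace(0.0, 0.10, 51)                          # engineering strain
stress = 1.8e9 * strain + rng.normal(0.0, 5e6, strain.size)  # stress in Pa

# Fit only the small-strain (linear elastic) region, where
# sigma = E * epsilon, to estimate the elastic modulus E.
elastic = strain <= 0.02
E = np.polyfit(strain[elastic], stress[elastic], 1)[0]
print(f"estimated elastic modulus: {E / 1e9:.2f} GPa")
```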

While similar approaches have been considered for interrogating the properties of organic photovoltaic materials, the material structure had not previously been correlated with the elastic properties, according to Balasubramanian. By adding molecular additives to the polymeric blends, advanced solar power materials and devices can be fabricated that sustain extreme operational stress-strain conditions while delivering superior performance.

He adds: "The research has the potential to provide new directions for scientific practices in this field of materials and energy research."

Credit: 
Lehigh University

Examining Congress members' popularity on Instagram

(Carlisle, Pa.) - With a "virtual campaign season" underway due to the COVID-19 pandemic, social media platforms will be a particularly important way for candidates to build a following and connect with voters. New research on the popularity of Congress members' Instagram posts reveals some surprising factors at play that could elevate their influence on the platform and make for more effective campaigns.

The study, published in the journal Online Information Review, examined every Instagram post shared by members of the 115th Congress for the first six months they were seated - nearly 18,000 posts in all. "Very few in Congress are really using the platform to its full potential, but the pandemic and the lack of in-person events have made this a first-of-its-kind campaign season, and savvy politicians will be able to leverage their Instagram feeds to their advantage," said the study's author David O'Connell, associate professor of political science at Dickinson College.

O'Connell explained that the type of content a Congress member posts affects user responses and engagement. "People would much rather see personal content like family photos, selfies and pets than walls of text about policy positions," he said. O'Connell's analysis found that, holding all else constant, a personal photo receives almost 17% more likes and 31% more comments.
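
"Holding all else constant" points to a multiple regression on post-level data. A minimal sketch of how such an estimate is typically produced (synthetic data and invented variable names, not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the post-level dataset (invented columns and
# data for illustration; the study's actual variables differ).
rng = np.random.default_rng(0)
n = 1000
posts = pd.DataFrame({
    "personal_photo": rng.integers(0, 2, n),   # 1 if selfie/family/pet photo
    "followers": rng.lognormal(10.0, 1.0, n),  # account follower count
})
# Simulate likes with a built-in ~17% boost for personal photos.
posts["likes"] = np.exp(
    0.5 * np.log(posts["followers"])
    + 0.16 * posts["personal_photo"]
    + rng.normal(0.0, 0.3, n)
)

# Regressing log(likes) on post type while controlling for followers is
# how a "holding all else constant" estimate is typically produced.
model = smf.ols("np.log(likes) ~ personal_photo + np.log(followers)",
                data=posts).fit()
boost = np.exp(model.params["personal_photo"]) - 1
print(f"personal photo effect: {boost:.1%} more likes")
```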

O'Connell also found that the more well-known a Congress member is in real life, the more influence he or she likely wields on Instagram. "Members who are outspoken ideologues, who have served longer, run for president or served in congressional leadership posts all had significantly more followers, even controlling for the type of content they shared." For example, Sens. Ted Cruz and John Cornyn are both Texas Republicans, yet Cruz, who ran for president and has spent considerable time on the national stage, has roughly 546,000 Instagram followers while Cornyn has just over 11,500.

While other studies have examined politicians' use of Facebook and Twitter, relatively little has been written about their use of Instagram. "This research sheds light on some broader issues including how social media reinforces existing power biases, and on the increasing trends toward personalization in American politics," O'Connell said. He is particularly interested in how politicians will use the platform to motivate young people to vote, as youth turnout is traditionally unreliable, and people aged 18-29 are the largest demographic using Instagram.

Credit: 
Dickinson College

New research reveals mysterious blue whirl flame structure

A recently discovered soot-free flame called a blue whirl - which consumes all fuel it encounters - actually consists of three different flame structures that swirl together into one otherworldly blue ring, according to the first study to identify how these unique flames form. By revealing the blue whirl's structure, the findings may inform potential applications of the flame for efficient, low-emission combustion. "Only if we understand its structure can we tame it, scale it, and create it at will," Joseph Chung and colleagues write in the study.

Scientists first observed the formation of a blue whirl while experimenting with fire whirls produced by burning liquid hydrocarbon fuels on a water base. But while further investigations have since produced temperature maps of blue whirls and informed how they can be stabilized, their flame structure and dynamics - especially how they form from a fire whirl - have remained mysterious.

To simulate how blue whirls emerge, Chung et al. coupled 3D, time-dependent equations that describe the motion of viscous fluids to a model for fuel conversion and chemical energy release. The researchers started by simulating experimental conditions, then tweaked physical parameters such as fuel and air size and velocity in their calculations until a blue whirl materialized.

The researchers found that the ethereal flame is composed of three different flames: a diffusion flame, in which the fuel and oxidizer are separated before burning, and premixed rich (with excess fuel) and lean (with excess air) flames. Chung et al. conclude that their research provides a tool to further explore this phenomenon, including whether the flame can be made directly and scaled up safely to larger sizes.
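
The three flame types differ in how fuel and oxidizer meet before burning. A toy classification makes the distinction concrete (invented thresholds for illustration, not the study's model):

```python
def classify_flame(premixedness: float, phi: float) -> str:
    """Toy labeling of the three flame structures found in a blue whirl.

    premixedness: 0 (fuel and oxidizer meet only at the reaction zone)
                  to 1 (fully mixed before burning).
    phi: equivalence ratio (fuel-to-oxidizer ratio relative to
         stoichiometric). Thresholds are illustrative, not from the study.
    """
    if premixedness < 0.5:
        return "diffusion flame"        # fuel and oxidizer separated
    return "premixed rich" if phi > 1.0 else "premixed lean"

for p, phi in [(0.1, 1.0), (0.9, 1.6), (0.9, 0.6)]:
    print(f"premixedness={p}, phi={phi} -> {classify_flame(p, phi)}")
```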

Credit: 
American Association for the Advancement of Science (AAAS)

Unlocking how cellular proteins control cancer spread

A new insight into cell signals that control cancer growth and migration could help in the search for effective anti-cancer drugs. A McGill-led study reveals key biochemical processes that advance our understanding of colorectal cancer, the third most common cancer among Canadians.

Using the CMCF beamline at the Canadian Light Source (CLS) at the University of Saskatchewan, scientists from McGill University and Osaka University in Japan were able to unlock the behavior of an enzyme involved in the spread of cancer cells. In a study published in the Journal of Biological Chemistry, the team found that there is a delicate interaction between the enzyme, PRL3, and another protein that moves magnesium in and out of cells. This interaction is crucial to colorectal cancer growth.

"These enzymes were first seen in liver cells that were activated to start growing, so somehow they act as a growth signal," said McGill biochemistry professor and corresponding author Dr. Kalle Gehring.

It was generally believed that PRL3 proteins acted as enzymes to control cancer cells. Therefore, it came as a surprise when Gehring and his team found that a mutation that leads to a loss of the enzyme activity still maintained the same influence over cancer growth and migration. "What our new paper showed is that a second activity of PRL3, control of a magnesium transporter, is the signal that instructs the cancer to travel to other parts of the body. It was very exciting that the mutant protein that has no catalytic activity, but still binds very tightly to magnesium transport proteins, turned out to be as oncogenic as the wild-type protein," said Gehring.

The team's findings call into question long-standing hypotheses about the role PRL3 plays in the spread of cancer and indicate that the binding mechanism is somehow key.

Understanding that it is binding to the magnesium transporters, not the enzyme's catalytic activity, that influences cancer growth and migration signaling is key information for identifying novel compounds to prevent cancer spread. Current drug screening against PRL3 has focused on identifying compounds that block phosphatase activity. By testing the wrong function, the screens may have missed other compounds of therapeutic interest. Shifting the focus to the enzyme's ability to bind to magnesium transporters is one way to help companies identify better therapeutics for cancer through drug screening methods.

Future work will include more detailed studies on the role of the magnesium transporter and its interactions with PRL3.

Credit: 
McGill University

New super-resolution method reveals fine details without constantly needing to zoom in

image: Left and right show false-colored electron microscopic images of the same region of the specimen. But the image on the right has been super-resolved using Dr. Yu Ding's new image processing method.

Image: 
Texas A&M University College of Engineering

Since the early 1930s, electron microscopy has provided unprecedented access to the alien world of the extraordinarily small, revealing intricate details that are otherwise impossible to discern with conventional light microscopy. But to achieve high resolution over a large specimen area, the energy of the electron beams needs to be cranked up, which is costly and detrimental to the specimen under observation.

Texas A&M University researchers may have found a new method to improve the quality of low-resolution electron micrographs without compromising the integrity of specimen samples. By training deep neural networks, a type of artificial intelligence algorithm, on pairs of images from the same sample but at different physical resolutions, they have found that details in lower-resolution images can be enhanced further.

"Normally, a high-energy electron beam is passed through the sample at locations where greater image resolution is desired. But with our image processing techniques, we can super resolve an entire image by using just a few smaller-sized, high-resolution images," said Dr. Yu Ding, Mike and Sugar Barnes Professor in the Wm Michael Barnes '64 Department of Industrial and Systems Engineering. "This method is less destructive since most parts of the specimen sample needn't be scanned with high-energy electron beams."

The researchers published their image processing technique in the Institute of Electrical and Electronics Engineers (IEEE) journal Transactions on Image Processing in June.

Unlike in light microscopy where photons, or tiny packets of light, are used to illuminate an object, in electron microscopy, a beam of electrons is utilized. The electrons reflected from or passing through the object are then collected to form an image, called the electron micrograph.

Thus, the energy of the electron beams plays a crucial role in determining the resolution of images. That is, the higher the electron energy, the better the resolution. However, the risk of damaging the specimen also increases, similar to how ultraviolet rays, which are the more energetic relatives of visible light, can damage sensitive materials like the skin.

"There's always that dilemma for scientists," said Ding. "To maintain the specimen's integrity, high-energy electron beams are used sparingly. But if one does not use energetic beams, high-resolution or the ability to see at finer scales becomes limited."

But there are ways to get high resolution or super resolution using low-resolution images. One method involves using multiple low-resolution images of essentially the same region. Another method learns common patterns between small image patches and uses unrelated high-resolution images to enhance existing low-resolution images.

These methods almost exclusively use natural light images instead of electron micrographs. Hence, they run into problems for super-resolving electron micrographs since the underlying physics for light and electron microscopy is different, Ding explained.

The researchers turned to pairs of low- and high-resolution electron microscopic images for a given sample. Although these types of pairs are not very common in public image databases, they are relatively common in materials science research and medical imaging.

For their experiments, Ding and his team first took a low-resolution image of a specimen and then subjected roughly 25% of the area under observation to high-energy electron beams to get a high-resolution image. The researchers noted that the information in the high-resolution and low-resolution image pair are very tightly correlated. They said that this property can be leveraged even though the available dataset might be small.

For their analyses, Ding and his team used 22 pairs of images of materials infused with nanoparticles. They then divided the high-resolution image and its equivalent area in the low-resolution image into a three-by-three grid of subimages. Next, each subimage pair was used to "self-train" deep neural networks. After training, their algorithm could recognize image features, such as edges.
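
A paired-patch training loop of this general shape can be sketched as follows (a minimal PyTorch illustration; the architecture, patch size and data below are placeholders, not the published model):

```python
import torch
import torch.nn as nn

# Minimal sketch of paired-patch super-resolution training; the
# architecture, patch size and data are placeholders, not the
# network published by the team.
class SRNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual learning: predict the missing detail and add it
        # back onto the low-resolution input.
        return x + self.body(x)

model = SRNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# `pairs` would hold registered (low-res patch, high-res patch) tensors
# cut from the subimage grid described above (random placeholders here).
pairs = [(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))]

for low, high in pairs:
    opt.zero_grad()
    loss = loss_fn(model(low), high)
    loss.backward()
    opt.step()
```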

When they tested the trained deep neural network on a new location on the low-resolution image for which there was no high-resolution counterpart, they found that their algorithm could enhance features that were hard to discern by up to 50%.

Although their image processing technique shows a lot of promise, Ding noted that it still requires a lot of computational power. In the near future, his team will direct their efforts toward developing algorithms that are much faster and can run on less powerful computing hardware.

"Our paired image processing technique reveals details in low-resolution images that were not discernable before," said Ding. "We are all familiar with the magic wand feature on our smartphones. It makes the image clearer. What we aim to do in the long run is to provide the research community a similar convenient tool for enhancing electron micrographs."

Credit: 
Texas A&M University

Perovskite and organic solar cells rocketed into space

image: This photograph shows the launch of the sounding rocket with the OHSCIS experiment aboard in the course of the MAPHEUS 8 campaign at the European Space and Sounding Rocket Range in Kiruna, Sweden in June 2019.

Image: 
DLR MORABA

For the first time, researchers in Germany sent perovskite and organic solar cells on a rocket into space. The solar cells withstood the extreme conditions in space, producing power from direct sunlight and reflective light from the Earth's surface. The work, published August 12 in the journal Joule, sets the foundation for future near-Earth application as well as potential deep space missions.

One of the goals for space missions is to minimize the weight of the equipment the rocket carries. While the inorganic silicon solar panels currently used in space missions and satellites have high efficiencies, they are also very heavy and rigid. The emerging technology of hybrid perovskite and organic solar cells, which are incredibly light and flexible, makes them ideal candidates for future applications.

"What counts in this business is not the efficiency, but the produced electric power per weight, which is called specific power," says senior author Peter Müller-Buschbaum of Technical University of Munich in Germany. "The new type of solar cells reached values between 7 and 14 milliwatts per square centimeter during the rocket flight."

"Transferred onto ultra-thin foils, one kilogram (2.2 pounds) of our solar cells would cover more than 200 square meters (2,153 square feet) and would produce enough electric power for up to 300 standard 100-W light bulbs," says first author Lennart Reb, of Technical University of Munich in Germany. "This is ten times more than what the current technology is offering."

In June 2019, the rocket launched from northern Sweden, entered space and reached an altitude of 240 kilometers (149 miles). The perovskite and organic solar cells, located in the payload, successfully withstood the extreme conditions of the rocket ride--from the rumbling forces and heat at liftoff to the strong UV light and ultra-high vacuum in space. "The rocket was a big step," says Reb. "Going to the rocket was really like going into a different world."

In addition to operating efficiently in space, the perovskite and organic solar cells can also function in low-light conditions. When there is no direct light on a traditional solar cell, the cell typically stops working and its power output drops to zero. However, the team measured an energy output from perovskite and organic solar cells that weren't exposed to direct sunlight, fueled by the weak diffuse light reflected from the Earth's surface.

"This is a good hint and confirms that the technology can go into what is called deep space missions, where you would send them far out in space, far away from the sun, where standard solar cells wouldn't work in," says Müller-Buschbaum. "There's really exciting future for this sort of technology, bringing these solar cells into more space missions in the future."

But before launching more new solar cells into space, Müller-Buschbaum notes one limitation of the study: the short time the rocket spent in space, just seven minutes in total. The next step is to employ long-term applications in space, such as satellites, to understand the cells' lifetime, long-term stability, and full potential.

"It's the very first time these perovskite and organic solar cells ever were in space, and that's really a milestone," says Müller-Buschbaum. "The really cool thing is that this is now paving the way for bringing these types of solar cells to more applications in space. On the long run, this might also help to bring these technologies for broader use in our terrestrial environment."

Credit: 
Cell Press

Quantum materials quest could benefit from graphene that buckles

image: Simulated mountain and valley landscape created by buckling in graphene. The bright linked dots are electrons that have slowed down and interact strongly.

Image: 
Yuhang Jiang

Graphene, an extremely thin two-dimensional layer of the graphite used in pencils, buckles when cooled while attached to a flat surface, resulting in beautiful pucker patterns that could benefit the search for novel quantum materials and superconductors, according to Rutgers-led research in the journal Nature.

Quantum materials host strongly interacting electrons with special properties, such as entangled trajectories, that could provide building blocks for super-fast quantum computers. They also can become superconductors that could slash energy consumption by making power transmission and electronic devices more efficient.

"The buckling we discovered in graphene mimics the effect of colossally large magnetic fields that are unattainable with today's magnet technologies, leading to dramatic changes in the material's electronic properties," said lead author Eva Y. Andrei, Board of Governors professor in the Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick. "Buckling of stiff thin films like graphene laminated on flexible materials is gaining ground as a platform for stretchable electronics with many important applications, including eye-like digital cameras, energy harvesting, skin sensors, health monitoring devices like tiny robots and intelligent surgical gloves. Our discovery opens the way to the development of devices for controlling nano-robots that may one day play a role in biological diagnostics and tissue repair."

The scientists studied buckled graphene crystals whose properties change radically when they're cooled, creating essentially new materials with electrons that slow down, become aware of each other and interact strongly, enabling the emergence of fascinating phenomena such as superconductivity and magnetism, according to Andrei.

Using high-tech imaging and computer simulations, the scientists showed that graphene placed on a flat surface made of niobium diselenide buckles when cooled to 4 degrees above absolute zero. To the electrons in graphene, the mountain and valley landscape created by the buckling appears as gigantic magnetic fields. These pseudo-magnetic fields are an electronic illusion, but they act as real magnetic fields, according to Andrei.

"Our research demonstrates that buckling in 2D materials can dramatically alter their electronic properties," she said.

The next steps include developing ways to engineer buckled 2D materials with novel electronic and mechanical properties that could be beneficial in nano-robotics and quantum computing, according to Andrei.

Credit: 
Rutgers University

Researchers identify human influence as key agent of ocean warming patterns in the future

The oceans play an important role in regulating our climate and its change by absorbing heat and carbon.

The implications of the results of a new study led by Oxford University scientists, published today in Nature, are significant: regional sea level, which affects coastal populations around the world, depends on patterns of ocean warming, and the study shows how these patterns are likely to change.

The results imply widespread ocean warming and sea level rise compared to the past, including increased warming near the eastern edges of ocean basins, leading to more sea level rise along the western coastlines of continents in the North Atlantic and Pacific Oceans.

Co-author Laure Zanna, Visiting Professor in Climate Physics at Oxford University and Professor in the Center of Atmosphere Ocean Science at NYU Courant, said: 'In the future, the imprint of rising atmospheric temperatures on ocean warming will likely dominate that of changes in ocean circulation. Initially, we might think that as the climate warms more, changes in ocean currents and their impact on ocean warming patterns will become larger. However, we show that this is not the case in several regions of the ocean.'

A new method, developed by scientists at Oxford University, uses climate models to suggest that ocean warming patterns will increasingly be influenced by the simple uptake of atmospheric warming - making them easier to predict. This is in contrast to the present and the past, when circulation changes were key factors in shaping ocean warming patterns.

Changes in ocean warming due to the simple uptake of atmospheric warming are easier to model and so the scientists hope that where previous models have struggled, they might become more accurate for future projections.

Lead author, Dr Ben Bronselaer, who began conducting this research while a PhD student at Oxford University, said: 'I think it is an encouraging possibility that climate models, which struggle to simulate past ocean warming, might be better at predicting future warming patterns. Better prediction of warming patterns implies better prediction of regional sea level rise, which will help to mitigate climate impacts such as flooding on individual communities. Of course, we do need to understand predictions of ocean circulation better to solidify this result.

'During our research, we found a surprising relationship between ocean heat and carbon storage which appears to be unique. While there is a connection between these two quantities that is not yet fully understood, we think we have made significant progress towards uncovering it.'

The Nature study shows that the global ocean heat and carbon uptake go hand-in-hand, and the uptake rates are set by the present state of the ocean. This relationship is at the core of the method developed in this study. As humans change the ocean state by adding more heat and carbon, the ability of the ocean to take up both heat and carbon will be altered. A possible implication could be that the later emissions are reduced, the slower the reductions in atmospheric surface temperature are likely to be, due to the coupling between heat and carbon uptake by the ocean.

These results highlight a deep and fundamental connection between ocean heat and carbon uptake, which has implications for atmospheric heat and carbon. While ocean carbon and heat are separate systems, this study shows that they are deeply interconnected via the capacity of the ocean to absorb these quantities. These results help explain why atmospheric warming depends linearly on cumulative carbon emissions.
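
That linear dependence is often summarized as the transient climate response to cumulative emissions (TCRE). A minimal sketch, using a commonly cited ballpark coefficient rather than a value from this study:

```python
# Illustrative use of the near-linear warming/cumulative-emissions
# relationship. The TCRE value below is a commonly cited ballpark
# (roughly 1.0-2.3 degC per 1000 GtC), not a number from this study.
TCRE_DEGC_PER_1000_GTC = 1.65

def warming_from_emissions(cumulative_gtc: float) -> float:
    """Approximate global warming for cumulative carbon emissions (GtC)."""
    return TCRE_DEGC_PER_1000_GTC * cumulative_gtc / 1000.0

print(f"{warming_from_emissions(600):.2f} degC for 600 GtC emitted")
```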

Prof Laure Zanna said: 'We find that the ocean's capacity to absorb heat and carbon are coupled, and constrained by the ocean state. This implies that the present ocean state will regulate surface warming whether CO2 emissions continue to rise or decline.

'The rates of ocean warming over the past 60 years have been significantly altered by changes in ocean circulation, particularly in the North Atlantic and parts of the Pacific Ocean, where we can identify cooling over some decades. However, in the future, changes in ocean currents appear to play a smaller role in patterns of ocean warming, and the oceans will transport the excess anthropogenic heat in a rather passive manner in these regions.'

The modelling in this study relied on a set of creative simulations performed by colleagues at the Geophysical Fluid Dynamics Laboratory (GFDL), and on other published work. Using these simulations, the scientists were able to develop hypotheses on how the patterns of heat and carbon are related and how they differ.

Building on this research, the scientists will now attempt to understand how the storage of heat and carbon in the ocean will affect the decline of atmospheric temperature and CO2 levels if carbon emissions start going down.

They will also use the component of ocean warming that is driven by circulation changes to better understand ocean circulation changes, which are difficult to measure directly, and their impact on regional sea level in the Tropics.

Credit: 
University of Oxford

Scientists unveil nature of active site in nitrogen-doped carbon for electrocatalytic CO2 reduction

Converting CO2 into value-added chemicals using renewable energy is a strategy for reducing carbon emissions and alleviating pressure on depleting fossil resources. Electrocatalytic CO2 reduction (CO2R) to CO is an important route for CO2 utilization, since CO is widely used in chemical synthesis.

Metal-free nitrogen-doped (N-doped) carbon materials possess high conductivity and a tunable electronic structure, and they exhibit excellent performance for electrocatalytic CO2R to CO. However, the active site for the reaction remains under debate due to the complexity of the N species as well as the lack of methods for the controllable doping of N.

Recently, a research group led by Prof. DENG Dehui from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences (CAS) unveiled the nature of the active site in N-doped carbon materials for the electrocatalytic CO2R to CO. The study was published in Cell Reports Physical Science on August 12.

Via an innovative approach of silicon dioxide nanosphere template-assisted pyrolysis of phthalocyanine, the scientists prepared a series of N-doped carbon foam catalysts with precisely controlled N-dopant types and contents.

Electrochemical tests showed that at -0.5 V versus reversible hydrogen electrode (RHE), the catalyst dominated by graphitic N (GN) dopants exhibited the highest CO Faradaic efficiency of 95%, compared with those dominated by pyridinic N (PN) or pyrrolic N (ProN) dopants.
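
Faradaic efficiency compares the charge that went into making the product against the total charge passed through the cell. For the two-electron CO2-to-CO reduction, the bookkeeping looks like this (a minimal sketch with invented input values, not the study's measurements):

```python
# Faradaic efficiency for CO2 -> CO, a 2-electron reduction.
# Input values are invented placeholders, not the study's data.
F = 96485.0        # Faraday constant, C per mol of electrons
N_ELECTRONS = 2    # electrons transferred per CO molecule produced

mol_co = 1.2e-6          # moles of CO detected, e.g. by gas chromatography
total_charge_c = 0.2437  # total charge passed through the cell, coulombs

fe_co = N_ELECTRONS * F * mol_co / total_charge_c
print(f"CO Faradaic efficiency: {fe_co:.1%}")  # ~95.0%
```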

Theoretical calculations demonstrated that the limiting potentials for CO2R to CO over the carbon atoms adjacent to the GN were significantly lower than those for the hydrogen evolution reaction (HER), which favored the electrocatalytic CO2R.

The PN site tended to be blocked by the strongly adsorbed H*, and the carbon atoms neighbouring the PN possessed lower limiting potentials for HER, while the ProN disfavored both CO2R and HER.

"Compared with the PN and ProN, the GN significantly improved the activity of the adjacent carbon atoms for the electrocatalytic CO2R to CO," said Prof. DENG.

Credit: 
Chinese Academy of Sciences Headquarters

Countries transitioning to zero carbon should look at more than technology cost

A 'one-size-fits-all' approach to producing cleaner energy based on cost alone could create social inequalities, finds a new study.

The Paris Agreement aims to keep global temperature rise this century well below 2°C above pre-industrial levels and to pursue efforts to limit it to 1.5°C. One major route to achieving this is for countries to reach 'net zero' carbon emissions by 2050 - either producing no emissions or removing the same amount that they produce.

Reaching this goal will require a mixture of replacing fossil fuels in energy production with sustainable alternatives like solar and wind power, and deploying technologies that remove carbon dioxide either from power plant emissions or directly from the atmosphere.

Many current models for determining the best mix of strategies for a country to adopt focus on the projected cost of the technologies. However, this 'one-size-fits-all' approach ignores the current state of a country's energy economy and industrial strengths, which could lead to social inequalities, argue Imperial College London researchers in a new analysis published today in Joule.

The team took the example of three countries - Spain, Poland and the UK - and ran an analysis that included the economic and social implications of different energy mixes, as well as the technology costs.

Poland, for example, relies on coal for 80 percent of its energy generation and has no in-country expertise in solar power. So even if deploying solar is the cheapest option technologically, the impact on the workforce would be large, as it would be difficult to reskill such a large portion of the workforce. This could cause economic upheaval and social inequality.

For Poland, therefore, the researchers argue that a better option may be to continue relying mainly on coal while deploying carbon capture and storage (CCS) technologies that remove the carbon dioxide from power plant emissions.

Spain, in contrast, already has a solid solar and wind power industry, meaning the analysis based on cost alone is similar to the analysis that includes socio-economic impacts, as it would be far less disruptive to deploy more solar and wind power.

The UK has a growing offshore wind industry but would face problems with intermittent power from a completely renewables-based energy mix, so the deployment of CCS power stations remains a priority.

First author of the study Dr Piera Patrizio, from the Centre for Environmental Policy at Imperial, said: "The transition to net zero needs to be technically feasible and financially viable, but should also be socially equitable, avoiding any potentially regressive outcomes, perceived or otherwise, that might be caused by changes in the labour market."

Lead author Professor Niall Mac Dowell, from the Centre for Environmental Policy at Imperial, said: "If countries fail to account for the national situation - what resources are available both technically and in the labour market - they risk energy transitions that result in deeper social divisions, which, in the long term, will affect growth, productivity, wellbeing, and social cohesion."

The team are currently extending their analysis across the European Union, and to the United States of America, considering policies such as a recent push to adopt hydrogen fuel technologies and how that might affect different countries. They will also consider the impact of the COVID-19 pandemic and how decisions about the transition to net zero could affect recovering economies.

Credit: 
Imperial College London

Japanese biologists discover new species of sea worm in the Southern Ocean

image: The new species of worm is now one of six described species of Flabelligena, which are known mainly from the North Atlantic Ocean

Image: 
Naoto Jimi, National Institute of Polar Research, Japan

Earlier this year, a team from Japan's National Institute of Polar Research (NIPR), National Museum of Nature and Science (NMNS), and Kochi University (KU) set out to collect specimens of sea worm near the South Orkney Islands, a remote region of the Southern Ocean about 400 miles northeast of the tip of the Antarctic Peninsula.

The researchers collected material from the sea floor at depths of 2,036 to 2,479 meters, a segment that falls within what oceanographers call the bathyal zone (1,000 to 4,000 meters deep). Amid the collected seafloor material, they observed a new member of Acrocirridae, a family that now includes 43 species of worm. The team, led by Naoto Jimi, a postdoctoral fellow at NIPR, published their findings in Biodiversity Data Journal on June 8, 2020.

The team's process involved extracting rock and silt sediments using a sieve with seawater and fixing them in an ethanol solution. They observed a variety of captured specimens under a microscope and photographed them with a high-resolution digital SLR. The newly identified worm, a species belonging to the genus Flabelligena, is named "Flabelligena hakuhoae".

Flabelligena falls within the broad class of worms called polychaetes, which contains over 10,000 species. "Polychaetes are one of the most diverse groups in marine benthic animals and well-studied in the Southern Ocean," Jimi said. "Many researchers have investigated the Southern Ocean, but our knowledge of small deep-sea invertebrates is still quite limited."

Jimi and his NIPR, NMNS, and KU co-authors described Flabelligena hakuhoae as having a "minute" body, body papillae (small rounded protuberances), three pairs of branchiae (gill-like organs), no eyes, and a pair of frontal palps. Its body is about 1.8 centimeters long and 1 millimeter wide, and rounded at both ends.

The new species of worm is now one of six described species of Flabelligena, which are known mainly from the North Atlantic Ocean. Three species of Flabelligena are from the Southwest Atlantic, Mediterranean, and South Indian Oceans. They all live in sandy mud areas, mainly at bathyal to abyssal depths (1,000 meters and deeper). According to Jimi, the new species is the first record of Flabelligena in the Southern Ocean. The team's discovery will also contribute significantly to the understanding of the biodiversity of the Antarctic region.

Jimi and his team hope to discover more species and continue to learn more about the vast Southern Ocean. "This is just the first step in understanding the biodiversity of the Antarctic Ocean," Jimi said. "The next step is to understand the polychaete diversity around Syowa Station."

Syowa Station is Japan's Antarctic research station located in East Ongul Island, Antarctica.

Credit: 
Research Organization of Information and Systems

SMART researchers find new way to make bacteria more sensitive to antibiotics

image: SMART AMR study finds that exposing bacteria to hydrogen sulfide can increase antimicrobial sensitivity in bacteria that do not produce H2S

Image: 
Jessie Choo Hui Ling (SMART AMR)

Singapore, 12 August 2020 - Researchers from Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, have discovered a new way to reverse antibiotic resistance in some bacteria using hydrogen sulphide (H2S).

Growing antimicrobial resistance is a major threat to the world, with a projected 10 million deaths each year by 2050 if no action is taken. The World Health Organisation also warns that by 2030, drug-resistant diseases could force up to 24 million people into extreme poverty and cause catastrophic damage to the world economy.

In most bacteria studied, the production of endogenous H2S has been shown to cause antibiotic tolerance, so H2S has been speculated to be a universal defence mechanism in bacteria against antibiotics.

A team at SMART's Antimicrobial Resistance (AMR) Interdisciplinary Research Group (IRG) tested that theory by adding H2S-releasing compounds to Acinetobacter baumannii - a pathogenic bacterium that does not produce H2S on its own. They found that rather than causing antibiotic tolerance, exogenous H2S sensitised the A. baumannii to multiple antibiotic classes. It was even able to reverse acquired resistance in A. baumannii to gentamicin, a very common antibiotic used to treat several types of infections.

The results of their study, supported by the Singapore National Medical Research Council's Young Investigator Grant, are discussed in a paper titled "Hydrogen sulfide sensitizes Acinetobacter baumannii to killing by antibiotics" published in the prestigious journal Frontiers in Microbiology.

"Until now, hydrogen sulfide was regarded as a universal bacterial defense against antibiotics," says Dr Wilfried Moreira, the corresponding author of the paper and Principal Investigator at SMART's AMR IRG. "This is a very exciting discovery because we are the first to show that H2S can, in fact, improve sensitivity to antibiotics and even reverse antibiotic resistance in bacteria that do not naturally produce the agent."

While the study focused on the effects of exogenous H2S on A. baumannii, the scientists believe the results will be replicated in other bacteria that do not naturally produce H2S.

"Acinetobacter baumannii is a critically important antibiotic-resistant pathogen that poses a huge threat to human health," says Say Yong Ng, lead author of the paper and Laboratory Technologist at SMART AMR. "Our research has found a way to make the deadly bacteria and others like it more sensitive to antibiotics, and can provide a breakthrough in treating many drug-resistant infections."

The team plans to conduct further studies to validate these exciting findings in pre-clinical models of infection, as well as extending them to other bacteria that do not produce H2S.

Credit: 
Singapore-MIT Alliance for Research and Technology (SMART)

Security gap allows eavesdropping on mobile phone calls

image: Using this App, tech-savvy volunteers can help search for radio cells that still contain the security vulnerability.

Image: 
RUB, Marquard

Calls via the LTE mobile network, also known as 4G, are encrypted and should therefore be tap-proof. However, researchers from the Horst Görtz Institute for IT Security (HGI) at Ruhr-Universität Bochum have shown that this is not always the case. They were able to decrypt the contents of telephone calls if they were in the same radio cell as their target, whose mobile phone they then called immediately following the call they wanted to intercept. They exploited a flaw that some manufacturers had made in implementing the base stations.

The results were published by the HGI team of David Rupprecht, Dr. Katharina Kohls, and Professor Thorsten Holz from the Chair of Systems Security, together with Professor Christina Pöpper from New York University Abu Dhabi, at the 29th Usenix Security Symposium, which takes place as an online conference from 12 to 14 August 2020. The relevant providers and manufacturers were contacted prior to publication; by now, the vulnerability should be fixed.

Reusing keys results in security gap

The vulnerability affects Voice over LTE, the telephone standard used for almost all mobile phone calls if they are not made via special messenger services. When two people call each other, a key is generated to encrypt the conversation. "The problem was that the same key was also reused for other calls," says David Rupprecht. Accordingly, if an attacker called one of the two people shortly after their conversation and recorded the encrypted traffic from the same cell, he or she would get the same key that secured the previous conversation.

"The attacker has to engage the victim in a conversation," explains David Rupprecht. "The longer the attacker talked to the victim, the more content of the previous conversation he or she was able to decrypt." For example, if attacker and victim spoke for five minutes, the attacker could later decode five minutes of the previous conversation.

Identifying relevant base stations via app

In order to determine how widespread the security gap was, the IT experts tested a number of randomly selected radio cells across Germany. The security gap affected 80 per cent of the analysed radio cells. By now, the manufacturers and mobile phone providers have updated the software of the base stations to fix the problem. David Rupprecht gives the all-clear: "We then tested several random radio cells all over Germany and haven't detected any problems since then," he says. Still, it can't be ruled out that there are radio cells somewhere in the world where the vulnerability occurs.

In order to track them down, the Bochum-based group has developed an app for Android devices. Tech-savvy volunteers can use it to help search worldwide for radio cells that still contain the security gap and report them to the HGI team. The researchers forward the information to the worldwide association of all mobile network operators, GSMA, which ensures that the base stations are updated. Additional information is available on the website http://www.revolte-attack.net.

"Voice over LTE has been in use for six years," says David Rupprecht. "We're unable to verify whether attackers have exploited the security gap in the past." He is campaigning for the new mobile phone standard to be modified so that the same problem can't occur again when 5G base stations are set up.

Credit: 
Ruhr-University Bochum

A cancer mystery of more than 40 years ago is solved thanks to epigenetics

image: In the image on the left, cells from the colon with normal levels of TYW2 and the "Y" piece. In the image on the right, when the loss of TYW2 and the nucleotide "Y" occurs, colon cancer cells begin to migrate uncontrollably.

Image: 
©Josep Carreras Leukaemia Research Institute

Before the first oncogene mutations were discovered in human cancer in the early 1980s, the 1970s provided the first data suggesting alterations in the genetic material of tumors. In this context, the prestigious journal Nature published in 1975 the existence of a specific alteration in the transformed cell: an RNA responsible for carrying an amino acid to build proteins (a transfer RNA) was missing a piece, the enigmatic nucleotide "Y".

After that outstanding observation, forty-five years of near-total silence followed regarding the causes and consequences of missing that base in that RNA.

In an article just published in Proceedings of the National Academy of Sciences (PNAS), the group of Dr. Manel Esteller, Director of the Josep Carreras Leukaemia Research Institute, ICREA Research Professor and Professor of Genetics at the University of Barcelona, solves this mystery by showing that in cancer cells the protein that generates the nucleotide "Y" is epigenetically inactivated, giving rise to small but highly aggressive tumors.

"Since the original discovery in 1975, there has been much biochemical work to characterize the enzymes involved in the different steps that lead to the desired nucleotide 'Y', a hypermodified guanine, but without connecting this characterization with its defect in tumor biology. We have built the bridge between these two worlds by demonstrating that the epigenetic silencing of the TYW2 gene is the cause of the loss of the elusive nucleotide 'Y'," explains Dr. Esteller, who adds: "Epigenetic blockade of the TYW2 gene occurs mainly in colon, stomach and uterine cancer. And it has undesirable consequences for healthy cells: the postman (RNA) that sends the signal to produce the bricks of our body (proteins) begins to accumulate errors, and the cell takes on a different appearance, far from the normal epithelium; we call it mesenchymal, and it is associated with the appearance of metastasis. In this regard, when we study patients with early-stage colon cancer, the epigenetic defect of TYW2 and the loss of the nucleotide 'Y' are associated with tumors that, although small in size, already lead to lower survival. We would now like to explore how to restore the activity of the TYW2 gene and recover the longed-for 'Y' piece in cancer, in order to close the cycle of this story that began so brilliantly in 1975, at the dawn of modern molecular biology," concludes the researcher.

Credit: 
Josep Carreras Leukaemia Research Institute

Efficient valves for electron spins

image: Both quantum dots (dashed ellipses) on the nanowire are tuned by nanomagnets (brown bars) such that they only allow electrons with an "up" spin to pass. If the orientation of one of the magnets is changed, the current flow is suppressed.

Image: 
University of Basel, Department of Physics

Researchers at the University of Basel in collaboration with colleagues from Pisa have developed a new concept that uses the electron spin to switch an electrical current. In addition to fundamental research, such spin valves are also the key elements in spintronics - a type of electronics that exploits the spin instead of the charge of electrons. The results were published in the scientific journal Communications Physics.

At some point, spintronics might become a buzzword that is as much a part of our vocabulary as electronics. The idea behind it is to use the angular momentum (spin) of an electron instead of the electrical charge. Researchers around the world have been pursuing this goal for many years. Spintronics promises numerous applications in information storage and processing, and could improve the energy efficiency of electronic devices. An important prerequisite is the efficient control and detection of electron spins.

A team of physicists around Professor Christian Schönenberger and Dr. Andreas Baumgartner from the Swiss Nanoscience Institute and the Department of Physics at the University of Basel has now developed a new technique for spintronics in semiconductor devices. Researchers from the Instituto Nanoscienze-CNR in Pisa were also involved.

Nanomagnets are the key

For this purpose, the scientists form two small semiconductor islands (quantum dots), one behind the other, on a nanowire and generate magnetic fields in the quantum dots using nanomagnets. Using an external field, they can control these magnets individually and thus determine whether a quantum dot allows electrons to pass with spin directed upward ("up") or downward ("down"). When the two quantum dots are connected in series, a current only flows if both are set to "up" or both to "down". Ideally, no current flows if they are oriented in opposite directions.
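
In the ideal limit, the two dots act as spin filters in series: current flows only when both filters agree. A toy model of that logic (an illustrative idealization, not the group's analysis):

```python
# Toy model of two ideal spin filters in series (an idealization of
# the double-quantum-dot spin valve, not the group's actual model).
def transmission(dot1: str, dot2: str) -> float:
    """Return 1.0 if both dots pass the same spin, else 0.0 (ideal case)."""
    return 1.0 if dot1 == dot2 else 0.0

for f1 in ("up", "down"):
    for f2 in ("up", "down"):
        state = "flows" if transmission(f1, f2) else "blocked"
        print(f"dot1={f1:<4} dot2={f2:<4} current {state}")
```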

Arunav Bordoloi, first author of the publication and PhD student in the Schönenberger team, found that this method produced a spin polarization close to the theoretical maximum. "With this technique, we can choose whether a single electron in a given spin state is allowed to enter or leave a quantum system - with an efficiency far greater than in conventional spin valves," he says.

"In recent years, researchers around the world found it a hard nut to crack to fabricate spin valves useful for nano- and quantum-electronic devices," says Dr. Andreas Baumgartner, who is supervising the project. "We have now succeeded in producing one."

Exploring new phenomena

The physicists were also able to show that the magnetic fields are localized to specific locations on the nanowire. "This technique should therefore allow us to study the spin properties of new phenomena typically too sensitive to magnetic fields, such as novel states at the ends of special superconductors," comments Dr. Baumgartner.

This new approach to spintronics should now enable direct measurements of spin correlations and spin entanglement and shed new light on many old and new physical phenomena. In the future, the concept could even prove useful in the quest to use electron spins as the smallest information unit (quantum bit) in a quantum computer.

Credit: 
University of Basel