Tech

Seeing both sides of light collection

image: The efficiency of solar cells can be increased by combining two light-absorbing materials. Such tandem devices derive energy from both direct sunlight and light reflected from other surfaces.

Image: 
© 2021 KAUST; Anastasia Serin

Two types of materials are better than one when it comes to solar cells, as revealed by an international team that has tested a new combination of materials and architecture to improve solar-cell efficiency.

Silicon has long dominated as the premier material for solar cells, helped by its abundance as a raw material. However, perovskites, a class of hybrid organic-inorganic materials, are a viable alternative due to their low cost, suitability for large-scale manufacturing and potentially higher performance. While still too unstable for full commercialization, they might reach the market by 2022.

KAUST's Michele De Bastiani and Stefaan De Wolf, working with colleagues in Canada, Germany and Italy, now show that a combination of the two is the best approach. By optimizing the material composition and the architecture of a "tandem" device, the team has achieved efficiencies beyond commercial silicon solar panels.

Sunlight, of course, comes directly from the sun, but illumination also comes from light reflecting off other surfaces, known as albedo. A device architecture that collects light from the back as well as from the front can utilize this source. "Our bifacial tandems exploit both direct sunlight and the albedo to generate electricity in a more efficient way than their conventional counterparts," explains De Bastiani.

He and the team started with a simple silicon device structure that was textured top and bottom to enhance light collection. They then used a solution-processing method to deposit a thin perovskite layer on top. A transparent back electrode allowed light in while also allowing a current to flow out. The researchers tested five perovskite materials, each with a different chemical composition, to increase the absorption of incoming light. In this way, they were able to identify the perovskite that best matched the electronic properties of the silicon.

"A restriction of the tandem configuration is the limited current through the lower of the two subcells," says De Bastiani. "We designed our tandem with a unique feature: the perovskite subcell generates more current than the silicon counterpart by stealing light that would be otherwise absorbed by the bottom subcell."

The team tested their bifacial devices and compared their performance to that of similar monofacial devices in various outdoor settings with a range of albedos, such as bright sandstone or concrete. They found that, in all conditions, the bifacial configuration outperformed the monofacial one.

"We are now investigating the stability of the perovskite while also scaling up the technology to the module level," says De Bastiani. "For this, we are looking for industrial partners and sponsors."

Credit: 
King Abdullah University of Science & Technology (KAUST)

New analysis of 2D perovskites could shape the future of solar cells and LEDs

An innovative analysis of two-dimensional (2D) materials from engineers at the University of Surrey could boost the development of next-generation solar cells and LEDs.

Three-dimensional perovskites have proved themselves remarkably successful materials for LED devices and solar panels in the past decade. One key issue with these materials, however, is their stability, with device performance degrading more quickly than that of other state-of-the-art materials. The engineering community believes the 2D variant of perovskites could provide answers to these performance issues.

In a study published in The Journal of Physical Chemistry Letters, researchers from Surrey's Advanced Technology Institute (ATI) detail how to improve the physical properties of a class of 2D perovskites known as Ruddlesden-Popper phases.

The study analysed the effects of combining lead with tin inside the Ruddlesden-Popper structure to reduce the toxic lead quantity. This also allows for the tuning of key properties such as the wavelengths of light that the material can absorb or emit at the device level - improving the performance of photovoltaics and light-emitting diodes.

Cameron Underwood, lead author of the research and postdoctoral researcher at the ATI, said:

"There is rightly much excitement about the potential of 2D perovskites, as they could inspire a sustainability revolution in many industries. We believe our analysis of strengthening the performance of perovskite can play a role in improving the stability of low-cost solar energy and LEDs."

Professor Ravi Silva, corresponding author of the research and Director of the ATI, said:

"As we wean ourselves away from fossil energy sources to more sustainable alternatives, we are starting to see innovative and ground-breaking uses of materials such as perovskites. The Advanced Technology Institute is dedicated to being a strong voice in shaping a greener and more sustainable future in electronics - and our new analysis is part of this continuing discussion."

Credit: 
University of Surrey

FAST captures distant fast radio bursts from the youth of universe

image: Three new FRBs

Image: 
NIU Chenhui et al.

Fast radio bursts (FRBs) are mysterious radio flashes lasting only a few thousandths of a second. Confirmed to be of cosmological origin in 2016, FRBs have the potential to provide insights into a wide range of astrophysical problems.

Dr. NIU Chenhui from the team led by Dr. LI Di and Dr. ZHU Weiwei at the National Astronomical Observatories of the Chinese Academy of Sciences discovered three new FRBs with high dispersion measures in the massive data set collected by the Five-hundred-meter Aperture Spherical radio Telescope (FAST).

Their findings were published in The Astrophysical Journal Letters on March 3.

The discovery indicated that these three FRBs happened billions of years ago when the Universe was still in its youth.

The newly discovered FRBs, along with the first FRB detected by FAST last year, suggest that there could be as many as 120,000 detectable FRBs arriving on Earth every day.

"We are catching up in terms of data processing and expecting more discoveries from FAST, the most sensitive radio telescope in the world," said Dr. NIU Chenhui, the first author of the paper.

Comparing FRB samples from the Parkes telescope and the Australian Square Kilometre Array Pathfinder (ASKAP), researchers from Australia revealed the relation between the fluence (integrated flux) and the dispersion measure of FRBs. The new discoveries help extend this relation into previously less explored parameter space.

"Combined with simulations, FAST could detect FRBs with redshift larger than 3, i.e., more than 10 billion years old," said Dr. NIU.

The distribution of the dispersion measures of these FRBs is sensitive to the shape of the intrinsic brightness distribution of these cosmic events. "More discoveries from FAST will thus help reveal the yet unknown origin of FRBs," said Dr. LI Di, the corresponding author of the study and the chief scientist of FAST.

Credit: 
Chinese Academy of Sciences Headquarters

Rare earths outside China: FAU researchers identify new deposits

Rare earth elements are the gold of the 21st century: rare and highly prized all over the world. Most known and economically viable sources of rare earths are located in China, where more than 80 percent of them are refined. This has resulted in a near monopoly, with China dominating international trade, particularly in heavy rare earths. Geologists and materials scientists at FAU have now discovered a new way of identifying previously unknown deposits of rare earth metals worldwide. They have published the findings of their study in the journal Geology.

Rare earth metals are irreplaceable for manufacturing advanced high-tech industrial products due to their luminescent and catalytic properties. They are used to make permanent magnets that are a vital part of modern electronics in televisions, smartphones, notebooks, jet engines, and rocket guidance systems as well as in solar and wind power plants, electric motors and in medical engineering.

Contrary to what their name might suggest, sources of rare earth elements or rare earth metals are distributed fairly equally all over the world. However, there are only very few sources that are economically viable. FAU geologists Dr. Sönke Brandt, Prof. Dr. Reiner Klemd, Marc Fassbender and Prof. Dr. Karsten Haase have now discovered an indicator that can identify such deposits.

They inspected rock samples from the Vergenoeg fluorite mine in South Africa and discovered that fayalite crystals in the sediment of granite-like magma can contain large amounts of heavy rare earth elements. The mineral, which is reddish brown to black in colour, is mainly mined for use as a gemstone and is also used for sand blasting. Fayalite can be found worldwide in igneous rock and abyssal rocks.

'Since heavy rare earth elements are becoming increasingly scarce on the world market, the discovery of fayalite as a new potential source for locating new deposits of rare earths is extremely important for the economy,' says Prof. Dr. Reiner Klemd from Geozentrum Nordbayern at FAU, who led the study.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Female snowy plovers are not bad mothers

image: In snowy plovers, males mostly care for the young alone until they are fledged.

Image: 
Clemens Küpper

In snowy plovers, females have overcome traditional family stereotypes. They often abandon the family to begin a clutch with a new partner whereas the males continue to care for their young until they are independent. An international team led by scientists from the Max Planck Institute for Ornithology in Seewiesen, Germany, has now investigated the decision-making process that determines the duration of parental care by females. They found that offspring desertion often occurs either under poor environmental conditions, when chicks die despite being cared for by both parents, or when chicks have a good chance of survival even without the female.

It seems cruel and somewhat senseless when bird parents desert their vulnerable young to mate again. However, theoretical and experimental studies show that this is often beneficial for the parents, even if they have already invested energy and time in the brood. After all, re-mating after deserting an unsuccessful brood can improve their overall reproductive success.

A team of scientists led by researchers of the Max Planck Institute for Ornithology in Seewiesen, Germany, has now investigated in more detail which factors cause female parents to desert their broods. They studied snowy plovers (Charadrius nivosus) that often breed on salt flats or at brackish inland lakes. This barren environment puts great pressure on parents, as temporary salt ponds often dry up and many chicks die of thirst or starvation. "Males have survival advantages at all life stages", says Clemens Küpper, research group leader at Seewiesen. "Although equal numbers of females and males hatch from eggs, adult populations therefore show a surplus of males."

Reversed parental roles

As females are less abundant in the population, they have an advantage in finding mates. Therefore, the parental roles are reversed in this species: males care for the young until they are fledged, while females often leave the family and pair up with another partner for a new breeding attempt.

The research team analyzed the parental behaviour and survival of more than 260 broods over a seven-year period. Of these, more than 70 percent were deserted by the females. One parent is often sufficient for parental care, as the chicks are precocial and find food on their own. However, the study shows that chicks from abandoned broods actually survive less often than chicks from broods where the females stay longer.

So, are snowy plover mothers making the wrong decisions here? The scientists found that broods were more likely to be deserted at the beginning of the breeding season. This makes sense because there are more opportunities for the female to mate again. The current number of chicks is also important: females are especially likely to abandon broods on days when a chick dies. "This suggests that females who initially decided to care suddenly abandon their broods when their chicks start to die", says Krisztina Kupán, first author of the study. These females then try to rescue their reproductive success by starting over with a new male.

Females are sensitive to environmental conditions

The researchers concluded that there are two main reasons for females to desert their brood and remate. First, the chicks have a good chance of survival with their fathers alone, so the female can leave and continue to reproduce. Second, the chicks begin to die despite both parents caring for the brood. This means that conditions for chick rearing are so poor that additional parental care does little for chick survival. Instead, the female tries to increase her reproductive success by starting to breed elsewhere. "Females are flexible and make sensible decisions," says Krisztina Kupán. "They are sensitive to environmental conditions and stay with the chicks if they can contribute substantially to their survival."

Credit: 
Max-Planck-Gesellschaft

UH geologists discover powerful 'river of rocks' below Caribbean

Geologists have long thought tectonic plates move because they are pulled by the weight of their sinking portions and that an underlying, hot, softer layer called the asthenosphere serves as a passive lubricant. But a team of geologists at the University of Houston has found that this layer is actually flowing vigorously, moving fast enough to drive plate motions.

In their study published in Nature Communications, researchers from the UH College of Natural Sciences and Mathematics looked at minute changes in satellite-detected gravitational pull within the Caribbean and at mantle tomography images - similar to a CAT scan - of the asthenosphere under the Caribbean. They found a hot "river of rocks" being squeezed from the Pacific Ocean through a gateway under Central America and reaching the middle of the Caribbean Sea. This underground "river of rocks" started flowing eight million years ago, when the Central American gateway opened, uplifting the overlying seafloor by several hundred feet and tilting it to the northeast toward the Lesser Antilles.

"Without the extra support generated by this flow in the asthenosphere, portions of Central America would still be below sea level. The Atlantic and the Pacific Oceans would be connected without a need for the Panama Canal," said study co-author Lorenzo Colli, assistant professor of geophysics, geodynamics and mantle structure in the Department of Earth and Atmospheric Sciences.

The findings have implications for understanding the shape of the Earth's surface, its evolution over time through the appearance and disappearance of shallow seas and low-lying land bridges, and the forces that move tectonic plates and cause earthquakes.

Another fascinating discovery, according to the researchers, is that the asthenosphere is moving six inches per year, which is three times faster than an average plate. It can move independently from the overlying plates and drag them in a different direction.

"This challenges the top-down notion that subduction is always the driver," explained Jonny Wu, study co-author and assistant professor of structural geology, tectonics and mantle structure. "Think of the plates moving like an air hockey puck and being lubricated from below. Instead, what we found is the air hockey table is imposing its own currents on the puck that's moving around, creating a bottom-up movement that has not been well recognized, and that's being quantified here."

Credit: 
University of Houston

Breakthrough lays groundwork for future quantum networks

image: Army-funded research sends entangled qubit states through a communication cable linking one quantum network node to a second node. This research could help lay new groundwork for future quantum communication networks and large-scale quantum computers.

Image: 
Nancy Wong, University of Chicago

RESEARCH TRIANGLE PARK, N.C. -- New Army-funded research could help lay the groundwork for future quantum communication networks and large-scale quantum computers.

Researchers sent entangled qubit states through a communication cable linking one quantum network node to a second node.

Scientists at the Pritzker School of Molecular Engineering at the University of Chicago, funded and managed by the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory's Center for Distributed Quantum Information, also amplified an entangled state via the same cable, first by using the cable to entangle two qubits in each of two nodes, then entangling these qubits further with other qubits in the nodes. The peer-reviewed journal Nature published the research in its Feb. 24, 2021, issue.

"The entanglement distribution results the team achieved brought together years of their research related to approaches for transferring quantum states and related to advanced fabrication procedures to realize the experiments," said Dr. Sara Gamble, program manager at the Army Research Office, an element of the Army's corporate research laboratory, and co-manager of the CDQI, which funded the work. "This is an exciting achievement and one that paves the way for increasingly complex experiments with additional quantum nodes that we'll need for the large-scale quantum networks and computers of ultimate interest to the Army."

Qubits, or quantum bits, are the basic units of quantum information. By exploiting their quantum properties, like superposition, and their ability to be entangled together, scientists and engineers are creating next-generation quantum computers that will be able to solve previously unsolvable problems.

The research team uses superconducting qubits, tiny cryogenic circuits that can be manipulated electrically.

"Developing methods that allow us to transfer entangled states will be essential to scaling quantum computing," said Prof. Andrew Cleland, the John A. MacLean senior professor of Molecular Engineering Innovation and Enterprise at University of Chicago, who led the research.

Entanglement is a correlation that can be created between quantum entities such as qubits. When two qubits are entangled and a measurement is made on one, it will affect the outcome of a measurement made on the other, even if that second qubit is physically far away.

To send the entangled states through the communication cable--a one-meter-long superconducting cable--the researchers created an experimental set-up with three superconducting qubits in each of two nodes. They connected one qubit in each node to the cable and then sent quantum states, in the form of microwave photons, through the cable with minimal loss of information. The fragile nature of quantum states makes this process quite challenging.

The researchers developed a system in which the whole transfer process--node to cable to node--takes only a few tens of nanoseconds (a nanosecond is one billionth of a second). That allowed them to send entangled quantum states with very little information loss.

The system also allowed them to amplify the entanglement of qubits. The researchers used one qubit in each node and entangled them together by essentially sending a half-photon through the cable. They then extended this entanglement to the other qubits in each node. When they were finished, all six qubits in two nodes were entangled in a single globally entangled state.
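The toy simulation below illustrates that idea in the abstract: create a Bell pair between one qubit in each node (the "cable" step), then spread the entanglement to the remaining qubits in each node, ending in a single six-qubit globally entangled (GHZ-like) state. It is a plain state-vector sketch with hypothetical qubit labels, not the microwave-photon protocol used in the actual experiment.

```python
# Toy state-vector sketch: qubits 0-2 sit in node A, qubits 3-5 in node B.
# Entangle one qubit in each node (a Bell pair "through the cable"), then
# extend that entanglement locally, ending in (|000000> + |111111>)/sqrt(2).
import numpy as np

n = 6
state = np.zeros(2**n); state[0] = 1.0          # start in |000000>

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

def apply_1q(gate, target, psi):
    """Apply a single-qubit gate to `target` (qubit 0 is the leftmost bit)."""
    ops = [gate if q == target else I2 for q in range(n)]
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ psi

def apply_cnot(control, target, psi):
    """Apply a CNOT by flipping the target bit wherever the control bit is 1."""
    out = np.zeros_like(psi)
    for idx, amp in enumerate(psi):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out[int("".join(map(str, bits)), 2)] += amp
    return out

state = apply_1q(H, 0, state)                   # superposition on node A's link qubit
state = apply_cnot(0, 3, state)                 # Bell pair across the two nodes
for c, t in [(0, 1), (0, 2), (3, 4), (3, 5)]:
    state = apply_cnot(c, t, state)             # spread entanglement within each node

nonzero = np.flatnonzero(np.abs(state) > 1e-12)
print([format(i, f"0{n}b") for i in nonzero], state[nonzero])
# ['000000', '111111'] [0.70710678 0.70710678]
```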

"We want to show that superconducting qubits have a viable role going forward," Cleland said.

A quantum communication network could potentially take advantage of this advance. The group plans to extend their system to three nodes to build three-way entanglement.

"The team was able to identify a primary limiting factor in this current experiment related to loss in some of the components," said Dr. Fredrik Fatemi, branch chief for quantum sciences, DEVCOM ARL, and co-manager of CDQI. "They have a clear path forward for increasingly complex experiments which will enable us to explore new regimes in distributed entanglement."

Credit: 
U.S. Army Research Laboratory

Air pollution: The silent killer called PM2.5

image: Annual and 24-hour PM2.5 ambient air quality standards versus PM2.5 air pollution around the world.

Image: 
Yevgen Nazarenko

Millions of people die prematurely every year from diseases and cancer caused by air pollution. The first line of defence against this carnage is ambient air quality standards. Yet, according to researchers from McGill University, over half of the world's population lives without the protection of adequate air quality standards.

Air pollution varies greatly in different parts of the world. But what about the primary weapons against it? To find answers, researchers from McGill University set out to investigate global air quality standards in a study published in the Bulletin of the World Health Organization.

The researchers focused on a type of air pollution known as PM2.5 - fine particulate matter responsible for an estimated 4.2 million premature deaths every year globally. This includes over a million deaths in China, over half a million in India, almost 200,000 in Europe, and over 50,000 in the United States.

"In Canada, about 5,900 people die every year from air pollution, according to estimates from Health Canada. Air pollution kills almost as many Canadians every three years as COVID-19 killed to date," says co-author Parisa Ariya, a Professor in the Department of Chemistry at McGill University.

Small but deadly

Among the different types of air pollution, PM2.5 kills the most people worldwide. It consists of particles smaller than approximately 2.5 microns - so small that billions of them can fit inside a red blood cell.

"We adopted unprecedented measures to protect people from COVID-19, yet we don't do enough to avoid the millions of preventable deaths caused by air pollution every year," says Yevgen Nazarenko, a Research Associate at McGill University who conducted the study with Devendra Pal under the supervision of Professor Ariya.

The researchers found that where standards exist, they are often much weaker than what the World Health Organization considers safe. Many of the most polluted regions, such as the Middle East, don't even measure PM2.5. They also found that the weakest air quality standards are often violated, particularly in countries like China and India. In contrast, the strictest standards, found in places like Canada and Australia, are often met.

Surprisingly, the researchers discovered that high population density is not necessarily a barrier to fighting air pollution successfully. Several jurisdictions with densely populated areas were successful in setting and enforcing strict standards. These included Japan, Taiwan, Singapore, El Salvador, Trinidad and Tobago, and the Dominican Republic.

"Our findings show that more than half of the world urgently needs protection in the form of adequate PM2.5 ambient air quality standards. Putting these standards in place everywhere will save countless lives. And where standards are already in place, they should be harmonized globally," says Nazarenko.

"Even in developed countries, we must work harder to clean up our air to save hundreds of thousands of lives every year," he says.

Credit: 
McGill University

Tracking cosmic ghosts

image: A visualization of the Glashow event recorded by the IceCube detector. Each colored circle shows an IceCube sensor that was triggered by the event; red circles indicate sensors triggered earlier in time, and green-blue circles indicate sensors triggered later. This event was nicknamed "Hydrangea."

Image: 
IceCube Collaboration

The idea was so far-fetched it seemed like science fiction: create an observatory out of a one cubic kilometer block of ice in Antarctica to track ghostly particles called neutrinos that pass through the Earth. But to Benedikt Riedel, global computing manager at the IceCube Neutrino Observatory, it makes perfect sense.

"Constructing a comparable observatory anywhere else would be astronomically expensive," Riedel explained. "Antarctica ice is a great optical material and allows us to sense neutrinos as nowhere else."

Neutrinos are neutral subatomic particles with a mass close to zero that can pass through solid materials at near the speed of light, rarely reacting with normal matter. They were first detected in the 1950s in experiments that operated near nuclear reactors, which also generate these particles. They were further found to be created by cosmic rays interacting with our atmosphere. But astrophysicists believed they were likely widespread and caused by a variety of cosmic events, if only they could be detected.

Importantly, scientists believed they could be critical clues to other phenomena. "20 percent of the potentially visible Universe is dark to us," Riedel explained. "That's mostly because of distances and the age of the Universe. High energy light is also hidden. It is absorbed or undergoes transformation that makes it hard to trace back to a source. IceCube reveals a slice of the Universe we haven't yet observed."

An Important New Tool in the Multi-Messenger Astronomy Toolbox

Multi-messenger astronomy describes an approach that combines observations of light, gravitational waves, and particles to understand some of the most extreme events in the Universe. Neutrinos play an important part in this type of research.

Prior to 1987, with the explosion of Supernova 1987a, all extra-solar astronomical observations were photon-based. Today, additional detection systems add to our view of the cosmos, including all sky surveys and gravitational wave detectors. However, most observatories can only look at a small portion of the sky. IceCube, because of the nature of neutrinos, can observe these particles' flights from any direction, and therefore act as a full-sky sentinel.

The block of ice at the Amundsen-Scott South Pole Station in Antarctica -- up to a hundred thousand years old and extremely clear -- is instrumented with sensors between 1,450 and 2,450 meters below the surface. As neutrinos pass through the ice, they may interact with a proton or neutron, producing photons which then travel through the ice and can be detected by a sensor. The sensors transform these signals from neutrino interactions -- a handful an hour -- into digital data that is then analyzed to determine whether they represent a local source (Earth's atmosphere) or a distant one.

"Based on the analysis, researchers are also able to determine where in the sky the particle came from, its energy, and sometimes, what type of neutrino -- electron, muon or tau -- it was," said James Madson, executive director at the Wisconsin IceCube Particle Astrophysics Center.

In 2017, IceCube detected a neutrino with an energy of 290 teraelectronvolts (TeV) and sent out an alert. The detection triggered an extensive campaign involving more than twenty space- and ground-based telescopes. They identified a blazar 3.5 billion light years away, identifying a high-energy cosmic ray source for the first time and launching a new era in multi-messenger detection, according to Riedel.

"We continuously search our dataset in near-real time for interesting neutrino events," he explained. "We found one and sent out an email alert to the community. They followed up with all these other electromagnetic observations, pinpointing a known gamma ray source. They also found, over the course of a month, an increased activity from the source."

IceCube Discovers Evidence of High-energy Electron Antineutrino

On March 10, 2021, IceCube announced the detection of a Glashow resonance event, a phenomenon predicted by Nobel laureate physicist Sheldon Glashow in 1960. The Glashow resonance describes the formation of a W⁻ boson -- an elementary particle that mediates the weak force -- during the interaction of a high-energy electron antineutrino with an electron, peaking at an antineutrino energy of 6.3 petaelectronvolts (PeV). Its existence is a key prediction of the Standard Model of particle physics. The results further demonstrated the ability of IceCube to do fundamental physics. The result was published on March 10 in Nature.
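The quoted 6.3 PeV figure follows from a standard kinematic relation: for an electron antineutrino striking an electron that is essentially at rest, an on-shell W boson is produced at E = m_W^2 c^4 / (2 m_e c^2). The short check below uses standard published rest energies, not numbers taken from the IceCube paper.

```python
# Back-of-the-envelope check of the 6.3 PeV resonance energy quoted above.
m_W = 80.379e9   # W boson rest energy in eV (standard published value)
m_e = 0.511e6    # electron rest energy in eV (standard published value)

E_res = m_W**2 / (2 * m_e)          # resonance energy in eV
print(f"{E_res / 1e15:.2f} PeV")    # ~6.32 PeV, matching the stated figure
```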

While this energy scale is out of reach for current and future planned particle accelerators, natural astrophysical phenomena are expected to produce antineutrinos that reach beyond PeV energies. The Glashow resonance discovery "suggests the presence of electron antineutrinos in the astrophysical flux, while also providing further validation of the standard model of particle physics," the authors wrote. "Its unique signature indicates a method of distinguishing neutrinos from antineutrinos, thus providing a way to identify astronomical accelerators that produce neutrinos via hadronuclear or photohadronic interactions, with or without strong magnetic fields."

Neutrino detections require significant computing resources to model the detector behavior and differentiate extra-solar signals from background events created from cosmic ray interactions in the atmosphere. Riedel serves as the coordinator for a large community of researchers -- as many as 300 by his estimates -- who use the Frontera supercomputer at the Texas Advanced Computing Center (TACC), a National Science Foundation (NSF)-funded resource for the national community.

IceCube was awarded time on Frontera as part of the Large Scale Community Partnership track, which provides extended allocations of up to three years to support long-lived science experiments. IceCube -- which has collected data for 14 years and was recently awarded an NSF grant to expand operations over the next few years -- is a premier example of such an experiment.

"Part of the resources from Frontera contributed to that discovery," Riedl said. "There's years of Monte Carlo simulations that went into it to figuring out that we could do this."

IceCube uses computing resources from a number of sources, including the Open Science Grid, the Extreme Science and Engineering Discovery Environment (XSEDE), their own local supercomputing cluster, and recently the Amazon Web Services cloud. Frontera is the largest system utilized, however, and can handle a large part of the computational needs of the neutrino community, reserving local or cloud resources for urgent analyses, Riedel says.

"A lot of the computing on Frontera may not be directly associated with discoveries, but it helps down the road, to discern signals better and develop new algorithms," he said.

Modeling Ice and Following Up on Promising Signals

The projects that IceCube scientists use Frontera for vary, but they typically either involve calculations to better understand the optical nature of the ice generally (so the trajectory and other characteristics of neutrino detections can be accurately determined); or computations to analyze specific events that are deemed significant.

The first type of computation primarily uses ray tracing to calculate the path of the light in the ice from high-energy electrically charged particles produced when neutrinos interact. The rays can scatter or be absorbed by defects in the ice, complicating analysis. Using graphics processing units (GPUs), Riedel has found, can speed up the simulation of light propagation in the ice by hundreds of times. The IceCube team is among the largest users of the Frontera GPU subsystem, which includes NVIDIA RTX GPUs.
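For a flavor of what such simulations involve, the minimal Monte Carlo sketch below propagates photons through ice with exponentially distributed distances between scatters and a competing absorption process. The scattering and absorption lengths are rough assumed values, not IceCube's calibrated, depth-dependent ice model, and production code runs far more detailed versions of this on GPUs.

```python
# Minimal Monte Carlo sketch of photon propagation in ice: between scatters a
# photon travels an exponentially distributed distance, and absorption competes
# with scattering. Both lengths below are rough assumed values.
import numpy as np

rng = np.random.default_rng(42)
SCATTER_LEN = 25.0   # metres between scatters (assumed)
ABSORB_LEN = 100.0   # metres to absorption (assumed)

def mean_path_to_absorption(n_photons=20_000):
    """Average total distance a photon travels before it is absorbed (metres)."""
    total = 0.0
    for _ in range(n_photons):
        travelled = 0.0
        while True:
            to_scatter = rng.exponential(SCATTER_LEN)
            to_absorb = rng.exponential(ABSORB_LEN)
            if to_absorb < to_scatter:       # absorbed before the next scatter
                travelled += to_absorb
                break
            travelled += to_scatter          # scatters and keeps propagating
        total += travelled
    return total / n_photons

print(f"mean path before absorption: {mean_path_to_absorption():.0f} m")  # ~100 m
```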

The second type of computation occurs when scientists receive an alert that says they have received an interesting signal. "We kick off a calculation to analyze the event that can scale to one million CPUs," Riedel said. "We don't have those, so Frontera can give us a portion of that computational power to run a reconstruction or extraction algorithm. We get those types of events about once a month."

"Large scale simulations of the IceCube facility and the data it creates allow us to rapidly and accurately determine the properties of these neutrinos, which in turn exposes the physics of the most energetic events in the universe," said Niall Gaffney, TACC Director of Data Intensive Computing. "This is key to validating the fundamental quantum-mechanical physics in environments that cannot be practically replicated on earth."

Today's astronomers can observe the universe in many different ways, and computing is now central to almost all of them. "We've moved from the traditional view of a guy with a telescope looking up at the sky, to large scale instruments, to now particle physics and particle observatories," Riedel said. "With this new paradigm, we need large amounts of computing for short periods of time to do big time-sensitive computing, and big scientific computing centers like TACC help us do our science."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

Contactless high performance power transmission

image: A team of physicists at the Technical University of Munich has developed a coil made of superconducting wires that can transmit power of more than five kilowatts contactless without major losses.

Image: 
Christoph Utschick / Wuerth Elektronik eiSos

A team led by Christoph Utschick and Prof. Rudolf Gross, physicists at the Technical University of Munich (TUM), has developed a coil with superconducting wires capable of transmitting power in the range of more than five kilowatts contactless and with only small losses. The wide field of conceivable applications include autonomous industrial robots, medical equipment, vehicles and even aircraft.

Contactless power transmission has already established itself as a key technology when it comes to charging small devices such as mobile telephones and electric toothbrushes. Users would also like to see contactless charging made available for larger electric machines such as industrial robots, medical equipment and electric vehicles.

Such devices could be placed on a charging station whenever they are not in use. This would make it possible to effectively utilize even short idle times to recharge their batteries. However, the currently available transmission systems for high performance recharging in the kilowatt range and above are large and heavy, since they are based on copper coils.

Working in a research partnership with the companies Würth Elektronik eiSos and superconductor coating specialist Theva Dünnschichttechnik, a team of physicists led by Christoph Utschick and Rudolf Gross has succeeded in creating a coil with superconducting wires capable of contactless power transmission in the order of more than five kilowatts (kW) without significant loss.

Reduced alternating current loss in superconductors

To achieve this, the researchers had to overcome a challenge: small alternating current losses occur even in superconducting transmission coils. These losses grow as the transmitted power increases, with a decisive impact: the surface temperature of the superconducting wires rises and the superconductivity collapses.

The researchers developed a special coil design in which the individual windings of the coil are separated from one another by spacers. "This trick significantly reduces alternating current loss in the coil," says Christoph Utschick. "As a result, power transmission as high as the kilowatt range is possible."

Optimization with analytical and numerical simulations

The team chose a coil diameter for their prototype that resulted in a higher power density than is possible in commercially available systems. "The basic idea with superconducting coils is to achieve the lowest possible alternating current resistance within the smallest possible winding space and thus to compensate for the reduced geometric coupling," says Utschick.

This called on the researchers to resolve a fundamental conflict. If they made the distance between the windings of the superconducting coil small, the coil would be very compact, but there would be a danger of superconduction collapse during operation. Larger separations would on the other hand result in lower power density.

"We optimized the distance between the individual windings using analytical and numerical simulations," says Utschick. "The separation is approximately equal to half the width of the tape conductor." The researchers now want to work on further increasing the amount of transmittable power.

Exciting application areas

If they succeed, the door will open to a large number of very interesting application areas, for example uses in industrial robotics, autonomous transport vehicles and high-tech medical equipment. Utschick even envisions electric racing vehicles which can be charged dynamically while on the race track, as well as autonomous electric aircraft.

Wide-scale applicability of the system still faces an obstacle, however. The coils require constant cooling with liquid nitrogen, and the cooling vessels used cannot be made of metal. The walls of metal vessels would otherwise heat up considerably in the magnetic field, much as a pot does on an induction stove.

"There is as yet no cryostat like this which is commercially available. This will mean an extensive amount of further development effort," says Rudolf Gross, Professor for Technical Physics at the Technical University of Munich and Director of the Walther-Meissner-Institute of the Bavarian Academy of Sciences and Humanities. "But the achievements up to now represent major progress for contactless power transmission at high power levels."

Credit: 
Technical University of Munich (TUM)

UCI-led team creates new ultralightweight, crush-resistant tensegrity metamaterials

image: Novel tensegrity metamaterials by UCI and Georgia Institute of Technology researchers employ isolated compressive loop elements that are exclusively connected through a continuous network of tensile members (highlighted in magenta).

Image: 
Jens Bauer and Cameron Crook / UCI

Irvine, Calif., March 11, 2021 - Catastrophic collapse of materials and structures is the inevitable consequence of a chain reaction of locally confined damage - from solid ceramics that snap after the development of a small crack to metal space trusses that give way after the warping of a single strut.

In a study published this week in Advanced Materials, engineers at the University of California, Irvine and the Georgia Institute of Technology describe the creation of a new class of mechanical metamaterials that delocalize deformations to prevent failure. They did so by turning to tensegrity, a century-old design principle in which isolated rigid bars are integrated into a flexible mesh of tethers to produce very lightweight, self-tensioning truss structures.

Starting with 950 nanometer-diameter members, the team used a sophisticated direct laser writing technique to generate elementary cells sized between 10 and 20 microns. These were built up into eight-unit supercells that could be assembled with others to make a continuous structure. The researchers then conducted computational modeling and laboratory experiments and observed that the constructs exhibited uniquely homogenous deformation behavior free from localized overstress or underuse.

The team showed that the new metamaterials feature a 25-fold enhancement in deformability and an orders-of-magnitude increase in energy absorption over state-of-the-art lattice arrangements.

"Tensegrity structures have been studied for decades, particularly in the context of architectural design, and they have recently been found in a number of biological systems," said senior co-author Lorenzo Valdevit, a UCI professor of materials science and engineering who directs the Architected Materials Group. "Proper periodic tensegrity lattices were theoretically conceptualized only a few years ago by our co-author Julian Rimoli at Georgia Tech, but through this project we have achieved the first physical implementation and performance demonstration of these metamaterials."

While developing structural configurations for planetary landers, the Georgia Tech team discovered that tensegrity-based vehicles could withstand severe deformation, or buckling, of their individual components without collapsing, something never observed in other structures.

"This gave us the idea of creating metamaterials that exploit the same principle, which led us to the discovery of the first-ever 3D tensegrity metamaterial," explained Rimoli, aerospace engineering professor at Georgia Tech.

Made possible by novel additive manufacturing techniques, extremely lightweight yet strong and rigid conventional structures based on micrometer-scale trusses and lattices have been of keen interest to engineers for their potential to replace heavier, solid substances in aircraft, wind turbine blades and a host of other applications. Though possessing many desirable qualities, these advanced materials can - like any load-bearing structure - still be susceptible to catastrophic destruction if overloaded.

"In familiar nano-architected materials, failure usually starts with a highly localized deformation," said first author Jens Bauer, a UCI research scientist in mechanical and aerospace engineering. "Shear bands, surface cracks, and buckling of walls and struts in one area can cause a chain reaction leading to the collapse of an entire structure."

He explained that truss lattices begin to collapse when compressive members buckle, since those in tension cannot. Typically, these parts are interconnected at common nodes, meaning that once one fails, damage can quickly spread throughout the entire structure.

In contrast, the compressive members of tensegrity architectures form closed loops, isolated from one another and only connected by tensile members. Therefore, instability of compressive members can only propagate through tensile load paths, which - provided they do not rupture - cannot experience instability. Push down on a tensegrity system and the whole structure compresses uniformly, preventing localized damage that would otherwise cause catastrophic failure.

According to Valdevit, who's also a professor of mechanical and aerospace engineering at UCI, tensegrity metamaterials demonstrate an unprecedented combination of failure resistance, extreme energy absorption, deformability and strength, outperforming all other types of state-of-the-art lightweight architectures.

"This study provides important groundwork for design of superior engineering systems, from reusable impact protection systems to adaptive load-bearing structures," he said.

Credit: 
University of California - Irvine

Cheaper carbon capture is on the way

image: This animation depicts the two-stage flash configuration, one of several processes described in a new study detailing how EEMPA, a Pacific Northwest National Laboratory-developed solvent, can capture carbon from flue gas emitted by power plants. From left to right, EEMPA (red) first interacts with flue gas (black), where it absorbs carbon dioxide. Then, as a saturated solvent (blue), EEMPA is stripped of carbon dioxide in high and low-pressure tanks. Finally, the stripped solvent is reintroduced to the carbon dioxide absorber, where the process begins again.

Image: 
(Animation by Michael Perkins | Pacific Northwest National Laboratory)

RICHLAND, Wash.--As part of a marathon research effort to lower the cost of carbon capture, chemists have now demonstrated a method to seize carbon dioxide (CO2) that reduces costs by 19 percent compared to current commercial technology. The new technology requires 17 percent less energy to accomplish the same task as its commercial counterparts, surpassing barriers that have kept other forms of carbon capture from widespread industrial use. And it can be easily applied in existing capture systems.

In a study published in the March 2021 edition of International Journal of Greenhouse Gas Control, researchers from the U.S. Department of Energy's Pacific Northwest National Laboratory--along with collaborators from Fluor Corp. and the Electric Power Research Institute--describe properties of the solvent, known as EEMPA, that allow it to sidestep the energetically expensive demands incurred by traditional solvents.

"EEMPA has some promising qualities," said chemical engineer Yuan Jiang, lead author of the study. "It can capture carbon dioxide without high water content, so it's water-lean, and it's much less viscous than other water-lean solvents."

Carbon capture methods are diverse. They range from aqueous amines--the water-rich solvents that run through today's commercially available capture units, which Jiang used as an industrial comparison--to energy-efficient membranes that filter CO2 from flue gas emitted by power plants.

Current atmospheric CO2 levels have soared higher in recent years than at any point in the last 800,000 years, reaching a record high of 409.8 parts per million in 2019. CO2 is primarily released through human activities like fossil fuel combustion, and today's atmospheric concentrations exceed pre-industrial levels by 47 percent.

At a cost of $400-$500 million per unit, commercial technology can capture carbon at roughly $58.30 per metric ton of CO2, according to a DOE analysis. EEMPA, according to Jiang's study, can absorb CO2 from power plant flue gas and later release it as pure CO2 for as little as $47.10 per metric ton, offering an additional technology option for power plant operators to capture their CO2.

Jiang's study described seven processes that power plants can adopt when using EEMPA, ranging from simple setups similar to those described in 1930s technology, to multi-stage configurations of greater complexity. Jiang modeled the energy and material costs to run such processes in a 550-megawatt coal power plant, finding that each method coalesces near the $47.10 per metric ton mark.
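The headline 19 percent saving quoted at the top of this article follows directly from those two cost figures, as the short check below shows.

```python
# Quick arithmetic check on the quoted capture costs.
baseline_cost = 58.30   # $ per metric ton CO2, current commercial capture (DOE analysis)
eempa_cost = 47.10      # $ per metric ton CO2, EEMPA-based capture (Jiang's study)

reduction = 1 - eempa_cost / baseline_cost
print(f"cost reduction: {reduction:.1%}")   # ~19.2%, consistent with the stated 19 percent
```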

Solving a solvent's problems

One of the first known patents for solvent-based carbon capture technology cropped up in 1930, filed by Robert Bottoms.

"I kid you not," said green chemist David Heldebrant, coauthor of the new study. "Ninety-one years ago, Bottoms used almost the same process design and chemistry to address what we now know as a 21st century problem."

The chemical process for extracting CO2 from post-combustion gas remains largely unchanged: water-rich amines mix with flue gas, absorb CO2 and are later stripped of the gas, which is then compressed and stored. But aqueous amines have limitations. Because they're water-rich, they must be boiled at high temperatures to remove CO2 and then cooled before they can be reused, driving costs upward.

"We wanted to hit it from the other side and ask, why are we not using 21st century chemistry for this?" Heldebrant said. So, in 2009, he and his colleagues began designing water-lean solvents as an alternative. The first few solvents were too viscous to be usable.

"'Look,'" he recalled industry partners saying, "'your solvent is freezing and turning into glass. We can't work with this.' So, we said, OK. Challenge accepted."

Over the next decade, the PNNL team refined the solvent's chemistry with the explicit aim to overcome the "viscosity barrier." The key, it turned out, was to use molecules that aligned in a way that promoted internal hydrogen bonding, leaving fewer hydrogen atoms to interact with neighboring molecules.

Heldebrant draws a comparison to children running through a ball pit: if two kids hold each other's hands while passing through, they move slowly. But if they hold their own hands instead, they pass as two smaller, faster-moving objects. Internal hydrogen bonding also leaves fewer hydrogen atoms to interact with overall, akin to removing balls from the pit.

Pivoting to plastic

Where the team's solvent was once viscous like honey, it now flowed like water from the kettle. EEMPA is 99 percent less viscous than PNNL's previous water-lean formulations, putting it nearly on par with commercial solvents and allowing it to be used in existing infrastructure, which is largely built from steel. Pivoting to plastic in place of steel, the team found, can further reduce equipment costs.

Steel is expensive to produce, costly to ship and tends to corrode over time in contact with solvents. At one tenth the weight, substituting plastic for steel can drive the overall cost down another $5 per metric ton, according to a study led by Jiang in 2019.

Pairing with plastic offers another advantage to EEMPA, whose reactive surface area is boosted in plastic systems. Because traditional aqueous amines can't "wet" plastic as well (think of water beading on Teflon), this advantage is unique to the new solvent.

The PNNL team plans to produce 4,000 gallons of EEMPA in 2022 to analyze at a 0.5-megawatt scale inside testing facilities at the National Carbon Capture Center in Shelby County, Alabama, in a project led by the Electric Power Research Institute in partnership with Research Triangle Institute International. They will continue testing at increasing scales and further refine the solvent's chemistry, with the aim to reach the U.S. Department of Energy's goal of deploying commercially available technology that can capture CO2 at a cost of $30 per metric ton by 2035.

Credit: 
DOE/Pacific Northwest National Laboratory

Hubble sees new atmosphere forming on a rocky exoplanet

image: This image is an artist's impression of the exoplanet GJ 1132 b.

For the first time, scientists using the NASA/ESA Hubble Space Telescope have found evidence of volcanic activity reforming the atmosphere on this rocky planet, which has a similar density, size, and age to that of Earth.

Image: 
NASA, ESA, and R. Hurt (IPAC/Caltech)

The planet GJ 1132 b appears to have begun life as a gaseous world with a thick blanket of atmosphere. Starting out at several times the radius of Earth, this so-called "sub-Neptune" quickly lost its primordial hydrogen and helium atmosphere, which was stripped away by the intense radiation from its hot, young star. In a short period of time, it was reduced to a bare core about the size of Earth.

To the surprise of astronomers, new observations from Hubble [1] have uncovered a secondary atmosphere that has replaced the planet's first atmosphere. It is rich in hydrogen, hydrogen cyanide, methane and ammonia, and also has a hydrocarbon haze. Astronomers theorise that hydrogen from the original atmosphere was absorbed into the planet's molten magma mantle and is now being slowly released by volcanism to form a new atmosphere. This second atmosphere, which continues to leak away into space, is continually being replenished from the reservoir of hydrogen in the mantle's magma.

"This second atmosphere comes from the surface and interior of the planet, and so it is a window onto the geology of another world," explained team member Paul Rimmer of the University of Cambridge, UK. "A lot more work needs to be done to properly look through it, but the discovery of this window is of great importance."

"We first thought that these highly radiated planets would be pretty boring because we believed that they lost their atmospheres," said team member Raissa Estrela of the Jet Propulsion Laboratory at the California Institute of Technology in Pasadena, California, USA. But we looked at existing observations of this planet with Hubble and realised that there is an atmosphere there."

"How many terrestrial planets don't begin as terrestrials? Some may start as sub-Neptunes, and they become terrestrials through a mechanism whereby light evaporates the primordial atmosphere. This process works early in a planet's life, when the star is hotter," said team leader Mark Swain of the Jet Propulsion Laboratory. "Then the star cools down and the planet's just sitting there. So you've got this mechanism that can cook off the atmosphere in the first 100 million years, and then things settle down. And if you can regenerate the atmosphere, maybe you can keep it."

In some ways, GJ 1132 b has various parallels to Earth, but in some ways it is also very different. Both have similar densities, similar sizes, and similar ages, being about 4.5 billion years old. Both started with a hydrogen-dominated atmosphere, and both were hot before they cooled down. The team's work even suggests that GJ 1132 b and Earth have similar atmospheric pressure at the surface.

However, the planets' formation histories are profoundly different. Earth is not believed to be the surviving core of a sub-Neptune. And Earth orbits at a comfortable distance from our yellow dwarf Sun. GJ 1132 b is so close to its host red dwarf star that it completes an orbit around the star once every day and a half. This extremely close proximity keeps GJ 1132 b tidally locked, showing the same face to its star at all times -- just as our moon keeps one hemisphere permanently facing Earth.

"The question is, what is keeping the mantle hot enough to remain liquid and power volcanism?" asked Swain. "This system is special because it has the opportunity for quite a lot of tidal heating."

The phenomenon of tidal heating occurs through friction, when energy from a planet's orbit and rotation is dispersed as heat inside the planet. GJ 1132 b is in an elliptical orbit, and the tidal forces acting on it are strongest when it is closest to or farthest from its host star. At least one other planet in the host star's system also exerts a gravitational pull on the planet. The consequences are that the planet is squeezed or stretched by this gravitational "pumping." That tidal heating keeps the mantle liquid for a long time. A nearby example in our own Solar System is the Jovian moon, Io, which has continuous volcanism as a result of a tidal tug-of-war between Jupiter and the neighbouring Jovian moons.

The team believes the crust of GJ 1132 b is extremely thin, perhaps only hundreds of feet thick. That's much too feeble to support anything resembling volcanic mountains. Its flat terrain may also be cracked like an eggshell by tidal flexing. Hydrogen and other gases could be released through such cracks.

"This atmosphere, if it's thin -- meaning if it has a surface pressure similar to Earth -- probably means you can see right down to the ground at infrared wavelengths. That means that if astronomers use the James Webb Space Telescope to observe this planet, there's a possibility that they will see not the spectrum of the atmosphere, but rather the spectrum of the surface," explained Swain. "And if there are magma pools or volcanism going on, those areas will be hotter. That will generate more emission, and so they'll potentially be looking at the actual geological activity -- which is exciting!"

"This result is significant because it gives exoplanet scientists a way to figure out something about a planet's geology from its atmosphere," added Rimmer. "It is also important for understanding where the rocky planets in our own Solar System -- Mercury, Venus, Earth and Mars -- fit into the bigger picture of comparative planetology, in terms of the availability of hydrogen versus oxygen in the atmosphere."

Credit: 
ESA/Hubble Information Centre

Birds learn to avoid flashy, hard-to-catch butterflies and their lookalikes

image: Research suggests showy colors and patterns in certain butterflies could warn predators of their speed and agility. In Adelpha butterflies, many distantly related species have one of three common "racing stripe" patterns, shown in these specimens.

Image: 
Jeff Gage/Florida Museum of Natural History

GAINESVILLE, Fla. -- The showy colors of some butterflies could advertise their speed and nimbleness, much like a coat of bright yellow paint on a sports car. A new study shows birds can learn to recognize these visual cues, avoiding not only butterflies they've failed to nab in the past but similar-looking species as well.

The research provides some of the strongest evidence to date for the idea of evasive mimicry, a strategy in which animals protect themselves from predators by matching the colors or patterns of agile relatives. First proposed more than 60 years ago, the hypothesis has been a challenge to test.

But in an experimental setting, researchers found that wild birds learned and remembered the wing patterns of artificial butterflies that evaded their attacks, as well as those that had a foul flavor, equally spurning both in follow-up tests and often ignoring lookalikes with similar color patterns. Unexpectedly, the birds learned to avoid evasive butterflies faster than distasteful ones.

The results suggest that being hard to catch may deter predators at least as effectively as chemical defenses.

"There's a common idea that being distasteful is one of the best kinds of defense to have, but at least in this experiment, that didn't prove to be the case," said study co-author Keith Willmott, curator and director of the Florida Museum of Natural History's McGuire Center for Lepidoptera and Biodiversity.

Most research on warning coloration has focused on species with chemical defenses and those that mimic them. Monarch butterflies, for example, sport bright wing patterns of black lines on a field of orange, indicating they contain bad-tasting toxins. A predator that eats one will likely avoid both monarchs and the similar-looking viceroy butterfly in the future.

But a growing number of studies suggest a flashy exterior can mean something entirely different: that an animal is quick. Predators learn to associate these kinds of patterns with a futile chase that leaves them hungry, and species that evolve imitations of these "racing stripes" can capitalize on a defensive strategy while reinforcing the visual message.

"When many species share the same color pattern, they're better able to educate predators to avoid them," Willmott said. "The more species that share it, the better."

During his Ph.D. studies, Willmott worked on the classification of a group of fast-flying tropical butterflies known as Adelpha. At first, he found them nearly impossible to identify. It seemed the genus either contained only a few species with slight variations in wing pattern or dozens of species that looked virtually the same. The latter turned out to be the case, with more than 90 species making up the group. Like some researchers before him, Willmott began to wonder whether evasive mimicry could explain why so many species of Adelpha looked alike.

"It was always mysterious to me," he said. "Species whose upper wings looked incredibly similar were distantly related, and we started to see cases where even subspecies of multiple species suddenly developed very unique color patterns. Really, the only way you can explain that is through mimicry."

While other researchers suggested some Adelpha must have hidden chemical defenses, the explanation didn't quite satisfy Willmott. Toxic butterflies are usually slow fliers with long wings and a propensity for playing dead when caught. Adelpha butterflies, however, don't display these traits, having instead a short, stout thorax and smaller, triangular wings -- characteristics that enable fast, erratic flight and sharp turns.

But he wasn't sure how to test this hypothesis until a conversation with fellow researchers at a 2018 conference in India: Johanna Mappes was an expert at developing predator-prey experiments with wild birds; Pável Matos-Maraví was interested in the evasive behavior of skipper butterflies; and Marianne Elias and her Ph.D. student Erika Páez were eager to study what drove the evolution of wing color patterns in the genus Adelpha, including the possible effects of predators.

Simulating how evasive mimicry might play out in the wild appealed to the group. The ability of prey to escape predators' attacks has been "virtually unstudied," said Elias, a research group leader at the Institute of Systematics, Evolution, Biodiversity at the National Museum of Natural History in France.

Previous work had shown birds can identify the visual cues of evasive prey. Together, the team designed an experiment to test whether potential examples of evasive mimicry in Adelpha could be the result of natural selection.

At a special facility in Finland, the researchers collaborated with Janne Valkonen of the University of Jyväskylä to capture wild blue tits, birds that would never have encountered tropical Adelpha butterflies, and train them to catch a paper butterfly with an almond treat attached to its underside. Then, the birds were presented with a plain brown paper butterfly as a control and a paper butterfly with one of three common Adelpha wing patterns: a vertical white band on black forewings, a vertical orange band on black forewings or a combination of orange-striped forewings with white-striped hindwings.

The paper Adelpha butterfly either concealed an almond soaked in a bitter substance -- a proxy for chemical defense -- or evaded the bird's attack by gliding away on a rail. The birds learned to connect a particular wing pattern with the negative experience of distastefulness or escape, eventually avoiding this butterfly and striking the control instead. In a final test, they were given four butterflies at the same time: the plain brown butterfly and all three Adelpha butterflies, including one with the pattern they had seen before.

They strongly avoided the butterfly they had learned to associate with the bitter almond or fast flight and often avoided butterflies that shared a similar color or pattern.

Birds were 1.6 times more likely to attack the distasteful butterfly than evasive ones, perhaps because they had varying levels of tolerance for the bad-tasting almond, said Páez, who co-led the study with Valkonen. After all, even a bitter morsel of food is better than nothing.

"Bad-tasting prey could provide a nutritive meal whereas missing prey completely cannot," she said.

While birds tend to avoid colorful prey by default, the study provides evidence of learned behavior, Willmott said.

"This potentially explains many cases of apparent mimicry that lacked evidence of chemical defense."

Credit: 
Florida Museum of Natural History

New tool to dissect the "undruggable"

image: Woo and her lab have designed a pencil/eraser pair to both write and erase an important sugar from proteins, a crucial step to understand how these sugars influence proteins and their connection to diseases like Alzheimer's and Parkinson's.

Image: 
Kris Snibbe/Harvard University

Sugar has been called "evil," "toxic," and "poison." But the body needs sugars, too. Sugar molecules help cells recognize and fight viruses and bacteria, shuttle proteins from cell to cell, and make sure those proteins function. Too much or too little can contribute to a range of maladies, including neurodegenerative diseases like Alzheimer's, inflammation, diabetes, and even cancer.

About 85 percent of proteins, including those associated with Alzheimer's and Parkinson's, are beyond the reach of current drugs. One critical and abundant sugar (O-GlcNAc, pronounced o-glick-nack) is found on over 5,000 proteins, often those considered "undruggable." But now, researchers at Harvard University have designed a new highly selective O-GlcNAc pencil and eraser--tools that can add or remove the sugar from a protein with no off-target effects--to examine exactly what these sugars are doing and, eventually, engineer them into new treatments for the "undruggable."

"We can now start studying particular proteins and see what happens when you add or remove the sugar," said Daniel Ramirez, a co-author on the paper published in Nature Chemical Biology and a Ph.D. candidate in biological and biomedical sciences in the Graduate School of Arts and Sciences. "This is turning out to be very important for a lot of chronic diseases like cancer and diabetes and Alzheimer's."

Ramirez designed the original O-GlcNAc pencil, which was reported in ACS Chemical Biology.

All cells carry a multitude of sugars (called glycans), but they're notoriously hard to study. Current tools either provide a wide-lens view (turning on or off all the O-GlcNAc in a cell) or an ultra-zoomed-in view (turning on or off a single sugar on one amino acid of one protein). Neither of these perspectives can show what O-GlcNAc molecules are doing to a protein as a whole, the crucial insight that would enable researchers to connect the dots from O-GlcNAc to disease.

"With the protein-level approach, we're filling in an important piece that was missing," said Christina Woo, an associate professor of chemistry and chemical biology, who led the study. Her lab's tool is like Goldilocks' lukewarm bowl of porridge: Not too broad, not too specific. Just right.

"Once you have any protein of interest," said first-author and postdoctoral scholar Yun Ge, "you can apply this tool on that protein and look at the outcomes directly." Ge engineered the O-GlcNAc eraser, which, like the pencil, uses a nanobody as a protein homing device. The tool is adaptable, too; as long as a nanobody exists for a protein of choice, the tool can be modified to target any protein for which a homing nanobody exists.

The nanobody is a crucial component, but it has limitations: Whether or not it remains stuck to the target protein is still in question, and the molecule could alter the function or structure of the protein once stuck. If cellular changes can't be definitively linked to the sugar on the protein, that muddies the data.

To skirt these potential limitations, the team engineered their pencils and erasers to be "catalytically dead," said Woo. The neutered enzymes won't make unwanted changes along the way to their target protein. And they can both add and remove sugars, unlike previous tools, which cause permanent changes. Of course, once researchers connect a specific protein function to O-GlcNAc, they can then use those tools to zoom in and locate exactly where those sugars are latching onto and modifying the protein.

Already, a few of the Woo lab's collaborators are using the pencil/eraser combo to study O-GlcNAc in live animals. One, for example, is using fruit flies to study how the sugar impacts a protein associated with Alzheimer's disease. The sugar is also associated with Parkinson's disease progression: "If you're taking in less glucose," said co-author Ramirez, "then you're not able to produce this sugar inside the cells." That means the body can't attach the sugars to the proteins, which causes wide-reaching changes to the cells, aggravating the disease. In diabetes, excess sugars cause similar global disruption; and cancer cells tend to eat lots of sugars. Now, with the Woo lab's pencil/eraser pair, researchers can identify exactly how these sugars impact various proteins and start to design drugs to reverse negative effects.

Next, the team plans to tweak their tool to achieve even greater control. With optogenetics, for example, they could switch sugars on or off with just a flash of light. Swapping out nanobodies for small molecules (used in traditional drug design), they could edge closer to new treatments. They're also designing an eraser for the eraser--a tool with a kill switch--and plan to incorporate nanobodies that can target a naturally occurring protein (for this study, they tagged proteins so the nanobody could find them). "We're basically trying to make the system more natural and function the way the cell does," said Ramirez.

Woo also plans to investigate how O-GlcNAc may influence traditionally "undruggable" proteins called transcription factors, which turn genes on and off. If O-GlcNAc plays a role in that process, the sugars could be engineered to study and regulate gene function, too.

"We really don't know what people are going to find once we give them these tools," said Ramirez. The tool may be new, but the potential is great: "We're on the iPhone one, basically," he continued, "but we're already working on the next couple generations."

Credit: 
Harvard University