Tech

Pushing photons

UC Santa Barbara researchers continue to push the boundaries of LED design with a new method that could pave the way toward more efficient and versatile LED display and lighting technology.

In a paper published in Nature Photonics, UCSB electrical and computer engineering professor Jonathan Schuller and collaborators describe this new approach, which could allow a wide variety of LED devices -- from virtual reality headsets to automotive lighting -- to become more sophisticated and sleeker at the same time.

"What we showed is a new kind of photonic architecture that not only allows you to extract more photons, but also to direct them where you want," said Schuller. This improved performance, he explained, is achieved without the external packaging components that are often used to manipulate the light emitted by LEDs.

Light in LEDs is generated in the semiconductor material when excited, negatively charged electrons traveling along the semiconductor's crystal lattice meet positively charged holes (an absence of electrons) and transition to a lower energy state, releasing a photon along the way. Over the course of their measurements, the researchers found that a significant fraction of these photons was being generated but never made it out of the LED.

"We realized that if you looked at the angular distribution of the emitted photon before patterning, it tended to peak at a certain direction that would normally be trapped within the LED structure," Schuller said. "And so we realized that you could design around that normally trapped light using traditional metasurface concepts."

The design they settled upon consists of an array of 1.45-micrometer-long gallium nitride (GaN) nanorods on a sapphire substrate, with quantum wells of indium gallium nitride embedded in the nanorods to confine electrons and holes and thus emit light. In addition to allowing more light to leave the semiconductor structure, the process polarizes the light, which co-lead author Prasad Iyer said "is critical for a lot of applications."

Nanoscale Antennae

The idea for the project came to Iyer a couple of years ago as he was completing his doctorate in Schuller's lab, where the research is focused on photonics technology and optical phenomena at subwavelength scales. Metasurfaces -- engineered surfaces with nanoscale features that interact with light -- were the focus of his research.

"A metasurface is essentially a subwavelength array of antennas," said Iyer, who previously was researching how to steer laser beams with metasurfaces. He understood that typical metasurfaces rely on the highly directional properties of the incoming laser beam to produce a highly directed outgoing beam.

LEDs, on the other hand, emit spontaneous light, as opposed to the laser's stimulated, coherent light.

"Spontaneous emission samples all the possible ways the photon is allowed to go," Schuller explained, so the light appears as a spray of photons traveling in all possible directions. The question was could they, through careful nanoscale design and fabrication of the semiconductor surface, herd the generated photons in a desired direction?

"People have done patterning of LEDs previously," Iyer said, but those efforts invariably split the into multiple directions, with low efficiency. "Nobody had engineered a way to control the emission of light from an LED into a single direction."

Right Place, Right Time

It was a puzzle that would not have found a solution, Iyer said, without the help of a team of expert collaborators. GaN is exceptionally difficult to work with and requires specialized processes to make high-quality crystals. Only a few places in the world have the expertise to fabricate the material to such exacting designs.

Fortunately, UC Santa Barbara, home to the Solid State Lighting and Energy Electronics Center (SSLEEC), is one of those places. With the expertise at SSLEEC and the campus's world-class nanofabrication facility, the researchers designed and patterned the semiconductor surface to adapt the metasurface concept for spontaneous light emission.

"We were very fortunate to collaborate with the world experts in making these things," Schuller said.

Credit: 
University of California - Santa Barbara

JCESR lays foundation for safer, longer-lasting batteries

image: The Paddlewheel Effect. Above a certain temperature, SO₄²⁻ anions begin to rotate, and simultaneously nearby Li⁺ cations become highly mobile. Linda Nazar's work shows that in certain solid electrolytes, altering the chemical composition enables anion rotation and the paddlewheel effect below room temperature.

Image: 
Argonne National Laboratory

Electricity storage in batteries is in ever increasing demand for smartphones, laptops, cars and the power grid. Solid-state batteries are among the most promising next-generation technologies because they offer a higher level of safety and potentially longer life.  

The Joint Center for Energy Storage Research (JCESR) has made significant strides with solid-state batteries as successors to today’s lithium-ion (Li-ion) batteries. A major challenge with solid-state batteries is increasing the diffusivity of Li-ions in the solid-state electrolyte, which is typically slower than in the liquid organic electrolytes now used in Li-ion batteries.

JCESR’s Linda Nazar, a leading professor at the University of Waterloo, and her postdoctoral research associate Zhizhen Zhang published their research on enhancing the mobility of Li-ions in solid-state batteries using the paddlewheel effect, the coordinated motion of atoms, in a paper entitled “Targeting Superionic Conductivity at Room Temperature by Turning on Anion Rotation in Fast Ion Conductors,” published June 3 in Matter, a monthly journal of materials science. JCESR is an Energy Innovation Hub led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory. The University of Waterloo is one of JCESR’s 18 partners.

Solid-state batteries, which use solid electrolytes in place of the usual liquid organic electrolytes, have emerged as promising replacements for today’s Li-ion batteries, according to Nazar.

“They offer the potential of safer and longer-lasting batteries that can deliver higher energy density important to a wide variety of electrochemical energy storage applications, such as vehicles, robots, drones and more,” said Nazar. “As the most important component in solid-state batteries, the solid electrolyte determines its safety and cycle stability to a large extent.”

An unwanted chemical reaction, called thermal runaway, has led to fires and explosions involving today’s Li-ion batteries, which continue to burn until they run out of fuel. Because of these hazards, JCESR seeks to eliminate the internal liquid organic electrolyte by replacing it with a solid.

Very few solid-state electrolytes have ion conductivity as high as that of liquid organic electrolytes, and those few receive the lion’s share of attention. JCESR is exploring a promising phenomenon that dramatically speeds up ion diffusion: the rotational motion of normally static negative ions (i.e., anions) in the solid-state electrolyte framework, which helps drive the motion of the Li+ positive ions (i.e., cations).

“In fact, it turns out that the anion ‘building blocks’ that comprise the solid framework are not rigid, but undergo rotational motion,” said Nazar.  “Our study addresses this principle to show that anion dynamics in the framework of the solid enhance Li+ cation transport.  The anion dynamics can be ‘turned on’ even at room temperature by tuning the framework, and the anion dynamics are strongly coupled to cation diffusion by the paddlewheel effect. This is somewhat akin to the transport of people through a multi-person revolving door.”

While new solid electrolytes are still in the developmental stage, the advances are encouraging. A breakthrough would be a game changer and dramatically increase the safety and deployment of Li-ion batteries, according to JCESR Director George Crabtree.

“If you can find a solid-state electrolyte enabling fast Li+ cation motion, it would be a drop-in replacement for liquid organic electrolytes and immediately rid batteries of the thermal runaway reaction, the major cause of fire in today’s Li-ion batteries,” said Crabtree. “For its safety advantages alone, there would be a huge market for it in cell phones, laptops, video recorders, autos and the electricity grid.”

The intellectual enthusiasm for solid-state batteries is shared across JCESR. Other collaborators at the University of Michigan and MIT are also exploring solid electrolytes and the paddlewheel effect. Solid-state batteries are one of the most promising and sought-after advances for industry, said Crabtree.

“JCESR wants to understand the atomic and molecular level origins of battery behavior.  With this knowledge, we can build the battery from the bottom up, atom-by-atom and molecule-by-molecule, where every atom and molecule play a prescribed role in producing the targeted battery behavior,” Crabtree said.  “The paddlewheel effect is an example of that. This paper is at the very frontier of solid electrolyte behavior, and we want to transfer this knowledge to the commercial sector.”

Credit: 
DOE/Argonne National Laboratory

CNIC researchers discover a system essential for limb formation during embryonic development

image: From left to right: Miguel Torres, Irene Delgado Carceller, Giovanna Giovinazzo, Fátima Sánchez Cabo, and Vanessa Carolina Cadenas Rodríguez.

Image: 
CNIC

Researchers at the Centro Nacional de Investigaciones Cardiovasculares (CNIC) have discovered a system that provides cells with information about their position within developing organs. This system, studied in developing limbs, tells cells what anatomical structure they need to form within the organ. The article, published today in Science Advances, shows that malfunctioning of this system causes congenital malformations and could in part explain the effect of thalidomide, a drug contraindicated in pregnancy because it induces limb defects.

Embryonic development is one of the most fascinating processes in nature and has aroused scientific interest since the time of Aristotle. The generation of millions of cells from a single progenitor and their organization to produce the precise anatomy of each species is one of the most astounding examples of a self-organizing system. "Understanding how cells know which organs and anatomical structures they should generate in each location within the embryo is one of the most interesting challenges in this scientific field," said study coordinator Dr Miguel Torres.

The positional information theory, first proposed more than 50 years ago by the British scientist Lewis Wolpert, hypothesized a mechanism through which cells obtain information about their position in the embryo. "This system can be compared to the GPS geolocation systems used by cell phones," Dr Torres continued. "GPS systems consist of an external reference--the satellite signals--and a mechanism for interpreting these signals, housed in each of our cell phones. In biological systems, the positional information in each cell triggers a distinct and specific developmental plan at each location."

The CNIC team, working with partners at the National Institutes of Health in the USA, analyzed the molecular basis of limb formation. The scientists discovered how cells obtain information about their position on the proximodistal axis of the limb bud, or primordium (the rudimentary state of a developing organ).

Study first author Dr Irene Delgado explained that "our work shows that the signal that tells cells where they are is the growth factor FGF." FGF molecules are produced exclusively by "a small group of cells at the distal tip of the limb bud, furthest away from the body trunk."

The strength of the signal received by cells depends on how close they are to the FGF-producing cells. In other words, explained Dr Delgado, "the more distal a cell is, the stronger the FGF signal it receives, whereas more proximal cells receive a weaker signal."

The researchers demonstrated that the molecule in receptor cells that interprets FGF signals is a transcription factor called Meis. This transcription factor is distributed in a linear abundance gradient, so that it is highly abundant in proximal cells (close to the body trunk) and becomes progressively less abundant in more distal positions. In other words, clarified Dr Torres, "the amount of Meis in each cell reflects the amount of FGF received, and thus marks the cell's position along the proximodistal axis of the developing limb."

Transcription factors regulate the function of the genome, modulating cell behavior by switching some genes on and switching others off. Depending on the amount of Meis in a cell, specific groups of genes, including Hox genes, are activated, corresponding to the cell's position along the proximodistal axis. "Cells that receive an instruction that their position is proximal are programmed to generate the shoulder, whereas more distal cells are programmed to form the hand, and cells in an intermediate position form the upper arm, elbow, or forearm," explained Delgado.

This system, emphasized Torres, "is essential for the correct formation of the limbs." The mechanisms described in the study advance understanding of the origin of phocomelia, a congenital defect in which the embryonic limbs only form hands and feet, with the rest of the limb failing to develop. In the study, experimental elimination of FGF-Meis signaling resulted in all limb-bud cells receiving the incorrect instruction that they were distal, resulting in phocomelia.

This finding may help to explain the mechanism of action of thalidomide, a drug famous for causing limb defects. Previous research into the effects of thalidomide indicated that Meis is one of the factors affected by this drug.

The findings establish a new model for the generation of proximodistal identity in the developing vertebrate limb and provide a molecular mechanism for the interpretation of FGF gradients during axial patterning in vertebrate embryos.

Credit: 
Centro Nacional de Investigaciones Cardiovasculares Carlos III (F.S.P.)

Preliminary criterion scores do not help counteract racial gap in NIH grant awards

A new scoring approach introduced in 2009 to curb bias during the National Institutes of Health (NIH) Enhanced Peer Review process did not mitigate the gap in preliminary overall impact scores between black and white principal investigators (PIs) for the years 2014 to 2016, a new study shows. Black investigators, on average, received worse preliminary scores for all five criteria - Significance, Investigator(s), Innovation, Approach, and Environment - even after accounting for career stage, gender, type of degree, and scientific field. Elena Erosheva and colleagues also demonstrated that substantial funding gaps continue to exist between black and white applicants, with the award probability for black PIs only 55% of that for their white counterparts.

Although the authors caution against making direct comparisons to work done before the NIH introduced scored criteria to increase transparency to its applicants, this finding aligns with those of a series of groundbreaking studies from Donna Ginther et al., which demonstrated large funding disparities for black PIs during the years 2000 to 2006. In the interim, the NIH introduced its Enhanced Peer Review process in 2009 to improve information and transparency for applicants, an adjustment that the researchers hypothesized should have reduced funding gaps between black and white PIs. This process requires each reviewer to provide whole-number scores (ranging between one and nine, with one being the best) based on the NIH's five criteria, which the reviewer then uses to derive one preliminary overall impact score. Reviewers are given some latitude to weigh their criterion scores as they see fit when they derive their preliminary overall impact score. The averages of these preliminary overall impact scores determine which applications are selected for discussion at Scientific Review Group meetings, after which reviewers assign the final overall impact scores that decide an application's fate.

To evaluate whether there are differences in how preliminary criterion scores of black and white applicants are combined to produce preliminary overall impact scores, Erosheva et al. performed multilevel modeling on preliminary scores from 54,740 NIH grant applications. The results demonstrate that preliminary criterion scores fully account for racial disparities in preliminary overall impact scores, although they do not explain all of the observed variability. "Overall, we conclude that preliminary criterion scores absorb rather than mitigate racial disparities in preliminary overall impact scores," the authors say.
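
To make the scoring pipeline described above concrete, here is a minimal Python sketch of how a reviewer's five criterion scores might be combined into a preliminary overall impact score and then averaged across reviewers. The NIH does not prescribe a combining formula, and reviewers weigh criteria subjectively, so the equal weights and example scores below are purely illustrative assumptions, not the actual review procedure.

```python
# Illustrative sketch only: the NIH does not prescribe a combining formula;
# reviewers weigh criterion scores subjectively. Weights and scores are made up.
CRITERIA = ["significance", "investigators", "innovation", "approach", "environment"]

def preliminary_overall_impact(scores, weights=None):
    """Combine five whole-number criterion scores (1 = best, 9 = worst)
    into one whole-number preliminary overall impact score."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # equal weighting (assumption)
    total = sum(weights[c] for c in CRITERIA)
    return round(sum(scores[c] * weights[c] for c in CRITERIA) / total)

# The average of the reviewers' preliminary overall impact scores helps decide
# whether an application is discussed at the Scientific Review Group meeting.
reviewers = [
    {"significance": 2, "investigators": 1, "innovation": 3, "approach": 4, "environment": 1},
    {"significance": 3, "investigators": 2, "innovation": 3, "approach": 5, "environment": 2},
]
prelims = [preliminary_overall_impact(r) for r in reviewers]
print(prelims, sum(prelims) / len(prelims))  # [2, 3] 2.5
```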

Credit: 
American Association for the Advancement of Science (AAAS)

'Terminator' protein halts cancer-causing cellular processes

ITHACA, N.Y. - Essential processes in mammalian cells are controlled by proteins called transcription factors. For example, the transcription factor HIF-1 is triggered by a low-oxygen situation to cause the cell to adapt to decreased oxygen.

Transcription factors operate in healthy cells, but cancer cells can co-opt transcription factors such as HIF-1 into promoting tumor growth.

New research from the lab of Hening Lin, professor of chemistry and chemical biology in the College of Arts and Sciences, finds that a protein called TiPARP acts as a terminator for several cancer-causing transcription factors, including HIF-1, which is implicated in many cancers, including breast cancer. The research demonstrates that TiPARP, therefore, is a tumor suppressor.

The paper "TiPARP Forms Nuclear Condensates to Degrade HIF-1α and Suppress Tumorigenesis," published in PNAS, establishes TiPARP as a turning-off mechanism for several important transcription factors - including HIF-1, C0-Myc and estrogen receptor - and shows how TiPARP itself is degraded during this process. The study also shows the mechanism through which TiPARP terminates these factors, another new discovery.

Co-authors are graduate student Lu Zhang; former postdoctoral fellow Ji Cao; and Longying Dong, former director of the Immunopathology Research and Development Laboratory in the College of Veterinary Medicine's Department of Biomedical Sciences.

HIF-1 is important for cancer because a lot of tumors thrive in low-oxygen conditions, said Lin, a Howard Hughes Medical Institute Investigator and corresponding author of the paper.

"For these tumors to survive, they have to rely on HIF-1," Lin said. "TiPARP is a terminator of HIF-1. Therefore, if you can activate TiPARP, then you can suppress [tumor growth]."

Lin and co-authors are also excited by the discovery of the mechanism through which TiPARP brings about the termination of HIF-1 and other transcription factors. This mechanism, called "liquid-liquid phase separation" or "phase condensation," is a topic of great interest in biology.

Imagine drops of vinegar in oil: The vinegar forms distinct droplets suspended in the more viscous oil.

Similarly, when TiPARP is activated in a cell nucleus, it undergoes this phase separation, forming condensates that recruit HIF-1-alpha and HUWE1 (a ubiquitin protein ligase) in the cell nucleus. This starts a process through which HIF-1-alpha and TiPARP are both deactivated and degraded.

Through phase separation, TiPARP terminates not just HIF-1 but several different transcription factors implicated in different types of cancer, Lin said.

In fact, TiPARP may already be at work in Tamoxifen, a widely used breast cancer drug. Lin thinks Tamoxifen, which successfully treats estrogen receptor-positive breast cancers, works because TiPARP is actively terminating estrogen receptor, HIF-1 and c-Myc in tumors.

"When Tamoxifen and similar compounds were developed as estrogen receptor agonists or antagonists, we did not even know what compounds would be better," he said. "Now we think the idea will be, what compound can activate TiPARP better?"

Credit: 
Cornell University

NASA finds heavy rainfall in Tropical Storm Cristobal

image: The GPM satellite provided a look at Cristobal's rainfall rates on June 3 at 0311 UTC (June 2 at 11:11 p.m. EDT). GPM found the heaviest rainfall south of the center, falling at rates of more than 1 inch (25 mm) per hour over Mexico's Yucatan Peninsula. Lighter rain rates appear around the entire system.

Image: 
NASA/NRL

The third tropical cyclone of the Atlantic Ocean basin has been generating large amounts of rainfall over Mexico's Yucatan and parts of Central America. Using satellite data, NASA analyzed that heavy rainfall and provided forecasters with valuable cloud top temperature data to help assess the strength of the storm.

By 2 p.m. EDT on June 2, 2020, Tropical Depression 03L had strengthened into Tropical Storm Cristobal over Mexico's Gulf of Campeche. The Gulf of Campeche is bordered by Mexico's Yucatan Peninsula and is part of the southwestern Gulf of Mexico.

Cristobal remained in the Bay of Campeche on June 3, and a Tropical Storm Warning remained in effect from Campeche to Puerto de Veracruz.

Analyzing Rainfall

The Global Precipitation Measurement mission or GPM satellite provided a look at Cristobal's rainfall rates on June 3 at 0311 UTC (June 2 at 11:11 p.m. EDT). GPM found the heaviest rainfall south of the center, over Mexico's Yucatan Peninsula, falling at rates of more than 1 inch (25 mm) per hour. Lighter rain rates appeared around the entire system.

Analyzing Cloud Top Temperatures

Another way NASA analyzes tropical cyclones is by using infrared data that provides temperature information. The Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite provided data on cloud top temperatures of Cristobal.

Cloud top temperatures provide information to forecasters about where the strongest storms are located within a tropical cyclone. Tropical cyclones do not always have uniform strength, and some sides are stronger than others. The stronger the storms, the higher they extend into the troposphere, and the colder their cloud top temperatures.

On June 3 at 4:20 a.m. EDT (0820 UTC), NASA's Aqua satellite analyzed Tropical Storm Cristobal using the MODIS instrument and found cloud top temperatures as cold as or colder than minus 80 degrees Fahrenheit (minus 62.2 degrees Celsius). A large area of the strongest storms was located over the Yucatan Peninsula and along the coastline of the Bay of Campeche. NASA research has shown that cloud top temperatures that cold indicate strong storms with the capability to create heavy rain.

NASA provides data to forecasters at NOAA's National Hurricane Center or NHC so they can incorporate it in their forecasting.

Cristobal's Status on June 3, 2020

The National Hurricane Center noted on June 3 at 8 a.m. EDT (1200 UTC), the center of Tropical Storm Cristobal was located by an Air Force Hurricane Hunter aircraft near latitude 18.8 degrees north and longitude 92.1 degrees west. The center was about 25 miles (40 km) northwest of Ciudad Del Carmen, Mexico.

Cristobal was moving toward the southeast near 3 mph (6 kph) and was expected to turn toward the east later in the day. Maximum sustained winds were near 60 mph (95 kph) with higher gusts. Tropical-storm-force winds extended outward up to 60 miles (95 km) from the center. Gradual weakening is forecast while the center remains inland, but re-strengthening is expected after Cristobal moves back over water Thursday night and Friday [June 5]. The minimum central pressure reported by an Air Force Hurricane Hunter aircraft was 994 millibars.

NHC Forecast for Cristobal

A motion toward the north-northeast and north is expected on Thursday and Friday. On the forecast track, the center will cross the southern Bay of Campeche coast later today and move inland over eastern Mexico tonight and Thursday.  The center is forecast to move back over the Bay of Campeche Thursday night and Friday.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

GPM is a joint mission between NASA and the Japan Aerospace Exploration Agency, JAXA. The GPM and Aqua satellites are part of a fleet of NASA Earth observing satellites.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

Graphene and 2D materials could move electronics beyond 'Moore's Law'

image: A team of researchers based in Manchester, the Netherlands, Singapore, Spain, Switzerland and the USA has published a new review on a field of computer device development known as spintronics, which could see graphene used as a building block for next-generation electronics.

Image: 
The University of Manchester

A team of researchers based in Manchester, the Netherlands, Singapore, Spain, Switzerland and the USA has published a new review on a field of computer device development known as spintronics, which could see graphene used as a building block for next-generation electronics.

Electronic spin transport in graphene and related two-dimensional (2D) materials, driven by recent theoretical and experimental advances, has emerged as a fascinating area of research and development.

Spintronics is the combination of electronics and magnetism at the nanoscale, and it could lead to next-generation high-speed electronics. Spintronic devices are a viable alternative for nanoelectronics beyond Moore's law, offering higher energy efficiency and lower dissipation compared to conventional electronics, which relies on charge currents. In principle we could have phones and tablets operating with spin-based transistors and memories.

As published in the APS journal Reviews of Modern Physics, the review focuses on the new perspectives provided by heterostructures and their emergent phenomena, including proximity-enabled spin-orbit effects, coupling spin to light, electrical tunability and 2D magnetism.

The average person already encounters spintronics: laptops and PCs use it in the magnetic sensors in the read heads of hard disk drives. These sensors are also used in the automotive industry.

Spintronics is a new approach to developing electronics in which both memory devices (RAM) and logic devices (transistors) exploit 'spin' - the basic property of electrons that causes them to behave like tiny magnets - as well as the electronic charge.

Dr Ivan Vera Marun, Lecturer in Condensed Matter Physics at The University of Manchester said: "The continuous progress in graphene spintronics, and more broadly in 2D heterostructures, has resulted in the efficient creation, transport, and detection of spin information using effects previously inaccessible to graphene alone.

"As efforts on both the fundamental and technological aspects continue, we believe that ballistic spin transport will be realised in 2D heterostructures, even at room temperature. Such transport would enable practical use of the quantum mechanical properties of electron wave functions, bringing spins in 2D materials to the service of future quantum computation approaches."

Controlled spin transport in graphene and other two-dimensional materials has become increasingly promising for applications in devices. Of particular interest are custom-tailored heterostructures, known as van der Waals heterostructures, that consist of stacks of two-dimensional materials in a precisely controlled order. This review gives an overview of this developing field of graphene spintronics and outlines the experimental and theoretical state of the art.

Billions of spintronics devices such as sensors and memories are already being produced. Every hard disk drive has a magnetic sensor that uses a flow of spins, and magnetic random access memory (MRAM) chips are becoming increasingly popular.

Over the last decade, exciting results have been made in the field of graphene spintronics, evolving to a next generation of studies extending to new two-dimensional (2D) compounds.

Since its isolation in 2004, graphene has opened the door for other 2D materials. Researchers can then use these materials to create stacks of 2D materials called heterostructures. These can be combined with graphene to create new 'designer materials' to produce applications originally limited to science fiction.

Professor Francisco Guinea who co-authored the paper said: "The field of spintronics, the properties and manipulation of spins in materials has brought to light a number of novel aspects in the behaviour of solids. The study of fundamental aspects of the motion of spin carrying electrons is one of the most active fields in the physics of condensed matter."

"The identification and characterisation of new quantum materials with non-trivial topological electronic and magnetic properties is being intensively studied worldwide, after the formulation, in 2004, of the concept of topological insulators. Spintronics lies at the core of this search. Because of their purity, strength, and simplicity, two-dimensional materials are the best platform in which to find these unique topological features, which relate quantum physics, electronics, and magnetism."

Overall, the field of spintronics in graphene and related 2D materials is currently moving towards the demonstration of practical graphene spintronic devices such as coupled nano-oscillators for applications in space communication, high-speed radio links, vehicle radar and interchip communication.

Advanced materials is one of The University of Manchester's research beacons - examples of pioneering discoveries, interdisciplinary collaboration and cross-sector partnerships that are tackling some of the biggest questions facing the planet. #ResearchBeacons

Credit: 
University of Manchester

A promise to restore hearing

image: Liu used his base editing technology to correct a recessive genetic mutation that can cause complete hearing loss in humans.

Image: 
Harvard University

When Wei Hsi (Ariel) Yeh was a young undergraduate student, one of her close friends went from normal hearing to complete deafness in the span of one month. He was 29 years old. No one knew why he lost his hearing; doctors still don't know. Frustrated and fearful for her friend, Yeh, who graduated last month with her Ph.D. in chemistry from Harvard University, dedicated her graduate studies to solving some of the vast genetic mysteries behind hearing loss.

In the United States, one in eight people aged 12 years or older has hearing loss in both ears. While technologies like hearing aids and cochlear implants can amplify sound, they can't correct the problem. But gene editing could--genetic anomalies contribute to half of all cases. Two years ago, Yeh and David R. Liu, Thomas Dudley Cabot Professor of the Natural Sciences and a member of the Broad Institute and the Howard Hughes Medical Institute, repaired a dominant mutation and prevented hearing loss in a mouse model for the first time. But, Liu said, "Most genetic diseases are not caused by dominant mutations, they're caused by recessive ones, including most genetic hearing losses."

Now, Liu, Yeh and researchers at Harvard University, the Broad Institute, and the Howard Hughes Medical Institute achieved another first: They restored partial hearing to mice with a recessive mutation in the gene TMC1 that causes complete deafness, the first successful example of genome editing to fix a recessive disease-causing mutation.

Dominant disease mutations, meaning those that sully just one of the body's two copies of a gene, in some ways are easier to attack. Knock out the bad copy, and the good one can come to the rescue. "But for recessive diseases," Liu said, "you can't do that. By definition, the recessive allele means that you have two bad copies. So, you can't just destroy the bad copy." You have to fix one or both.

To hear, animals rely on hair cells in the inner ear, which bend under the pressure of sound waves and send electrical impulses to the brain. The recessive mutation to TMC1 that Liu and Yeh hoped to correct caused rapid deterioration of those hair cells, leading to profound deafness at just 4 weeks of age.

Jeffrey Holt, Professor of Otolaryngology and Neurology at Harvard Medical School and an author on the paper, successfully treated TMC1-related deafness with gene therapy: his team sent cells with healthy versions of the gene in among the unhealthy ones to counteract the disease-causing mutation. But Volha (Olga) Shubina-Aleinik, a postdoctoral fellow in the Holt lab, said gene therapy may have a limited duration. "That is why we need more advanced techniques such as gene editing, which may last a lifetime."

Yeh spent years designing a base editor that could find and erase the disease-causing mutation and replace it with the correct DNA code. But even after she demonstrated good results in vitro, there was a problem: Base editors are too large to fit in the traditional delivery vehicle, adeno-associated virus or AAV. To solve this problem, the team split the base editor in half, sending each piece in with its own viral vehicle. Once inside, the two viruses needed to co-infect the same cells where the two base editor halves would rejoin and head off to find their target. Despite the labyrinthine entry, the editor proved to be efficient, causing only a minimum of undesired deletions or insertions.

"We saw very little evidence of off-target editing," Liu said. "And we noticed that the edited animals had much-preserved hair cell morphology and signal transduction, meaning the hair cells, the critical cells that convert sound waves to neuronal signals appeared more normal and behaved more normally."

After the treatment, Yeh performed an informal test: She clapped her hands. Mice that had previously lost all hearing ability jumped and turned to look. Formal tests revealed the base editor worked, at least in part: Treated mice had partially restored hearing and could respond to loud and even some medium sounds, Yeh said.

Of course, more work needs to be done before the treatment can be used in humans. Unedited cells continued to die, causing deafness to return even after the base editor restored function to others.

But the study also proved that the clandestine AAV delivery method works. Already, Liu is using AAV to tackle other genetic diseases, including progeria, sickle cell anemia, and degenerative motor diseases. "We're actually going after quite a few genetic diseases now, including some prominent ones that have caused a lot of suffering and energized pretty passionate communities of patients and patient families to do anything to find a treatment," Liu said. "For progeria, there's no cure. The best treatments extend a child's average lifespan from about 14 to 14.5 years."

For Yeh, whose friend still has no answer, much less a cure for his hearing loss, genetic deafness is still her primary target. "There's still a lot to explore," she said. "There's so much unknown."

Credit: 
Harvard University

Ocean uptake of CO2 could drop as carbon emissions are cut

image: Ocean waters could quickly respond to drops in human carbon emissions by taking up less carbon dioxide from the atmosphere.

Image: 
Kevin Krajick/Earth Institute

Volcanic eruptions and human-caused changes to the atmosphere strongly influence the rate at which the ocean absorbs carbon dioxide, says a new study. The ocean is so sensitive to changes such as declining greenhouse gas emissions that it immediately responds by taking up less carbon dioxide.

The authors say we may soon see this play out due to the COVID-19 pandemic lessening global fuel consumption; they predict the ocean will not continue its recent historic pattern of absorbing more carbon dioxide each year than the year before, and could even take up less in 2020 than in 2019.

"We didn't realize until we did this work that these external forcings, like changes in the growth of atmospheric carbon dioxide, dominate the variability in the global ocean on year-to-year timescales. That's a real surprise," said lead author Galen McKinley, a carbon cycle scientist at Columbia University's Lamont-Doherty Earth Observatory. "As we reduce our emissions and the growth rate of atmospheric carbon dioxide slows down, it's important to realize that the ocean carbon sink will respond by slowing down."

The paper, published today in the journal AGU Advances, largely resolves the uncertainty about what caused the ocean to take up varying amounts of carbon over the last 30 years. The findings will enable more accurate measurements and projections of how much the planet might warm, and how much the ocean might offset climate change in the future.

A carbon sink is a natural system that absorbs excess carbon dioxide from the atmosphere and stores it away. Earth's largest carbon sink is the ocean. As a result, it plays a fundamental role in curbing the effects of human-caused climate change. Nearly 40 percent of the carbon dioxide added to the atmosphere by fossil fuel burning since the dawn of the industrial era has been taken up by the ocean.

There's variability in the rate at which the ocean takes up carbon dioxide, which isn't fully understood. In particular, the scientific community has puzzled over why the ocean briefly absorbed more carbon dioxide in the early 1990s and then slowly took up less until 2001, a phenomenon verified by numerous ocean observations and models.

McKinley and her coauthors addressed this question by using a diagnostic model to visualize and analyze different scenarios that could have driven greater and lesser ocean carbon uptake between 1980 and 2017. They found the reduced ocean carbon sink of the 1990s can be explained by the slowed growth rate of atmospheric carbon dioxide early in the decade. Efficiency improvements and the economic collapse of the Soviet Union and Eastern European countries are thought to be among the causes of this slowdown.

But another event also affected the carbon sink: the massive eruption of Mount Pinatubo in the Philippines in 1991 caused the sink to temporarily become much larger.

"One of the key findings of this work is that the climate effects of volcanic eruptions such as those of Mount Pinatubo can play important roles in driving the variability of the ocean carbon sink," said coauthor Yassir Eddebbar, a postdoctoral scholar at Scripps Institution of Oceanography.

Pinatubo was the second-largest volcanic eruption of the 20th century. The estimated 20 million tons of ash and gases it spewed high into the atmosphere had a significant impact on climate and the ocean carbon sink. The researchers found that Pinatubo's emissions caused the ocean to take up more carbon in 1992 and 1993. The carbon sink slowly declined until 2001, when human activity began pumping more carbon dioxide into the atmosphere. The ocean responded by absorbing these excess emissions.

"This study is important for a number of reasons, but I'm most interested in what it means for our ability to predict the near-term, one to ten years out, future for the ocean carbon sink," said coauthor said Nicole Lovenduski, an oceanographer at the University of Colorado Boulder. "The future external forcing is unknown. We don't know when the next big volcanic eruption will occur, for example. And the COVID-19-driven carbon dioxide emissions reduction was certainly not anticipated very far in advance."

Investigating how the Pinatubo eruption impacted global climate, and thus the ocean carbon sink, and whether the drop in emissions due to COVID-19 is reflected in the ocean are among the research team's next plans.

By understanding variability in the ocean carbon sink, the scientists can continue to refine projections of how the ocean system will slow down.

McKinley cautions that as global emissions are cut, there will be an interim phase where the ocean carbon sink will slow down and not offset climate change as much as in the past. That extra carbon dioxide will remain in the atmosphere and contribute to additional warming, which may surprise some people, she said.

"We need to discuss this coming feedback. We want people to understand that there will be a time when the ocean will limit the effectiveness of mitigation actions, and this should also be accounted for in policymaking," she said.

Credit: 
Columbia Climate School

NASA infrared data shows Tropical Cyclone Nisarga strengthened before landfall

image: On June 2 at 4:47 p.m. EDT (2047 UTC) NASA's Aqua satellite analyzed Tropical Cyclone Nisarga using the Atmospheric Infrared Sounder or AIRS instrument. AIRS found coldest cloud top temperatures as cold as or colder than (purple) minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the storm's center.

Image: 
NASA JPL/Heidar Thrastarson

Satellite data of Tropical Cyclone Nisarga's cloud top temperatures revealed that the storm had strengthened before it began making landfall in west central India.

Nisarga formed around 5 p.m. EDT (2100 UTC) on June 2, and had maximum sustained winds near 40 knots (46 mph/74 kph) at that time. Within 12 hours, the storm intensified to hurricane strength.

One of the ways NASA researches tropical cyclones is using infrared data that provides temperature information. Cloud top temperatures provide information to forecasters about where the strongest storms are located within a tropical cyclone (which are made of hundreds of thunderstorms). Tropical cyclones do not always have uniform strength, and some sides are stronger than others. The stronger the storms, the higher they extend into the troposphere, and the colder the cloud temperatures.

On June 2 at 4:47 p.m. EDT (2047 UTC) NASA's Aqua satellite analyzed the storm using the Atmospheric Infrared Sounder or AIRS instrument. AIRS found cloud top temperatures were getting colder. Colder cloud tops are an indication that the uplift of air in the storm was getting stronger and thunderstorms were building higher into the troposphere. AIRS found temperatures as cold as or colder than minus 63 degrees Fahrenheit (minus 53 degrees Celsius) around the center. NASA research has shown that cloud top temperatures that cold indicate strong storms that have the capability to create heavy rain.

The Joint Typhoon Warning Center noted that at 5 a.m. EDT (0900 UTC), Tropical Cyclone Nisarga was located near latitude 17.9 degrees north and longitude 72.9 degrees east, about 65 nautical miles (75 miles/120 km) south of Mumbai, India. The storm was moving to the northeast.

Maximum sustained winds were near 75 knots (86 mph/139 kph), making it a Category 1 hurricane on the Saffir-Simpson Hurricane Wind Scale. Hurricane-force winds extended 25 miles (40 km) from the center, while tropical-storm-force winds extended 75 miles (120 km) from the center.

At that time, Nisarga was making landfall south of Mumbai. The system is forecast to track inland and dissipate.

Typhoons and hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

The AIRS instrument is one of six instruments flying on board NASA's Aqua satellite, launched on May 4, 2002.

By Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Innocent and highly oxidizing

Chemical oxidation, the selective removal of electrons from a substrate, represents one of the most important transformations in chemistry. However, most common oxidants show disadvantages such as undesired side reactions. The chemist Marcel Schorpp and colleagues from the group of Prof. Dr. Ingo Krossing at the Institute of Inorganic and Analytical Chemistry at the University of Freiburg have successfully generated a novel and extremely stable perfluorinated radical cation. In cooperation with Stephan Rein from the group of Prof. Dr. Stefan Weber at the Institute of Physical Chemistry, this radical was characterized in detail. The researchers recently published their results in the journal Angewandte Chemie.

The new reagent proves to be an extremely strong oxidizing agent and allows for the synthesis of reactive species in standard laboratory solvents that were previously difficult or impossible to access - for example, the oxidation of decamethylferrocenium, a long-known and very stable species, to the corresponding highly reactive dication in the presence of carbon monoxide. With this newly described reagent, many of the above-mentioned disadvantages of other oxidants can be circumvented, since it reacts as an innocent oxidizing agent: it only takes up electrons from the substrate without showing further reactivity.

Due to the broad applicability described in the article, this reagent is interesting for inorganic and organic chemistry as well as for electrochemical and materials science research questions. "In the future, for example, it might be possible to embed it in a polymer to be used as cathode material for organic batteries," explains Schorpp.

Credit: 
University of Freiburg

Oncotarget: Second line trastuzumab emtansine following horizontal dual blockade

image: Kaplan-Meier curve for PFS.

Image: 
Correspondence to - Salvatore Del Prete - salvatore.delprete@aslnapoli2nord.it and Liliana Montella - liliana.montella@aslnapoli2nord.it

Volume 11, Issue 22 of Oncotarget reported that despite relevant medical advancements, metastatic breast cancer remains an incurable disease.

HER2 signaling conditions tumor behavior and treatment strategies of HER2 expressing breast cancer.

Cancer treatment guidelines uniformly identify dual blockade with pertuzumab and trastuzumab plus a taxane as best first line and trastuzumab emtansine as a preferred second-line choice.

However, no prospectively designed study is available that focuses on the sequence and outcomes of patients treated with T-DM1 following the triplet.

In the following report, data concerning a wide series of patients treated in a real-life setting are presented.

Results obtained in terms of response and median progression-free survival suggest a significant role for T-DM1 in disease control of metastatic HER2 expressing breast cancer.

Dr. Salvatore Del Prete from the Medical Oncology Unit "San Giovanni di Dio" Hospital, Frattamaggiore as well as Dr. Liliana Montella from the Medical Oncology Unit "Santa Maria delle Grazie" Hospital said, "From the 80s, Human Epidermal Growth Factor Receptor 2 (HER2) signaling was increasingly recognized as pivotal in tumor growth of HER2 expressing breast cancer. HER2 expression is limited to a proportion (15-20%) of breast cancer; however, HER2 conditions tumor behavior and addresses treatment strategies."

Currently available guidelines in metastatic HER2 positive breast cancer design a sequence of treatment: first-line double blockade with trastuzumab plus pertuzumab and a taxane, according to Cleopatra trial results; second-line treatment with trastuzumab emtansine, supported by Emilia trial results; and then lapatinib plus capecitabine.

The reduced toxicity of T-DM1 in second and later lines of treatment, together with its high rates of activity and efficacy, is decisive when choosing treatment for a patient who is a candidate for a prolonged time on treatment.

On February 22, 2013, the Food and Drug Administration approved T-DM1 for use as a single agent in the treatment of patients with HER2-positive metastatic breast cancer who previously received trastuzumab and a taxane.

Such results led the FDA, on May 3, 2019, to approve T-DM1 for the adjuvant treatment of patients with HER2-positive early breast cancer who received neoadjuvant taxane and trastuzumab-based treatment and had residual invasive disease in the breast or axilla at surgery.

In the present study, data coming from different centers concerning patients with HER2 positive metastatic breast cancer treated with second-line T-DM1 following trastuzumab and pertuzumab were collected and evaluated.

The Del Prete/Montella Research Team concluded in their Oncotarget Research Article that despite the overall increase in survival of metastatic breast cancer patients, a proportion of patients is lost at every line of therapy.

In two different studies concerning patient series treated predominantly before 2010, 3% and 26% of patients, respectively, reached third-line treatment.

This evidence underscores the need to give our patients the best treatment as early as possible.

In summary, the available evidence substantially favors the choice of T-DM1 in the treatment of HER2-positive breast cancer in second and later lines.

Credit: 
Impact Journals LLC

Precision spray coating could enable solar cells with better performance and stability

image: Researchers have developed a new precision spray-coating method called sequential spray deposition that enables multilayer perovskite absorbers for advanced solar cell designs and could be scaled up for mass production. The technique could be used to create perovskite architectures with any number of layers.

Image: 
Pongsakorn Kanjanaboos, Mahidol University

WASHINGTON -- Although perovskites are a promising alternative to the silicon used to make most of today's solar cells, new manufacturing processes are needed to make them practical for commercial production. To help fill this gap, researchers have developed a new precision spray-coating method that enables more complex perovskite solar cell designs and could be scaled up for mass production.

Perovskites are promising for next-generation solar cells because they absorb light and convert it to energy with better efficiency and potentially lower production costs than silicon. Perovskites can even be sprayed onto glass to create energy-producing windows.

"Our work demonstrates a process to deposit perovskite layer by layer with controllable thicknesses and rates of deposition for each layer," said research team leader Pongsakorn Kanjanaboos from the School of Materials Science and Innovation, Faculty of Science, Mahidol University in Thailand. "This new method enables stacked designs for solar cells with better performance and stability."

In the Optical Society (OSA) journal Optical Materials Express, Kanjanaboos and colleagues describe their new spray coating method, called sequential spray deposition, and show that it can be used to create a multilayer perovskite design. Applying different perovskite materials in each layer can allow customization of a device's function or the ability to meet specific performance and stability requirements.

A better way to spray

One of the advantages of perovskites is that they are solution processable, meaning that a solar cell is made by drying liquid perovskite into a solid at a low temperature. This fabrication process is much easier and less expensive than making a traditional silicon solar cell, a process that requires very high temperatures and cutting a solid material into wafers.

However, the solution process typically used to make perovskites does not allow multilayer designs because the upper layer tends to dissolve the already-dried lower layer. To overcome this challenge, the researchers turned to a process known as sequential spray deposition in which tiny droplets of a material are applied to a surface.

After trying different spray coating methods, they found one that worked at temperatures around 100 °C. They then optimized the spray parameters to ensure that the tiny droplets dried and crystalized into solid perovskite immediately upon contact with the already-dried lower layer.

Building a multi-layer device

"With our spray coating process, the solution of the upper layer doesn't disturb the solid film making up the first layer," said Pongsakorn. "Endless combinations of stacked perovskite architectures with any number of layers can be designed and created with precise control of thicknesses and rates of deposition for each layer."

The researchers demonstrated the technique by depositing a perovskite material with high stability on a different perovskite material with better electrical properties. This double-layer semi-transparent perovskite device showed clearly defined layers and simultaneously achieved high performance and good stability.

The researchers plan to use the new approach to make multilayer perovskite devices with new functions and combinations of performance and stability that were not possible before.

Credit: 
Optica

Experts debate saturated fat consumption guidelines for Americans

Should public health guidelines recommend reducing saturated fat consumption as much as possible? Nutrition experts are tackling that controversial question head-on in a new series of papers outlining key points of agreement--and disagreement--in how to interpret the evidence and inform health guidelines.

Ronald Krauss, MD, of Children's Hospital Oakland Research Institute, and Penny Kris-Etherton, PhD, of Pennsylvania State University, present their positions in the premiere of Great Debates in Nutrition, a new section edited by David S. Ludwig in the American Journal of Clinical Nutrition. Krauss and Kris-Etherton will also debate the topic in a live interactive session as part of NUTRITION 2020 LIVE ONLINE, a virtual conference hosted by the American Society for Nutrition.

"The debate format enabled us to identify and refine points of agreement, as well as issues that remained in dispute," the authors wrote in a paper summarizing their viewpoints. "While these proceedings represent our personal perspectives, we hope that they provide a framework for discussion of dietary saturated fat recommendations among health professionals and for improved understanding by the general public of the controversies that have surrounded this topic."

Saturated fats are those that are solid at room temperature, such as bacon grease and butter. The Dietary Guidelines for Americans 2015-2020 recommend limiting saturated fat intake to less than 10% of calories per day. A majority of Americans currently exceed this level, with saturated fat representing about 11-12% of daily calories, on average.
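
To put the 10% guideline in concrete terms, the short Python sketch below converts it into grams per day. It assumes a reference 2,000-calorie diet and the standard value of roughly 9 calories per gram of fat; the numbers are illustrative, not part of the guideline itself.

```python
# Illustrative conversion of the saturated fat guideline into grams per day.
# Assumes a reference 2,000-kcal diet and ~9 kcal per gram of fat.
def saturated_fat_grams(daily_kcal=2000, fraction_of_calories=0.10, kcal_per_gram=9):
    """Grams of saturated fat that correspond to a given share of daily calories."""
    return daily_kcal * fraction_of_calories / kcal_per_gram

print(round(saturated_fat_grams(), 1))                            # 22.2 g at the 10% limit
print(round(saturated_fat_grams(fraction_of_calories=0.115), 1))  # 25.6 g at the ~11-12% U.S. average
```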

Animal products are the main source of saturated fats in Americans' diets, with mixed dishes such as burgers, sandwiches, tacos and pizza contributing a little over a third of this daily intake. Snacks and sweets contribute about 18% of total saturated fat intake, protein foods contribute 15% and dairy products provide 13%. Tropical plant products, like coconut butter, also have high saturated fat content.

The debate centers around the evidence linking saturated fat with heart disease and the potential consequences--intended and unintended--of a public health guideline to reduce consumption as much as possible.

Both experts agree that a person's overall dietary pattern matters more than any single food component, and that dietary patterns associated with lower rates of heart disease are not high in saturated fat. In addition, both agree that low-density lipoproteins (LDLs) increase the risk of heart disease and that consuming saturated fats in place of unsaturated fats and carbohydrates increases LDL cholesterol levels.

However, they disagree about the strength of the evidence that a recommendation to limit saturated fat as much as possible would actually reduce rates of heart disease by helping to keep cholesterol in check. One set of questions revolves around the specific role of LDL cholesterol and the extent to which other mechanisms may come into play. Another question is whether the effect of dietary saturated fat on LDL cholesterol levels could be influenced by other factors, particularly the amount and types of carbohydrates a person consumes.

Both experts point to uncertainty with regard to how different sources of saturated fats affect health, as well as how the body's response to these fats may differ from person to person. While both call for additional research to help resolve the questions that remain, they come to different conclusions regarding what to make of the evidence as it stands.

"Though it is possible that dietary intake of saturated fatty acids has a causal role in cardiovascular disease, the evidence to support this contention is inconclusive," Krauss wrote. "Collectively, neither randomized clinical trials nor observational studies have conclusively established a benefit on cardiovascular disease outcomes and mortality [that can be specifically attributed to] reducing dietary saturated fatty acids." Krauss also questions the scientific basis for implementing a numerical target for the population as a whole, given that the effects of saturated fats may vary by source or by person.

Despite the outstanding questions, Kris-Etherton argues that the available evidence is strong enough to justify a recommendation to decrease intake of saturated fats. "We still have a large body of convincing and consistent evidence from different types of studies for making population-wide dietary recommendations to lower saturated fatty acids," said Kris-Etherton. "To wait until all questions are answered before any recommendations are made to limit saturated fatty acids further ignores the evidence base we have which when implemented can prevent cardiovascular disease (and other diseases) in many people."

Both experts caution that any guidance on this topic should avoid inadvertently encouraging people to replace foods high in saturated fat with foods high in added sugars and refined or processed carbohydrates, which they agree would pose substantial health risks.

Krauss and Kris-Etherton will discuss this topic in a live debate from 10:30 a.m.-noon on Wednesday, June 3 as part of NUTRITION 2020 LIVE ONLINE, to be held from June 1-4, 2020. Contact the media team for more information or register to access the virtual content.

Credit: 
American Society for Nutrition

Double-sided solar panels that follow the sun prove most cost effective

image: This graphical abstract summarizes the comprehensive worldwide techno-economic analysis of photovoltaic systems using a combination of bifacial modules and single- and dual-axis trackers. The researchers found that single-axis trackers with bifacial modules achieve the lowest levelized cost of electricity (LCOE) in the majority of locations (a 16% reduction on average). Yield is boosted by 35% by using bifacial modules with single-axis trackers and by 40% in combination with dual-axis trackers.

Image: 
Rodríguez-Gallegos et al./Joule

Solar power systems with double-sided (bifacial) solar panels--which collect sunlight from two sides instead of one--and single-axis tracking technology that tilts the panels so they can follow the sun are the most cost effective to date, researchers report June 3rd in the journal Joule. They determined that this combination of technologies produces almost 35% more energy, on average, than immobile single-panel photovoltaic systems, while reducing the cost of electricity by an average of 16%.

"The results are stable, even when accounting for changes in the weather conditions and in the costs from the solar panels and the other components of the photovoltaic system, over a fairly wide range," says first author Carlos Rodríguez-Gallegos, a research fellow at the Solar Energy Research Institute of Singapore, sponsored by the National University of Singapore. "This means that investing in bifacial and tracking systems should be a safe bet for the foreseeable future."

Research efforts tend to focus on further boosting energy output from solar power systems by improving solar cell efficiency, but the energy yield per panel can also be increased in other ways. Double-sided solar panels, for example, produce more energy per unit area than their standard counterparts and can function in similar locations, including rooftops. This style of solar panel, as well as tracking technology that allows each panel to capture more light by tilting in line with the sun throughout the day, could significantly improve the energy yield of solar cells even without further advancements in the capabilities of the cells themselves. However, the combined contributions of these recent technologies have not been fully explored.

To identify the global economic advantages associated with the use of a variety of paired photovoltaic technologies, Rodríguez-Gallegos and colleagues first used data from NASA's Clouds and the Earth's Radiant Energy System (CERES) to measure the total radiation that reaches the ground each day. The researchers further tailored this data to account for the influence of the sun's position on the amount of radiation a solar panel can receive based on its orientation, and then calculated the average net cost of generating electricity through a photovoltaic system throughout its lifetime. They focused on large photovoltaic farms composed of thousands of modules rather than smaller photovoltaic systems, which generally include higher associated costs per module. The team validated their model using measured values from experimental setups provided by three institutes and incorporated additional weather parameters to perform a worldwide analysis.
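
The lifetime cost metric at the heart of this comparison is the levelized cost of electricity (LCOE): discounted lifetime costs divided by discounted lifetime energy yield. The Python sketch below illustrates that calculation in its simplest form; it is not the authors' model, and every number in it (costs, yield, degradation, discount rate) is a placeholder assumption rather than a value from the paper.

```python
# Minimal LCOE sketch: discounted lifetime costs divided by discounted energy yield.
# All inputs are illustrative placeholders, not values from the Joule paper.
def lcoe(capex, annual_opex, first_year_kwh, lifetime_years=25,
         discount_rate=0.05, degradation=0.005):
    """Levelized cost of electricity in cost units per kWh."""
    costs = capex
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** year
        costs += annual_opex / discount
        energy += first_year_kwh * (1 - degradation) ** year / discount
    return costs / energy

fixed = lcoe(capex=1000, annual_opex=15, first_year_kwh=1500)            # fixed monofacial system
tracked = lcoe(capex=1150, annual_opex=18, first_year_kwh=1500 * 1.35)   # bifacial + single-axis tracker (+35% yield)
print(f"fixed: {fixed:.3f}, tracked: {tracked:.3f} per kWh")
# With these placeholder inputs, the tracked bifacial system comes out roughly 14% cheaper per kWh.
```

Discounting both the cost and the energy streams is what allows higher upfront tracker and module costs to be traded off against the extra yield they produce in every year of operation.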

The model suggests that double-sided solar panels combined with single-axis tracking technology is most cost effective almost anywhere on the planet, although dual-axis trackers--which follow the sun's path even more accurately but are more expensive than single-axis trackers--are a more favorable substitute in latitudes near the poles. But despite this technology's clear benefits, Rodríguez-Gallegos does not expect this style of photovoltaic system to become the new standard overnight.

"The photovoltaics market is traditionally conservative," he says. "More and more evidence points toward bifacial and tracking technology to be reliable, and we see more and more of it adopted in the field. Still, transitions take time, and time will have to show whether the advantages we see are attractive enough for installers to make the switch."

While this work considers standard silicon-based solar cells, Rodríguez-Gallegos and colleagues next plan to analyze the potential of tracking systems combined with pricey, top-of-the-line solar materials with higher efficiencies (called tandem technologies), which are currently limited to heavy-duty concentrator photovoltaics and space applications.

"As long as research continues to take place, the manufacturing costs of these materials are expected to keep on decreasing, and a point in time might be reached when they become economically competitive and you might see them on your roof," says Rodríguez-Gallegos. "We then aim to be a step ahead of this potential future so that our research can be used as a guide for scientists, manufacturers, installers, and investors."

Credit: 
Cell Press