Tech

Mangrove forest study has takeaways for coastal communities

image: Despite resilient regrowth in the past, Lagomasino and his research team estimate that nearly 11,000 hectares of mangrove forest, about 27,000 acres, failed to regrow at their previous levels after Hurricane Irma.

Image: 
David Lagomasino/ECU

A new paper published by an East Carolina University researcher in the Department of Coastal Studies shines light on the effects that human-made infrastructure and natural topography have on coastal wetlands after major storm events.

In partnership with NASA and Florida International University, the study, led by assistant professor David Lagomasino, was published in the July edition of Nature Communications.

The study focused on the effects of Hurricane Irma, which struck Florida in 2017, and the damage it caused to the state's mangrove forests. The research team found that the forests suffered unparalleled dieback after the major hurricane.

Mangrove forests are often damaged after hurricanes, but Lagomasino said forests in Florida have shown great resiliency in the past due to their structure, position and species composition. After Hurricane Irma, the forests did not rebound at the same rate. Nearly 11,000 hectares -- a space the size of more than 24,000 football fields -- showed evidence of complete dieback following the storm.

For a resource that prevents more than $11 billion in annual property and flood damage in the state, that's a major concern, Lagomasino said.

"There have been significant storms in the past that have led to damage, but Irma seems to have caused one of the largest areas of dieback, at least in the satellite record," Lagomasino said.

After studying satellite and aerial footage of the region, the research team was able to pinpoint potential explanations for the dieback, including human-made obstacles.

"Human-made obstacles, as well as natural changes in topography, can impact the flow of water through an area," Lagomasino said. "Things like roads and levees can restrict or stop the flow of water between areas that were once connected. The lack of connection between the water can lead to extremes -- extreme dry conditions and extreme wet conditions, both of which can be stressful on wetland vegetation that thrives in more stable conditions."

The study noted that human-made barriers can lead to an increase in how long water stays on the surface, which can cause rapid degradation of fine root materials. Increased saltwater ponding may occur when storm surge is high and barriers obstruct water flow.

These results are not only key for future storm planning in Florida, but other coastal states like North Carolina, Lagomasino said.

"What we have learned in Florida can be useful to North Carolina and other coastal regions," Lagomasino said. "Our results indicate that the elevation of the landscape, the connectivity of water across the landscape, and the height of storm surge can indicate vulnerable areas. In other words, low elevation areas that are disconnected or do not have the capability to drain after being flooded are more susceptible to long-term damage.

"This is useful for understanding the resilience of coastal forests and wetlands in North Carolina and may also be important in predicting urban areas that may also be less resilient to these extreme events."

The study suggested changes that can be made to improve coastal resiliency in the future when facing severe weather events, including:

Adding new metrics that account for storm surge and geology to the traditional hurricane rating system;

Establishing field research stations in low-lying areas to help identify underrepresented physical and biological processes in vulnerable regions;

Performing regular coastal remote sensing surveys to monitor drainage basins and improve water connectivity; and

Improving freshwater flow to help create new tidal channels.

"We hope that the information from our research will help improve the recovery process after storms," Lagomasino said. "If these areas can be identified ahead of time, then the disaster response can address issues in hard-hit areas much faster or minimize the impact beforehand.

"The big takeaway here is that intense winds do a lot of damage during hurricanes. However, the intensity of damage does not necessarily coincide with the ability of the system to recover over time. Other factors, like slight changes in the elevation of the coastal landscape and storm surge, play a significant role in how the ecosystem recovers or does not recover after the initial damage. Having these factors in mind prior to hurricane season can help lessen long-term impacts in vulnerable communities."

Credit: 
East Carolina University

Plant physiology: A tale of three proteins

LMU biologists have shown that 'supervisor' and 'motivator' proteins are required to enable a third factor to perform its function in photosynthesis.

Plants, algae and cyanobacteria need only three ingredients for the synthesis of sugars via the process of photosynthesis - carbon dioxide, water and sunlight. However, the operation is far more complicated than this simple list of ingredients might suggest. Prof. Dr. Dario Leister and his research group in the Department of Biology I at LMU are analyzing the complex regulation of photosynthesis. Their latest findings shed light on the roles of three proteins, named PGRL1, PGRL2 and PGR5, which participate in the control of one of the two subsystems of the photosynthetic apparatus. PGRL2 itself was first discovered in the course of the new study.

Photosynthesis involves several coordinated sequences of reactions. In the first step, a specific portion of the electromagnetic radiation emitted by the Sun is absorbed by membrane-bound pigment-protein complexes, which are organized into two 'photosystems' called PSI and PSII. The photosystems operate in two basic modes - linear and cyclic. In the former, PSII and PSI act in series. Light energy detaches electrons from water molecules, generating hydrogen ions (protons) and molecular oxygen. The protons are pumped to the opposite side of the membrane, while the electrons are transferred sequentially from one complex to the next, gaining in energy in the process. Ultimately, this energy is stored in the form of ATP, which drives most of the biochemical transactions in cells. Linear electron flow (LEF) through PSI also supplies the 'reducing equivalents' required for the conversion of carbon dioxide into sugars. The second mode of photosynthesis involves only photosystem I. Here, the electrons energized by solar radiation are diverted by other proteins, such that they follow a cyclic route. Notably, this cyclic electron flow (CEF) through PSI generates ATP only. "Plants need both subsystems," Leister points out. The cyclic pathway is particularly significant when plants are under stress, and need more ATP. Indeed, without this mechanism plants could not survive under natural conditions.

One worker, one motivator, one supervisor

How is the cyclic pathway regulated? About 20 years ago, Japanese researchers set out to characterize a collection of mutants of Arabidopsis thaliana (thale cress), a popular model system used by plant geneticists. In one of these strains, a gene they called PGR5 was mutated. 'PGR' stands for 'proton gradient', and refers to the proton concentration gradient created by the transfer of protons across the membrane during the course of photosynthesis. In the mutant, formation of the gradient was perturbed. "To our surprise, the PGR5 protein had none of the sequences that one would expect to find in a typical electron transporter," says Leister. This soon gave rise to the idea that other factors must also be involved in the maintenance of the proton gradient.

Experiments carried out by Leister in 2008 confirmed this suspicion. He discovered a second protein, which he called "pgr5-like 1" (PGRL1). Arabidopsis has two different genes that code for this factor, which explains why it did not turn up in the original mutant screen in which the PGR5 gene was identified. "At the time, we thought we now had our hands on the really important protein," he recalls. Inactivation of either PGRL1 or PGR5 reduces cyclic electron flow around photosystem I. Furthermore, loss of PGRL1 destabilizes PGR5, but not vice versa. So it looked as if PGRL1 was a central component of the cyclic mode of photosynthesis. Moreover, this notion was supported by the fact that it contains the structural elements one would expect to find in an electron transporter.

But the regulation of cyclic electron flow later turned out to be more complex than that. Leister and his colleagues went on to identify PGRL2 as a third protein involved -- and its discovery complicated matters significantly. The team showed that when PGRL2 was knocked out, photosynthesis was not affected. Conversely, overproduction of PGRL2 destabilized PGR5, even in the presence of PGRL1. The big surprise came when PGRL1 and PGRL2 were simultaneously inactivated: PGR5 reappeared and was able on its own to restore cyclic electron transport. Interestingly, these plants grew more slowly than those in which PGR5 (and cyclic electron transport) were missing. Leister offers an instructive interpretation of these findings. "PGR5 actually does the job, PGRL1 acts as a motivator of PGR5, and PGRL2 is PGR5's supervisor. In the absence of its motivator, PGR5 is inactive. In the absence of its supervisor, it works quite well. But when motivator and supervisor are both missing, PGR5 becomes hyperactive, and ultimately destructive."

Leister's team now plans to elucidate the biochemical mechanisms that underlie these behaviors - using cyanobacteria, which are genetically much simpler than Arabidopsis, as a model system.

Credit: 
Ludwig-Maximilians-Universität München

US presidents' narcissism linked to international conflict

COLUMBUS, Ohio - The most narcissistic U.S. presidents since 1897 preferred to instigate conflicts with other great power countries without seeking support from allies, a new study suggests.

Results showed that of the presidents measured, those highest in narcissism - including Lyndon B. Johnson, Teddy Roosevelt and Richard Nixon - were about six times more likely to initiate a dispute with another great power in any given year than a president with average levels of narcissism.

The inclination to "go it alone" in international disputes fits with the desire of those high in narcissism to boost their own reputation and self-image and appear tough and competent to others, said John Harden, author of the study and a doctoral student in political science at The Ohio State University.

"More narcissistic U.S. presidents differed from others in how they approached foreign policy and world politics," Harden said.

"They were more likely to weigh their personal desires more heavily than political survival or the country's interests when it came to how they handled some disputes."

The study was published online recently in the journal International Studies Quarterly.

Harden studied presidents from 1897 - roughly the time the United States became a great power in the world - through George W. Bush in 2009.

In order to measure presidential narcissism, Harden used a dataset from 2000 created by three researchers to assess the personalities of presidents.

These researchers tapped the knowledge of presidential historians and other experts who had written at least one book on a president. Each expert completed a personality inventory with more than 200 questions about the president they studied.

How valid could it be to complete a personality test for another person? It actually works very well, Harden said. Other research has had people complete the same personality inventory used by the historians on behalf of an acquaintance. Results showed that these people answered the personality questions very similarly to the acquaintances themselves.

Using the personality test results for the 19 presidents from 1897 to 2008, Harden analyzed five facets of the test that relate to a common measure of grandiose narcissism: high levels of assertiveness and excitement-seeking and low levels of modesty, compliance and straightforwardness.

In a separate analysis using a general population sample, Harden determined that those five facets are correlated with narcissism.

"These facets describe people who want to be in charge, seek the spotlight, brag about their accomplishments and are willing to lie and flatter to get what they want. They certainly would be willing to insult others, too," Harden said. "So it is a pretty good description of a narcissist."

Based on these results, Lyndon Johnson was the president who scored highest on narcissism, followed by Teddy Roosevelt and then Richard Nixon.

The president who scored lowest on narcissism was William McKinley, followed by William Howard Taft and Calvin Coolidge.

"The results are in line with common assessments of the presidents," Harden said.

"Ethically principled McKinley, sensitive and often overwhelmed Taft, and taciturn Coolidge are at the bottom of the list. Meanwhile, self-absorbed and image-conscious figures like Johnson, Roosevelt and Nixon are at the top."

To see how narcissism was related to international conflict, Harden used another dataset, called Militarized Interstate Disputes. This data includes all instances where one country threatened, displayed, or used force against another from 1816 to 2014.

Harden looked specifically at disputes started unilaterally by the United States against other great powers, such as the Soviet Union and China. Any conflicts in which the United States sought support from allies were not counted as a unilaterally initiated great power dispute.

Many of these disputes are not well-known by the public, Harden said, but created a great deal of tension among world leaders.

For example, Nixon initiated Operation Giant Lance in 1969, which sent a squadron of B-52s armed with nuclear weapons to patrol the ice caps near Moscow. Johnson launched the so-called Lightning Bug War in 1964, sending drones on missions deep inside China.

In his study, Harden took into account and controlled for a wide range of factors other than the narcissism of the president that may have played a role in these conflicts - including, but not limited to, the president's political party, whether the president was in his final term and whether he had military experience, whether the country was war-weary or in a recession, whether the government was unified under one political party and whether the incident occurred during the Cold War.

After taking all of these factors into account, results showed that the probability that the United States would unilaterally initiate at least one great power dispute in any given year was about 4%. For presidents highest in narcissism, the likelihood was around 29%, more than six times higher. For presidents who were at the low end of the narcissism scale, the probability was less than 1%.

"The raw data speaks for itself. The three most narcissistic presidents had unilaterally initiated great power disputes that made up 33% to 71% of all disputes they initiated. Meanwhile, the bottom three had none," Harden said.

There are several reasons why more narcissistic presidents would have a greater likelihood of starting fights with other great power nations without allied support, Harden said.

For one, they would only want to deal with great powers.

"Why would a leader who focuses on their historical notability and image 'waste their time' with lesser status powers?" he said.

They also would work without partners because they don't want to share the spotlight and wouldn't believe that others would have anything to contribute.

Leaders high in narcissism also behave in ways that increase tensions, such as taking actions to project strength. They are willing to accept risks. They also behave dramatically and send unclear signals, Harden said.

While the public and some political scientists may believe that U.S. presidents act with the best interests of the country at heart, Harden said this study provides evidence that some leaders use their office to make themselves feel powerful and important.

"Leaders high in narcissism don't want the same things from their position as others do," Harden said.

"For them, the world truly is a stage."

Credit: 
Ohio State University

Mouse brain imaged from the microscopic to the macroscopic level

image: By using an imaging pipeline of MRI, μCT, and EM, Foxley, Kasthuri, and their team were able to simultaneously resolve brain structures, like the white matter, at (a) macro-, (b) meso-, and (c) microscopic scales in the same brain.

Image: 
Image from Foxley et al.

Researchers at the University of Chicago and the U.S. Department of Energy's (DOE) Argonne National Laboratory have leveraged existing advanced X-ray microscopy techniques to bridge the gap between MRI (magnetic resonance imaging) and electron microscopy imaging, providing a viable pipeline for multiscale whole brain imaging within the same brain. The proof-of-concept demonstration involved imaging an entire mouse brain across five orders of magnitude of resolution, a step which researchers say will better connect existing imaging approaches and uncover new details about the structure of the brain.

The advance, which was published on June 9 in NeuroImage, will allow scientists to connect biomarkers at the microscopic and macroscopic level, improving the resolution of MRI imaging and providing greater context for electron microscopy.

"Our lab is really interested in mapping brains at multiple scales to get an unbiased description of what brains look like," said senior author Narayanan "Bobby" Kasthuri, MD, Assistant Professor of Neurobiology at UChicago and neuroscience researcher at Argonne. "When I joined the faculty here, one of the first things I learned was that Argonne had this extremely powerful X-ray microscope, and it hadn't been used for brain mapping yet, so we decided to try it out."

The microscope uses a type of imaging called synchrotron-based X-ray tomography, which can be likened to a "micro-CT", or micro-computerized tomography scan. Thanks to the powerful X-rays produced by the synchrotron particle accelerator at Argonne, the researchers were able to image the entire mouse brain -- roughly one cubic centimeter -- at the resolution of a micron, 1/10,000 of a centimeter. It took roughly six hours to collect images of the entire brain, adding up to around 2 terabytes (TB) of data. This is one of the fastest approaches for whole brain imaging at this level of resolution.
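
Those numbers are consistent with simple back-of-envelope arithmetic. The sketch below assumes roughly 2 bytes per micro-CT voxel and an EM voxel of about 4 x 4 x 40 nanometers -- illustrative assumptions, not specifications from the study -- and also shows why the millimeter-scale EM volumes discussed later run to roughly a million gigabytes:

# Back-of-envelope data volumes for whole-brain imaging at different voxel sizes.
# Voxel sizes and bytes per voxel are assumptions for illustration only.

def volume_bytes(x_um, y_um, z_um, vx_um, vy_um, vz_um, bytes_per_voxel):
    voxels = (x_um / vx_um) * (y_um / vy_um) * (z_um / vz_um)
    return voxels * bytes_per_voxel

CM = 10_000.0  # micrometers in a centimeter
MM = 1_000.0   # micrometers in a millimeter

# micro-CT: ~1 cm cube of brain at ~1 micron voxels, assuming 2 bytes per voxel
uct = volume_bytes(CM, CM, CM, 1.0, 1.0, 1.0, 2)
print(f"micro-CT, 1 cm cube @ 1 um voxels: {uct / 1e12:.1f} TB")   # ~2 TB

# EM: 1 mm cube at an assumed 0.004 x 0.004 x 0.040 um voxel, 1 byte per voxel
em = volume_bytes(MM, MM, MM, 0.004, 0.004, 0.040, 1)
print(f"EM, 1 mm cube: {em / 1e15:.2f} PB")   # ~1.6 PB, on the order of a million GB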

MRI can quickly image the whole brain to trace neuronal tracts, but the resolution isn't sufficient to observe individual neurons or their connections. On the other end of the scale, electron microscopy (EM) can reveal the details of individual synapses, but generates an enormous amount of data, making it computationally challenging to look at pieces of brain tissue larger than a few micrometers in volume. Existing techniques for studying neuroanatomy at the micrometer resolution typically are either merely two-dimensional or use protocols that are incompatible with MRI or EM imaging, making it impossible to use the same brain tissue for imaging at all scales.

The researchers quickly realized that their new micro-CT, or μCT, approach could help bridge this existing resolution gap. "There have been a lot of imaging studies where people use MRI to look at the whole brain level and then try to validate those results using EM, but there's a discontinuity in the resolutions," said first author Sean Foxley, PhD, Research Assistant Professor at UChicago. "It's hard to say anything about the large volume of tissue you see with an MRI when you're looking at an EM dataset, and the X-ray can bridge that gap. Now we finally have something that can let us look across all levels of resolution seamlessly."

Combining their expertise in MRI and EM, Foxley, Kasthuri, and the rest of their team opted to attempt mapping a single mouse brain using these three approaches. "Why did we choose the mouse brain? Because it fits in the microscope," Kasthuri said with a laugh. "But also, the mouse is the workhorse of neuroscience; they're very useful for analyzing different experimental conditions in the brain."

After collecting and preserving the tissue, the team placed the sample in an MRI scanner to collect structural images of the entire brain. Next, it was placed on a rotating stage in the μCT scanner at the Advanced Photon Source, a DOE Office of Science User Facility, to collect the CT data before specific regions of interest in the brainstem and cerebellum were identified for EM targeting.

After months of data processing and image tracing, the researchers determined that they were able to use the structural markers identified on the MRI to localize specific neuronal subgroups in designated brain regions, and that they could trace the size and shape of individual cell bodies. They could also trace the axons of individual neurons as they traveled through the brain, and could connect the information from the μCT images with what they saw at the synaptic level with the EM.

This approach, the team says, will not only be helpful for imaging the brain at the μCT resolution, but also for informing MRI and EM imaging.

"Imaging a 1-millimeter cube of the brain with EM, which is the equivalent to about the minimum resolution of an MRI image, produces almost a million gigabytes of data," Kasthuri said. "And that's just looking at a 1-millimeter cube! I don't know what's happening in the next cube, or the next, so I don't really have context for what I'm seeing with EM. MRI can provide some context except that scale is too big to bridge. Now this μCT gives us that needed context for our EM work."

On the other end of the scale, Foxley is excited about how this approach can be helpful for understanding the living brain through MRI. "This technique gives us a really clear way to identify changes in the microstructure of the brain when there is a disease or injury present," he said. "So now we can start looking for biomarkers with the μCT that we can then trace back to what we see on the MRI in the living brain. The X-ray lets us look at things on the cellular level, so then we can ask, what changed at the cellular level that produced a global change in the MRI signal on a macroscopic level?"

The researchers are already using this technique to begin exploring important questions in neuroscience, looking at the brains of mice that have been genetically engineered to develop Alzheimer's disease to see if they can trace the Aβ (amyloid beta) plaques seen with μCT back to measurable changes in MRI scans, especially in early stages of the disease.

Importantly, because this work was done at the national laboratory, this resource will be open and freely accessible to other scientists around the world, making it possible for researchers to begin asking and answering questions that span the whole brain and reach down to the synaptic level.

At the moment, however, the UChicago team is most interested in continuing to refine the technique. "The next step is to do an entire primate brain," said Kasthuri. "The mouse brain is possible, and useful for pathological models. But what I really want to do is get an entire primate brain imaged down to the level of every neuron and every synaptic connection. And once we do that, I want to do an entire human brain."

Credit: 
University of Chicago Medical Center

Traits of a troll: Research reveals motives of internet trolling

video: New BYU research recently published in the journal Social Media + Society sheds light on the motives and personality characteristics of internet trolls.

Image: 
Julie Walker

As social media and other online networking sites have grown in usage, so too has trolling - an internet practice in which users intentionally seek to draw others into pointless and, at times, uncivil conversations.

New research from Brigham Young University recently published in the journal Social Media + Society sheds light on the motives and personality characteristics of internet trolls.

Through an online survey completed by over 400 Reddit users, the study found that individuals with dark triad personality traits (narcissism, Machiavellianism, psychopathy) combined with schadenfreude - a German term for pleasure derived from another's misfortune - were more likely to demonstrate trolling behaviors.

"People who exhibit those traits known as the dark triad are more likely to demonstrate trolling behaviors if they derive enjoyment from passively observing others suffer," said Dr. Pamela Brubaker, BYU public relations professor and co-author of the study. "They engage in trolling at the expense of others."

The research, which was co-authored by BYU communications professor Dr. Scott Church and former BYU graduate Daniel Montez, found that individuals who experienced pleasure from the failures or shortcomings of others considered trolling to be acceptable online behavior. Women who participated in the survey viewed trolling as dysfunctional while men were more likely to view it as functional.

"This behavior may happen because it feels appropriate to the medium," said Church. "So, heavy users of the platform may feel like any and all trolling is 'functional' simply because it's what people do when they go on Reddit."

The researchers say it's important to note that those who possess schadenfreude often consider trolling to be a form of communication that enriches rather than impedes online deliberation. Because of this view, they're not concerned with how their words or actions affect those on the other side of the screen. To them, trolling isn't perceived as destructive but merely as a means for dialogue to take place.

"They are more concerned with enhancing their own online experience rather than creating a positive online experience for people who do not receive the same type of enjoyment or pleasure from such provocative discussions," said Brubaker.

However, there's still hope for productive online discussions. The study found no correlation between being outspoken online and trolling behavior. The findings noted that users who actively "speak out" and voice their opinions online didn't necessarily engage in trolling behaviors. Such results are encouraging and suggest that civil online discourse is attainable.

"Remember who you are when you go online," said Church. "It helps when we think of others online as humans, people with families and friends like you and me, people who feel deeply and sometimes suffer. When we forget their identities as actual people, seeing them instead as merely usernames or avatars, it becomes easier to engage in trolling."

Brubaker suggests approaching online discourses with an open mind in order to understand various perspectives.

"Digital media gives us the power to connect with people who have similar and different ideas, interests, and experiences from our own. As we connect with people online, we should strive to be more respectful of others and other points of view, even when another person's perspective may not align with our own," she said. "Each of us has the power to be an influence for good online. We can do this by exercising mutual respect. We can build others up and applaud the good online."

Credit: 
Brigham Young University

Using the ancient art of Kirigami to make an eyeball-like camera

image: Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston, is using the ancient art of kirigami to make an eyeball-like camera.

Image: 
University of Houston

Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston, is reporting the development of a camera with a curvy, adaptable imaging sensor that could improve image quality in endoscopes, night-vision goggles, artificial compound eyes and fish-eye cameras.

"Existing curvy imagers are either flexible but not compatible with tunable focal surfaces, or stretchable but with low pixel density and pixel fill factors," reports Yu in Nature Electronics. "The new imager with kirigami design has a high pixel fill factor, before stretching, of 78% and can retain its optoelectronic performance while being biaxially stretched by 30%."

Modern digital camera systems using conventional rigid, flat imaging sensors require complex and bulky lenses to correct optical aberrations. The curvy camera, like a human eyeball, on the other hand, can work with a single lens while correcting aberrations and offering other merits, such as a wide field of view and compact size.

Yu has shown that the curvy and shape-adaptive cameras with high pixel fill factors can be created by transferring an array of ultrathin silicon pixels with a kirigami design onto curvy surfaces using conformal additive stamp (CAS) printing, a manufacturing technology invented in his lab.

Kirigami is the Japanese art of paper cutting, similar to origami, or paper folding. Yu used the kirigami principle on a thin sheet of imaging sensors, making cuts that allow it to stretch and curve. Compared to other stretchable structure designs, such as thin open-mesh serpentine or island-bridge structures, this new kirigami structure has a much higher fill factor, meaning it retains high pixel density, creating better images.

Not only is the camera curvy, but Yu makes it shape-adaptive, enabling it to capture objects at different distances clearly.

"The new adaptive imager can achieve focused views of objects at different distances by combining a concave-shaped camera printed on a magnetic rubber sheet with a tunable lens. Adaptive optical focus is achieved by tuning both the focal length of the lens and the curvature of the imager, allowing far and near objects to be imaged clearly with low aberration." said Yu, who is also a principal investigator of the Texas Center for Superconductivity at UH.

In CAS printing, an elastomeric, or stretchy, balloon with a sticky coating is inflated. It is then used as a stamping medium, pushing down on prefabricated electronic devices to pick up the electronics and print them onto various curvy surfaces.

Credit: 
University of Houston

How we measure biodiversity can have profound impacts on land-use

image: A study led by Princeton University illustrates this challenge by using several different approaches to solve the same puzzle: Given a target amount of food, where should new croplands be put to minimize environmental or biodiversity impacts?

Image: 
Egan Jimenez, Princeton University

The world's human population is expanding, which means even more agricultural land will be needed to provide food for this growing population. However, choosing which areas to convert is difficult and depends on agricultural and environmental priorities, which can vary widely.

A study led by Princeton University illustrates this challenge by using several different approaches to solve the same puzzle: Given a target amount of food, where should new croplands be put to minimize environmental or biodiversity impacts?

The researchers used the country of Zambia as a case study given that it currently harbors a significant amount of biodiversity but will likely see significant agricultural expansion. They looked at common ways of measuring biodiversity, like counting up the species present in the region, as well as factoring in the relative rarity of those species in that geographic region.

Depending on which factor they put into a model for optimizing land use, very different areas of land were suggested for agricultural development. In fact, the overlap between the recommended regions was less than 4%.
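
The flavor of that comparison can be captured with a toy allocation exercise -- hypothetical numbers and a simple greedy rule, not the model used in the study: convert the pixels with the lowest biodiversity cost per unit of food until a production target is met, once for each of two biodiversity metrics, and then measure how much the two recommended footprints overlap.

# Toy illustration of the land-allocation puzzle (made-up data, not the study's
# optimization): greedily pick the "cheapest" pixels under two different
# biodiversity metrics and compare the overlap of the two selections.
import random

random.seed(0)
N = 10_000
crop_yield = [random.uniform(0.5, 2.0) for _ in range(N)]  # food produced per pixel
metric_a = [random.random() for _ in range(N)]             # e.g., species counts
metric_b = [random.random() for _ in range(N)]             # e.g., rarity-weighted counts

def select_pixels(biodiversity_cost, food_target):
    """Convert pixels in order of biodiversity cost per unit of food until the target is met."""
    order = sorted(range(N), key=lambda i: biodiversity_cost[i] / crop_yield[i])
    chosen, produced = set(), 0.0
    for i in order:
        if produced >= food_target:
            break
        chosen.add(i)
        produced += crop_yield[i]
    return chosen

target = 2_000.0
sel_a, sel_b = select_pixels(metric_a, target), select_pixels(metric_b, target)
overlap = len(sel_a & sel_b) / len(sel_a | sel_b)
print(f"overlap between the two recommended footprints: {overlap:.1%}")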

The findings, published in the journal Ecological Applications, indicate an urgent need for consensus: When such small differences can result in almost completely different results, contradictory models may become a roadblock to policymakers rather than a roadmap.

Conservation biologists should strive for more consistent methods for prioritizing biodiversity conservation, the researchers said, and must be more transparent in how they make and justify these decisions.

"The sheer scale of agriculture today means that we need to be strategic about where we decide to produce food into the future," said lead author Christopher Crawford, Ph.D. candidate in the Science, Technology, and Environmental Policy (STEP) Program in Princeton's School of Public and International Affairs (SPIA). "Our paper puts the stakes for the natural world into greater context, showing that what you prioritize and how you measure it can have significant consequences on biodiversity."

Crawford's co-author David Wilcove, professor of ecology and evolutionary biology and public affairs and the High Meadows Environmental Institute, explains the effects in more detail.

"Let's say you decide which areas to protect for nature and which to convert to cropland based on where birds are, you might get a different answer than if you focused on mammals. And if you base your decision on protecting the places with the most species, you might get a different answer than if you based your decision on the places with the most endangered species," Wilcove said.

Crawford and Wilcove worked with Lyndon Estes of Clark University and Tim Searchinger, also of SPIA, whose 2016 paper provided the inspiration and model used in this study. The team compared four distinct approaches to measuring biodiversity and dug into the factors underlying these different approaches.

The analysis started by comparing four commonly used approaches to measuring biodiversity previously published in academic journals. They then identified four key methodological decisions that underlay the differences between those four published approaches and created a new set of indices specifically designed to show the impact each general decision has on the prioritization of land.

Their first approach looks at the number of vertebrates -- like mammals, birds, and reptiles -- and plant species in a region, as well as expert advice on habitat priorities for conservation. The second takes into account the total number of vertebrate species, measuring their importance based on their extinction risk and the rarity of the type of ecosystem in that region. The third approach focuses on the vegetation types in the different regions, weighing them in terms of how intact they are, how rare they are, and whether or not they are threatened. The fourth approach calculates the total number of species in the different regions, weighted by the size of their geographical ranges.

After running each approach through their model, the researchers found very different regions of Zambia were recommended for agricultural development -- the overlap between the areas recommended by the different methods was less than 4%, and sometimes as low as 0.3%. This shows there likely isn't a "one-size-fits-all" solution to prioritizing land use. And while some decisions, such as changing the groups of species being considered, or how they are counted, had a much bigger effect on the ultimate land-use recommendations, even small and often overlooked methodological decisions can result in notably divergent recommendations.

The findings highlight the extreme complexity policymakers face when it comes to converting land. The method chosen when making these decisions can have huge consequences for biodiversity. While the researchers focused on biodiversity, it is also only one piece of the puzzle. Land-use prioritization must also take into account the suitability of the different regions for agriculture, the amount of carbon that would be released through land conversion, and the costs of transporting crops from the would-be agricultural region to markets. Decision-making becomes complicated if even two of these factors are considered at once, let alone all of them, because of the inevitable trade-offs.

"Which species you focus on, how you count and compare them, and the spatial scale of your analysis produce strikingly different answers to the question of which places to save and which places to develop," Wilcove said. "Scientists can come up with all sorts of sophisticated algorithms for balancing conservation with development, but unless they think very carefully about how they counted and compared the plants and animals they want to protect, their results may be meaningless."

Credit: 
Princeton School of Public and International Affairs

Pulling wisdom teeth can improve long-term taste function

PHILADELPHIA--Patients who had their wisdom teeth extracted had improved tasting abilities decades after having the surgery, a new Penn Medicine study published in the journal Chemical Senses found. The findings challenge the notion that removal of wisdom teeth, known as third molars, only has the potential for negative effects on taste, and represent one of the first studies to analyze the long-term effects of extraction on taste.

"Prior studies have only pointed to adverse effects on taste after extraction and it has been generally believed that those effects dissipate over time," said senior author Richard L. Doty, PhD, director of the Smell and Taste Center at the University of Pennsylvania. "This new study shows us that taste function can actually slightly improve between the time patients have surgery and up to 20 years later. It's a surprising but fascinating finding that deserves further investigation to better understand why it's enhanced and what it may mean clinically."

Doty and co-author Dane Kim, a third-year student in the University of Pennsylvania School of Dental Medicine, evaluated data from 1,255 patients who had undergone a chemosensory evaluation at Penn's Smell and Taste Center over the course of 20 years. Among that group, 891 patients had received third molar extractions and 364 had not.

The "whole-mouth identification" test incorporates five different concentrations of sucrose, sodium chloride, citric acid, and caffeine. Each solution is sipped, swished in the mouth, and then spit out. Subjects then indicate whether the solution tastes sweet, salty, sour, or bitter.

The extraction group outperformed the control group for each of the four tastes, and in all cases, women outperformed men. The study suggests, for the first time, that people who have received extractions in the distant past experience, on average, an enhancement (typically a three to 10 percent improvement) in their ability to taste.

"The study strongly suggests that extraction of the third molar has a positive long-term, albeit subtle, effect on the function of the lingual taste pathways of some people," Kim said.

Two possibilities, the authors said, could explain the enhancement. First, extraction damage to the nerves that innervate the taste buds on the front of the mouth can release inhibition on nerves that supply the taste buds at the rear of the mouth, increasing whole-mouth sensitivity. Second, hypersensitivity after peripheral nerve injury from a surgery like an extraction has been well documented in other contexts. There is evidence, for example, from animal studies that repetitive light touch, which might occur during chewing, gradually accentuates neural responses from irritated tissue that can lead to progressive long-term tactile hypersensitivity. Whether this occurs for taste, however, is not known.

"Further studies are needed to determine the mechanism or mechanisms behind the extraction-related improvement in taste function," Doty said. "The effects are subtle but may provide insight into how long-term improvement in neural function can result from altering the environment in which nerves propagate."

Credit: 
University of Pennsylvania School of Medicine

FSU researchers find most nitrogen in Gulf of Mexico comes from coastal waters

image: Researchers on the NOAA ship Nancy Foster recovering a CTD instrument that is used to collect water samples and algae from multiple depths in the ocean.

Image: 
Courtesy of Michael Stukel/Florida State University

Almost all of the nitrogen that fertilizes life in the open ocean of the Gulf of Mexico is carried into the gulf from shallower coastal areas, researchers from Florida State University found.

The work, published in Nature Communications, is crucial to understanding the food web of that ecosystem, which is a spawning ground for several commercially valuable species of fish, including the Atlantic bluefin tuna, which was a focus of the research.

"The open-ocean Gulf of Mexico is important for a lot of reasons," said Michael Stukel, an associate professor in the Department of Earth, Ocean and Atmospheric Science and a co-author of the paper. "It's a sort of ocean desert, with very few predators to threaten larvae, which is part of what makes it a good spawning ground for several species of tuna and mahi-mahi. There are all kinds of other organisms that live out in the open ocean as well."

The food web in the Gulf of Mexico that supports newly born larvae and other organisms starts with phytoplankton. Like plants on land, phytoplankton need sunlight and nutrients, including nitrogen, to grow. The researchers wanted to understand how the nitrogen they need was entering the gulf.

They considered a few hypotheses. Their first idea was that nitrogen may have been coming from the deep ocean. Another was that a type of phytoplankton known as a nitrogen fixer was supplying the nutrient to larvae. Finally, they considered that nitrogen might be entering the open ocean from shallower areas of the coast.

By combining measurements made at sea while on research cruises in 2017 and 2018 with information from satellite observations and models, they found that organic matter coming from the coasts is responsible for more than 90 percent of nitrogen coming into the open ocean in the gulf.

Scientists already knew that large, swirling eddies act like slow-moving storms in the ocean and move water from shallower areas near the coast into the interior of the gulf. The researchers believe the nitrogen is probably moved in those eddies, although they didn't answer that question in this study.

Climate change is affecting how water near the surface of the ocean and deeper water mix. Understanding how a changing climate will affect these lateral currents is a harder question to answer.

It's an important part of the ecosystem to understand because the survival of larval tuna and other species that reproduce in the open-ocean Gulf of Mexico is tied to currents that link coastal regions with the nutrient-poor open ocean.

"If we want to understand how this ecosystem will respond to future climate change, we need to understand how all of these lateral transports work in the ocean," Stukel said. "Scientists studying biogeochemical balances -- especially in basins enclosed by productive coasts, which is the situation in the Gulf of Mexico -- should closely consider how lateral transport affects those ecosystems."

Credit: 
Florida State University

How to build a better wind farm

Washington, DC--Location, location, location--when it comes to the placement of wind turbines, the old real estate adage applies, according to new research published in Proceedings of the National Academy of Sciences by Carnegie's Enrico Antonini and Ken Caldeira.

Turbines convert the wind's kinetic energy into electrical energy as they turn. However, the very act of installing turbines affects our ability to harness the wind's power. As a turbine engages with the wind, it affects it. One turbine's extraction of energy from the wind influences the ability of its neighbors to do the same.

"Wind is never going to 'run dry' as an energy resource, but our ability to harvest it isn't infinitely scalable either," Antonini explained. "When wind turbines are clustered in large groups, their performance is diminished and the rate at which they extract energy is reduced."

Antonini and Caldeira set out to determine how large a wind farm can be before its generation capability per unit of land reaches the limits of energy replenishment, as well as how much of a "wind shadow" large farms cast, which would have a negative effect on any neighboring downwind installations.

"As we move away from fossil fuels, some scenarios predict wind farms could supply as much as one-third of global energy by 2050," Caldeira said. "So, it is imperative that we understand the relationship between turbine placement and maximum energy extraction."

It takes time for the wind to return to normal strength after some of its kinetic energy has been extracted by a wind farm. How quickly wind can recover from encountering a wind turbine is related to the wind farm's latitude and the Earth's rotation, Antonini and Caldeira said. Previous studies on wind power generation have noticed wakes behind large wind farms, so Antonini and Caldeira developed a theoretical understanding of the fundamental controls on the size of these wakes.

The size of a large wind farm's wake is related to the speed of the overlying winds, as well as to the amount of time it takes pressure differences in Earth's atmosphere to replenish the energy that was extracted by the turbines. Antonini and Caldeira's work indicates that these factors should be considered when determining the size and placement of wind farms under different conditions.

For example, they found that turbines in areas with high winds are more likely to be affected by their upstream neighbors than those in areas with weaker winds. Also, wind farms that are closer to the equator are more likely to be negatively impacted by the wind shadow of upstream wind farms than are wind farms that are closer to the poles.
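
A textbook way to see the latitude effect -- used here purely as an illustration, not as the authors' model -- is the Coriolis parameter f = 2Ω sin(latitude), which sets a timescale of roughly 1/f for large-scale pressure gradients to re-accelerate air slowed by a wind farm; multiplying that timescale by the wind speed gives a rough wake length scale:

# Illustrative scaling only (not the paper's model): the Coriolis parameter
# f = 2 * Omega * sin(latitude) sets a recovery timescale of order 1/f, so a
# rough wake length scale is wind_speed / f. Lower latitude -> smaller f ->
# longer wake; faster winds -> longer wake.
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def wake_length_km(wind_speed_ms, latitude_deg):
    f = 2 * OMEGA * math.sin(math.radians(latitude_deg))
    return wind_speed_ms / f / 1000.0

for lat in (15, 30, 45, 60):
    print(f"latitude {lat:2d} deg, 10 m/s wind: wake scale ~{wake_length_km(10, lat):.0f} km")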

"Wind energy is a potential source of large amounts of carbon-emission-free energy," Caldeira said. "But to get the most out of this resource, we need to think about how other wind farms might affect us, and how we might affect other wind farms."

One idea the authors suggested is that constructing multiple small wind farms with space for wind recovery in between them could potentially be a more effective strategy in some locations than building one massive wind farm, although more research is needed.

"We hope this work will enable the builders and managers of wind turbine installations to design the best possible scenario for maximum wind power generation," Antonini said.

Credit: 
Carnegie Institution for Science

New type of metasurface allows unprecedented laser control

image: The incident light can be split into three independent beams, each with different properties -- a conventional beam (right), a beam known as a Bessel beam (center) and an optical vortex (left).

Image: 
(Christina Spägele/Harvard SEAS)

The ability to precisely control the various properties of laser light is critical to much of the technology that we use today, from commercial virtual reality (VR) headsets to microscopic imaging for biomedical research. Many of today's laser systems rely on separate, rotating components to control the wavelength, shape and power of a laser beam, making these devices bulky and difficult to maintain.

Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a single metasurface that can effectively tune the different properties of laser light, including wavelength, without the need of additional optical components. The metasurface can split light into multiple beams and control their shape and intensity in an independent, precise and power-efficient way.

The research opens the door for lightweight and efficient optical systems for a range of applications, from quantum sensing to VR/AR headsets.

"Our approach paves the way to new methods to engineer the emission of optical sources and control multiple functions, such as focusing, holograms, polarization, and beam shaping, in parallel in a single metasurface," said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the paper.

The research was published recently in Nature Communications.

The tunable laser has just two components -- a laser diode and a reflective metasurface. Unlike previous metasurfaces, which relied on a network of individual pillars to control light, this surface uses so-called supercells, groups of pillars which work together to control different aspects of light.

When light from the diode hits the supercells on the metasurface, part of the light is reflected back, creating a laser cavity between the diode and the metasurface. The other part of the light is reflected into a second beam that is independent from the first.

"When light hits the metasurface, different colors are deflected in different directions," said Christina Spägele, a graduate student at SEAS and first author of the paper. "We managed to harness this effect and design it so that only the wavelength that we selected has the correct direction to enter back in the diode, enabling the laser to operate only at that specific wavelength."

To change the wavelength, the researchers simply move the metasurface with respect to the laser diode.
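
One way to picture that selection rule -- a sketch under assumed numbers, not the paper's actual supercell design -- is the generalized law of reflection for a metasurface with a local phase gradient dphi/dx: sin(theta_r) = sin(theta_i) + (lambda / 2*pi) * dphi/dx. For a fixed geometry, only one wavelength is reflected straight back along the incoming path into the diode, and if the local phase gradient varies across the surface (an assumption here), shifting the metasurface changes which wavelength that is:

# Illustration only: generalized law of reflection for a metasurface,
#   sin(theta_r) = sin(theta_i) + (lam / (2*pi)) * dphi_dx.
# Retro-reflection into the diode requires theta_r = -theta_i, which picks out
# a single wavelength for a given phase gradient. All numbers are hypothetical.
import math

def retroreflected_wavelength_nm(theta_i_deg, dphi_dx_rad_per_um):
    theta_i = math.radians(theta_i_deg)
    lam_um = -4 * math.pi * math.sin(theta_i) / dphi_dx_rad_per_um
    return lam_um * 1_000.0

# If a lateral shift brings supercells with a different local gradient under
# the beam, the selected lasing wavelength shifts with it.
for grad in (-10.0, -11.0, -12.0):  # phase gradient in rad per micrometer (assumed)
    print(f"dphi/dx = {grad} rad/um -> lambda ~ {retroreflected_wavelength_nm(40, grad):.0f} nm")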

"The design is more compact and simpler than existing wavelength-tunable lasers, since it does not require any rotating component," said Michele Tamagnone, former postdoctoral fellow at SEAS and co-author of the paper.

The researchers also showed that the shape of the laser beam can be fully controlled to project a complex hologram -- in this case the centuries-old Harvard shield. The team also demonstrated the ability to split the incident light into three independent beams, each with different properties -- a conventional beam, an optical vortex and a beam known as a Bessel beam, which looks like a bullseye and is used in many applications including optical tweezing.

"In addition to controlling any type of laser, this ability to generate multiple beams in parallel and directed at arbitrary angles, each implementing a different function, will enable many applications from scientific instrumentation to augmented or virtual reality and holography," said Capasso.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Pop-up coffee table -- no assembly required

image: By harnessing the mechanical instabilities in curved beams, researchers developed a system that can transform objects into elaborate and customizable 3D configurations. Above, the researchers demonstrated a lamp shade that can open and close in a simple motion.

Image: 
(Saurabh Mhatre/Harvard University)

Deployable structures -- objects that transition from a compact state to an expanded one -- are used everywhere from backyards to Mars. But as anyone who has ever struggled to open an uncooperative folding chair knows, transforming two-dimensional forms into three-dimensional structures is sometimes a challenge.

Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Harvard Graduate School of Design have developed a deployable system that is light, compact, inexpensive, easy to manufacture, and, most importantly, easy to deploy. By harnessing the mechanical instabilities in curved beams, the system can transform objects into elaborate and customizable 3D configurations on a range of scales, from large-scale furniture to small medical devices.

"Most buckling-induced deployable structures, like folding chairs, are activated by compressive forces that are created through the linear displacement of elements," said Saurabh Mhatre, a research associate at GSD and first author of the paper. "Our approach is different in that the compression force is generated through a rotational movement, which in turn induces buckling as the trigger for the 2D-to-3D transformation."

The interdisciplinary research team of designers and engineers used a combination of experiments and numerical analyses to understand the geometry of curved, slender beams and what happens when those beams rotate and buckle. By harnessing buckling -- a normally undesirable phenomenon in design and engineering -- the researchers were able to design deployable structures with a simple rotational motion.

To demonstrate the system, the team built a lampshade that can be rotated to let in more or less light and a coffee table that can fold flat and pop-up in one simple motion.

"This new platform can be extended to realize functional structures and devices from the millimeter to meter scale using a variety of different materials," said Katia Bertoldi, the William and Ami Kuan Danoff Professor of Applied Mechanics at SEAS and senior author of the study. "These structures could be used as medical devices, optical devices like camera focusing mechanisms, deployable wheels and turbines, furniture, or deployable shelters."

The research was published recently in Advanced Materials. It was co-authored by Elisa Boatti, David Melancon, Ahmad Zareei, Maxime Dupont and Martin Bechthold. It was supported in part by the National Science Foundation through the Harvard University Materials Research Science and Engineering Center under grants DMR-2011754 and DMR-1922321.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Old oil fields may be less prone to induced earthquakes

image: Earthquakes in the southern Delaware Basin (red dots) occur where there has been no historical production from the Delaware Mountain Group (purple circles, the size of the circle indicates the volume of oil and water produced). Credit: Dvory et al.

Image: 
Dvory et al.

Boulder, Colo., USA: Subsurface carbon sequestration—storing carbon in
rocks deep underground—offers a partial solution for removing carbon from
the atmosphere. Used alongside emissions reductions, geologic carbon
sequestration could help mitigate anthropogenic climate change. But like
other underground operations, it comes with risks—including earthquakes.

Geophysicists are still working to understand what can trigger
human-induced earthquakes, which have been documented since the 1960s. A
new study, published in Geology on Thursday, explores why part of
a heavily produced oilfield in the U.S. has earthquakes, and part of it
doesn’t. For the first time, the authors demonstrate that the influence of
past oil drilling changes stresses on faults in such a way that injecting
fluids is less likely to induce, or trigger, earthquakes today.

The study focuses on the Delaware Basin, an oil- and gas-producing field
spanning the border between West Texas and New Mexico. Drilling there has
taken place since at least the 1970s, with over 10,000 active individual
wells dotting the region. There, Stanford geophysicists No’am Dvory and
Mark Zoback noticed an interesting pattern in seismic activity. Recent
shallow earthquakes were mostly located in the southern half of the basin,
while the northern half is seismically quiet, despite shallow wastewater
injection occurring across the basin.

“The compelling question, then, is why are all the shallow earthquakes
limited to one area and not more widespread?” Zoback says.

Earthquakes can be induced by injecting fluids like wastewater underground.
When wastewater is injected into the rocks, pressures increase, putting the
rocks and any faults that are present under higher stress. If those
pressures and stresses get high enough, an earthquake can happen.
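
In standard geomechanics that threshold is often written as a Coulomb
failure criterion: a fault slips once the shear stress exceeds the
frictional resistance, tau >= mu * (sigma_n - Pp), where Pp is the pore
pressure. Raising Pp by injection pushes a fault toward slip, while
depleting Pp through past production pulls it away. The sketch below
evaluates that textbook criterion with made-up numbers; it is not the
authors' model of the Delaware Basin.

# Textbook Coulomb criterion with illustrative numbers (not the study's model):
# slip occurs when tau >= mu * (sigma_n - pore_pressure). Depleted rocks start
# at lower pore pressure, so the same injection leaves more margin before slip.

MU = 0.6          # assumed friction coefficient
SIGMA_N = 40.0    # assumed normal stress on the fault, MPa
TAU = 18.0        # assumed shear stress on the fault, MPa

def slips(pore_pressure_mpa):
    return TAU >= MU * (SIGMA_N - pore_pressure_mpa)

cases = (("unperturbed rock (like the southern basin)", 9.0),
         ("depleted by past production (like the northern basin)", 4.0))
for label, p0 in cases:
    added = 0.0
    while not slips(p0 + added):
        added += 0.5
    print(f"{label}: ~{added:.1f} MPa of injection-driven pressure rise before slip")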

Earthquakes from injection in the southern Delaware Basin tend to be
shallow and relatively low-magnitude, typically strong enough to rattle the
dishes, but not enough to cause damage. However, if deeper faults are
activated, higher-magnitude earthquakes can occur and cause damage. For
example, in March 2020, a magnitude 4.6 earthquake rumbled in Mentone,
Texas, likely due to deep injection that interacted with faults in the
crystalline basement rock around five miles belowground.

“The size of an earthquake is limited by the size of the fault that slips,”
Dvory explains. Where faults are shallow and small (just a few kilometers
in size), quake magnitudes tend to be small. “You can still feel it, but
it’s less dangerous.”

Minimizing the risk of earthquakes is a goal for any subsurface operation,
whether it’s oil and gas production or carbon sequestration. That made the
Delaware Basin, with its odd pattern of earthquakes, a great target for
Dvory and Zoback. It was a natural experiment in geomechanics, the “why”
behind induced earthquakes.

To decipher the pattern, Dvory and Zoback first modeled the underground
pressures needed to cause faults in the basin to slip and connected those
values to estimated stress values. Once they had established that baseline,
they calculated the pore pressures around the Delaware Basin. Their results
showed a clear pattern: geologic formations in the northern basin where
hydrocarbons had previously been produced had lower pore pressures than in
“unperturbed” rock, and there were no earthquakes. The southern basin,
which had almost no previous production from the same formations, had
higher initial pressures and earthquakes.

“In some areas we have evidence of oil and gas development from even the
1950s,” Dvory says. “Where there was significant hydrocarbon production,
pressure was depleted, and the formations essentially became more stable.”

Now, when fluids are injected back into those ‘stable,’ previously
drilled rocks, the starting pressure is lower than it was when the
rocks were first drilled.
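
Roughly speaking (again with assumed numbers, reusing the slip
criterion sketched above), depletion widens the pressure margin a
fault has before injection can push it to failure:

    # Rough illustration (assumed values, in MPa) of why depleted rock
    # tolerates more injection before nearby faults approach failure.
    shear, normal, friction = 18.0, 40.0, 0.6
    p_fail = normal - shear / friction     # pore pressure at which slip starts (10 MPa)

    p_unperturbed, p_depleted = 8.0, 3.0   # hypothetical initial pore pressures
    print(p_fail - p_unperturbed)          # 2.0 MPa of headroom in unperturbed rock
    print(p_fail - p_depleted)             # 7.0 MPa of headroom in depleted rock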

“So where oil production occurred previously, current injection results in
significantly lower pressure such that it’s much less likely to trigger
earthquakes,” Zoback explains. “It’s not inconceivable that at some point,
if you injected enough, you could probably cause an earthquake. But here in
the area we study, we are able to document that what happened previously
strongly affects how current operational processes affect the likelihood of
earthquake triggering.”

Targeting these sites of past oil production, with their lower earthquake
risk, could be a good approach for carbon sequestration.

“We have a global challenge to store enormous volumes of carbon dioxide in
the subsurface in the next ten to twenty years,” Zoback says. “We need
places to safely store massive volumes of carbon dioxide for hundreds of
years, which obviously includes not allowing pressure increases to trigger
earthquakes. The importance of geoscience in meeting this challenge can’t
be overstated. It’s an enormous problem, but geoscience is the critical
place to start.”

FEATURED ARTICLE

Prior oil and gas production can limit the occurrence of
injection-induced seismicity: A case study in the Delaware Basin of
western Texas and southeastern New Mexico, USA

Noam Z. Dvory; Mark D. Zoback

Author contact: Noam Z. Dvory, nzd@stanford.edu

https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G49015.1/604590/Prior-oil-and-gas-production-can-limit-the  

Credit: 
Geological Society of America

Oncotarget: Glucocorticoid receptor antagonism promotes apoptosis in solid tumor cells

image: Relacorilant improves the efficacy and promotes apoptotic activity of cytotoxic therapy in xenograft models under physiological cortisol conditions. (A) In the MIA PaCa-2 model, efficacy of the combination of paclitaxel + relacorilant was significantly better than paclitaxel alone (non-parametric T-test P < 0.0001). (B) In the MIA PaCa-2 model, efficacy of paclitaxel + gemcitabine + relacorilant was better than paclitaxel + gemcitabine alone (non-parametric T-test P = 0.0005). (C) In the HeLa model, efficacy of paclitaxel + relacorilant was significantly better than paclitaxel alone (non-parametric T-test P < 0.0001). (D) In the CC6279 model, efficacy of paclitaxel + relacorilant was significantly better than paclitaxel alone (non-parametric T-test P < 0.0001). Relacorilant alone had no significant effect on tumor growth. Error bars represent the standard error; all studies 10 animals/group. (E) Tumor cells were labeled using cytokeratin 18 immunohistochemistry (top). In serial sections, apoptotic caspase activity (cleaved caspase 3, bottom) and proliferation (Ki67, not shown) were assessed. (F) Relacorilant increased the cleaved caspase intensity and prevalence (H-score) within the tumor cells compared to paclitaxel alone. ***Mann-Whitney, P < 0.0001. Abbreviation: IHC, immunohistochemistry.

Image: 
Correspondence to - Andrew E. Greenstein - agreenstein@corcept.com

Oncotarget published "Glucocorticoid receptor antagonism promotes apoptosis in solid tumor cells," which reported that, to guide studies in cancer patients, relacorilant, an investigational selective GR modulator that antagonizes cortisol activity, was assessed in various tumor types, with multiple cytotoxic combination partners, and in the presence of physiological cortisol concentrations.

In the MIA PaCa-2 cell line, paclitaxel-driven apoptosis was blunted by cortisol and restored by relacorilant.

A screen to identify optimal combination partners for relacorilant showed that microtubule-targeted agents consistently benefited from combination with relacorilant.

These findings were confirmed in xenograft models, including MIA PaCa-2, HeLa, and a cholangiocarcinoma patient-derived xenograft. In vivo, tumor-cell apoptosis was increased when relacorilant was added to paclitaxel in multiple models.

These observations support recently reported findings of clinical benefit when relacorilant is added to paclitaxel-containing therapy in patients with ovarian and pancreatic cancers and provide a new rationale for combining relacorilant with additional cytotoxic agents.

Dr. Andrew E. Greenstein from Corcept Therapeutics said, "Drug resistance, whether primary or acquired, is a major impediment to cancer therapy."

GR agonism could contribute to tumor cell biology even in patients with normal cortisol levels.

GR agonists, including cortisol, have demonstrated pro-apoptotic effects in hematological malignancies, cytostatic effects on sarcoma-derived cell lines, and anti-apoptotic effects in carcinoma cell lines.

Unlike the non-specific steroidal GR antagonist mifepristone, relacorilant does not exhibit partial agonist activity toward human or mouse GR.

Mifepristone and relacorilant are both competitive antagonists of the GR and are best studied in the context of physiologically relevant cortisol concentrations.

In a phase 2 study in patients with Cushing syndrome, relacorilant demonstrated the ability to reverse the effects of excess cortisol on hypertension and insulin resistance, and it is currently being studied in two phase 3 trials in patients with endogenous Cushing syndrome, GRACE and GRADIENT.

The Greenstein research team concluded in their Oncotarget paper that the initial in vitro observations suggested that an increase in apoptosis, rather than a decrease in proliferation rate, was achieved when relacorilant was added to a cytotoxic therapy.

To determine if this was recapitulated in vivo, apoptosis markers were assessed in relacorilant-treated xenografts.

The CC6279 cholangiocarcinoma model was assessed because the relacorilant effect size was greatest there. Cleaved caspase 3 activity was qualitatively and quantitatively elevated in mice treated with relacorilant plus paclitaxel compared to paclitaxel alone, while no difference was observed for CK18 and Ki67.

Consistent with the initial in vitro observation, relacorilant promoted tumor cell apoptosis in xenograft models.

Credit: 
Impact Journals LLC

Nanotech OLED electrode liberates 20% more light, could slash display power consumption

A new electrode that could free up 20% more light from organic light-emitting diodes has been developed at the University of Michigan. It could help extend the battery life of smartphones and laptops, or make next-gen televisions and displays much more energy efficient.

The approach prevents light from being trapped in the light-emitting part of an OLED, enabling OLEDs to maintain brightness while using less power. In addition, the electrode is easy to fit into existing processes for making OLED displays and light fixtures.
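
As a rough arithmetic illustration (assuming, simplistically, that drive power scales inversely with how much of the generated light gets out), extracting 20% more light would let a display hold the same brightness at roughly 1/1.2, or about 83%, of the power:

    # Simplistic power-saving estimate; assumes drive power scales inversely
    # with outcoupling efficiency and ignores other real display losses.
    def relative_power(outcoupling_gain):
        return 1.0 / (1.0 + outcoupling_gain)

    print(round(relative_power(0.20), 2))  # 0.83, i.e. roughly 17% less power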

"With our approach, you can do it all in the same vacuum chamber," said L. Jay Guo, U-M professor of electrical and computer engineering and corresponding author of the study.

Unless engineers take action, about 80% of the light produced by an OLED gets trapped inside the device. This happens because of an effect known as waveguiding. Essentially, the light rays that don't come out of the device at an angle close to perpendicular get reflected back and guided sideways through the device. They end up lost inside the OLED.

A good portion of the lost light is simply trapped between the two electrodes on either side of the light-emitter. One of the biggest offenders is the transparent electrode, typically made of indium tin oxide (ITO), that stands between the light-emitting material and the glass. In a lab device, you can see trapped light shooting out the sides rather than traveling through to the viewer.
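
A rough ray-optics estimate (with an assumed refractive index for the emitting layer, not a value from the paper) shows why so much light stays inside: only rays that hit an interface within the critical angle escape, and for an isotropic emitter the classical outcoupling fraction is about 1/(2n^2), on the order of 15-20% for typical organic indices:

    # Rough ray-optics estimate of OLED outcoupling; the index is an assumed,
    # typical value for organic emitters, not a measurement from this work.
    import math

    def escape_fraction(n_emitter):
        return 1.0 / (2.0 * n_emitter ** 2)    # classical isotropic-emitter estimate

    n_organic = 1.8
    theta_c = math.degrees(math.asin(1.0 / n_organic))   # critical angle to air
    print(round(theta_c), round(escape_fraction(n_organic) * 100))  # ~34 deg, ~15% escapes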

"Untreated, it is the strongest waveguiding layer in the OLED," Guo said. "We want to address the root cause of the problem."

By swapping out the ITO for a layer of silver just five nanometers thick, deposited on a seed layer of copper, Guo's team maintained the electrode function while eliminating the waveguiding problem in the OLED layers altogether.
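
For a sense of why such a thin metal film can still serve as an electrode, a first-order sheet-resistance estimate (using the bulk resistivity of silver, which understates losses in a real 5-nanometer film where surface scattering matters) is simply resistivity divided by thickness:

    # First-order sheet resistance of a thin metal film: R_sheet = resistivity / thickness.
    # Bulk silver resistivity is used; a real 5 nm film would measure somewhat higher.
    rho_silver = 1.59e-8   # ohm*m, bulk value
    thickness = 5e-9       # 5 nanometers, as described above
    print(round(rho_silver / thickness, 1))   # ~3.2 ohms per square

Even allowing for that thin-film penalty, the result lands in the same general range as typical ITO coatings, which is consistent with the electrode function being maintained.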

"Industry may be able to liberate more than 40% of the light, in part by trading the conventional indium tin oxide electrodes for our nanoscale layer of transparent silver," said Changyeong Jeong, first author and a Ph.D. candidate in electrical and computer engineering.

This benefit is tricky to see, though, in a relatively simple lab device. Even though light is no longer guided in the OLED stack, that freed-up light can still be reflected from the glass. In industry, engineers have ways of reducing that reflection--creating bumps on the glass surface, or adding grid patterns or particles that will scatter the light throughout the glass.

"Some researchers were able to free up about 34% of the light by using unconventional materials with special emission directions or patterning structures," Jeong said.

In order to prove that they had eliminated the waveguiding in the light-emitter, Guo's team had to stop the light trapping by the glass, too. They did this with an experimental set-up using a liquid that had the same index of refraction as glass, a so-called index-matching fluid--an oil in this case. That "index-matching" prevents the reflection that happens at the boundary between high-index glass and low-index air.
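
The role of the oil can be seen from the textbook normal-incidence Fresnel reflectance (assumed indices for illustration, not the team's optical model): when the liquid's index matches the glass, the boundary effectively disappears, removing both the few-percent normal-incidence reflection and the total internal reflection that otherwise traps light in the glass:

    # Textbook Fresnel reflectance at normal incidence; the indices are
    # assumed illustrative values, not measurements from this study.
    def reflectance(n1, n2):
        return ((n1 - n2) / (n1 + n2)) ** 2

    n_glass, n_air, n_oil = 1.52, 1.00, 1.52
    print(round(reflectance(n_glass, n_air) * 100, 1))  # ~4.3% reflected at bare glass-air
    print(round(reflectance(n_glass, n_oil) * 100, 1))  # 0.0% once the oil matches the glass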

Once they'd done this, they could look at their experimental set-up from the side and see whether any light was coming sideways. They found that the edge of the light-emitting layer was almost completely dark. In turn, the light coming through the glass was about 20% brighter.

The finding is described in the journal Science Advances, in a paper titled, "Tackling light trapping in organic light-emitting diodes by complete elimination of waveguide modes."

This research was funded by Zenithnano Technology, a company that Guo co-founded to commercialize his lab's inventions of transparent, flexible metal electrodes for displays and touchscreens.

The University of Michigan has filed for patent protection.

The device was built in the Lurie Nanofabrication Facility.

Credit: 
University of Michigan