Tech

How robust is e-government in American state election administration?

image: Election Law Journal, an authoritative peer-reviewed journal published quarterly online (with open access options) and in print, providing global, interdisciplinary coverage of election law, policy, and administration

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, April 13, 2020--A new study examined how well American states are using Internet-based platforms to disseminate electoral information and communicate with voters. The study, which focused on the information provided on electoral administrators' websites, their use of social media such as Facebook and Twitter, and their responsiveness to voters' email communication, is published in Election Law Journal, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The full-text article is freely available on the Election Law Journal website through May 13, 2020.

The article entitled "Behind the Screens: E-Government in American State Election Administration" was authored by Holly Ann Garnett, Royal Military College of Canada, Kingston. The study analyzed the content of state election websites based on the activities all states are expected to perform, such as determining who is eligible to vote, conducting polling, and counting and tabulating the votes. The states performed well in providing information about voter registration and election results. However, few provided information about how to lodge complaints or report security concerns, suggesting poorer accountability and transparency in electoral management.

The study also showed that U.S. state election officials have a strong social media presence, with 82% of states having a Facebook page and 88% having a Twitter handle.

Election Law Journal Editor-in-Chief David Canon, University of Wisconsin, states: "In this period of uncertainty caused by COVID-19, the ability of election officials to communicate digitally with voters is increasingly important. With more elections moving online or to mailed ballots, having a better understanding of electronic election administration is of critical importance. This timely article by Holly Ann Garnett should be read by everyone who is concerned with improving access to our elections during a pandemic."

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

The power of light

As COVID-19 continues to ravage global populations, the world is singularly focused on finding ways to battle the novel coronavirus. That includes UC Santa Barbara's Solid State Lighting & Energy Electronics Center (SSLEEC) and its member companies. Researchers there are developing ultraviolet LEDs that have the ability to decontaminate surfaces -- and potentially air and water -- that have come in contact with the SARS-CoV-2 virus.

"One major application is in medical situations -- the disinfection of personal protective equipment, surfaces, floors, within the HVAC systems, et cetera," said materials doctoral researcher Christian Zollner, whose work centers on advancing deep ultraviolet light LED technology for sanitation and purification purposes. He added that a small market already exists for UV-C disinfection products in medical contexts.

Indeed, much attention of late has turned to the power of ultraviolet light to inactivate the novel coronavirus. As a technology, ultraviolet light disinfection has been around for a while, and although its practical, large-scale efficacy against the spread of SARS-CoV-2 has yet to be demonstrated, UV light shows a lot of promise: SSLEEC member company Seoul Semiconductor in early April reported a "99.9% sterilization of coronavirus (COVID-19) in 30 seconds" with its UV LED products. The technology currently is being adopted for automotive use, in UV LED lamps that sterilize the interior of unoccupied vehicles.

It's worth noting that not all UV wavelengths are alike. UV-A and UV-B -- the types we get a lot of here on Earth courtesy of the Sun -- have important uses, but the rarer UV-C, which is almost entirely absorbed by the atmosphere and scarcely reaches Earth's surface, is the ultraviolet light of choice for purifying air and water and for inactivating microbes. For practical use it must be generated via man-made processes.

"UV-C light in the 260 - 285 nm range most relevant for current disinfection technologies is also harmful to human skin, so for now it is mostly used in applications where no one is present at the time of disinfection," Zollner said. In fact, the World Health Organization warns against using ultraviolet disinfection lamps to sanitize hands or other areas of the skin -- even brief exposure to UV-C light can cause burns and eye damage.

Before the COVID-19 pandemic gained global momentum, materials scientists at SSLEEC were already at work advancing UV-C LED technology. This area of the electromagnetic spectrum is a relatively new frontier for solid-state lighting; UV-C light is more commonly generated via mercury vapor lamps and, according to Zollner, "many technological advances are needed for the UV LED to reach its potential in terms of efficiency, cost, reliability and lifetime."

In a letter published in the journal ACS Photonics, the researchers reported a more elegant method for fabricating high-quality deep-ultraviolet (UV-C) LEDs that involves depositing a film of the semiconductor alloy aluminum gallium nitride (AlGaN) on a substrate of silicon carbide (SiC) -- a departure from the more widely used sapphire substrate.

According to Zollner, using silicon carbide as a substrate allows for more efficient and cost-effective growth of high-quality UV-C semiconductor material than using sapphire. This, he explained, is due to how closely the materials' atomic structures match up.

"As a general rule of thumb, the more structurally similar (in terms of atomic crystal structure) the substrate and the film are to each other, the easier it is to achieve high material quality," he said. The better the quality, the better the LED's efficiency and performance. Sapphire is dissimilar structurally, and producing material without flaws and misalignments often requires complicated additional steps. Silicon carbide is not a perfect match, Zollner said, but it enables a high quality without the need for costly, additional methods.

In addition, silicon carbide is far less expensive than the "ideal" aluminum nitride substrate, making it more mass production-friendly, according to Zollner.

Portable, fast-acting water disinfection was among the primary applications the researchers had in mind as they were developing their UV-C LED technology; the diodes' durability, reliability and small form factor would be a game changer in less developed areas of the world where clean water is not available.

The emergence of the COVID-19 pandemic has added another dimension. As the world races to find vaccines, therapies and cures for the disease, disinfection, decontamination and isolation are the few weapons we have to defend ourselves, and the solutions will need to be deployed worldwide. In addition to UV-C for water sanitation purposes, UV-C light could be integrated into systems that turn on when no one is present, Zollner said.

"This would provide a low-cost, chemical-free and convenient way to sanitize public, retail, personal and medical spaces," he said.

For the moment, however, it's a game of patience, as Zollner and colleagues wait out the pandemic. Research at UC Santa Barbara has slowed to a trickle to minimize person-to-person contact.

"Our next steps, once research activities resume at UCSB, is to continue our work on improving our AlGaN/SiC platform to hopefully produce the world's most efficient UV-C light emitters," he said.

Other research contributors include Burhan K. SaifAddin (lead author), Shuji Nakamura, Steven P. DenBaars, James S. Speck, Abdullah S. Almogbel, Bastien Bonef, Michael Iza, and Feng Wu, all from SSLEEC and/or the Department of Materials at UC Santa Barbara.

Journal: 
ACS Photonics

DOI: 
10.1021/acsphotonics.9b00600

Credit: 
University of California - Santa Barbara

Long spaceflights affect astronaut brain volume

image: Pituitary deformity examples in three crewmembers before spaceflight and after spaceflight (day 1). Reconstructed orthogonal sagittal three-dimensional T1-weighted images of the pituitary gland, centered at the pituitary stalk, are shown for each crewmember. (a) A crewmember with no previous exposure to spaceflight: before flight there is normal upward convexity of the pituitary gland dome and a straight pituitary stalk, and there is no change in the morphologic structure of the gland or stalk after spaceflight (pituitary deformity score, 0); both the anterior and posterior pituitary gland are visible. (b) Before spaceflight there is normal upward convexity of the pituitary gland dome; after spaceflight the dome is flattened (score, 1), with cerebrospinal fluid (CSF) in the suprasellar cistern immediately above it. (c) Before spaceflight there is mild concavity of the pituitary gland dome; after spaceflight there is moderate concavity with loss of volume and new subtle posterior deviation and slight curvature of the pituitary stalk (score, 1), along with increased congestion of the sphenoid sinus.

Image: 
Radiological Society of North America

OAK BROOK, Ill. - Extended periods in space have long been known to cause vision problems in astronauts. Now a new study in the journal Radiology suggests that the impact of long-duration space travel is more far-reaching, potentially causing brain volume changes and pituitary gland deformation.

More than half of the crew members on the International Space Station (ISS) have reported changes to their vision following long-duration exposure to the microgravity of space. Postflight evaluation has revealed swelling of the optic nerve, retinal hemorrhage and other ocular structural changes.

Scientists have hypothesized that chronic exposure to elevated intracranial pressure, or pressure inside the head, during spaceflight is a contributing factor to these changes. On Earth, the gravitational field creates a hydrostatic gradient, a pressure of fluid that progressively increases from your head down to your feet while standing or sitting. This pressure gradient is not present in space.

"When you're in microgravity, fluid such as your venous blood no longer pools toward your lower extremities but redistributes headward," said study lead author Larry A. Kramer, M.D., from the University of Texas Health Science Center at Houston. Dr. Kramer further explained, "That movement of fluid toward your head may be one of the mechanisms causing changes we are observing in the eye and intracranial compartment."

To find out more, Dr. Kramer and colleagues performed brain MRI on 11 astronauts (10 men and one woman) before they traveled to the ISS. The researchers followed up with MRI studies a day after the astronauts returned, and then at several intervals throughout the ensuing year.

MRI results showed that the long-duration microgravity exposure caused expansions in the astronauts' combined brain and cerebrospinal fluid (CSF) volumes. CSF is the fluid that flows in and around the hollow spaces of the brain and spinal cord. The combined volumes remained elevated at one-year postflight, suggesting permanent alteration.

"What we identified that no one has really identified before is that there is a significant increase of volume in the brain's white matter from preflight to postflight," Dr. Kramer said. "White matter expansion in fact is responsible for the largest increase in combined brain and cerebrospinal fluid volumes postflight."

MRI also showed alterations to the pituitary gland, a pea-sized structure at the base of the skull often referred to as the "master gland" because it governs the function of many other glands in the body. Most of the astronauts had MRI evidence of pituitary gland deformation suggesting elevated intracranial pressure during spaceflight.

"We found that the pituitary gland loses height and is smaller postflight than it was preflight," Dr. Kramer said. "In addition, the dome of the pituitary gland is predominantly convex in astronauts without prior exposure to microgravity but showed evidence of flattening or concavity postflight. This type of deformation is consistent with exposure to elevated intracranial pressures."

The researchers also observed a postflight increase in volume, on average, in the astronauts' lateral ventricles, spaces in the brain that contain CSF. However, the overall resulting volume would not be considered outside the range of healthy adults. The changes were similar to those that occur in people who have spent long periods of bed rest with their heads tilted slightly downward in research studies simulating headward fluid shift in microgravity.

Additionally, there was increased velocity of CSF flow through the cerebral aqueduct, a narrow channel that connects the ventricles in the brain. A similar phenomenon has been seen in normal pressure hydrocephalus, a condition in which the ventricles in the brain are abnormally enlarged. Symptoms of this condition include difficulty walking, bladder control problems and dementia. To date, these symptoms have not been reported in astronauts after space travel.

The researchers are studying ways to counter the effects of microgravity. One option under consideration is the creation of artificial gravity using a large centrifuge that can spin people in either a sitting or prone position. Also under investigation is the use of negative pressure on the lower extremities as a way to counteract the headward fluid shift due to microgravity.

Dr. Kramer said the research could also have applications for non-astronauts.

"If we can better understand the mechanisms that cause ventricles to enlarge in astronauts and develop suitable countermeasures, then maybe some of these discoveries could benefit patients with normal pressure hydrocephalus and other related conditions," he said.

Credit: 
Radiological Society of North America

Is the Earth's inner core oscillating and translating anomalously?

image: Dislocation creep is a deformation mechanism transporting shear through the crystal lattice by the motion of line defects, called dislocations. This mechanism involves the elementary processes of dislocation glide along specific crystallographic planes and dislocation climb mediated by atomic diffusion.

Image: 
Ehime University

The Earth's inner core, hidden 5150 km below our feet, is primarily composed of solid iron and is exposed to pressures between 329 and 364 GPa (~3.3 to 3.6 million times atmospheric pressure) and temperatures of ~5000 to ~6000 K (Image 1). Seismological observations previously revealed that the velocity of seismic waves produced by earthquakes depends strongly on direction when travelling through the inner core, a phenomenon known as "seismic anisotropy". This is due to the alignment of iron crystals, which may be caused by deformation inside the inner core. Variations in seismic anisotropy between the eastern and western hemispheres of the inner core have also been reported. Other seismic studies furthermore suggest "distinct fluctuations in the inner core rotation rate" with respect to that of the Earth's crust and mantle. Previous geodynamic models predict that the hemispherical asymmetry of the seismic anisotropy structure can be explained by "a translational motion of the inner core" and that variations in the length of a day can be explained by gravitational coupling between the mantle and a weak inner core. Yet the causes and mechanisms of these enigmatic features remain unclear, because those models rely on the poorly constrained "viscous strength" of iron at the extreme conditions of the Earth's center.

The viscosity of the material depends on the way iron crystals undergo plastic deformation in response to mechanical stress, and deformation mechanisms called "creep" are generally expected under high-temperature, low-stress conditions (Image 2). Creep of solid crystals is generally accommodated by the motion of imperfect arrangements of atoms in the crystal structure, called "lattice defects", and is limited by "atomic diffusion" under inner core conditions. Such conditions impose technical difficulties on laboratory experiments, making direct measurement of inner core viscosity currently impossible. Instead, Dr. Sebastian Ritterbex, a post-doctoral researcher, and Prof. Taku Tsuchiya from the Geodynamics Research Center, Ehime University, applied atomic-scale computer simulations based on quantum mechanics theory, called "ab initio methods", to quantify atomic diffusion in hexagonal close packed (hcp) iron, the most likely stable phase of iron in the inner core (Image 1). This theoretical mineral physics approach can compute electronic properties and chemical bonding highly accurately and is therefore quite powerful for investigating material properties at extreme conditions that are difficult to reach in experiments. In this study, the technique was applied to compute iron self-diffusion through the energetics of the formation and migration of point defects. The results were fed into macroscopic models of intracrystalline plasticity to compute numerically the rate-limiting creep behavior of hcp iron. The modeling provides evidence that the viscosity of hcp iron is lower than postulated in previous geophysical models and is governed by the transport of shear through the crystal lattice, a plastic deformation mechanism known as "dislocation creep" (Image 2), which can lead to the formation of crystallographic preferred orientations. This suggests that plastic flow of hcp iron might indeed contribute to the crystal alignment, and thus the seismic anisotropy, in the inner core.

The results shed new light on the enigmatic properties of the inner core. The low viscosity of hcp iron derived from the theoretical mineral physics approach is consistent with a strong coupling between the inner core and mantle, compatible with geophysical observations of small fluctuations in the inner core rotation rate. The results furthermore predict that the inner core is too weak to undergo translational motion, meaning that the hemispherical asymmetric structure is likely to have another, yet unknown, origin. Moreover, mechanical stresses of tens of Pa are sufficient to deform hcp iron by dislocation creep at extremely low strain rates, comparable to the candidate forces able to drive inner core convection. The associated viscosity is not constant but depends on the mechanical stress applied to the inner core, a behavior known as "non-Newtonian rheology". This non-linear deformation behavior is therefore expected to govern the dynamics of the Earth's inner core.
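
To make the non-Newtonian point concrete, the sketch below evaluates a generic power-law creep law of the form strain rate = A * sigma^n, in which effective viscosity falls as stress rises. The prefactor and exponent are hypothetical placeholders chosen only to give plausible magnitudes; they are not the study's fitted parameters.

```python
# Sketch of non-Newtonian power-law creep: effective viscosity depends on
# stress. Prefactor and exponent are hypothetical placeholders, not the
# study's fitted values.
A = 1e-17  # creep-rate coefficient at inner-core conditions, 1/(s*Pa^n) -- assumed
N = 3.0    # stress exponent; n > 1 is characteristic of dislocation creep

def strain_rate(stress_pa: float) -> float:
    """Power-law creep: strain rate = A * sigma**n."""
    return A * stress_pa**N

def effective_viscosity(stress_pa: float) -> float:
    """eta = sigma / (2 * strain rate); falls as stress rises when n > 1."""
    return stress_pa / (2.0 * strain_rate(stress_pa))

for sigma in (10.0, 50.0, 100.0):  # stresses of tens of Pa, as in the text
    print(f"{sigma:5.0f} Pa: strain rate {strain_rate(sigma):.1e} 1/s, "
          f"viscosity {effective_viscosity(sigma):.1e} Pa*s")
```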

As a future prospect, more quantitative models using the viscous properties of hcp iron obtained in this study should help to enhance our understanding of the Earth's inner core.

Credit: 
Ehime University

'A bad time to be alive': Study links ocean deoxygenation to ancient die-off

image: Laminated black shales and cherts exposed on the Peel River, Yukon, Canada, that were deposited during the late Ordovician and earliest Silurian. These sediments show no evidence of organisms living on the seafloor due to anoxic conditions at the seabed. Researchers estimated the global extent of low-oxygen conditions during this time period using new trace metal isotope data and uncertainty modeling.

Image: 
Erik Sperling

In a new study, Stanford researchers have strongly bolstered the theory that a lack of oxygen in Earth's oceans contributed to a devastating die-off approximately 444 million years ago. The new results further indicate that these anoxic (little- to no-oxygen) conditions lasted over 3 million years - significantly longer than similar biodiversity-crushing spells in our planet's history.

Beyond deepening understandings of ancient mass extinction events, the findings have relevance for today: Global climate change is contributing to declining oxygen levels in the open ocean and coastal waters, a process that likely spells doom for a variety of species.

"Our study has squeezed out a lot of the remaining uncertainty over the extent and intensity of the anoxic conditions during a mass die-off that occurred hundreds of millions of years ago," said lead author Richard George Stockey, a graduate student in the lab of study co-author Erik Sperling, an assistant professor of geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "But the findings are not limited to that one biological cataclysm."

The study, published in Nature Communications April 14, centered on an event known as the Late Ordovician Mass Extinction. It is recognized as one of the "Big Five" great dyings in Earth's history, with the most famous being the Cretaceous-Paleogene event that wiped out all non-avian dinosaurs some 65 million years ago.

Water world

At the outset of the Late Ordovician event about 450 million years ago, the world was a very different place than it is today or was even in the age of the dinosaurs. The vast majority of life occurred exclusively in the oceans, with plants having just begun to appear on land. Most of the modern-day continents were jammed together as a single supercontinent, dubbed Gondwana.

An initial pulse of extinctions began due to global cooling that gripped much of Gondwana under glaciers. By approximately 444 million years ago, a second pulse of extinction set in at the boundary between the Hirnantian and Rhuddanian geological stages, largely - albeit inconclusively - attributed to ocean anoxia. Around 85 percent of marine species had vanished from the fossil record by the time the Late Ordovician event ultimately passed.

The Stanford researchers and their study colleagues looked specifically at the second pulse of extinction. The team sought to constrain uncertainty regarding where in Earth's seas a dearth of dissolved oxygen - as critical for oceanic biology then as it is now - occurred, as well as to what extent and for how long. Prior studies have inferred ocean oxygen concentrations through analyses of ancient sediments containing isotopes of metals such as uranium and molybdenum, which undergo different chemical reactions in anoxic versus well-oxygenated conditions.

Elemental evidence

Stockey led the construction of a novel model that incorporated previously published metal isotope data, as well as new data from samples of black shale hailing from the Murzuq Basin in Libya, deposited in the geological record during the mass extinction. The model cast a wide net, taking into account 31 different variables related to the metals, including the amounts of uranium and molybdenum that leach off land and reach the oceans via rivers to settle into the seafloor.
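
The flavor of that uncertainty analysis can be conveyed with a schematic Monte Carlo over a toy steady-state uranium-isotope mass balance, sketched below. Every parameter range shown is a hypothetical placeholder for illustration; the actual model propagated 31 calibrated variables across both the uranium and molybdenum systems.

```python
import random

# Schematic Monte Carlo in the spirit of the uncertainty analysis
# described above. All parameter ranges are hypothetical placeholders.
def sample_anoxic_fraction() -> float:
    """Solve d_riv = f*(d_sw + D_anox) + (1 - f)*(d_sw + D_other) for f."""
    d_riv = random.uniform(-0.35, -0.25)   # riverine d238U input (permil)
    d_sw = random.uniform(-0.75, -0.55)    # seawater value inferred from shales
    d_anox = random.uniform(0.4, 0.8)      # fractionation into anoxic sediments
    d_other = random.uniform(0.0, 0.1)     # fractionation into other sinks
    return (d_riv - d_sw - d_other) / (d_anox - d_other)

draws = sorted(sample_anoxic_fraction() for _ in range(100_000))
print(f"median f_anoxic: {draws[50_000]:.2f}, "
      f"95% interval: {draws[2_500]:.2f} to {draws[97_500]:.2f}")
```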

The model's conclusion: In any reasonable scenario, severe and prolonged ocean anoxia must have occurred across large volumes of Earth's ocean bottoms. "Thanks to this model, we can confidently say a long and profound global anoxic event is linked to the second pulse of mass extinction in the Late Ordovician," Sperling said. "For most ocean life, the Hirnantian-Rhuddanian boundary was indeed a really bad time to be alive."

Effects on biodiversity

The lessons of the past suggest that the deoxygenation increasingly documented in the modern oceans, particularly in the upper slopes of the continental shelves that bracket major landmasses, will put strain on many organism types - possibly to the brink of extinction. "There is no way that low oxygen conditions are not going to have a severe effect on diversity," Stockey said.

In this way, in addition to shedding light on Earth of a distant yester-eon, the study's findings could help researchers better model the planet as it is now.

"We actually have a big problem modeling oxygenation in the modern ocean," Sperling said. "And by expanding our thinking of how oceans have behaved in the past, we could gain some insights into the oceans today."

Co-authors on the study are with the Georgia Institute of Technology, Yale University, University of Portsmouth and Czech University of Life Sciences Prague.

The research was supported by the Alfred P. Sloan Foundation, National Science Foundation, Packard Foundation and NASA.

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Supercomputing future wind power rise

image: Wind energy is surging worldwide, but can its growth be sustained? (a) Global installed wind capacity shows a sharp rise in the last decade. (b) U.S. installed wind capacity projected to 2021 (pink - average rated capacity, green - rotor diameter, black - turbine hub height, orange - electricity generated, blue - total installed capacity). Supercomputer simulations and analysis helped develop scenarios showing that a quadrupled expansion in the U.S. by 2030 would make a small impact on efficiency and local climate.

Image: 
Pryor et al., CC BY 4.0

Wind power surged worldwide in 2019, but can its growth be sustained? More than 340,000 wind turbines provided over 591 gigawatts of installed capacity globally. In the U.S., wind powered the equivalent of 32 million homes and sustained 500 U.S. factories. What's more, in 2019 wind power grew by 19 percent, thanks to booming offshore and onshore projects in the U.S. and China.

A study by Cornell University researchers used supercomputers to look into the future of how to make an even bigger jump in wind power capacity in the U.S.

"This research is the first detailed study designed to develop scenarios for how wind energy can expand from the current levels of seven percent of U.S. electricity supply to achieve the 20 percent by 2030 goal outlined by the U.S. Department of Energy National Renewable Energy Laboratory (NREL) in 2014," said study co-author Sara C. Pryor, a professor in the Department of Earth and Atmospheric Studies, Cornell University. Pryor and co-authors published the wind power study in Nature Scientific Reports, February 2020.

The Cornell study investigated plausible scenarios for how the expansion of installed wind turbine capacity can be achieved without using additional land. Their results showed that the U.S. could double or even quadruple the installed capacity with little change to system-wide efficiency. What's more, the additional capacity would make very small impacts on local climate. This is achieved in part by deploying next-generation, larger wind turbines.

The study focused on a potential pitfall of whether adding more turbines in a given area might decrease their output or even disrupt the local climate, a phenomenon caused by what's referred to as 'wind turbine wakes.' Like the water wake behind a motorboat, wind turbines create a wake of slower, choppy air that eventually spreads and recovers its momentum.

"This effect has been subject to extensive modelling by the industry for many years, and it is still a highly complex dynamic to model," Pryor said.

The researchers conducted simulations with the widely used Weather Research and Forecasting (WRF) model, developed by the National Center for Atmospheric Research. They applied the model over the eastern part of the U.S., where half of the current national wind energy capacity is located.

"We then found the locations of all 18,200 wind turbines operating within the eastern U.S. along with their turbine type," Pryor said. She added that those locations are from 2014 data, when the NREL study was published.

"For each wind turbine in this region, we determined their physical dimensions (height), power, and thrust curves so that for each 10-minute simulation period we can use a wind farm parameterization within WRF to compute how much power each turbine would generate and how extensive their wake would be," she said. Power and wake are both a function of the wind speed that strikes the turbines and what the local near-surface climate impact would be. They conducted the simulations at a grid resolution of 4 km by 4 km in order to provide detailed local information.

The authors chose two sets of simulation years because wind resources vary from year to year as a result of natural climate variability. "Our simulations are conducted for a year with relatively high wind speeds (2008) and one with lower wind speeds (2015/16)," Pryor said, because of the interannual variability in climate from the El Nino-Southern Oscillation. "We performed simulations for a base case in both years without the presence/action of wind turbines so we can use this as a reference against which to describe the impact of wind turbines on local climates," Pryor said.

The simulations were then repeated for the wind turbine fleet as of 2014, then for doubled and quadrupled installed capacity, the latter representing the capacity necessary to achieve 20 percent of electricity supply from wind turbines in 2030.

"Using these three scenarios we can assess how much power would be generated from each situation and thus if the electrical power production is linearly proportional to the installed capacity or if at very high penetration levels the loss of production due to wakes starts to decrease efficiency," Pryor said.

These simulations are massively computationally demanding. The simulation domain is over 675 by 657 grid cells in the horizontal and 41 layers in the vertical. "All our simulations were performed in the Department of Energy's National Energy Research Scientific Computing Center (NERSC) computational resource known as Cori. Simulations presented in our paper consumed over 500,000 CPU hours on Cori and took over a calendar year to complete on the NERSC Cray. That resource is designed for massively parallel computing but not for analysis of the resulting simulation output," Pryor said.

"Thus, all of our analyses were performed on the XSEDE Jetstream resource using parallel processing and big data analytics in MATLAB," Pryor added. The Extreme Science and Engineering Discovery Environment (XSEDE), awards supercomputer resources and expertise to researchers and is funded by the National Science Foundation (NSF).

The NSF-funded Jetstream cloud environment is supported by Indiana University, the University of Arizona, and the Texas Advanced Computing Center (TACC). Jetstream is a configurable large-scale computing resource that leverages both on-demand and persistent virtual machine technology to support a much wider array of software environments and services than current NSF resources can accommodate.

"Our work is unprecedented in the level of detail in the wind turbine descriptions, the use of self-consistent projections for increased installed capacity, study domain size, and the duration of the simulations," Pryor said. However, she acknowledged uncertainty is the best way to parameterize the action of the wind turbines on the atmosphere and specifically the downstream recovery of wakes.

The team is currently working on how to design, test, develop, and improve wind farm parameterizations for use in WRF. The Cornell team recently published on this matter in the Journal of Applied Meteorology and Climatology, with all the analysis performed on XSEDE resources (this time on Wrangler, a TACC system), and has requested additional XSEDE resources to further advance that research.

Wind energy could play a bigger role in reducing carbon dioxide emissions from energy production, according to the study authors. Wind turbines repay the lifetime carbon emissions associated with their fabrication and deployment within three to seven months of operation, leaving nearly 30 years of virtually carbon-free electricity generation over a typical turbine's life.

"Our work is designed to inform the expansion of this industry and ensure it's done in a way that maximizes the energy output from wind and thus continues the trend towards lower cost of energy from wind. This will benefit commercial and domestic electricity users by ensuring continued low electricity prices while helping to reduce global climate change by shifting toward a low-carbon energy supply," Pryor said.

Said Pryor: "Energy systems are complex, and the atmospheric drivers of wind energy resources vary across time scales from seconds to decades. To fully understand where best to place wind turbines, and which wind turbines to deploy requires long-duration, high-fidelity, and high-resolution numerical simulations on high performance computing systems. Making better calculations of the wind resource at locations across the U.S. can ensure better decision making and a better, more robust energy supply."

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

RNA drugs one step closer to being used in cancer treatment

In recent years, RNA molecules, with the ability to affect or turn off pathogenic genes, have become promising drug candidates in several areas. However, it has been a challenge to develop techniques to deliver the RNA molecules into the cells where they have an effect. Researchers at Lund University in Sweden have now developed a sensitive technique that makes it possible to study the delivery into the cell, and have shown a possible way to effectively deliver RNA drugs to tumours. The study has now been published in Nature Communications.

DNA and its chemical relative, RNA, are molecules that all living organisms use for storing information and carrying out different functions in the cell. In the study in question, the researchers have studied a type of RNA drug called siRNA. During the 1990s, researchers discovered that siRNA, small double-stranded RNA molecules, could be used to turn off virtually any gene. The phenomenon was named RNA interference. In 2006, the discovery was awarded the Nobel Prize in Physiology or Medicine. There were considerable hopes that RNA interference could be used in the treatment of virus infections, cancer and other diseases.

"Two siRNA drugs have been approved by the FDA, but as yet, no drug has been approved for clinical use in cancer", says cancer researcher and physician Anders Wittrup at Lund University and Skåne University Hospital who led the study.

One considerable advantage of using RNA molecules is that they can be rapidly developed and produced. The major problem for all sorts of RNA drugs is getting these molecules into the interior of the cell, the so-called cytosol, where they have an effect. The size of the siRNA molecule - about 50 times larger than a typical drug molecule - is a factor. In the new study, the researchers used siRNA molecules linked with cholesterol, which means that they are taken up effectively by most tumour cells.

"Even if we can make tumour cells take up siRNA, it has been observed that 99 per cent appears to get caught in a type of cellular waste dump, the so-called lysosome", explains Anders Wittrup.

The researchers then examined whether, among other things, the malaria drug chloroquine could be used to damage the lysosome so that the siRNA drug could pass further into the cell. Studying this required new imaging techniques to see what happens inside the cell.

"The completely new microscopy methods make it possible to study in detail when lysosomes and other structures in the cell are opened up for RNA delivery. This is a technique that is in high demand in both academic research and within the pharmaceutical industry", says doctoral student Hampus Du Rietz.

"We have shown how chloroquine can be used so that the siRNA molecules enter the cytosol, i.e. the space between the cell membrane and the nucleus of the cell", says Anders Wittrup.

This way of opening the lysosomes, which previously encapsulated the RNA molecules and thus prevented their effect, means that the main obstacle for the use of siRNA and other RNA-based drugs is now potentially on the way to being removed.

"We see a siRNA effect that is 50 times higher in cell culture studies and also effective delivery in the experimental tumours cultivated outside the body, so-called tumour spheroids, that could not previously be reached by siRNA", says Anders Wittrup.

More studies are needed before RNA drugs can be put into clinical use. The research team in Lund has now gone on to study more tumour types and other methods of delivering RNA into tumour cells.

"Our findings open up exciting possibilities for how RNA can reach into the interior of cells. I think it will play a crucial role in developing RNA drugs for the treatment of diseases for which we currently lack effective drugs", concludes Anders Wittrup.

Credit: 
Lund University

Diet may help preserve cognitive function

According to a recent analysis of data from two major eye disease studies, adherence to the Mediterranean diet - high in vegetables, whole grains, fish, and olive oil - correlates with higher cognitive function. Dietary factors also seem to play a role in slowing cognitive decline. Researchers at the National Eye Institute (NEI), part of the National Institutes of Health, led the analysis of data from the Age-Related Eye Disease Study (AREDS) and AREDS2. They published their results today in the journal Alzheimer's & Dementia.

"We do not always pay attention to our diets. We need to explore how nutrition affects the brain and the eye" said Emily Chew, M.D., director of the NEI Division of Epidemiology and Clinical Applications and lead author of the studies.

The researchers examined the effects of nine components of the Mediterranean diet on cognition. The diet emphasizes consumption of whole fruits, vegetables, whole grains, nuts, legumes, fish, and olive oil, as well as reduced consumption of red meat and alcohol.

AREDS and AREDS2 assessed, over many years, the effect of vitamins on age-related macular degeneration (AMD), which damages the light-sensitive retina. AREDS included about 4,000 participants with and without AMD, and AREDS2 included about 4,000 participants with AMD. The researchers assessed AREDS and AREDS2 participants' diets at the start of the studies. The AREDS study tested participants' cognitive function at five years, while AREDS2 tested cognitive function at baseline and again two, four, and 10 years later. The researchers evaluated cognitive function with standardized tests, including ones based on the Modified Mini-Mental State Examination. They assessed diet with a questionnaire that asked participants their average consumption of each Mediterranean diet component over the previous year.
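
Adherence scores of this kind are typically computed by awarding one point per diet component, as in the sketch below. The scoring rules and threshold values shown are illustrative assumptions; the study's exact scoring may differ.

```python
# Illustrative 9-point Mediterranean diet adherence score (our sketch;
# the study's exact scoring rules and cut-offs may differ). One point per
# beneficial component at or above the cohort median, one for below-median
# red meat, and one for moderate alcohol intake.
BENEFICIAL = ["fruit", "vegetables", "whole_grains", "nuts", "legumes",
              "fish", "olive_oil"]

def med_diet_score(intake: dict, medians: dict) -> int:
    score = sum(intake[c] >= medians[c] for c in BENEFICIAL)
    score += intake["red_meat"] < medians["red_meat"]
    score += 5 <= intake["alcohol_g_day"] <= 15  # moderate window (assumed)
    return score  # 0 (lowest adherence) to 9 (highest)

medians = {"fruit": 2, "vegetables": 3, "whole_grains": 2, "nuts": 0.5,
           "legumes": 0.5, "fish": 0.3, "olive_oil": 1, "red_meat": 1}
person = {"fruit": 3, "vegetables": 4, "whole_grains": 2, "nuts": 1,
          "legumes": 0.2, "fish": 0.5, "olive_oil": 2, "red_meat": 0.5,
          "alcohol_g_day": 10}
print(med_diet_score(person, medians))  # 8
```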

Participants with the greatest adherence to the Mediterranean diet had the lowest risk of cognitive impairment. High fish and vegetable consumption appeared to have the greatest protective effect. At 10 years, AREDS2 participants with the highest fish consumption had the slowest rate of cognitive decline.

The numerical differences in cognitive function scores between participants with the highest versus lowest adherence to a Mediterranean diet were relatively small, meaning that individuals likely won't see a difference in daily function. But at a population level, the effects clearly show that cognition and neural health depend on diet.

The researchers also found that participants carrying the high-risk ApoE allele, which predisposes people to Alzheimer's disease, on average had lower cognitive function scores and greater decline than non-carriers. The benefits of close adherence to a Mediterranean diet were similar for carriers and non-carriers, meaning that the effects of diet on cognition are independent of genetic risk for Alzheimer's disease.

Credit: 
NIH/National Eye Institute

Flamingos form firm friendships

video: Flamingos preen together

Image: 
Paul Rose/WWT Slimbridge

Flamingos form friendships that last for years, new research shows.

The five-year study reveals that, despite being highly social as part of large flocks, flamingos consistently spend time with specific close "friends".

They also avoid certain individuals, suggesting some flamingos just don't get on.

The University of Exeter study examined four flamingo species at WWT Slimbridge Wetland Centre, and found social bonds including "married" couples, same-sex friendships and even groups of three and four close friends.

"Our results indicate that flamingo societies are complex. They are formed of long-standing friendships rather than loose, random connections," said Dr Paul Rose, of the University of Exeter.

"Flamingos don't simply find a mate and spend their time with that individual.

"Some mating couples spend much of their time together, but lots of other social bonds also exist.

"We see pairs of males or females choosing to 'hang out', we see trios and quartets that are regularly together.

"Flamingos have long lives - some of the birds in this study have been at Slimbridge since the 1960s - and our study shows their friendships are stable over a period of years.

"It seems that - like humans - flamingos form social bonds for a variety of reasons, and the fact they're so long-lasting suggests they are important for survival in the wild."

Dr Rose said the findings could help in the management of captive flamingos.

"When moving birds from one zoo to another, we should be careful not to separate flamingos that are closely bonded to each other," he said.

The study - which used data from 2012-16 - examined flocks of Caribbean, Chilean, Andean and Lesser flamingos.

The flocks varied in size from just over 20 to more than 140, and the findings suggest larger flocks contained the highest level of social interactions.

"The simple lesson of this is that captive flamingo flocks should contain as many birds as reasonably possible," Dr Rose said.

The study found that seasons affected social interactions, with more bonds forming in spring and summer - the breeding season.

In three of the four flocks, the study also looked at condition of the birds (measured by the health of their feet) to see if there were links between social lives and health.

No link was found, and Dr Rose said this could mean that socialising is so important to flamingos that they continue to do it even if they are not feeling at their best.

Credit: 
University of Exeter

Development of attachable sticker-type rechargeable batteries

image: Re-attachable and flexible rechargeable batteries available for use in next-generation flexible/wearable devices.

Image: 
Korea Institute of Energy Research (KIER)

Dr. Hana Yoon at the Energy Conversion & Storage Materials Laboratory of the Korea Institute of Energy Research (KIER, President Kim Jong-nam), Professor Young-Jin Kim (Dept. of Mechanical Engineering, Korea Advanced Institute of Science and Technology) and Professor Seungchul Kim (Dept. of Optics and Mechatronics Engineering, Pusan National University) jointly developed 're-attachable micro-supercapacitors (MSCs)* using highly swollen laser-induced-graphene electrodes'. Their research findings were published in Chemical Engineering Journal*, one of the world-renowned journals in the field.

* MSCs are thin-film based ultra-thin supercapacitors that are getting much attention as they are more stable with higher power and energy densities compared to Li thin-film batteries.

* Chemical Engineering Journal is considered one of the best international journals in the chemical engineering sector (published by Elsevier, SCI I.F. 8.355)

As demand rises for lighter and smaller wearable devices and highly functional IoT gadgets, there is a growing need for new technologies for power collection, storage, and management. Wearable devices and IoT products are increasingly applied across many sectors of society, and researchers are actively working to develop energy storage devices that offer additional functions beyond power supply.

Wearable energy storage devices must be able to change form with the shape and movement of the human body while remaining flexible, safe to use, and durable. Conventional batteries are not flexible, being built around cylindrical, prismatic, or pouch-type structures, and have limited energy densities, so they are poorly suited to next-generation products such as wearable or micro devices that demand high flexibility, portability, and areal or volumetric energy density.

In the past, R&D efforts to develop energy storage devices for wearable devices were mostly put on Li thin-film batteries. Li thin-film microbatteries, which are widely and commercially available power sources for microelectronics, suffer from short life cycles, abrupt failures, unstable low-temperature kinetics, and pose safety concerns when associated with lithium.

Recently, MSCs have gained attention as next-generation energy storage devices to replace Li thin-film batteries. Supercapacitors can in principle be used semi-permanently and offer many benefits, including high power density (roughly 10 times that of lithium-ion batteries), stability, efficiency, and fast charge/discharge rates. However, their scope of use has been limited by low energy density (estimated at roughly one-tenth that of Li batteries). MSCs retain the supercapacitor's much higher power density while achieving energy densities similar to, or even higher than, those of lithium batteries. Hence, they are considered an alternative for ultra-thin, high-performance energy storage.
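
That trade-off follows directly from the basic capacitor relations E = CV^2/2 and P = V^2/4R, illustrated below with hypothetical cell parameters (our example, not values from the paper).

```python
# Back-of-the-envelope energy and power densities for a small
# supercapacitor cell; all numbers are hypothetical, for illustration only.
C = 0.5        # capacitance, farads
V = 2.7        # rated voltage, volts
ESR = 0.2      # equivalent series resistance, ohms
VOL_L = 1e-4   # device volume: 0.1 cm^3 expressed in litres

energy_wh = 0.5 * C * V**2 / 3600.0  # E = C*V^2/2, converted from J to Wh
peak_power_w = V**2 / (4.0 * ESR)    # matched-load peak power
print(f"energy density: {energy_wh / VOL_L:.1f} Wh/L")   # ~5 Wh/L
print(f"power density: {peak_power_w / VOL_L:.0f} W/L")  # ~91,000 W/L
# High power, modest energy -- the classic supercapacitor profile that
# MSC designs aim to improve on the energy side.
```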

The research team successfully developed sticker-type flexible MSCs, fabricated using ultrashort-pulse lasers, that can be attached anywhere on objects or surfaces.

An ultrashort-pulse laser can instantly deliver intense energy to produce highly swollen graphene electrodes. By impregnating the interior of the highly swollen graphene with adhesive polymer composites, the researchers were able to produce sticker-type MSCs with excellent electrode performance and durability while maintaining adhesiveness.

Dopamine, a functional mimic of the adhesive protein of mussels, was introduced as a coating material for the sticker-type flexible MSCs to improve electrochemical performance; the catechol groups in dopamine provide redox-active moieties for pseudocapacitive electrodes. By doing so, the team developed sticker-type flexible energy storage devices with volumetric energy densities similar to those of lithium thin-film batteries and an excellent volumetric power density, 13 times greater than that of their counterparts.

Dr. Hana Yoon of KIER, the principal investigator of this study, said, "Our sticker-type flexible MSCs are easily re-attached to next-generation wearable devices and IoT gadgets and are eco-friendly. They are expected to resolve many obstacles facing lithium-based energy storage technologies."

KAIST professor Young-Jin Kim, a co-researcher on this study, added, "The patterning technology developed in this study generates unique swollen graphene with an ultrashort-pulse laser in a relatively short period of time, while minimizing loss of materials. This technology has the potential to promote industrial applications of laser-induced graphene in various sectors."

Credit: 
National Research Council of Science & Technology

Inhibition of sphingolipid metabolism and neurodegenerative diseases

Disrupting the production of a class of lipids known as sphingolipids in neurons improved symptoms of neurodegeneration and increased survival in a mouse model, according to new research led by the joint laboratory of Robert Farese, Jr. and Tobias Walther at Harvard T.H. Chan School of Public Health and Howard Hughes Medical Institute.

The findings, published online April 13, 2020 in the Proceedings of the National Academy of Sciences (PNAS), could help in the development of therapies for a range of neurodegenerative diseases.

"This work began in yeast, and serendipitously we discovered that mutations related to neurodegeneration in humans led to abnormalities in cell lipid metabolism. Most investigators don't look at the lipids, and we were quite excited and surprised," said Walther, professor of molecular metabolism and executive director of the Harvard Chan Advanced Multi-omics Platform.

In the study, the Farese & Walther Laboratory identified a link between sphingolipid metabolism and a mutation that affected vesicle trafficking, the process by which molecules are transported to different parts of a cell. Defects in the trafficking process are known to play a role in neurodegenerative diseases, but the exact mechanism of the effect is not understood.

The Farese & Walther Laboratory has previously focused their research efforts on the Golgi-associated retrograde protein (GARP) complex. Earlier studies in yeast and flies showed that mutations in GARP proteins lead to sphingolipid abnormalities and stunted cell growth, and that these effects could be reversed by inhibiting sphingolipid metabolism. Many neurological diseases are caused by mutations in genes that are involved in lipid metabolism, and changes in lipid metabolism have been reported in amyotrophic lateral sclerosis (ALS), Parkinson's disease, and Alzheimer's disease. With that in mind, the Farese & Walther Laboratory investigated whether modulating sphingolipid metabolism affects cell dysfunction as it relates to neurodegenerative diseases.

The research team, which included Harvard Chan scientists Constance Petit and Jane Lee, used a model known as wobbler mice, which have a mutation in a specific GARP protein called VPS54. That mutation causes a motor-neuron disease similar to ALS. They found that sphingolipid molecules that were toxic to cells accumulated in the spinal cords of wobbler mice, as well as in embryonic fibroblasts, a type of cell cultured from the mice. Furthermore, they showed that certain molecules involved in sphingolipid metabolism were mislocalized in neurons from wobbler mice.

The team then found that treating the mice with the sphingolipid synthesis inhibitor myriocin, which is already used as an antifungal and immunosuppressive drug, prevented the buildup of sphingolipids, reduced their toxic effects, improved wellness scores in the mice, and ultimately extended the animals' lifespan.

The results indicate that compromised sphingolipid metabolism in GARP mutants is a potential cause of neurodegeneration and that correcting defects in sphingolipid metabolism might restore neuronal function. Sphingolipid metabolism may therefore be an important target for therapeutic development for neurological disorders associated with mutations in membrane trafficking.

"We are cautiously optimistic," Farese said. "Perhaps targeting lipid abnormalities provides a new therapeutic angle for some diseases of neurodegeneration."

Credit: 
Harvard T.H. Chan School of Public Health

Why do so many pregnancies and in vitro fertilization attempts fail?

image: Chromosomes in an egg. Modeling can help predict how many eggs should be retrieved for in vitro fertilization to lead to a normal, successful pregnancy.

Image: 
Cecilia Blengini and Kristen Driscoll

Scientists have created a mathematical model that can help explain why so many pregnancies and in vitro fertilization attempts fail.

The Rutgers-led study, which may help to improve fertility, is published in the journal Proceedings of the National Academy of Sciences.

Mistakes in female meiosis, the cell division process that creates egg cells, result in eggs with an abnormal number of chromosomes (too many or too few). This phenomenon is strongly associated with the repeated loss of pregnancies and the failure of in vitro fertilization (IVF) procedures, as well as developmental disorders such as Down syndrome.

"Our study demonstrates that in the future, mathematical models can be powerful tools for predicting the outcomes of in vitro fertilization for infertility patients and/or provide the basis for considering alternative family planning options, such as adoption," said senior author Jinchuan Xing, an associate professor in the Department of Genetics in the School of Arts and Sciences and at the Human Genetics Institute of New Jersey at Rutgers University-New Brunswick.

"Modeling efforts such as ours can provide guidelines on, for instance, how many eggs must be collected during a single IVF cycle to ensure there will be at least one chromosomally normal conception," said co-author Karen Schindler, an associate professor in the Department of Genetics and at the Human Genetics Institute of New Jersey.

Pregnancy loss is extremely common, with nearly 20 percent of clinically recognized pregnancies resulting in miscarriage and many more unrecognized pregnancies ending even earlier, the study notes.

A leading cause of early miscarriage is called aneuploidy, when eggs have the wrong number of chromosomes, and it's also the main cause of IVF failure. The vast majority of eggs with chromosome problems are linked to errors in female cell division that increase as women age. Understanding how that happens is crucial because the average age at conception is rising in developed countries.

"Such basic knowledge is required to pave the way for future diagnostic and therapeutic innovations to improve human fertility," the study says.

The scientists developed a mathematical model describing all possible abnormal chromosome count issues in eggs due to cell division errors. Using data on 11,157 early stage human embryos (blastocysts), the model revealed previously unknown patterns of errors.

The model can be used to identify IVF patients who produce an extreme number of abnormal embryos. It's also a powerful tool for understanding why abnormal numbers of chromosomes arise when cells divide and for predicting the outcomes of IVF reproduction. The model potentially could provide guidance for clinicians on the expected number of IVF cycles needed to get a normal conception for each patient. The modeling framework can also be expanded and adapted to address other processes, such as predicting errors in sperm.

Credit: 
Rutgers University

Coronavirus (COVID-19) testing, next steps, and the role of small business

image: This transmission electron microscope image shows SARS-CoV-2--also known as 2019-nCoV, the virus that causes COVID-19--isolated from a patient in the U.S. Virus particles are shown emerging from the surface of cells cultured in the lab. The crown-like spikes on the outer edge of the virus particles give coronaviruses their name.

Image: 
NIAID-RML, CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/deed.en)

The BioScience Talks podcast features discussions of topical issues related to the biological sciences.

Public health officials have argued that thorough and accurate testing for SARS-CoV-2 is essential for gaining a foothold in the fight against the deadly COVID-19 pandemic. To date, however, a lack of reliable testing in the United States has hampered efforts to achieve a thorough understanding of the disease's abundance and spread.

In this episode of BioScience Talks, we are joined by Dr. Crystal Icenhour, CEO of Aperiomics, and Dr. Robbie Barbero, Chief Business Officer of Ceres Nanosciences. Both companies have recently ramped up efforts to improve the prospects of broad-scale testing for the novel pathogen in human patients.

Aperiomics, whose core technology uses deep shotgun metagenomic sequencing to test for tens of thousands of bacteria, viruses, fungi, and parasites at once, has launched a SARS-CoV-2-specific test, with the aims of increasing test availability and delivering crucially important public health data. Ceres Nanosciences' flagship Nanotrap particle technology enables the capture, concentration, and preservation of low-abundance analytes from complex biological samples. The technology is presently being tested with SARS-CoV-2 and is expected to help improve the accuracy of existing testing protocols.

To hear the whole discussion, listen to the latest episode of the BioScience Talks podcast.

Credit: 
American Institute of Biological Sciences

Heavy iron isotopes leaking from Earth's core

image: Earth's molten core may be leaking iron, according to researchers at UC Davis and Aarhus University, Denmark. The new study suggests heavier iron isotopes migrate toward lower temperatures -- and into the mantle -- while lighter iron isotopes circulate back down into the core. This effect could cause core material infiltrating the lowermost mantle to be enriched in heavy iron isotopes.

Image: 
L. O'Dwyer Brown, Aarhus University

Earth's molten core may be leaking iron, according to researchers who analyzed how iron behaves inside our planet.

The boundary between the liquid iron core and the rocky mantle is located some 1,800 miles (2,900 km) below Earth's surface. At this transition, the temperature drops by more than a thousand degrees from the hotter core to the cooler mantle.

The new study suggests heavier iron isotopes migrate toward lower temperatures -- and into the mantle -- while lighter iron isotopes circulate back down into the core. (Isotopes of the same element have different numbers of neutrons, giving them slightly different masses.) This effect could cause core material infiltrating the lowermost mantle to be enriched in heavy iron isotopes.
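For a rough sense of scale, temperature-driven (Soret-type) isotope fractionation is often parameterized as a linear offset per unit of isotope mass difference per unit of temperature difference. The sketch below uses a hypothetical sensitivity value, not a figure from the study.

```python
# Hedged sketch: a linear parameterization of thermal isotope fractionation.
# The sensitivity value omega_per_100C is an illustrative assumption, not a
# measured parameter from this study.

def thermal_fractionation(delta_T_c, omega_per_100C=0.01, mass_diff_amu=2):
    """Approximate isotopic offset (in per mil) between hot and cold ends.

    omega_per_100C: assumed sensitivity, per mil per amu per 100 deg C.
    mass_diff_amu: isotope mass difference (e.g. 56Fe vs 54Fe -> 2 amu).
    """
    return omega_per_100C * mass_diff_amu * (delta_T_c / 100.0)

# The article cites a >1,000 deg C drop across the core-mantle boundary:
print(thermal_fractionation(1000))  # -> 0.2 per mil, heavy isotopes cold-side
```

Even a fractionation of a fraction of a per mil is measurable with modern mass spectrometry, which is why isotope ratios in mantle rocks can serve as a tracer of core-mantle exchange.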

"If correct, this stands to improve our understanding of core-mantle interaction," said Charles Lesher, lead author, professor emeritus of geology at UC Davis and professor of earth system petrology at Aarhus University in Denmark.

Understanding the physical processes operating at the core-mantle boundary is important for interpreting seismic images of the deep mantle, as well as modeling the extent of chemical and thermal transfer between the deep Earth and surface of our planet, Lesher said.

Lesher and his colleagues analyzed how iron isotopes move between areas of different temperatures during experiments conducted under high temperature and pressure. Their findings can explain why there are more heavy iron isotopes in mantle rocks than in chondrite meteorites, the primordial material from the early solar system, Lesher said.

"If true, the results suggest iron from the core has been leaking into the mantle for billions of years," he said.

Computer simulations performed by the research team show this core material can even reach the surface, mixed with and transported by hot, upwelling mantle plumes. Some lavas erupted at oceanic hot spots such as Samoa and Hawaii are enriched in heavy iron isotopes, which Lesher and the team propose could be a signature of a leaky core.

Credit: 
University of California - Davis

NIH BRAIN Initiative tool helps researchers watch neural activity in 3D

video: Mouse olfactory epithelial cells respond in real time to odor mixtures. When individual cells become activated by odors, their green fluorescence increases. These images were obtained through the 3D SCAPE microscopy system, which was developed with funding from the NIH BRAIN Initiative.

Image: 
Video courtesy of Hillman and Firestein labs

Our ability to study networks within the nervous system has been limited by the tools available to observe large volumes of cells at once. An ultra-fast, 3D imaging technique called SCAPE microscopy, developed through the National Institutes of Health (NIH)'s Brain Research through Advancing Innovative Technologies (BRAIN) Initiative, allows a greater volume of tissue to be viewed in a way that is much less damaging to delicate networks of living cells.

"This is an elegant demonstration of the power of BRAIN Initiative technologies to provide new insights into how the brain decodes information to produce sensations, thoughts, and actions," said Edmund Talley, Ph.D., program director, National Institute of Neurological Disorders and Stroke (NINDS), a part of NIH.

The SCAPE microscope was developed in the laboratory of Elizabeth M.C. Hillman, Ph.D., professor of biomedical engineering and radiology and principal investigator at Columbia's Zuckerman Institute in New York City.

"SCAPE microscopy has been incredibly enabling for studies where large volumes need to be observed at once and in real time," said Dr. Hillman. "Because the cells and tissues can be left intact and visualized at high speeds in three dimensions, we are able to explore many new questions that could not be studied previously."

The olfactory epithelium is located deep within the nose and is made up of many thousands of nerve cells, each expressing a single type of receptor that reacts to specific odors. Studies using individual, simple odors suggest that when we smell something, a specific combination of these nerve cells becomes activated, forming a code that the brain interprets as that particular odor. In the past, researchers could study only a limited fraction of this area at any one time, and the methods they used could damage the tissue, making it hard to draw definitive conclusions.

The olfactory epithelium proved to be an ideal target to study using SCAPE, because the nerve cells responsible for detecting odors are distributed somewhat randomly. This means that it is important to watch as many cells as possible to draw conclusions about their activity patterns.

Using SCAPE, the researchers were able to measure thousands of olfactory nerve cells at once as the cells responded to combinations of different odors described as "almond," "floral/jasmine," and "citrus." When the tissue was exposed to odors individually, they saw the simple patterns predicted by existing theory; but when two or three odors were mixed together, they observed far more complex, interactive nerve cell responses than expected.

"We expected the response to a mixture of odors to look a lot like the sum of responses to the original odors," said Stuart Firestein, Ph.D., professor at Columbia University, New York City, and a senior author of the study. "Instead, we observed complex interactions where a second odor enhanced a neuron's response to the first odor, or in other cases, inhibited a neuron's response."

These results suggest that the signals reaching the brain are distorted by these receptor interactions within the nose, altering the code for a mixture of odors into something perceptibly different from the sum of its parts. Given that almost all odors around us are complex mixtures, this mechanism might explain how we can distinguish between a huge variety of different smells, while also explaining why it can often be difficult to pick out a mixture's individual ingredients.

"SCAPE provides a major advantage in that, because we can watch such a large area at one time, we can catch important events that happen rarely," said Dr. Firestein. "This technique has opened the door for us to explore many additional questions in the olfactory field."

Dr. Firestein says he and others can now further study how complex odor combinations are encoded by the olfactory system and how those signals are eventually interpreted by the brain. The implications of the study also extend beyond how the brain perceives odors: the approach offers a potential new way to screen drug candidates that act on the kinds of receptors at work in the olfactory system.

"This paper is transformative with respect to our understanding of how odor mixtures are encoded and highlights the remarkable complexity of odorant/receptor interactions," said Susan Sullivan, Ph.D., program director, National Institute on Deafness and Other Communications Disorders (NIDCD), part of NIH and co-funder of this project.

In addition, several neurological disorders, such as Alzheimer's disease and Parkinson's disease, as well as systemic diseases including COVID-19, often have loss of smell as an early symptom. A better understanding of the changes that cause this symptom could aid early detection before the onset of more debilitating effects.

Because studies like this one involve collecting information on more than 10,000 cells over long periods of time, the amount of data gathered by the team was enormous. The team had to develop new analysis methods and even build their own computers to handle the computation.

"We are constantly improving and refining SCAPE, and intense interdisciplinary collaborations such as this one have been essential for guiding these innovations, further expanding what we can do with the technique," said Dr. Hillman, whose team is working hard to share the SCAPE system with as many labs as possible, both through direct interactions, dissemination, and commercialization.

One of the goals of the BRAIN Initiative is to accelerate the development and application of innovative technologies for studying the nervous system. The SCAPE method was developed and refined, in part, thanks to funding through this initiative.

Credit: 
NIH/National Institute of Neurological Disorders and Stroke