Tech

Texas A&M study: Marine heatwaves can strengthen hurricanes

Oceanographers have found that a hurricane can be considerably strengthened in the Gulf of Mexico through the compounding effects of two extreme weather events. This process could become more common as ocean temperatures rise around the world, according to a study co-authored by a Texas A&M University at Galveston professor.

Kyeong Park, professor and head of the Department of Marine and Coastal Environmental Science at Texas A&M-Galveston, and colleagues have had their work published in Nature Communications.

The team examined Hurricane Michael, which in October 2018 became the first Category 5 hurricane on record to strike the Florida Panhandle. Prior to Hurricane Michael, Tropical Storm Gordon in early September mixed cold bottom water with warm surface water, lowering the surface water temperature and increasing the water column's capacity to absorb more heat.

During the subsequent atmospheric heatwave, the water column absorbed more heat energy, producing a marine heatwave that later helped strengthen Hurricane Michael to a Category 5 storm. Hurricane Michael became much stronger than forecast because the forecasts did not take this compound effect into account.

"During summer in the ocean, solar energy increases air temperature and surface water temperature so much that the entire water column - from surface to bottom - cannot absorb heat from the atmosphere," Park said.

Water in the Gulf of Mexico in the summer months is especially prone to these conditions, the study concluded. The compound effect of Tropical Storm Gordon followed by an atmospheric heatwave provided an optimal condition for Hurricane Michael to become stronger than expected.

"It does appear that a storm or hurricane can get stronger if the marine conditions are right," Park said. "Hurricanes Sally and Laura in the past few weeks are good examples of stronger hurricanes because of the compound effect we described in our paper. This pattern could also exacerbate other environmental problems in sensitive ecosystems such as bleaching of coral reefs, hypoxia (low oxygen in the water) and other problems that are predicted as global warming continues."

Credit: 
Texas A&M University

How green hydrogen can become cheap enough to compete with fossil fuels

Engineers from UNSW Sydney have crunched the numbers on green hydrogen production costs to reveal that Australia is in prime position to take advantage of the green hydrogen revolution, with its great solar resource and potential for export.

The researchers identified the key factors required to reduce the cost of green hydrogen to become competitive with other methods of producing hydrogen using fossil fuels.

In a paper published today in Cell Reports Physical Science, the authors show how different factors affect the cost of producing green hydrogen by electrolysis using a dedicated solar system and using no additional power from the grid.

Without using electricity from the grid, which is predominantly supplied by fossil fuels, this method produces hydrogen with nearly zero emissions. Being free of the grid also means such a system could be deployed in remote locations with good year-round exposure to sunlight.

The researchers examined a range of parameters that could affect the final price of green hydrogen energy including the cost of electrolyser and solar photovoltaic (PV) systems, electrolyser efficiency, available sunlight and the size of the installations.

In thousands of calculations using randomly assigned values for various parameters in different scenarios, the researchers found the cost of green hydrogen ranged from $US2.89 to $US4.67 per kilogram ($AUD4.04 to $AUD6.53). It was possible to go even lower than this, the researchers said, with proposed scenarios approaching $US2.50 per kilogram ($AUD3.50), at which point green hydrogen starts to become competitive with fossil fuel production.

WHY A RANGE OF PRICES?

Co-author Nathan Chang, a postdoctoral fellow with UNSW's School of Photovoltaic and Renewable Energy Engineering, says a common problem when trying to estimate the costs of a developing technology is that the calculations rest on assumptions that may only apply to certain situations or circumstances. This makes the results less relevant for other locations, and fails to account for the fact that technology performance improves and costs fall over time.

"But here, rather than getting a single calculated number, we get a range of possible numbers," he says.

"And each particular answer is a combination of a lot of possible input parameters."

"For example, we have recent data on the cost of PV systems in Australia, but we know that in some countries, they pay much more for their systems. We also have seen that PV costs are reducing each year. So we put cost values both lower and higher into the model to see what would happen to the cost of hydrogen.

"So after plugging all these different values into our algorithm and getting a range of prices of hydrogen energy, we then said, 'Okay, so there were some cases where we got closer to that $US2 ($AUD2.80) per kilogram figure. What was it about those cases that got it down so low?'"
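The parameter-sampling approach the researchers describe can be sketched in a few lines of Python. This is an illustrative toy model, not the study's actual algorithm: the parameter ranges, the 20-year lifetime, the undiscounted annualisation and the use of hydrogen's higher heating value (about 39.4 kWh of electricity per kilogram at 100% efficiency) are all assumptions made for demonstration.

```python
import random

def hydrogen_cost_usd_per_kg(capex_pv, capex_el, efficiency, capacity_factor):
    """Very simplified levelised cost of hydrogen for a dedicated
    solar-powered electrolyser. All inputs and constants are illustrative."""
    lifetime_years = 20
    el_size_kw = 1000                       # a hypothetical 1 MW electrolyser
    # Annual electricity delivered to the electrolyser (kWh)
    annual_kwh = el_size_kw * 8760 * capacity_factor
    # ~39.4 kWh of electricity per kg of H2 at 100% efficiency (HHV)
    annual_kg_h2 = annual_kwh * efficiency / 39.4
    total_capex = (capex_pv + capex_el) * el_size_kw
    annualised_capex = total_capex / lifetime_years   # ignoring discounting
    return annualised_capex / annual_kg_h2

# Thousands of scenarios with randomly assigned parameter values,
# mirroring the study's approach of exploring ranges rather than
# committing to a single set of assumptions.
costs = []
for _ in range(10_000):
    costs.append(hydrogen_cost_usd_per_kg(
        capex_pv=random.uniform(400, 900),    # $/kW of PV, assumed range
        capex_el=random.uniform(500, 1200),   # $/kW of electrolyser, assumed
        efficiency=random.uniform(0.60, 0.80),
        capacity_factor=random.uniform(0.18, 0.28),
    ))
print(f"cost range: ${min(costs):.2f} - ${max(costs):.2f} per kg")
```

Running a model like this yields a spread of prices rather than a single number; inspecting the low-cost tail then reveals which parameter combinations (cheap electrolysers, high efficiency, strong sunlight) drive the cost down, which is the question the researchers posed.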

Co-author Dr Rahman Daiyan, of the ARC Training Centre for Global Hydrogen Economy and UNSW's School of Chemical Engineering, says that when they examined the cases where the cost per kilogram approached US$2, certain parameters stood out.

"Capital costs of electrolysers and their efficiencies still dictate the viability of renewable hydrogen," he says.

"One crucial way we could further decrease costs would be to use cheap transition metal-based catalysts in electrolysers. Not only are they cheaper, but they can even outperform catalysts currently in commercial use.

"Studies like these will provide inspiration and targets for researchers working in catalyst development."

IT ALL ADDS UP

The system and cost simulation model itself was built by undergraduate student Jonathon Yates, who got the opportunity to work on the project through UNSW's Taste of Research scholarship program.

"We used real weather data and worked out the optimum size of the PV system for each location," he says.

"We then saw how this would change the economics in different locations around the world where solar-powered electrolysis is being considered.

"We knew that each location that would install such a system would be different - requiring different sizes and having to wear different costs of components. Combining these with weather variations means that some locations will have lower cost potential than others, which can indicate an export opportunity."

He points to the example of Japan, which does not have a great solar resource and where the size of the systems may be limited.

"So there is potentially a significant cost difference when compared to the spacious outback regions of Australia, which have plenty of sunlight," says Mr Yates.

LOOKING AHEAD

The researchers say that it is not far-fetched to imagine large scale hydrogen energy plants becoming cheaper than fossil fuel ones in the next couple of decades.

"Because PV costs are reducing, it is changing the economics of solar hydrogen production," says Dr Chang.

"In the past, the idea of a remote solar-driven electrolysis system was considered to be far too expensive. But the gap is reducing every year, and in some locations, there will be a cross-over point sooner rather than later."

Dr Daiyan says: "With technology improvements in electrolyser efficiency, an expectation of lower costs of installing these types of systems, and governments and industry being willing to invest in larger systems to take advantage of economies of scale, this green technology is getting closer to being competitive with alternative fossil fuel production of hydrogen."

Mr Yates says it is only a matter of time until green hydrogen becomes more economical than hydrogen produced from fossil fuel methods.

"When we recalculated the cost of hydrogen using other researchers' projections of electrolyser and PV costs, it's possible to see green hydrogen costs getting as low as US$2.20 per kg ($AUD3.08) by 2030, which is on par with or cheaper than the cost of fossil fuel-produced hydrogen.

"As this happens, Australia, with its great solar resource, will be well placed to take advantage of this."

Credit: 
University of New South Wales

Scientists capture candid snapshots of electrons harvesting light at the atomic scale

image: Illustration of a PEC model system with 20-nanometer gold nanoparticles attached to titanium dioxide.

Image: 
Berkeley Lab

In the search for clean energy alternatives to fossil fuels, one promising solution relies on photoelectrochemical (PEC) cells - water-splitting, artificial-photosynthesis devices that turn sunlight and water into solar fuels such as hydrogen.

In just a decade, researchers in the field have achieved great progress in the development of PEC systems made of light-absorbing gold nanoparticles - tiny spheres just billionths of a meter in diameter - attached to a semiconductor film of titanium dioxide nanoparticles (TiO2 NP). But despite these advancements, researchers still struggle to make a device that can produce solar fuels on a commercial scale.

Now, a team of scientists led by the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) has gained important new insight into electrons' role in the harvesting of light in gold/TiO2 NP PEC systems. The scientists say that their study, recently published in the Journal of Physical Chemistry Letters, can help researchers develop more efficient material combinations for the design of high-performance solar fuels devices.

"By quantifying how electrons do their work on the nanoscale and in real time, our study can help to explain why some water-splitting PEC devices did not work as well as hoped," said senior author Oliver Gessner, a senior scientist in Berkeley Lab's Chemical Sciences Division.

And by tracing the movement of electrons in these complex systems with chemical specificity and picosecond (trillionths of a second) time resolution, the research team members believe they have developed a new tool that can more accurately calculate the solar fuels conversion efficiency of future devices.

Electron-hole pairs: A productive pairing comes to light

Researchers studying water-splitting PEC systems have been interested in gold nanoparticles' superior light absorption due to their "plasmonic resonance" - the ability of electrons in gold nanoparticles to move in sync with the electric field of sunlight.

"The trick is to transfer electrons between two different types of materials - from the light-absorbing gold nanoparticles to the titanium-dioxide semiconductor," Gessner explained.

When electrons are transferred from the gold nanoparticles into the titanium dioxide semiconductor, they leave behind "holes." The combination of an electron injected into titanium dioxide and the hole the electron left behind is called an electron-hole pair. "And we know that electron-hole pairs are critical ingredients to enabling the chemical reaction for the production of solar fuels," he added.

But if you want to know how well a plasmonic PEC device is working, you need to learn how many electrons moved from the gold nanoparticles to the semiconductor, how many electron-hole pairs are formed, and how long these electron-hole pairs last before the electron returns to a hole in the gold nanoparticle. "The longer the electrons are separated from the holes in the gold nanoparticles - that is, the longer the lifetime of the electron-hole pairs - the more time you have for the chemical reaction for fuels production to take place," Gessner explained.

To answer these questions, Gessner and his team used a technique called "picosecond time-resolved X-ray photoelectron spectroscopy (TRXPS)" at Berkeley Lab's Advanced Light Source (ALS) to count how many electrons transfer between the gold nanoparticles and the titanium-dioxide film, and to measure how long the electrons stay in the other material. Gessner said his team is the first to apply the X-ray technique for studying this transfer of electrons in plasmonic systems such as the nanoparticles and the film. "This information is crucial to develop more efficient material combinations."

An electronic 'count'-down with TRXPS

Using TRXPS at the ALS, the team shone pulses of laser light to excite electrons in 20-nanometer (20 billionths of a meter) gold nanoparticles (AuNP) attached to a semiconducting film made of nanoporous titanium dioxide (TiO2).

The team then used short X-ray pulses to measure how many of these electrons "traveled" from the AuNP to the TiO2 to form electron-hole pairs, and then back "home" to the holes in the AuNP.

"When you want to take a picture of someone moving very fast, you do it with a short flash of light - for our study, we used short flashes of X-ray light," Gessner said. "And our camera is the photoelectron spectrometer that takes short 'snapshots' at a time resolution of 70 picoseconds."

The TRXPS measurement revealed a few surprises: the team observed two electrons transfer from gold to titanium dioxide - a far smaller number than they had expected based on previous studies. They also learned that only one in 1,000 photons (particles of light) generated an electron-hole pair, and that it takes just a billionth of a second for an electron to recombine with a hole in the gold nanoparticle.
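A simple rate-equation sketch shows why these numbers matter for fuels production: with such a low yield and such a short lifetime, very few separated electron-hole pairs exist at any moment. Only the yield and lifetime below come from the study; the photon flux is a made-up illustrative value.

```python
# Back-of-envelope balance for the AuNP/TiO2 system, using the study's
# reported numbers: ~1 electron-hole pair per 1,000 photons, and a
# ~1 ns pair lifetime before recombination.

PAIR_YIELD_PER_PHOTON = 1e-3   # reported: one pair per ~1,000 photons
PAIR_LIFETIME_S = 1e-9         # reported: ~1 ns before recombination

def steady_state_pairs(photons_per_second):
    """Average number of electron-hole pairs present at any instant under
    continuous illumination: generation rate multiplied by lifetime."""
    generation_rate = photons_per_second * PAIR_YIELD_PER_PHOTON
    return generation_rate * PAIR_LIFETIME_S

# Even an assumed flux of a trillion photons per second sustains only
# about one separated electron-hole pair on average at any moment.
print(steady_state_pairs(1e12))  # 1e12 * 1e-3 * 1e-9 = 1.0
```

Because the steady-state population scales linearly with the lifetime, extending how long electrons stay separated from their holes directly multiplies the number of pairs available to drive the fuel-producing chemistry, as Gessner notes above.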

Altogether, these findings and methods described in the current study could help researchers better estimate the optimal time needed to trigger solar fuels production at the nanoscale.

"Although X-ray photoelectron spectroscopy is a common technique used at universities and research institutions around the world, the way we expanded it for time-resolved studies and used it here is very unique and can only be done at Berkeley Lab's Advanced Light Source," said Monika Blum, a co-author of the study and research scientist at the ALS.

"Monika's and Oliver's unique use of TRXPS made it possible to identify how many electrons on gold are activated to become charge carriers - and to locate and track their movement throughout the surface region of a nanomaterial - with unprecedented chemical specificity and picosecond time resolution," said co-author Francesca Toma, a staff scientist at the Joint Center for Artificial Photosynthesis (JCAP) in Berkeley Lab's Chemical Sciences Division. "These findings will be key to gaining a better understanding of how plasmonic materials can advance solar fuels."

The team next plans to push their measurements to even faster time scales with a free-electron laser, and to capture even finer nanoscale snapshots of electrons at work in a PEC device when water is added to the mix.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Work bubbles can help businesses reopen while limiting risk of COVID-19 outbreaks

Creating "work bubbles" during the COVID-19 pandemic can help reduce the risk of company-wide outbreaks while helping essential businesses continue to function, as the example of Bombardier Aviation demonstrates in an analysis published in CMAJ (Canadian Medical Association Journal).

The need to keep essential businesses open during the pandemic has resulted in large outbreaks in factories and other locations where employees work in close proximity, jeopardizing the safety of employees and the community as well as disrupting supply chains.

"Employers have a responsibility to provide a safe work environment for their employees," says lead author Dr. Jeffrey Shaw, a critical care physician and fellow at the University of Calgary's Cumming School of Medicine, Calgary, Alberta. "Creating company cohorts, or work bubbles, can reduce the risk of a company-wide COVID-19 outbreak that could affect the larger community."

Bombardier Aviation example

The authors describe how Bombardier Aviation, a large Canadian company that employs 22 000 people at 7 factories across 4 provinces/states in Canada and the United States, adjusted to the pandemic. Most office staff worked from home, ensuring that only employees who built or supported aircraft delivery were on site. Essential employees were organized into cohorts that interacted only with each other to minimize contact with other staff.

Cohorts were organized on the principles that work bubbles should

Include the least number of people required to do the job

Be designed to allow business continuation if another work bubble is removed from the workforce

Be strictly separated from other bubbles in time and/or space to prevent virus transmission between groups.

Scheduling rotating workdays and disinfecting shared spaces after use by a work bubble can ensure physical separation of employees. Daily symptom screening and rapid isolation of infected employees are also key to containing and preventing outbreaks.

"Adjusting our operational activities to the pandemic was challenging, but we are extremely proud of how proactive and efficient our teams were in adapting to their new working conditions. Keeping our employees safe is our number one priority," says coauthor Nancy Barber, COO, Industrialization, Footprint and Central Planning, Bombardier Aviation.

Despite some challenges, work bubbles offer benefits including

Reducing the reproduction number of the disease

Increasing efficiency of contact tracing

Protecting employees from contracting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) at work

Increasing employee confidence in workplace safety

Allowing business to continue in the event of positive cases

"As we begin to relax the public health measures brought in to slow the spread of COVID-19 in Canada, we must think of how to limit the risk of becoming infected at work," says Dr. Shaw. "Using a work bubbles strategy can help businesses continue to function and ensure the safety of employees."

Listen to a podcast with coauthors Dr. Jeffrey Shaw and Hayley Wickenheiser discussing work bubbles and their practical application to factories, schools and sports.

Credit: 
Canadian Medical Association Journal

Zebrafish embryos help prove what happens to nanoparticles in the blood

image: Fluorescently labelled 70 nm SiO2 nanoparticles were injected into the bloodstream of 3-day-old transgenic zebrafish embryos and live-imaged 3 min after the injection. The insets show schematics for the two conditions tested: control nanoparticles with a corona of endogenous proteins (orange), and nanoparticles with a pre-formed corona of fetal bovine serum (FBS) proteins (blue) plus additional endogenous proteins. Control nanoparticles are rapidly captured by macrophages, while the nanoparticles with a non-self biological identity are effectively sequestered by scavenger endothelial cells.

Image: 
Yuya Hayashi. Adapted from Mohammad-Beigi et al. (2020) ACS Nano. Copyright 2020 American Chemical Society

A variety of nanoparticles are designed for targeted drug delivery, but unfortunately only a very small proportion of injected nanoparticles reach the target site, such as a solid tumour. The reason behind the low targeting efficiency is often considered a "black box" and thus remained little explored for many years.

Recently, an international research team led by Yuya Hayashi from the Department of Molecular Biology and Genetics (MBG), Aarhus University, demonstrated the power of zebrafish embryos in nano-bioimaging, which can visualise dynamic interactions between nanoparticles and cells of interest in a living organism (see the earlier article "Zebrafish let you see the biological fate of nanoparticles in vivo").

Now, teaming up with researchers from the Interdisciplinary Nanoscience Center (iNANO), Yuya seeks to answer unsolved mysteries in bionanoscience. First in line is the biological identity concept, which explains how cells recognise nanoparticles through a "corona" of proteins that surrounds each particle. This concept has now been proved for the first time in a living organism, using zebrafish embryos to uncover what happens to nanoparticles injected into the blood.

Friend or foe? How biological systems recognise nanoparticles

"What the Cell Sees in Bionanoscience" is one of the early publications that have defined how a corona of proteins forms around a nanoparticle and how such a protein corona implies the need for rethinking the way we look at nanoparticles within a biological milieu. From extensive research in the past decade, we now understand that two opposing effects mainly contribute to nanoparticle uptake by cells. In general, the protein corona prevents the nanoparticle surface from direct physical interactions with the cell membrane. However, what if the protein corona presents a signal that triggers a specific biological interaction with receptors deployed on the cell membrane? That is something the cell sees and thus confers a biological identity to the nanoparticle.

The researchers from Aarhus University have now provided the first "visual" evidence for the striking contribution of the protein corona to nanoparticle clearance from the blood, which entailed adverse outcomes in the zebrafish embryo model. The research team used a species-mismatched source of proteins for the corona formation to create a "non-self" biological identity, and traced the journey of nanoparticles travelling through the blood to their final destination - endolysosomes in the cell. This revealed surprisingly rapid uptake and acidification of the nanoparticles by scavenger endothelial cells (the functional equivalent of the liver sinusoidal endothelial cells in mammals), followed by pro-inflammatory activation of macrophages (see the movie featured on Yuya's group webpage).

"It sounds like a crazy idea to inject nanoparticles with proteins from another animal," says Yuya, "but for example, biomolecule-inspired nanomedicines are tested in a mouse model without particular concerns for the species-mismatched combination. Or else some clever folks humanise the mouse to take care of the species compatibility problem. In fact, even at the cell culture level nanoparticles are still routinely tested following the tradition to use serum supplement derived from cows while knowing that nanoparticle-protein interactions are a key driver of cellular uptake."

"What makes this kind of experiment rather challenging is," adds first author Hossein Mohammad-Beigi, "to maximally retain the original protein corona in a living organism. If the pre-formed corona gets quickly exchanged by endogenous blood proteins, the hypothesis tested becomes invalid. We have made quite some efforts to characterise the protein corona to ensure the nanoparticles preserve the non-self biological identity."

Seeing is believing - the zebrafish model can offer what rodent models cannot

The greatest advantage of the zebrafish model is its power in multicolour real-time imaging, whereby multiple combinations of fluorescence tracers and reporter proteins can be imaged in a simple setup at high spatio-temporal resolution. This provides a new opportunity that lies between less realistic cell culture systems and more challenging rodent experiments such as intravital microscopy.

"Using cell cultures, we have learnt quite a lot about how cells recognise nanoparticles rather as dynamic aggregates of proteins but it was never tested in a more realistic situation," Yuya explains. "With establishment of the zebrafish model, we have finally acquired a means to further explore this question in a living organism. It was a simple approach with an extreme scenario tested in a very complex system, but I believe we are now one step closer to understanding what the protein corona can really mean to nanoparticles. In an environment rich in proteins, nanoparticles can wear a mask that gives them a biological identity, and its non-selfness can make them a foe. What defines the degree of the non-selfness? Well, it's the next big question we have to address."

Credit: 
Aarhus University

Colloidal quantum dot light emitters go broadband in the infrared

image: The multi-stack of CQDs of different size are built on top of a flexible plastic substrate that is later deposited onto a commercial visible LED to produce broadband IR light. Image credit: ©ICFO

Image: 
©ICFO

Broadband light emission in the infrared has proven to be of paramount importance for a large range of applications that include food quality and product/process monitoring, recycling, environmental sensing and monitoring, multispectral imaging in automotive, as well as safety and security. With the advent of the IoT and the increasing demand for adding more functionalities to portable devices (such as smart watches and mobile phones), the introduction of on-chip spectrometers for health monitoring, allergen detection and food quality inspection, to name a few, is expected to happen soon. But in order to have such functionalities easily integrated and implemented in mass-produced consumer electronics, several prerequisites need to be met. More specifically, the light source needs to be compact, highly efficient and ideally CMOS-integrated to guarantee low-cost, high-volume manufacturing.

So far, broadband light emitters in the shortwave infrared (the portion of the infrared spectrum between 1 and 2.5 um), the range in which the aforementioned applications work, have been based on previous-century technology: incandescent light sources, i.e. black-body radiators. Even though their cost of production is low, their functionality is based on the principle of heating, which does not allow miniaturisation of these sources, resulting in bulky form factors. Furthermore, heat dissipation becomes a major issue when it comes to integration in compact portable systems. What makes matters even worse is that these sources are uncontrollably broadband, emitting across a spectrum far broader than usually needed, which makes them highly inefficient since most of the generated light is essentially useless.

To address this challenge, ICFO researchers Dr. Santanu Pradhan and Dr. Mariona Dalmases led by ICREA Prof. at ICFO Gerasimos Konstantatos, developed a new class of broadband solid state light emitters based on colloidal quantum dot (CQD) thin film technology. The results of their study have been published in the journal Advanced Materials.

CQDs offer the advantages of low-cost solution processability, easy CMOS integration and a readily tunable bandgap. By leveraging these properties, ICFO researchers designed and engineered a multi-stack of CQDs of different sizes, which proved capable of emitting light with a spectrum that depends on the size of the emitting QDs. The sequence and thickness of the layers were optimised to maximise the photoconversion efficiency of this down-converting nanophosphor-type thin film. The stacks were built on top of a flexible plastic substrate, which was then glued on top of an LED that emits in the visible range. This LED emits visible light that is then absorbed and converted by the CQDs to infrared light with the desired spectrum and, more importantly, with an outstanding photon conversion efficiency of 25%. The researchers showed that the shape of the emission spectrum can be tuned by choosing the appropriate populations of CQD sizes. For this particular case, they developed a broadband light source covering an emission range between 1100 and 1700 nm with a FWHM of 400 nm.

Then, by exploiting the conductive nature of the CQD thin films, the researchers took their experiment a step further and also constructed electrically driven active broadband LEDs with a FWHM in excess of 350 nm and a quantum efficiency of 5%. This achievement represents the first monolithic, electrically driven broadband shortwave infrared (SWIR) LED that does not need to rely on external light sources for excitation. This is remarkable since currently available technologies based on III-V semiconductors are not only CMOS-incompatible but also require multiple InGaAs chips in the form of an array to deliver a broadband spectrum, which adds complexity, cost and device volume.

Finally, to demonstrate how suitable this technology could be for market applications based on spectroscopy techniques, the team of researchers sought several real-world cases that could be good candidates for the technology. Pairing their CQD light source with commercially available spectrometers, they were able to distinguish between different types of plastics, liquids and milks that have distinct spectral signatures in the SWIR. The successful results open a new realm for the field of SWIR spectroscopy, proving that this technology could be used for applications ranging from plastic sorting in recycling processes to health and safety, or even food inspection, to name a few.

Credit: 
ICFO-The Institute of Photonic Sciences

Gene links short-term memory to unexpected brain area

A new study in mice identifies a gene that is critical for short-term memory but functions in a part of the brain not traditionally associated with memory.

The study, "A Thalamic Orphan Receptor Drives Variability in Short Term Memory," was published on Sept. 29 in the journal Cell.

To discover new genes and brain circuits that are important for short-term memory, the researchers turned to studying genetically diverse mice, rather than inbred mice commonly used in research.

"We needed a population that is diverse enough to be able to answer the question of what genetic differences might account for variation in short-term memory," said Praveen Sethupathy '03, associate professor of biomedical sciences in the College of Veterinary Medicine, director of the Cornell Center for Vertebrate Genomics, and a senior author of the study.

Priya Rajasethupathy '04, the Jonathan M. Nelson Family Assistant Professor and head of the Laboratory of Neural Dynamics and Cognition at Rockefeller University, is the other senior author of the paper. Sethupathy and Rajasethupathy are siblings; they conceived of this study over family dinners. Kuangfu Hsiao, a research associate at Rockefeller University, is the lead author of the study.

The researchers began with about 200 genetically diverse mice to identify regions of the DNA that contribute to the observed variation in short-term memory among the mice. They screened the mice on a short-term memory task and used genetic mapping techniques to identify a region of the genome, harboring 26 genes, that is associated with working memory. With further genome-scale analyses, they whittled the list of genes down to four of special interest. By disabling each of these four genes one at a time, they found that one in particular, Gpr12, coded for a protein that is required for and promotes working memory.

"I expected the prefrontal cortex would be the region most globally changed by the activity of Gpr12," Rajasethupathy said. "Strikingly, it was actually the thalamus, by far."

They also found that when they increased the amount of Gpr12 protein in the thalamus of low-performing mice, the animals' accuracy in the memory task increased from 50% to 80%, similar to the level of high-performers.

To understand the neural circuits involved, the researchers compared low performers against low performers with artificially increased Gpr12 protein in the thalamus. These mice were also engineered with fluorescent calcium sensors that light up when a neuron is active. They recorded neurons firing in multiple brain regions while the mice performed the memory task. During many phases of the task, when short-term memory was required, the researchers observed synchronous activity between the prefrontal cortex and thalamus.

"When the thalamus activity went down, prefrontal went down; when the thalamus went up, prefrontal went up," Rajasethupathy said. "We found that these two brain regions are very highly correlated with each other in high-performers but not in low-performers. This finding implies a directionality [where one area influences the other], but we don't yet know the direction."

Often, when scientists identify that a specific gene is linked to a certain behavior, it takes time and more research to understand how that gene is driving the behavior, Rajasethupathy said.

"We were inspired in this study to link genetics to neural circuits to behavior," Sethupathy added. "Future work will investigate what mechanisms regulate the Gpr12 gene and what signaling pathways downstream of the Gpr12 protein mediate its effects."

Interestingly, the Gpr12 gene is highly conserved among mammals, including humans. The work therefore offers the possibility of a novel therapeutic angle for reversing deficits in short-term memory. More immediately, it adds a new dimension to classical models by emphasizing the importance of a two-way neural dialogue between the prefrontal cortex and the thalamus in support of short-term memory.

Credit: 
Cornell University

Coral's resilience to warming may depend on iron

image: According to a new study, coral's ability to respond to warming ocean temperatures may depend in part on the amount of iron in the environment.

Image: 
Penn State

How well corals respond to climate change could depend in part on the already scarce amount of iron available in their environment, according to a new study led by Penn State researchers. The study reveals that the combination of hot water temperatures and low iron levels compromises the algae that live within coral cells, suggesting that limited iron levels--which could decline with warming ocean waters--could exacerbate the effects of climate change on corals.

"Corals are the foundation for one of the most important ecosystems in the world," said Todd LaJeunesse, professor of biology at Penn State. "They support significant amounts of biodiversity, protect our shorelines from storms, provide habitat for our fisheries, and boost our economies with their opportunities for tourism. Climate change affects not only the coral, but also their symbiotic microalgae and the partnership between them. In this study, we explored two aspects of climate change--warming waters and altered amounts of trace metals like iron--on the algae."

The researchers previously found that the photosynthetic microalgae that live within coral cells--which provide up to 90% of the coral's daily nutritional needs through photosynthesis--have very high iron demands.

"In this study, we found that limiting the available iron lowered the heat tolerances of two species of microalgae, which potentially could have cascading effects on the coral and on the reef ecosystem," said Hannah Reich, a graduate student in biology at Penn State at the time of the research and an author of the study.

In their study, which appears online Sept. 30 in the Journal of Phycology, the researchers investigated the effects of high water temperatures and limited iron availability on the growth of two species of microalgae cultured in the lab--one species typically found in tropical waters and one from more temperate areas. At high temperatures and limited iron, both species grew poorly compared with their growth at moderate temperatures and normal iron levels.

"High temperatures increase metabolic demands, which forces the microalgae to work harder to function properly," said Reich. "It also increases dependence on processes that require iron, like photosynthesis and assimilating other nutrients. We found that under high temperatures, the microalgae needed more than five times as much iron to reach typical, exponential growth rates."

Limited iron availability at high temperatures also compromised the photosynthetic ability of the algae, reducing their efficiency, which the researchers think contributes to the reduced growth under these conditions. Additionally, warmer temperatures affected the relative amounts of trace metals within the algae, known as their metal profiles.

"These alterations could indicate differences in metal usage, likely affecting the biological functions in which they are used," said Reich. "Notably, with limited iron, the more tropical species grew better and had less compromised photosynthetic ability at high temperatures and a larger reserve of many trace metals."

"Our results also highlight that trace metal profiles could be a metric with which to assess heat sensitivity or tolerance among symbiont species," said LaJeunesse. "Moreover, access to higher concentrations of trace metals may improve a coral's tolerance of thermal stress."

In the future, the researchers plan to explore how trace metal requirements change in different conditions in the field, and to explore the impacts of limited iron and warming waters on microalgae living within a host.

"While it is important to understand how access to iron supplies can impact the ability of corals to respond to climate change stressors, there is still a dire need to reduce carbon dioxide emissions to combat the climate crisis," added Reich.

Credit: 
Penn State

Shedding light on how urban grime affects chemical reactions in cities

Many city surfaces are coated with a layer of soot, pollutants, metals, organic compounds and other molecules known as "urban grime." Chemical reactions that occur in this complex milieu can affect air and water quality. Now, researchers reporting in ACS Earth and Space Chemistry have taken a closer look at urban grime collected from two U.S. cities, revealing for the first time that the material absorbs sunlight and therefore might participate in photochemical reactions.

Scientists have previously analyzed lab-prepared urban grime, as well as samples collected from cities, but they still don't have a complete understanding of what's in the material, or how it varies by location. Some components can react with other molecules in the grime or air, which could affect what gets released into the atmosphere or into the water when it rains. To better understand these processes, Tara Kahan and colleagues wanted to investigate the physical properties, light absorption and composition of urban grime samples collected from Syracuse, New York, and Scranton, Pennsylvania.

The researchers collected samples of urban grime from Syracuse by placing vertical quartz plates outdoors for 30 days and then analyzed the surfaces. Although urban grime was long thought to be predominantly a film, the samples showed collections of particles, rather than a uniform film, consistent with evidence from other recent studies. In different experiments, the team scraped grime from wet exterior surfaces of windows in both cities and analyzed their compositions. The results were similar to those reported from other cities in Canada and Europe, but there were differences in specific ions. For example, higher chloride levels were found in North American cities, which could be from the use of road salt in the winter, whereas higher sulfate levels were reported in some European cities, likely because of coal combustion. The team also observed that urban grime absorbed light at wavelengths found in sunlight, which suggests that the sun could speed up or slow down chemical reactions that affect air and water quality in cities.

The authors acknowledge funding from the Oak Ridge Associated Universities Ralph E. Powe Junior Faculty Enhancement Award, the Alfred P. Sloan Foundation Chemistry of Indoor Environments program and the Canada Research Chairs program.


Credit: 
American Chemical Society

Innovative model improves Army human-agent teaming

image: Army researchers develop a novel computational model for analyzing cognitive data that will be a game changer for the effectiveness of human-agent teams on the battlefield.

Image: 
U.S. Army

ADELPHI, Md. -- Army researchers developed a novel computational model for gathering cognitive data that may be a game changer in the fields of neuroscience and econometrics, and has broad relevance to networked and multi-agent systems.

At the U.S. Army Combat Capabilities Development Command's Army Research Laboratory and the University of Maryland, College Park, researchers developed what is known as the Autoregressive Linear Mixture, or ALM, a novel model for analyzing time-series data, or how things change over time.

For the Army, this comes into play in assessing the cognitive states of Soldiers, allowing an intelligent adaptive system to balance the workload of the crew when completing challenging critical missions.

Employing recent advancements in optimization for nonconvex problems, the researchers adapted a proximal gradient algorithm and validated the proposed model and algorithm on an open-source electroencephalography, or EEG, dataset.

This work, recently featured in IEEE Xplore, innovates two fundamental models in signal processing, dictionary learning and multivariate autoregressive, or MVAR, models.

"These models are so popular due to their simplicity and explanatory power," said CCDC ARL researcher Dr. Addison Bohannon. "As generative models for data, dictionary learning and MVAR models allow researchers and engineers to encode their understanding of the underlying physical process into a concise model. Accordingly, they find applications in neuroscience, economics and image processing, where we have a rich understanding of the underlying science. However, the simplicity of these models also limits their effectiveness in describing complex, real-world signals like those that will be observed outside of laboratory experiments that the Army will require."

On the other hand, he said, complex models do not necessarily lead to greater insight, as they pose problems of reliability and interpretability. This work grapples with these issues head-on.

The researchers hypothesize that cognitive processes can be composed of independent neurological processes that can be observed in EEG recordings. Although they do not directly test this hypothesis in the paper, their validation results show that they can reliably identify independent neurological processes in EEG recordings of individuals while sleeping.

"Our results suggest that the EEG activity associated with different sleep stages can be composed of different combinations of these independent neurological processes," Bohannon said. "That is to say that these neurological processes are fundamental in some way. Moreover, these fundamental processes that we discover correspond to known neurological phenomena used in sleep stage analysis."

This work is inspired by the incredibly successful thread of research on functional connectivity in neuroscience.

Functional connectivity explains cognitive processes in the brain through functional relationships (activity correlation) as opposed to structural relationships (connective tracts).

The first step in estimating functional connectivity often requires fitting an MVAR model to observed data, Bohannon said. By various means, researchers can derive networks from the estimated parameters of the MVAR model.

Through experimentation, he said, these networks can be associated with various cognitive processes. However, this process implicitly assumes that a single brain network contributes to a cognitive process since the MVAR model only models a single generative process for the data.
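To make the modeling step concrete, the following is a minimal sketch (not the ALM itself, and not the Army's code) of fitting a first-order MVAR model to multichannel data with ordinary least squares; the channel count, noise level, and coefficient values are all illustrative:

```python
import numpy as np

def fit_mvar1(X):
    """Fit a first-order MVAR model x[t] = A @ x[t-1] + noise.

    X: (channels, timesteps) array of EEG-like data.
    Returns the (channels, channels) coefficient matrix A,
    estimated by ordinary least squares.
    """
    past, future = X[:, :-1], X[:, 1:]
    # Solve future.T ~= past.T @ A.T for A.T via least squares.
    At, *_ = np.linalg.lstsq(past.T, future.T, rcond=None)
    return At.T

# Simulate data from a known, stable coefficient matrix,
# then check that least squares recovers it.
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.6]])
X = np.zeros((2, 2000))
for t in range(1, 2000):
    X[:, t] = A_true @ X[:, t - 1] + 0.1 * rng.standard_normal(2)

A_est = fit_mvar1(X)
```

The ALM generalizes this idea by representing the data as a mixture of several such generative processes rather than a single one.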

"The ALM innovates on this approach by explicitly modeling data as the output of simultaneous and independent processes," Bohannon said. "Each of these processes can be interpreted as separate functional brain networks. Through the combination of these separate functional brain networks, we get high-level cognitive functions. Although this contribution is primarily to propose a new analysis technique, it could enable fundamental discovery in network neuroscience. In fact, we do find evidence for multiple functional networks coming together to yield different sleep states in our validation results."

According to Bohannon, this effort will support the Next Generation Combat Vehicle via the Human-Autonomy Teaming, or HAT, Essential Research Program.

One project under this effort is currently developing techniques for estimating cognitive states of Soldiers from biophysical and other sensors.

"This project supports the broader vision to provide an adaptive and interactive crew station where autonomy within the crew station is able to adapt to the state of the crew," Bohannon said. "The cognitive state of the individual crew members is one of the most informative driving factors of that adaptation."

For instance, a user who is focused on a cognitively demanding task, such as route planning, may need the interface to eliminate potential distractions, Bohannon said.

Additionally, the cognitive state of the individual crew members can inform dynamic tasking, as a vehicle commander is not going to ask a cognitively fatigued crew member to analyze newly arriving sensor data from an unmanned aerial system. Rather, this would allow an intelligent adaptive system to balance the workload of the crew, he said.

"One of the major challenges in developing systems for human-autonomy teaming is reliably estimating the cognitive state of individuals," said Army researcher Dr. Nicholas Waytowich. "We believe that our approach can be used as a tool for improving cognitive state estimation that will enable the design of autonomous systems that adapt to the user, instead of requiring the user to adapt to the system."

The researchers believe that their approach also has potential to elucidate the relationship between underlying cognitive states of individuals and emergent team behaviors in teams of humans and autonomous systems, said Army researcher Dr. Vernon Lawhern.

"For example, identifying individuals' cognitive states that are likely to produce effective teams will be critical to enabling successful team missions," Lawhern said.

According to Bohannon, it's very important that this research impacts the Army of the future.

"The nature of basic research - long-term, high-risk - makes it difficult to predict whether this, or any similar effort, will directly impact the Army," Bohannon said. "That makes it especially important to spend time understanding the challenges that the future Army will face and use that understanding to inform the work that we do."

The next step is to transition this knowledge into an applied research project such as individual state estimation within the Human-Autonomy Teaming ERP, he said.

Credit: 
U.S. Army Research Laboratory

'Street' ETRs are more useful in predicting companies' future tax outcomes, study finds

image: Erik Beardsley, assistant professor of accountancy at Notre Dame's Mendoza College of Business

Image: 
University of Notre Dame

Before considering a company as a potential investment, smart investors will analyze a company's financial statements and look at its taxes and other expenses alongside net income.

New research from the University of Notre Dame sheds light on the most effective methods to predict future tax outcomes, which simplifies the decision-making process for investors.

Effective tax rates (ETRs) are a company's tax expense divided by its pretax income as reported on its financial statements. "Street" ETRs are created when, for one reason or another, analysts adjust the ETR reported under generally accepted accounting principles (GAAP). GAAP is the set of rules that companies follow when creating their financial statements -- to provide consistency and comparability of financial information between different companies as well as for the same company over time.

If analysts notice an unusual or infrequent item included in the financial statements, they might adjust the GAAP ETR to remove the item, resulting in a street ETR. For example, if the GAAP ETR includes a large settlement with the IRS, an analyst might remove that item because it is unusual and does not reflect the current performance of the company.
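In code, the definitions above amount to a one-line ratio plus an analyst adjustment. The dollar figures below are hypothetical, chosen only to illustrate how removing a one-time item moves the GAAP ETR to a street ETR:

```python
def effective_tax_rate(tax_expense, pretax_income):
    """ETR: total tax expense divided by pretax income."""
    return tax_expense / pretax_income

# Hypothetical firm: $100M pretax income, $30M reported tax expense,
# of which $9M is a one-time IRS settlement (an unusual item).
gaap_etr = effective_tax_rate(30.0, 100.0)          # 0.30
street_etr = effective_tax_rate(30.0 - 9.0, 100.0)  # 0.21, unusual item removed
```

The street figure better reflects the recurring tax burden, which is why, per the study, it predicts future tax outcomes more accurately.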

"Street vs. GAAP: Which Effective Tax Rate Is More Informative?" is forthcoming in Contemporary Accounting Research from Erik Beardsley, assistant professor of accountancy at Notre Dame's Mendoza College of Business, along with Michael Mayberry at University of Florida and Sean McGuire at Texas A&M University.

The study finds that street ETRs provided by analysts do a better job of predicting future tax outcomes than the ETR included in financial reports prepared using GAAP. The study also finds that the market responds to street tax information more than GAAP tax information, suggesting investors find street tax expense more relevant for their decisions.

"Taxes are a significant component of earnings, but the extent to which analysts understand and use tax information is not clear," Beardsley said. "We find that they make adjustments to tax information in such a way that improves the informative nature of GAAP ETRs."

The team first examined 2,221 analyst reports to determine the types of adjustments analysts make to create a street ETR. They found about one-third of street ETRs included a tax-specific adjustment, but surprisingly, more than 90 percent included the tax effects of pre-tax adjustments.

Next, the researchers pulled data for 14,627 firm-years from 2003 to 2016 to assess the relative ability of street and GAAP ETRs to predict future tax outcomes. They found street ETRs performed better at predicting future tax outcomes.

"Street earnings metrics such as street ETRs are becoming more and more common, but regulators like the SEC are concerned that non-GAAP metrics can be misleading to readers of financial statements," Beardsley said. "Our study should be informative to regulators and users of financial statements because it provides evidence that street ETRs are useful."

Credit: 
University of Notre Dame

NASA imagery reveals Kujira transitioning into an extratropical cyclone 

image: On Sept. 30 at 0300 UTC (Sept. 29 at 11 p.m. EDT), the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Kujira that showed the storm had transitioned into an extra-tropical cyclone in the Northwestern Pacific Ocean.

Image: 
NASA/NRL

Tropical cyclones can become post-tropical before they dissipate, meaning they can become sub-tropical, extra-tropical or a remnant low-pressure area. NASA's Aqua satellite provided a visible image that showed Typhoon Kujira transitioning into an extra-tropical storm, and the effects of strong wind shear on the system.

What is a Post-tropical Storm? 

A post-tropical storm is a generic term for a former tropical cyclone that no longer possesses sufficient tropical characteristics to be considered a tropical cyclone. Former tropical cyclones that have become fully extratropical, subtropical, or remnant lows are all classes of post-tropical cyclones. However, post-tropical cyclones can continue to bring heavy rains and high winds.

What is an Extra-tropical Storm?

Often, a tropical cyclone will transform into an extra-tropical cyclone as it recurves toward the poles (north or south, depending on the hemisphere the storm is located in). An extra-tropical cyclone is a storm system that primarily gets its energy from the horizontal temperature contrasts that exist in the atmosphere.

Tropical cyclones have their strongest winds near the earth's surface, while extra-tropical cyclones have their strongest winds near the tropopause - about 8 miles (12 km) up. Tropical cyclones, in contrast, typically have little to no temperature differences across the storm at the surface and their winds are derived from the release of energy due to cloud/rain formation from the warm moist air of the tropics.

Visible NASA Imagery Shows the Transition

Visible imagery from NASA's Aqua satellite revealed Kujira's extra-tropical transition under way as the storm appeared asymmetric due to wind shear.

On Sept. 30 at 0300 UTC (Sept. 29 at 11 p.m. EDT), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite provided a visible image of the storm. Kujira's center of circulation was surrounded by wispy clouds, while powerful southwesterly vertical wind shear (outside winds that push against a tropical cyclone) had pushed the bulk of clouds and showers northeast of the center.

Kujira's Final Advisory

At 5 p.m. EDT (2100 UTC) on Sept. 29, the center of Post-Tropical Cyclone Kujira was located near latitude 38.6 degrees north and longitude 159.4 degrees east. That is about 802 nautical miles east of Misawa, Japan. The post-tropical cyclone was moving toward the northeast. As it was transitioning, it weakened from typhoon strength to tropical storm strength. Maximum sustained winds had decreased to near 55 knots (63 mph/102 kph).

In the last bulletin by the Joint Typhoon Warning Center at that time, forecasters noted "Environmental analysis indicates the system has drifted into high vertical wind shear, [greater than 40 knots (46 mph/74 kph) and cold (less than 25 degrees Celsius/77 Fahrenheit) sea surface temperatures] and has entered into the baroclinic zone." Tropical cyclones need sea surface temperatures of at least 26.6 degrees Celsius/80 degrees Fahrenheit to maintain strength.

A baroclinic zone is a region in which a temperature gradient exists on a constant pressure surface. Baroclinic zones are favored areas for strengthening and weakening systems while barotropic systems, on the other hand, do not exhibit significant changes in intensity. In addition, wind shear is characteristic of a baroclinic zone.

Kujira is expected to complete extratropical transition and weaken to a post-tropical depression by the afternoon of September 30, 2020.

NASA Researches Earth from Space

For more than five decades, NASA has used the vantage point of space to understand and explore our home planet, improve lives and safeguard our future. NASA brings together technology, science, and unique global Earth observations to provide societal benefits and strengthen our nation. Advancing knowledge of our home planet contributes directly to America's leadership in space and scientific exploration.

Credit: 
NASA/Goddard Space Flight Center

Drugs aren't typically tested on women. AI could correct that bias

Researchers at Columbia University have developed AwareDX--Analysing Women At Risk for Experiencing Drug toXicity--a machine learning algorithm that identifies and predicts differences in adverse drug effects between men and women by analyzing 50 years' worth of reports in an FDA database. The algorithm, described September 22 in the journal Patterns, automatically corrects for the biases in these data that stem from an overrepresentation of male subjects in clinical research trials.

Though men and women can have different responses to medications--the sleep aid Ambien, for example, metabolizes more slowly in women, causing next-day grogginess--even doctors may not know about these differences because most clinical trial data are biased toward men. This trickles down to affect prescribing guidelines, drug marketing and, ultimately, patients' health.

"Pharma has a history of ignoring complex problems. Traditionally, clinical trials have not even included women in their studies. The old-fashioned way used to be to get a group of healthy guys together to give them the drug, make sure it didn't kill them, and you're off to the races. As a result, we have a lot less information about how women respond to drugs than men," says Nicholas Tatonetti (@nicktatonetti), an associate professor of biomedical informatics at Columbia University and a co-author on the paper. "We haven't had the ability to evaluate these differences before, or even to quantify them."

Tatonetti teamed up with one of his students--Payal Chandak, a senior biomedical informatics major at Columbia University and the other co-author on the paper. Together they developed AwareDX. Because it is a machine learning algorithm, AwareDX can automatically adjust for sex-based biases in a way that would take concerted effort to do manually.

"Machine learning is definitely a buzzword, but essentially the idea is to correct for these biases before you do any other statistical analysis by building a balanced subset of patients with equal parts men and women for each drug," says Chandak.

The algorithm uses data from the FDA Adverse Event Reporting System (FAERS), which contains reports of adverse drug effects from consumers, healthcare providers, and manufacturers all the way back to 1968. AwareDX groups the data into sex-balanced subsets before looking for patterns and trends. To improve the results, the algorithm then repeats the whole process 25 times.
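The balancing step can be sketched as follows; the report format, field names, and counts are invented for illustration, and this is not the published AwareDX code:

```python
import random

def balanced_subset(reports, seed=0):
    """Downsample the overrepresented sex so that, for each drug,
    the subset contains equal numbers of male and female reports.

    reports: list of dicts with 'drug' and 'sex' ('M' or 'F') keys.
    """
    rng = random.Random(seed)
    by_drug = {}
    for r in reports:
        by_drug.setdefault(r["drug"], {"M": [], "F": []})[r["sex"]].append(r)
    subset = []
    for groups in by_drug.values():
        n = min(len(groups["M"]), len(groups["F"]))
        subset += rng.sample(groups["M"], n) + rng.sample(groups["F"], n)
    return subset

# A hypothetical male-skewed drug: 80 male reports, 20 female.
reports = ([{"drug": "simvastatin", "sex": "M"}] * 80 +
           [{"drug": "simvastatin", "sex": "F"}] * 20)

# Repeat the resampling (25 times, as in the paper) so that the
# downstream statistics are aggregated over many balanced subsets.
subsets = [balanced_subset(reports, seed=i) for i in range(25)]
```

Each subset here contains 20 male and 20 female reports, removing the sex imbalance before any adverse-event statistics are computed.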

The researchers compiled the results into a bank of over 20,000 potential sex-specific drug effects, which can then be verified either by looking back at older data or by conducting new studies down the line. Though there is a lot of work left to do, the researchers have already had success verifying the results for several drugs based on previous genetic research.

For example, the ABCB1 gene, which affects how much of a drug is usable by the body and for how long, is known to be more active in men than women. Because of this, the researchers expected to see a greater risk of muscle aches for men taking simvastatin--a cholesterol medication--and a greater risk of slowing heart rate for women taking risperidone--an antipsychotic. AwareDX successfully predicted both of these effects.

"The most exciting thing to me is that not only do we have a database of adverse events that we've developed from this FDA resource, but we've shown that for some of these events, there is preexisting knowledge of genetic differences between men and women," says Chandak. "Using that knowledge, we can actually predict different responses that men and women should have and validate our method against those. That gives us a lot of confidence in the method itself."

By continuing to verify their results, the researchers hope that the insights from AwareDX will help doctors make more informed choices when prescribing drugs, especially to women. "Doctors actually look at adverse effect information specific to the drug they prescribe. So once this information is studied further and corroborated, it's actually going to impact drug prescriptions and people's health," says Tatonetti.

Credit: 
Cell Press

Stellar explosion in Earth's proximity

image: This manganese crust started to grow about 20 million years ago. It grew layer by layer until it was retrieved a few years ago and analyzed in the Maier-Leibnitz-Laboratory at the Technical University of Munich. In layers that are around 2.5 million years old, the researchers found iron-60 and elevated levels of manganese-53. Their occurrence is evidence of a near-Earth supernova 2.5 million years ago.

Image: 
Dominik Koll / TUM

When the brightness of the star Betelgeuse dropped dramatically a few months ago, some observers suspected an impending supernova - a stellar explosion that could also cause damage on Earth. While Betelgeuse has returned to normal, physicists from the Technical University of Munich (TUM) have found evidence of a supernova that exploded near the Earth around 2.5 million years ago.

The life of stars with a mass more than ten times that of our sun ends in a supernova, a colossal stellar explosion. This explosion leads to the formation of iron, manganese and other heavy elements.

In layers of a manganese crust that are around two and a half million years old, a research team led by physicists from the Technical University of Munich has now confirmed the existence of both iron-60 and manganese-53.

"The increased concentrations of manganese-53 can be taken as the 'smoking gun' - the ultimate proof that this supernova really did take place," says first author Dr. Gunther Korschinek.

While a very close supernova could inflict massive harm to life on Earth, this one was far enough away. It only caused a boost in cosmic rays over several thousand years. "However, this can lead to increased cloud formation," says co-author Dr. Thomas Faestermann. "Perhaps there is a link to the Pleistocene epoch, the period of the Ice Ages, which began 2.6 million years ago."

Ultra-trace analysis

Typically, manganese occurs on earth as manganese-55. Manganese-53, on the other hand, usually stems from cosmic dust, like that found in the asteroid belt of our solar system. This dust rains down onto the earth continuously; but only rarely do we perceive larger specks of dust that glow as meteorites.

New sediment layers that accumulate year after year on the sea floor preserve the distribution of the elements in manganese crusts and sediment samples. Using accelerator mass spectrometry, the team of scientists has now detected both iron-60 and increased levels of manganese-53 in layers that were deposited about two and a half million years ago.

"This is investigative ultra-trace analysis," says Korschinek. "We are talking about merely a few atoms here. But accelerator mass spectrometry is so sensitive that it even allows us to calculate from our measurements that the star that exploded must have had around 11 to 25 times the mass of the sun."

The researchers were also able to determine the half-life of manganese-53 from comparisons to other nuclides and the age of the samples. The result: 3.7 million years. To date, only a single other measurement of this half-life has been made worldwide.
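The half-life fixes how much of the deposited manganese-53 survives to be measured today, via the standard decay law N/N0 = 2^(-t/T_half). A quick check with the numbers from the study:

```python
def surviving_fraction(age_myr, half_life_myr):
    """Fraction of a radionuclide remaining after a given time:
    N/N0 = exp(-ln(2) * t / T_half) = 2 ** (-t / T_half)."""
    return 2.0 ** (-age_myr / half_life_myr)

# Manganese-53 deposited ~2.5 million years ago, with the
# half-life of 3.7 million years determined in this study:
frac = surviving_fraction(2.5, 3.7)  # about 0.63
```

So roughly 63% of the manganese-53 deposited by the supernova should still be present in the crust layers, which is what makes the ultra-trace detection feasible at all.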

Credit: 
Technical University of Munich (TUM)

Screen time can change visual perception -- and that's not necessarily bad

BINGHAMTON, NY -- The coronavirus pandemic has shifted many of our interactions online, with Zoom video calls replacing in-person classes, work meetings, conferences and other events. Will all that screen time damage our vision?

Maybe not. It turns out that our visual perception is highly adaptable, according to research from Psychology Professor and Cognitive and Brain Sciences Coordinator Peter Gerhardstein's lab at Binghamton University.

Gerhardstein, Daniel Hipp and Sara Olsen -- his former doctoral students -- will publish "Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes in Orientation Sensitivity in Visual Contour Perception," in an upcoming issue of the academic journal Perception. Hipp, the lead author and main originator of the research, is now at the VA Eastern Colorado Health Care System's Laboratory for Clinical and Translational Research. Olsen, who designed stimuli for the research and aided in the analysis of the results, is now at the University of Minnesota's Department of Psychiatry.

"The finding in the work is that the human perceptual system rapidly adjusts to a substantive alteration in the statistics of the visual world, which, as we show, is what happens when someone is playing video games," Gerhardstein said.

The experiments

The research focuses on a basic element of vision: our perception of orientation in the environment.

Take a walk through the Binghamton University Nature Preserve and look around. Stimuli -- trees, branches, bushes, the path -- are oriented in many different angles. According to an analysis by Hipp, there is a slight predominance of horizontal and then vertical planes -- think of the ground and the trees -- but no shortage of oblique angles.

Then consider the "carpentered world" of a cityscape -- downtown Binghamton, perhaps. The percentage of horizontal and vertical orientations increases dramatically, while the obliques fall away. Buildings, roofs, streets, lampposts: The cityscape is a world of sharp angles, like the corner of a rectangle. The digital world ramps up the predominance of the horizontal and vertical planes, Gerhardstein explained.
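One simple way to quantify such orientation statistics, in the spirit of the analyses described here (a sketch, not the lab's actual pipeline), is a gradient-weighted orientation histogram; the test image below is invented:

```python
import numpy as np

def orientation_histogram(image, bins=4):
    """Histogram of local gradient orientations in a grayscale image,
    weighted by gradient magnitude and normalized to sum to 1.

    Orientations are folded into [0, pi). Note that a vertical edge
    produces a horizontal gradient (angle 0), and vice versa.
    """
    gy, gx = np.gradient(image.astype(float))
    angle = np.mod(np.arctan2(gy, gx), np.pi)
    magnitude = np.hypot(gx, gy)
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, np.pi),
                           weights=magnitude)
    return hist / hist.sum()

# A "carpentered" test image: vertical stripes. All gradients are
# horizontal, so the energy lands in the first orientation bin.
img = np.tile([0.0, 0.0, 1.0, 1.0] * 4, (16, 1))
hist = orientation_histogram(img)
```

Applied to photographs, a histogram like this would show the horizontal/vertical predominance of cityscapes versus the broader spread of oblique angles in natural scenes.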

Research shows that we tend to pay more attention to horizontal and vertical orientations, at least in the lab; in real-world environments, these differences probably aren't noticeable, although they likely still drive behavior. Painters, for example, tend to exaggerate these distinctions in their work, a phenomenon studied by a different research group.

Orientation is a fundamental aspect of how our brain and eyes work together to build the visual world. Interestingly, it's not fixed; our visual system can adapt to changes swiftly, as the group's two experiments show.

The first experiment established a method of eye tracking that doesn't require an overt response, such as touching a screen. In the second, college students played four hours of Minecraft -- one of the most popular computer games in the world -- and viewed visual stimuli before and after the gameplay session. Researchers then used the eye-tracking method from the first experiment to determine the subjects' ability to perceive contours in the oblique and vertical/horizontal orientations.

A single session produced a clearly detectable change. While the screen-less control group showed no changes in their perception, the game-players detected horizontal and vertical orientations more easily. Neither group changed their perception in oblique orientations.

We still don't know how temporary these changes are, although Gerhardstein speculates that the vision of the game-playing research subjects likely returned to normal quickly.

"So, the immediate takeaway is the impressive extent to which the young adult visual system can rapidly adapt to changes in the statistics of the visual environment," he said.

In the next phase of research, Gerhardstein's lab will track the visual development of two groups of children, one assigned to regularly play video games and the other to avoid screen-time, including television. If the current experiment is any indication, there may be no significant differences, at least when it comes to orientation sensitivity. The pandemic has put in-person testing plans on hold, although researchers have given a survey about children's playing habits to local parents and will use the results to design a study.

Adaptive vision

Other research groups that have examined the effects of digital exposure on other aspects of visual perception have concluded that long-term changes do take place, at least some of which are seen as helpful.

Helpful? Like other organisms, humans tend to adapt fully to the environment they experience. The first iPhone came out in 2007 and the first iPad in 2010. Children who are around 10 to 12 years old have grown up with these devices, and will live and operate in a digital world as adults, Gerhardstein pointed out.

"Is it adaptive for them to develop a visual system that is highly sensitive to this particular environment? Many would argue that it is," he said. "I would instead suggest that a highly flexible system that can shift from one perceptual 'set' to another rapidly, so that observers are responding appropriately to the statistics of a digital environment while interacting with digital media, and then shifting to respond appropriately to the statistics of a natural scene or a cityscape, would be most adaptive."

Credit: Binghamton University