Tech

Current trend reversed

image: Artistic impression of the experiment in which Häusler and colleagues first heat one of two quantum-gas clouds and then connect them with a two-dimensional channel, such that they can equilibrate.

Image: 
D. Husmann & S. Häusler, ETH Zurich

When a piece of conducting material is heated up at one of its ends, a voltage difference can build up across the sample, which in turn can be converted into a current. This is the so-called Seebeck effect, the cornerstone of thermoelectricity. In particular, the effect provides a route to creating work out of a temperature difference. Such thermoelectric engines have no moving parts and are therefore convenient power sources in various applications, including propelling NASA's Mars rover Perseverance. The Seebeck effect is interesting for fundamental physics, too, as the magnitude and sign of the induced thermoelectric current are characteristic of the material and indicate how entropy and charge currents are coupled. Writing in Physical Review X, the group of Prof. Tilman Esslinger at the Department of Physics of ETH Zurich now reports on the controlled reversal of such a current by changing the interaction strength among the constituents of a quantum simulator made of extremely cold atoms trapped in shaped laser fields. The capability to induce such a reversal means that the system can be turned from a thermoelectric engine into a cooler.

Which way please?

The experiment, conducted by doctoral researcher Samuel Häusler and co-workers in the Esslinger group, starts with a cloud of fermionic lithium atoms that are cooled to temperatures low enough that quantum effects determine the behaviour of the ensemble. The cloud is then separated into two independent halves of equal atom number. One of them is heated, before the two reservoirs are connected by a two-dimensional channel. The equilibrium state that thus develops is as expected: after a sufficiently long time, the two halves contain equal atom numbers at equal temperatures. More interesting is the transient behaviour. During the equilibration process, the atom number in each reservoir changes, with the atoms ebbing and flowing between them. In which direction and with what amplitude this happens depends on the thermoelectric properties of the system.

Thanks to the exquisite control over the system, the researchers were able to measure the transient behaviours for different interaction strengths and atomic densities inside the channel and compare them to a simple model. In contrast to solid-state systems, where most thermoelectric properties can be measured in simple, well-defined experiments, in these small clouds of atoms the parameters have to be inferred from fundamental quantities such as the atom density. Finding a procedure that properly extracts the thermoelectric quantities over a wide range of parameters was a key point of the work.

The team found that the current direction results from a competition between two effects (see the figure). On the one hand (left), the thermodynamic properties of the reservoirs favour an increase of the atom number in the hot reservoir, to equilibrate the chemical potentials of the two halves. On the other hand (right), the properties of the channel typically make the transport of hot, energetic particles easier -- because they have a large number of possible pathways (or modes) available to them -- leading to an increase of the atom number in the cold reservoir.
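
In more quantitative terms, this competition can be sketched with the standard linear-response relation for two-terminal transport. The form below is a generic textbook expression given for orientation, not an equation reproduced from the paper; G denotes the particle conductance of the channel, alpha_c a Seebeck-like thermoelectric coefficient, and Delta mu, Delta T the chemical-potential and temperature differences between the reservoirs.

```latex
% Generic linear-response sketch (assumed form, for orientation only):
% particle current I_N driven by chemical-potential and temperature biases.
\begin{equation}
  I_N \;=\; -\,G\,\bigl(\Delta\mu \;+\; \alpha_c\,\Delta T\bigr)
\end{equation}
```

Read this way, the two competing effects described above appear as competing contributions to the coefficient alpha_c, whose sign sets the direction of the initial atom current when only a temperature bias is applied.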

A superfluid traffic regulator

With a non-interacting gas, it is possible to compute the dominating trend between the two competing effects once the precise shape of the atom cloud is known and taken into account. In the system of Häusler et al. this can be done very accurately. Both in the computation and in the measurements, the initial atom current flows from the hot to the cold reservoir and is stronger for low atomic densities in the channel. When the interactions are tuned up to the so-called unitary regime, the behaviour of the system becomes considerably more difficult to predict. The computation becomes intractable without wide-ranging approximations, due to the strong correlations that build up in the gas.

In this regime, the quantum simulation device of the ETH researchers showed that for high-enough mean temperature and low atom density in the channel, the current also flows from the hot to the cold reservoir. However, it can be reversed when the channel density is increased using an attractive gate potential. Above a certain density threshold, the atoms in the channel undergo a phase transition where they form pairs showing superfluid behaviour. This superfluid region in the channel limits the transport of unpaired, energetic particles, favouring the transport from the cold to the hot reservoir and hence the reversal of the thermoelectric current.

Towards better thermoelectric materials thanks to interactions

Understanding the properties of matter through thermoelectric measurement improves the fundamental understanding of interacting quantum systems. Equally important is to identify new ways to design well-performing thermoelectric materials that could efficiently transform small heat differences into work or, if used in reverse mode, act as a cooling device (known as a Peltier cooler).

The efficiency of a thermoelectric material is characterized by the thermoelectric figure of merit. Häusler et al. have measured a strong enhancement of this figure of merit when cranking up the interactions. While this enhancement cannot be translated directly into materials science, the excellent cooling capability could already be used to reach lower temperatures for atomic gases, which in turn might enable a broad range of novel fundamental experiments in quantum science.

Credit: 
ETH Zurich Department of Physics

Two-in-one: Wide-angle monitoring meets high-resolution capture in new camera platform

image: Researchers from Shibaura Institute of Technology, Japan, design a dual-camera platform that employs an omnidirectional camera for target detection and a separate camera for high-resolution capture, and report an overall improved performance, opening doors to potential applications in security systems.

Image: 
Shibaura Institute of Technology, Japan

If you're a fan of spy movies, you've probably come across scenes where intelligence agents try to identify or detect a perpetrator using some sophisticated image enhancement technology on surveillance camera footage. While the idea behind surveillance cameras and object detection is the same in real life, unlike in the movies there is often a trade-off between a camera's field-of-view and its resolution.

Surveillance cameras are typically required to have a wide field-of-view to make the detection of a threat more likely. As a result, omnidirectional cameras offering a 360-degree capture range have become a popular choice: they leave no blind spots and are cheap to install. However, recent studies on object recognition in omnidirectional cameras show that distant objects captured by these cameras have rather poor resolution, making their identification difficult. While increasing the resolution is an obvious solution, the minimum resolution required, according to one study, is 4K (3840 × 2160 pixels), which translates to enormous bitrate requirements and a need for efficient image compression.
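
A back-of-the-envelope estimate shows why 4K capture strains bandwidth. The sketch below uses assumed values for frame rate and bit depth (they are illustrative, not figures from the study):

```python
# Rough, uncompressed bitrate estimate for a 4K video stream.
# Assumptions (illustrative only): 30 frames per second, 24 bits per pixel (8-bit RGB).
width, height = 3840, 2160
bits_per_pixel = 24
fps = 30

raw_bits_per_second = width * height * bits_per_pixel * fps
print(f"Raw bitrate: {raw_bits_per_second / 1e9:.1f} Gbit/s")  # ~6.0 Gbit/s before compression
```

Even with aggressive compression, continuously streaming and processing footage at this scale is demanding, which motivates the hybrid approach described below.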

Moreover, 3D omnidirectional images often cannot be processed in raw form due to lens distortion effects and must be projected onto 2D first. "Continuous processing under high computational loads incurred by tasks such as moving object detection combined with converting a 360-degree video at 4K or higher resolutions into 2D images is simply infeasible in terms of real-life performance and installation costs," says Dr. Chinthaka Premachandra from Shibaura Institute of Technology (SIT), Japan, who researches image processing.

Addressing this issue in his latest study published in IEEE Sensors Journal, Dr. Premachandra, along with his colleague Masaya Tamaki from SIT, considered a system in which an omnidirectional camera would be used to locate a region of interest while a separate camera captures a high-resolution image of it, allowing highly accurate object identification without incurring large computation costs. Accordingly, they constructed a hybrid camera platform consisting of an omnidirectional camera and pan-tilt (PT) cameras, each with a 180-degree field-of-view, placed on either side of it. The omnidirectional camera itself comprised two fisheye lenses sandwiching the camera body, with each lens covering a 180-degree capture range.

The researchers used Raspberry Pi Camera Module v2.1 units as the PT cameras, each mounted on a pan-tilt module and connected to a Raspberry Pi 3 Model B. Finally, they connected the whole system (the omnidirectional camera, the PT cameras and the Raspberry Pi) to a personal computer for overall control.

The operational flow was as follows: the researchers first processed an omnidirectional image to extract a target region, following which its coordinate information was converted into angle information (pan and tilt angles) and subsequently transferred to the Raspberry Pi. The Raspberry Pi, in turn, controlled each PT camera based on this information and determined whether a complementary image was to be taken.
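
A minimal sketch of the coordinate-to-angle step is shown below. It assumes the omnidirectional image has already been unwrapped into an equirectangular 2D panorama; the function name and the mapping are illustrative placeholders, not the authors' implementation.

```python
def pixel_to_pan_tilt(x, y, image_width, image_height):
    """Convert a target pixel in an equirectangular panorama to pan/tilt angles.

    Assumes the panorama spans 360 degrees horizontally and 180 degrees vertically,
    with the image centre corresponding to pan = 0, tilt = 0.
    """
    pan = (x / image_width - 0.5) * 360.0    # degrees, -180 .. +180
    tilt = (0.5 - y / image_height) * 180.0  # degrees, -90 .. +90
    return pan, tilt

# Example: a target detected at pixel (2880, 540) in a 3840 x 1920 panorama
print(pixel_to_pan_tilt(2880, 540, 3840, 1920))  # -> (90.0, 39.375)
```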

The researchers performed four main types of experiments to demonstrate four different aspects of the camera platform's performance, plus separate experiments to verify the image-capturing performance for different target object locations.

While they note that a potential issue could arise when capturing moving objects, whose complementary images could be shifted owing to the time delay in image acquisition, they also propose a countermeasure: introducing a Kalman filtering technique to predict the future coordinates of the object when capturing images.
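
A constant-velocity Kalman filter of the kind the authors mention could look roughly like the sketch below. This is a generic textbook formulation with illustrative noise parameters and an assumed frame rate, not code from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over image coordinates (x, y).
# State vector: [x, y, vx, vy]; noise covariances are illustrative.
dt = 1.0 / 30.0                                   # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)        # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)         # only position is observed
Q = np.eye(4) * 1e-2                              # process noise (assumed)
R = np.eye(2) * 1.0                               # measurement noise (assumed)

x = np.zeros(4)                                   # initial state
P = np.eye(4) * 10.0                              # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a new position measurement z = [px, py]."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                            # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

def predict_ahead(x, n_frames):
    """Extrapolate the filtered state n_frames into the future to aim the PT camera."""
    x_future = x.copy()
    for _ in range(n_frames):
        x_future = F @ x_future
    return x_future[:2]                           # predicted (x, y) position

# Example usage with a short stream of detected positions:
for z in [np.array([100.0, 50.0]), np.array([103.0, 52.0]), np.array([106.0, 54.0])]:
    x, P = kalman_step(x, P, z)
print(predict_ahead(x, n_frames=5))               # position estimate 5 frames ahead
```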

"We expect that our camera system will create positive impacts on future applications employing omnidirectional imaging such as robotics, security systems, and monitoring systems," comments Dr. Premachandra excitedly.

Who wouldn't be excited, when one extra camera can make so much difference?

Credit: 
Shibaura Institute of Technology

Orangutan finding highlights need to protect habitat

image: A male orangutan eating non-fruit vegetation instead of the fruit orangutans prefer on the island of Borneo in Southeast Asia.

Image: 
Kristana Parinters Makur/Tuanan Orangutan Research Project

Wild orangutans are known for their ability to survive food shortages, but scientists have made a surprising finding that highlights the need to protect the habitat of these critically endangered primates, which face rapid habitat destruction and threats linked to climate change.

Scientists found that the muscle mass of orangutans on the island of Borneo in Southeast Asia was significantly lower when less fruit was available. That's remarkable because orangutans are thought to be especially good at storing and using fat for energy, according to a Rutgers-led study in the journal Scientific Reports.

The findings highlight that any further disruption of their fruit supply could have dire consequences for their health and survival.

"Conservation plans must consider the availability of fruit in forest patches or corridors that orangutans may need to occupy as deforestation continues across their range," said lead author Caitlin A. O'Connell, a post-doctoral fellow in the lab of senior author Erin R. Vogel, Henry Rutgers Term Chair Professor and an associate professor in the Department of Anthropology and Center for Human Evolutionary Studies in the School of Arts and Sciences at Rutgers University-New Brunswick.

Orangutans weigh up to about 180 pounds and live up to 55 years in the wild. One of our closest living relatives, they are the most solitary of the great apes, spending almost all of their time in trees. Orangutans in Borneo also spend some time on the ground. Deforestation linked to logging, the production of palm oil and paper pulp, and hunting all pose threats to orangutans, whose populations have plummeted in recent decades.

Orangutans also face great challenges in meeting their nutritional needs. With low and unpredictable fruit availability in their Southeast Asian forest habitats, they often struggle to eat enough to avoid calorie deficits and losing weight. Because these animals are critically endangered, researchers need to explore new ways to monitor their health without triggering more stress in them.

Researchers in Vogel's Laboratory for Primate Dietary Ecology and Physiology measured creatinine, a waste product formed when muscle breaks down, in wild orangutan urine to estimate how much muscle the primates had when fruit was scarce versus when it was abundant.

In humans, burning through muscle as the main source of energy marks the third and final phase of starvation, which occurs after stores of body fat are greatly reduced. So, the research team was surprised to find that both males and females of all ages had reduced muscle mass when fruit availability was low compared with when it was high, meaning they had burned through most of their fat reserves and resorted to burning muscle mass.

"Orangutans seem to go through cycles of building fat and possibly muscle mass and then using fat and muscle for energy when preferred fruits are scarce and caloric intake is greatly reduced," Vogel said. "Our team plans to investigate how other non-invasive measures of health vary with muscle mass and how the increasingly severe wildfires on Borneo might contribute to muscle loss and other negative health impacts."

Rutgers co-authors include Andrea L. DiGiorgio, a lecturer at Princeton University and post-doctoral fellow in Vogel's lab; Alexa D. Ugarte, the lab's manager; Rebecca S. A. Brittain, a doctoral student in the lab; and Daniel Naumenko, a former Rutgers undergraduate student who is now a doctoral student at the University of Colorado Boulder. Scientists at New York University and Universitas Nasional in Indonesia contributed to the study.

Credit: 
Rutgers University

Life may have become cellular by using unusual molecules

image: Starting from a liquid alpha hydroxy acid monomer library, drying through heating results in synthesis of polyester polymers, which form a gel-like state. This gel is then rehydrated, resulting in phase separation and formation of a turbid solution. Further microscopy analysis of this turbid solution reveals the existence of membrane-less polyester microdroplets, which have been proposed as relevant compartments on early Earth.

Image: 
Tony Z. Jia

All modern life is composed of cells, from single-celled bacteria to more complex organisms such as humans, which may contain billions or even trillions of cells, but how life came to be cellular remains uncertain. New research led by specially appointed assistant professor Tony Z. Jia at the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology, along with colleagues from around the world (Japan, Malaysia, France, Czech Republic, India and the USA), shows that simple chemical compounds known as hydroxy acids, which were likely common on primitive Earth, spontaneously link together and form structures reminiscent of modern cells when dried from solution, as may have happened on or in ancient beaches or puddles. The resulting structures may have helped scaffold the emergence of biological cellularity, and offer scientists a new avenue for studying early proto-biological evolution and the origins of life itself.

Modern cells are very complex, precisely organized assemblages of millions of molecules oriented in ways which help traffic materials into and out of cells in a highly coordinated fashion. As an analogy, a city is not just a random collection of buildings, streets and stoplights; rather, in an optimized city, the streets are arranged to allow for easy access to the buildings, and traffic flow is controlled to make the entire system function efficiently. Just as cities are the result of a historical and evolutionary process in which primitive roving bands of humans settled down to work together in larger groups, cells are likely the result of similar processes by which simple molecules began to cooperate to form synchronized molecular systems.

How cellularization emerged is a long-standing scientific problem, and scientists are trying to understand how simple molecules can form the boundary structures which could have defined the borders of primitive cells. The boundaries of modern cells are typically composed of lipids, which are themselves composed of molecules that have the unusual property of spontaneously forming bounded structures in water known as vesicles. Vesicles form from simple molecules known as "amphiphiles," a word derived from the Greek for "loving both," reflecting that such molecules have a propensity to self-organize with water as well as with themselves. This molecular dance causes these molecules to orient themselves such that one part preferentially aligns with the water they are dissolved in while another portion tends to align with the same portion of other molecules. This kind of self-organizational phenomenon is observed when groups of people enter elevators: rather than everyone facing in random directions, for various reasons people in elevators tend to all align themselves to face the elevator door. In the experiments investigated by Jia and colleagues, the low-molecular-weight hydroxy acid molecules join together into a new type of polymer (which could be similar in nature to amphiphiles) that forms droplets, rather than the bag-like structures biological lipids form.

Modern cell boundaries, or membranes as they are called, are primarily composed of a few types of amphiphilic molecules, but scientists suspect the property of forming a membrane is a more general property of many types of molecules. Just as modern cities likely adapted roads, buildings and traffic controls to deal with the subsequent problems of handling foot traffic, horse traffic and automobile traffic, primitive cells may have also slowly changed their composition and function to adapt to changes in the way other biological functions evolved. This new work offers insights into the problems primitive emergent biological systems may have had to adapt to.

The types of molecules that help modern cells form their boundaries are only a small subset of the types which could allow for this kind of spontaneous self-assembly behavior. Previously, Jia and colleagues showed that hydroxy acids can be easily joined together to form larger molecules with emergent amphiphilic and self-assembly properties. They show in their new work that the addition of one more, subtly different, hydroxy acid, in this case one bearing a positive electric charge, to the starting pool of reactants can result in new types of polyesters that spontaneously self-assemble into still more unexpected types of cell-like structures and lend them new functions which may help explain the origins of biological cellularity.

The novel structures Jia and coworkers prepared show emergent functions such as the ability to segregate nucleic acids, which are essential for conveying heredity in modern cells, or the ability to emit fluorescent light. That such minor changes in chemical complexity can result in major functional changes is significant. Jia and colleagues suggest that by further increasing the chemical complexity of their experimental system, still further emergent functions could arise among the resulting primitive compartments that could lead to greater understanding of the rise of the first cells.

Jia notes that this work is not merely theoretical, nor relevant only to basic science research. Major COVID vaccines such as those devised by Moderna and Pfizer involve the dispersion of RNA molecules in metabolizable lipid droplets; the systems Jia and coworkers have developed could be similarly biodegradable in vivo, and thus polyester droplets similar to the ones they prepared could be useful for similar drug-delivery applications.

Credit: 
Tokyo Institute of Technology

Earthworms could help reduce antibiotic resistance genes in soil

Earthworms improve the soil by aerating it, breaking down organic matter and mineralizing nutrients. Now, researchers reporting in ACS' Environmental Science & Technology have dug up another possible role: reducing the number and relative abundance of antibiotic-resistance genes (ARGs) in soils from diverse ecosystems. These results imply that earthworms could be a natural, sustainable solution to addressing the global issue of antibiotic resistance, the researchers say.

The overuse of antibiotics in humans and animals has caused ARGs to accumulate in soils, which could contribute to the rise in antibiotic-resistant infections. Earthworms consume tons of soil per year worldwide, and their guts have a unique combination of low-oxygen conditions, neutral pH and native microbial inhabitants that could have an effect on ARGs. However, the role of earthworms in the spread of antibiotic resistance has been controversial. Some studies in controlled settings suggest that their guts are hot spots for ARGs, which they can spread through soil with their movements, while other studies indicate that the earthworms' guts can reduce ARG abundance by destroying host bacteria and mobile genetic elements. To better understand the issue, Yong-Guan Zhu and colleagues wanted to compare the microbiomes and ARGs of earthworm guts with those of soils from diverse ecosystems across China.

The researchers collected earthworms and surrounding soil samples from 28 provinces in China. Then, they analyzed the composition of microbial communities in the worms' guts and the surrounding soil, finding that they differed between guts and soil and also among sites. In addition, the team found a lower number and relative abundance of ARGs in the earthworm guts than in the corresponding soil across all sampling sites. The earthworm guts also had lower levels of bacterial species that commonly host ARGs. These bacteria and their ARGs could be destroyed during digestion, or bacteria that live in the gut could out-compete them, the researchers say. In other experiments, they used controlled environments to show that the number and relative abundance of ARGs were higher in earthworm guts than in their feces, and that the addition of earthworms reduced ARGs in soil samples. These findings suggest that earthworms have the potential to mitigate these genes in soils as a form of natural bioremediation, the researchers say.

The authors acknowledge funding from the National Natural Science Foundation of China and the Marie Skłodowska-Curie Actions Research Fellowship Program.

The paper's abstract will be available on May 12 at 8 a.m. Eastern time here: http://pubs.acs.org/doi/abs/10.1021/acs.est.1c00811

The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS' mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world's scientific knowledge. ACS' main offices are in Washington, D.C., and Columbus, Ohio.

Credit: 
American Chemical Society

Harnessing the hum of fluorescent lights for more efficient computing

The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.

A team led by University of Michigan researchers has developed a material that's at least twice as "magnetostrictive" and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.

Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material's shape and magnetic field are linked--that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.

Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world's computing infrastructure.

Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; University of California, Berkeley; University of Wisconsin; Purdue University and elsewhere.

Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don't require a steady stream of electricity, as today's chips do, they use a fraction of the energy.

"A key to making magnetoelectric devices work is finding materials whose electrical and magnetic properties are linked." Heron said. "And more magnetostriction means that a chip can do the same job with less energy."

Cheaper magnetoelectric devices with a tenfold improvement

Most of today's magnetostrictive materials use rare-earth elements, which are too scarce and costly to be used in the quantities needed for computing devices. But Heron's team has found a way to coax high levels of magnetostriction from inexpensive iron and gallium.

Ordinarily, explains Heron, the magnetostriction of iron-gallium alloy increases as more gallium is added. But those increases level off and eventually begin to fall as the higher amounts of gallium begin to form an ordered atomic structure.

So the research team used a process called low-temperature molecular-beam epitaxy to essentially freeze atoms in place, preventing them from forming an ordered structure as more gallium was added. This way, Heron and his team were able to double the amount of gallium in the material, netting a tenfold increase in magnetostriction compared to unmodified iron-gallium alloys.

"Low-temperature molecular-beam epitaxy is an extremely useful technique--it's a little bit like spray painting with individual atoms," Heron said. "And 'spray painting' the material onto a surface that deforms slightly when a voltage is applied also made it easy to test its magnetostrictive properties."

Researchers are working with Intel's MESO program

The magnetoelectric devices made in the study are several microns in size--large by computing standards. But the researchers are working with Intel to find ways to shrink them to a more useful size that will be compatible with the company's magnetoelectric spin-orbit device (or MESO) program, one goal of which is to push magnetoelectric devices into the mainstream.

"Intel is great at scaling things and at the nuts and bolts of making a technology actually work at the super-small scale of a computer chip," Heron said. "They're very invested in this project and we're meeting with them regularly to get feedback and ideas on how to ramp up this technology to make it useful in the computer chips that they call MESO."

While a device that uses the material is likely decades away, Heron's lab has filed for patent protection through the U-M Office of Technology Transfer.

Credit: 
University of Michigan

Brand new physics of superconducting metals refuted by Lancaster physicists

image: Superconducting circuits find applications in sensing and information processing.

Image: 
Lancaster University

Lancaster scientists have demonstrated that other physicists' recent "discovery" of the field effect in superconductors is nothing but hot electrons after all.

A team of scientists in the Lancaster Physics Department have found new and compelling evidence that the observation of the field effect in superconducting metals by another group can be explained by a simple mechanism involving the injection of high-energy electrons, without the need for novel physics.

Dr Sergey Kafanov, who initiated this experiment, said: "Our results unambiguously refute the claim of the electrostatic field effect made by the other group. This gets us back on the ground and helps maintain the health of the discipline."

The experimental team also includes Ilia Golokolenov, Andrew Guthrie, Yuri Pashkin and Viktor Tsepelin.

Their work is published in the latest issue of Nature Communications.

When certain metals are cooled to a few degrees above absolute zero, their electrical resistance vanishes - a striking physical phenomenon known as superconductivity. Many metals, including vanadium, which was used in the experiment, are known to exhibit superconductivity at sufficiently low temperatures.

For decades it was thought that the exceptionally low electrical resistance of superconductors should make them practically impervious to static electric fields, owing to the way the charge carriers can easily arrange themselves to compensate for any external field.

It therefore came as a shock to the physics community when a number of recent publications claimed that sufficiently strong electrostatic fields could affect superconductors in nanoscale structures - and attempted to explain this new effect with corresponding new physics. A related effect is well known in semiconductors and underpins the entire semiconductor industry.

The Lancaster team embedded a similar nanoscale device into a microwave cavity, allowing them to study the alleged electrostatic phenomenon at much shorter timescales than previously investigated. At short timescales, the team could see a clear increase in the noise and energy loss in the cavity - the properties strongly associated with the device temperature. They propose that at intense electric fields, high-energy electrons can "jump" into the superconductor, raising the temperature and therefore increasing the dissipation.

This simple phenomenon can concisely explain the origin of the "electrostatic field effect" in nanoscale structures, without any new physics.

Credit: 
Lancaster University

Pandemic screen time tops 6 hours a day for some kindergartners

COLUMBUS, Ohio - Kindergartners from low-income families spent more than six hours a day in front of screens during two early months of the COVID-19 pandemic, a small Ohio study suggests.

That is nearly double the screen time found before the pandemic in similar children, according to other research.

Caregivers from low-income households may have faced more difficulties than those from more advantaged families in managing the time their children spent watching TV and using computers, phones and tablets when child care was shut down, according to the researchers.

Still, the results are concerning, said Rebecca Dore, lead author of the study and senior research associate at The Ohio State University's Crane Center for Early Childhood Research and Policy.

"We found a high level of media use compared to what many experts think is appropriate for this age group," Dore said.

"Some of that time spent using media was positive: watching educational videos and connecting with friends and family. But the amount of time they spent is something we should be aware of."

Dore conducted the research with Ohio State colleagues Kelly Purtell, associate professor of human sciences, and Laura Justice, professor of educational studies and executive director of The Crane Center.

The study was published online recently in the Journal of Developmental & Behavioral Pediatrics.

The study involved 151 low-income caregivers of kindergartners in Ohio who completed online questionnaires between May 1 and June 30, 2020, as part of a larger study.

Caregivers responded to 12 questions assessing children's media use on the most recent weekday and weekend day. Media use included any kind of video, including television, movies or short clips on any electronic device, and using apps or games on any type of device.

Results showed that children averaged 6.6 hours a day of media use. Contrary to previous research, weekday use (6.8 hours) was higher than weekend media use (5.8 hours).

"That suggests parents might have been using media as a substitute for the time their children would have been spending in some type of child care that was closed because of the pandemic," Dore said.

"Increased screen time may be particularly concerning for children from low-income households who had higher levels even before the pandemic: over 3 and a half hours per day compared with less than 2 hours for children from high-income homes."

Remote schooling didn't seem to be the main reason driving the results. Findings showed 84% of children had direct contact with their teachers once a week or less, with 53% reporting no direct contact.

Still, 61% of caregivers reported their child was using media for learning more than usual, perhaps watching educational TV or using educational apps unrelated to formal schooling. Also, 47% reported increased entertainment use, 45% said there was more use as a way to occupy the child's time, 42% reported increased use for maintaining relationships with remote family and friends, and 34% said their child was spending more time using screens for family bonding.

"Importantly, we saw increases in media use in a lot of areas often condidered positive, such as learning and fostering friendships, suggesting that caregivers may have been using media to supplement children's educational and social experiences at a time when in-person options were not safe," Justice said.

"But these families don't have some of the resources that more advantaged families have to help with children while the parents work or do other things. For low-income families, occupying a child's time may also be very important and necessary at times."

Results showed that children living in families with more kids had higher levels of screen time, potentially reflecting the pressures that caregivers have with larger families, Dore said.

Girls in the study spent more time than boys did using media to connect with family and friends. Dore said caregivers should be encouraged to provide more support to boys in maintaining relationships through technology when they can't meet in person.

It is not clear whether the high levels of media use found in this study would also be found in higher-income families.

"Other reports would suggest children from all backgrounds had higher media use during this time," Dore said.

"But use may be even higher in lower-income families because they are less likely to have flexibility to manage children's activities during work hours or be able to afford other child care options."

Credit: 
Ohio State University

Physicists extract proton mass radius from experimental data

image: Fig. 1. Vector meson near-threshold photoproduction process on the proton target.

Image: 
KOU Wei

Researchers have recently extracted the proton mass radius from the experimental data.

A research group at the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS) presented an analysis of the proton mass radius in Physical Review D on May 11. The proton mass radius is determined to be 0.67 ± 0.03 femtometers, which is notably smaller than the charge radius of the proton.

In the Standard Model, the proton is a composite particle made of quarks and gluons and it has a non-zero size. The radius of the proton is a global and fundamental property of the proton. It is related to the color confinement radius -- a property governed by quantum chromodynamics (QCD).

The radius of the proton is approximately 100,000 times smaller than that of the atom, and the sizes of the quark and gluon are several orders smaller than the proton radius. Scientists use various distributions to describe the shape of the proton, such as charge distribution and mass distribution.

The charge radius of the proton has been precisely measured by scientists via the Lamb shift of muonic hydrogen or high-energy electron-proton elastic scattering, with an average value of 0.8409 ± 0.0004 femtometers provided by the Particle Data Group. Nevertheless, knowledge of the proton's gravitational properties, such as its mass radius, remains very limited.

"According to recent theoretical studies by Dmitri Kharzeev, the proton mass radius is related to the scalar gravitational form factor of the proton," said Dr. WANG Rong, first author of the paper. By investigating the vector meson photoproduction data for omega, phi and J/psi from the SAPHIR (Spectrometer Arrangement for PHoton Induced Reactions) experiment at Bonn University, the LEPS (Laser Electron Photons) experiment at SPring-8 facility, and the GlueX experiment at Jefferson Lab, the researchers determined the scalar gravitational form factor and the proton mass radius.

Meanwhile, Prof. Dmitri Kharzeev, a theoretical physicist at Stony Brook University, obtained a similar result by using GlueX J/psi data. The proton mass radius was estimated to be 0.55 ± 0.03 femtometers.

"Both results might be the first-ever values of the proton mass radius with experimental evidence," said WANG. "The determination of the proton mass radius will improve our understanding of the origins of proton mass and the color confinement mechanism of strong interaction."

A lot of questions still remain. "The smaller mass radius implies that the mass distribution is significantly different from the charge distribution of the proton," said Prof. CHEN Xurong, a researcher at IMP.

Scientists are now trying to get a clearer picture of the proton mass radius and the proton structure. The GlueX experiment at Jefferson Lab will provide more data in the near future. Even more exciting, future electron-ion colliders both in the United States and in China will provide Upsilon vector meson electroproduction data for researchers to better understand these questions.

Credit: 
Chinese Academy of Sciences Headquarters

Observing individual atoms in 3D nanomaterials and their surfaces

image: a. Overall atomic structure of a Pt nanoparticle determined in this study, with SiN substrate represented as black and gray disks. b. Identified facet structure of the Pt nanoparticle, showing all facets. c, d. Iso-surfaces of reconstructed 3D density from the electron tomography, before (c) and after (d) the deep-learning based augmentation, respectively. e, f. Tomographic reconstruction volume intensity and traced atom positions. Each slice represents an atomic layer, and the blue dots indicate the traced 3D atomic positions before (e) and after (f) the deep-learning based augmentation. The grayscale backgrounds are iso-surfaces of 3D density.

Image: 
KAIST

Atoms are the basic building blocks for all materials. To tailor functional properties, it is essential to accurately determine their atomic structures. KAIST researchers observed the 3D atomic structure of a nanoparticle at the atom level via neural network-assisted atomic electron tomography.

Using a platinum nanoparticle as a model system, a research team led by Professor Yongsoo Yang demonstrated that an atomicity-based deep learning approach can reliably identify the 3D surface atomic structure with a precision of 15 picometers (only about 1/3 of a hydrogen atom's radius). The atomic displacement, strain, and facet analysis revealed that the surface atomic structure and strain are related to both the shape of the nanoparticle and the particle-substrate interface. This research was reported in Nature Communications.

Combined with quantum mechanical calculations such as density functional theory, the ability to precisely identify surface atomic structure will serve as a powerful key for understanding catalytic performance and oxidation effect.

"We solved the problem of determining the 3D surface atomic structure of nanomaterials in a reliable manner. It has been difficult to accurately measure the surface atomic structures due to the 'missing wedge problem' in electron tomography, which arises from geometrical limitations, allowing only part of a full tomographic angular range to be measured. We resolved the problem using a deep learning-based approach," explained Professor Yang.

The missing wedge problem results in elongation and ringing artifacts, negatively affecting the accuracy of the atomic structure determined from the tomogram, especially for identifying the surface structures. The missing wedge problem has been the main roadblock for the precise determination of the 3D surface atomic structures of nanomaterials.

The team used atomic electron tomography (AET), which is basically a very high-resolution CT scan for nanomaterials using transmission electron microscopes. AET allows individual atom level 3D atomic structural determination.

"The main idea behind this deep learning-based approach is atomicity--the fact that all matter is composed of atoms. This means that true atomic resolution electron tomogram should only contain sharp 3D atomic potentials convolved with the electron beam profile," said Professor Yang.

"A deep neural network can be trained using simulated tomograms that suffer from missing wedges as inputs, and the ground truth 3D atomic volumes as targets. The trained deep learning network effectively augments the imperfect tomograms and removes the artifacts resulting from the missing wedge problem."

The precision of the determined 3D atomic structure was enhanced by nearly 70% by applying the deep learning-based augmentation. The accuracy of surface atom identification was also significantly improved.

Structure-property relationships of functional nanomaterials, especially the ones that strongly depend on the surface structures, such as catalytic properties for fuel-cell applications, can now be revealed at one of the most fundamental scales: the atomic scale.

Professor Yang concluded, "We would like to fully map out the 3D atomic structure with higher precision and better elemental specificity. And not being limited to atomic structures, we aim to measure the physical, chemical, and functional properties of nanomaterials at the 3D atomic scale by further advancing electron tomography techniques."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

How social media and AI enable companies to track brand reputations in real-time

Researchers from University of Maryland, North Carolina State University, National Taiwan University, Oxford University, King's College London, and Perceptronics Solutions, Inc. published a new paper in the Journal of Marketing that examines how artificial intelligence (AI)-based text analysis of social media can monitor the extent to which brand reputation rises and falls over time.

The study, forthcoming in the Journal of Marketing, is titled "Real-Time Brand Reputation Tracking using Social Media" and is authored by Roland Rust, William Rand, Ming-Hui Huang, Andrew Stephen, Gillian Brooks, and Timur Chabuk.

Organizations' brand reputations can rise and fall based on brand-related events. For example, when Goya CEO Robert Unanue suggested that the 2020 U.S. presidential election was fraudulent, that controversial assertion likely offended a large segment of the population. How can we tell? This research team demonstrates that using artificial intelligence (AI)-based text analysis of social media can monitor the extent to which brand reputation rises and falls over time. What's more, merging this social media monitoring with the Rust-Zeithaml-Lemon customer equity drivers can show exactly which dimensions of brand reputation are changing.

All marketers know that brands are important and that stakeholders' views of the brand are reflective of many different factors. Also, brand reputation may rise or fall over time, due to events that affect the brand. The fact that brand reputation is not constant makes it essential for companies to monitor their brands continuously, to determine whether a brand's reputation is changing, and to evaluate which aspects of the brand are causing these changes. Survey-based approaches exist, but surveying stakeholders on a daily basis is generally too expensive to be practical. Another approach is to infer what is happening to brand reputation by mining social media. Automatic, AI-based text analysis of social media posts is a realistic alternative.

Because Twitter is widely used by people to express opinions about brands and is often monitored by the public, the research team chose it as the platform on which to explore and baseline their results. By analyzing millions of tweets, they demonstrate that their brand reputation tracker accurately reflected major brand events in real time. For example, when it was revealed that Facebook had improperly shared personal information with an outside company (Cambridge Analytica), the brand tracker reflected that right away with a decline in brand reputation. On the positive side, when Google added new features, its brand reputation ratings went up.

Rust says that "It is one thing to know that brand reputation is improving or declining, but another thing entirely to figure out why. To ensure the actionability of our brand reputation tracker, we sorted the tweets according to the Rust-Zeithaml-Lemon customer equity drivers, which have been applied by many Fortune 500 companies. These three drivers, along with their sub-drivers, help managers know where to focus, making the brand tracker managerially relevant and actionable." The three main drivers of customer equity according to this framework are value, brand, and relationship. The value driver considers the rational or objective aspects of the brand, such as price, quality, or convenience. The brand driver considers the emotional or subjective aspects of the brand, such as attitude toward the brand, or perceptions of the brand's ethics. The relationship driver focuses on the aspects of the brand that create switching costs, such as loyalty programs or knowledge of the brand.

Rand explains that "Our brand reputation tracker is unique in that it can reflect the impact of brand events in real-time and connect them in a more granular way to managerially specific drivers of brand reputation. Managers can use it to drive programs that enhance their brands' standing with customers, forging deeper relationships and ultimately delivering more revenues to their bottom line."

Credit: 
American Marketing Association

Defining climate-smart pathways towards tree crop yield intensification

image: The top figure represents expected changes in low-C and high-C land area conversion to oil palm production associated with three different development scenarios designed to match productive capacity with projected demand for Indonesian crude palm oil by 2035. The bottom figure shows the magnitude of Global Warming Potential (GWP) associated with each development scenario.

Image: 
Adapted from Monzon et al. 2021

A global team of researchers recently released the results of a 'data-rich' modeling approach designed to illustrate a range of what-if scenarios for future oil palm plantation development in Indonesia. The study provides new insight into crop production strategies available to an industry facing increasing scrutiny.

Oil palm production is challenged by global and domestic concerns related to how it operates within its tropical rainforest environment, which is highly valued for its contribution to climate change mitigation potential and biodiversity protection. The study sheds new light on the future implications of maintaining business-as-usual versus increased adoption of alternative plantation management strategies.

Published in the March 2021 Issue of Nature Sustainability, Monzon et al. offer three distinct directions of development for Indonesia's plantations over the next two decades. Dr. Thomas Oberthür, African Plant Nutrition Institute (APNI) Director of Business and Partnership Development, and one of the authors with years of engagement in sustainable oil palm intensification, considers this study "a critical milestone for a roadmap enabling concerted and systematic action to responsibly develop the oil palm sector in Southeast Asia and beyond."

As the largest single source of the most-used vegetable oil in the world, Indonesia has seen its crude palm oil (CPO) production increase six-fold over the last two decades. Most notably, this increase has largely come through area expansion, including the conversion of over 10 M ha of tropical forests and peatlands during this same timeframe.

Present-day actions that favour further area expansion as a means of increasing plantation capacity are commonly criticized. Besides the loss of highly valued ecological reserve land, further plantation expansion might steadily encroach on high-C-stock lands that can disproportionately contribute to higher global warming potential (GWP) if unlocked through agricultural development.

The study engages discussion around a "climate-smart" approach to sustainable growth in palm oil production. The approach is not dependent on the size or structure of the plantation, but instead targets the plantations' exploitable yield gap--the difference between actual yields and realistically attainable yields. Analysis of yield gaps across Indonesia's plantations reveals that on a national level, only 62% and 53% of attainable yields are achieved in large and smallholder plantations, respectively.

Historically, this gap exists due to structural limitations faced by plantation managers. Breaking through historical yield trends requires a more intensive approach that managers can feel ill-equipped to undertake for a number of reasons, including a lack of experience, skilled labour, or dependable access to required inputs. Often in permanent cropping systems like oil palm, there is less incentive for managers to adopt experimental practices that take a few years to pay off. Rather, the decision to make a long-term investment in new lands wins out, which essentially delays any innovation on existing lands.

Three Pathways to Palm Oil Production

The study describes three scenarios for agronomic intensification and plantation area expansion based on an approach that combined spatial analysis, crop modelling, and data on prevailing weather, soil productivity, and plantation productivity, age, and size. All projections strive to achieve the country's stated productivity targets of 60 M t CPO by the year 2035.

First, a business-as-usual (BAU) scenario relies on past yield and area expansion trends to meet the stated productivity targets. According to the study, BAU requires an additional 9.2 M ha of new land under oil palm--a significant proportion consisting of high-C peatlands. In terms of global warming potential (GWP), a large net increase of 767 M t CO2e is expected under BAU. The most significant contributor to C emissions is organic matter decomposition caused by the conversion of high-C lands, but the GWP associated with farming these new lands is also a major contributor.

The second intensification (INT) scenario assumes an upward shift in existing plantation productivity. Advances in yield per unit of existing lands--capable of closing the exploitable yield gap--are gained through adoption of best management practices gleaned from new agricultural research and development efforts over the course of the 17-year modelling timeframe. Most notably, no further increase in planted area is required under this scenario. In addition, a 60% decline in GWP is achieved compared to BAU. The study notes that the elimination of the exploitable yield gap requires plantations to collectively (and ambitiously) work to raise the national yield average from 18 to 30.6 t FFB/ha over the next two decades.

Lastly, a third scenario outlines intensification with targeted expansion (INT-TE) through more modest expectations for yield improvement--reducing the exploitable yield gap by one-third--plus area expansion that excludes the high-C peatlands that have such a large impact on GWP under BAU. Compared to BAU, this combined strategy avoids the conversion of over 5 M ha (2.6 M ha of which are high-C stock lands) and reduces GWP by 732 M t CO2e. The more modest degree of yield improvement, to a national yield average of 22.5 t/ha, is noted as a far less challenging threshold for the majority of plantations.

This study has demonstrated a strong win-win case for targeted yield gap closure. Enhanced productive capacity within existing planted areas can change the trajectory of permanent crop development. "So many people from so many different backgrounds are all working together to fine-tune management strategies and put them into practice," explains Dr. Patricio Grassini, Associate Professor of Agronomy at the University of Nebraska-Lincoln. "Robust education and extension efforts will be key to fully exploit the potential for growth."

Expanding the Approach

As a partner in this work, APNI is encouraged by the transferability of this approach to the permanent cropping systems in Africa. For example, cacao production in West Africa drives one of the most significant deforestation pressures in the region.

There is a strong need to develop partnerships in Africa that are focused on taking a closer look at the intensification opportunities of cacao production. Such partnerships will be the critical step to building strategies best suited to identifying the exploitable yield gaps across the region and securing profitable, climate-resilient farms. "The African Plant Nutrition Institute invites partners from the industry, the farming sector, financial and public institutions to join forces for the development of an initiative that incentivizes truly sustainable and responsible intensification of West African cacao lands and thereby helps curb further deforestation," says Oberthür.

Credit: 
African Plant Nutrition Institute

Study finds ghost forest 'tree farts' contribute to greenhouse gas emissions

A new study from North Carolina State University finds that greenhouse gas (GHG) emissions from standing dead trees in coastal wetland forests - colloquially called "tree farts" - need to be accounted for when assessing the environmental impact of so-called "ghost forests."

In the study, researchers compared the quantity and type of GHG emissions from dead tree snags to emissions from the soil. While snags did not release as much as the soils, they did increase GHG emissions of the overall ecosystem by about 25 percent. Researchers say the findings show snags are important for understanding the total environmental impact of the spread of dead trees in coastal wetlands, known as ghost forests, on GHG emissions.

"Even though these standing dead trees are not emitting as much as the soils, they're still emitting something, and they definitely need to be accounted for," said the study's lead author Melinda Martinez, a graduate student in forestry and environmental resources at NC State. "Even the smallest fart counts."

In the study, researchers measured emissions of carbon dioxide, methane and nitrous oxide from dead pine and bald cypress snags in five ghost forests on the Albemarle-Pamlico Peninsula in North Carolina, where researchers have been tracking the spread of ghost forests due to sea-level rise.

"The transition from forest to marsh from these disturbances is happening quickly, and it's leaving behind many dead trees," Martinez said. "We expect these ghost forests will continue to expand as the climate changes."

Using portable gas analyzers, researchers measured gases emitted by snags and from soils in each forest in 2018 and 2019. Overall average emissions from soils were approximately four times higher than average emissions from snags in both years. And while snags did not contribute as much as soils, researchers said they do contribute significantly to emissions.
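
The two figures quoted above are consistent: if soils emit roughly four times as much as snags, then adding snag emissions on top of soil emissions raises the ecosystem total by about a quarter. The snippet below is a quick arithmetic check in arbitrary units, not a calculation from the paper.

```python
soil_emissions = 4.0   # arbitrary units; soils emitted ~4x more than snags on average
snag_emissions = 1.0

increase = snag_emissions / soil_emissions
print(f"Snags add about {increase:.0%} on top of soil emissions")  # ~25%
```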

In addition to finding that soils emit more GHGs than snags, the work lays the foundation for the researchers' ongoing efforts to understand what role snags play in emissions - whether they hold gases in, like corks, or release them, like straws - a question they are continuing to explore.

"We started off this research wondering: Are these snags straws or corks?" said study co-author Marcelo Ard?n, associate professor of forestry and environmental sciences at NC State. "Are they facilitating the release from soils, or are they keeping the gases in? We think that they act as straws, but as a filtered straw. They change those gases, as the gases move through the snags."

Credit: 
North Carolina State University

Backyard chickens, rabbits, soybeans can meet household protein demand

image: In 2020, stores sold out of garden seed, coops and rabbit cages. Meat shortages led many to wonder what to eat for protein when supply chains are disrupted, and some people turned to gathering eggs, raising animals and growing their own food. A team from Michigan Tech and the University of Alaska assessed backyard protein sources.

Image: 
Allison Mills/Michigan Tech

In 2020, stores sold out of garden seed, coops and rabbit cages. Now we have an idea of how much protein people can grow in their backyards.

The 2020 meat shortages led many to wonder what to eat for protein when supply chains are disrupted. Some people turned to gathering eggs, raising animals and growing their own food. A team from Michigan Technological University and the University of Alaska Fairbanks found that the work is well worth it. In a new study published in Sustainability, the researchers looked at how a typical household with a typical backyard can raise chickens, rabbits or soybeans to meet its protein needs.

People in the U.S. eat a lot of protein, and the average person needs 51 grams of protein every day, according to the National Institutes of Health (NIH) Dietary Reference Intakes (DRI). That comes to 18,615 grams each year or, for an average household of 2.6 people, 48,399 grams per year. Americans love burgers, but few people have room to raise a steer next to the garage -- and most city ordinances quake at the mere thought of a rogue cowpie. But small animals are more efficient protein producers and are often allowed within city limits. The average backyard provides plenty of space, typically 800 to 1,000 square meters, or about 8,600 to 10,700 square feet.
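
As a quick sketch of the arithmetic behind those figures (using only the numbers quoted above), the annual household demand works out as follows:

```python
# Annual protein demand, using only the figures quoted in the article.
DAILY_PROTEIN_G = 51      # grams per person per day (NIH DRI baseline)
HOUSEHOLD_SIZE = 2.6      # average people per household

per_person_per_year = DAILY_PROTEIN_G * 365                    # 18,615 g
per_household_per_year = per_person_per_year * HOUSEHOLD_SIZE  # ~48,399 g

print(f"Per person:    {per_person_per_year:,} g/year")
print(f"Per household: {per_household_per_year:,.0f} g/year")
```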

"You don't have to convert your entire backyard into a soybean farm. A little goes a long way," said Joshua Pearce, one of the study co-authors and Michigan Tech's Richard Witte Endowed Professor of Materials Science and Engineering and professor of electrical and computer engineering. "I'm a solar engineer; I look at surface area and think of photovoltaic production. Many people don't do that -- they don't treat their backyards as a resource. In fact, they can be a time and money sink that they have to mow and pour fertilizer on. But we can actually be very self-reliant when we treat our yards as an asset."

Pearce's co-authors are interdisciplinary and include Michigan Tech students Theresa Meyer and Alexis Pascaris, along with David Denkenberger of the University of Alaska. The lab group originally came together to do an agrovoltaics study to assess raising rabbits under solar panels. But when they sought to purchase cages in spring 2020, they discovered animal equipment and home garden supply shortages throughout the country. Like many labs, the group pivoted and refocused their work to address impacts of the pandemic.

They found that using only backyard resources to raise chickens or rabbits offsets household protein consumption by up to 50%. Meeting the full protein demand with meat and eggs required buying grain and raising 52 chickens or 107 rabbits. That's more than most city ordinances allow, of course, and raising a critter is not as simple as plopping down a planter box. While pasture-raised rabbits mow the lawn for you, Pearce says the "real winner is soy." Consuming plant protein directly, instead of feeding it to animals first, is far more efficient. The plant-based protein can provide 80% to 160% of household demand, and when prepared as edamame, soy is like a "high-protein popcorn." The team's economic analyses show that savings are possible -- more so when food prices rise -- but they depend on how people value food quality and personal effort.
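
Reading those animal counts back against the household demand above gives a rough sense of the protein each bird or rabbit would have to supply per year; this is a back-of-envelope inference from the quoted figures, not the study's own per-animal yield data:

```python
# Implied per-animal protein yield, inferred only from the article's numbers.
HOUSEHOLD_DEMAND_G = 48_399   # g protein per household per year (from above)

print(f"Per chicken: {HOUSEHOLD_DEMAND_G / 52:,.0f} g protein/year")   # ~931 g
print(f"Per rabbit:  {HOUSEHOLD_DEMAND_G / 107:,.0f} g protein/year")  # ~452 g
```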

"It does take time. And if you have the time, it's a good investment," Pearce said, pointing to other research on building community with gardens, mental health benefits of being outside and simply a deeper appreciation for home-raised food. "Our study showed that many Americans could participate in distributed food production and help make the U.S. not only more sustainable, but more resilient to supply chain disruptions."

Credit: 
Michigan Technological University

Measuring brain blood flow and activity with light

image: A noninvasive method for measuring brain blood flow with light has been developed by biomedical engineers and neurologists at UC Davis, and used to detect brain activation. The new method, functional interferometric diffusing wave spectroscopy, or fiDWS, promises to be cheaper than existing technology and could be used for assessing brain injuries, or in neuroscience research.

Image: 
Wenjun Zhou, UC Davis

A new, noninvasive method for measuring brain blood flow with light has been developed by biomedical engineers and neurologists at the University of California, Davis, and used to detect brain activation. The new method, functional interferometric diffusing wave spectroscopy, or fiDWS, promises to be cheaper than existing technology and could be used for assessing brain injuries, or in neuroscience research. The work is published May 12 in Science Advances.

"Now we can assess how well the brain regulates blood flow, and even detect brain activation noninvasively in adult humans, using principles similar to functional magnetic resonance imaging (fMRI), but at a fraction of the cost," said Vivek Srinivasan, adjunct associate professor of biomedical engineering at UC Davis and senior author on the study.

The human brain makes up 2% of our body weight but takes 15% to 20% of blood flow from the heart. Measuring cerebral blood flow is important for diagnosing strokes and for predicting secondary damage in subarachnoid hemorrhages or traumatic brain injuries. Doctors who provide neurological intensive care would also like to monitor a patient's recovery by imaging brain blood flow and oxygenation.

Existing technology is expensive and cannot be applied continuously or at the bedside. For example, current techniques to image cerebral blood flow require expensive MRI or computed tomography scanners. There are light-based technologies, such as near-infrared spectroscopy, but these also have drawbacks in accuracy.

The new method takes advantage of the fact that near-infrared light can penetrate through body tissues. If you shine a near-infrared laser on someone's forehead, the light will be scattered many times by tissue, including blood cells. By picking up the fluctuation signal of the light that finds its way back out of the skull and scalp, you can get information about blood flow inside the brain.

Naturally, that signal is extremely weak. Srinivasan and postdoctoral researcher Wenjun Zhou overcame that problem by making use of interferometry: the ability of light waves to superimpose, reinforcing or canceling one another. In particular, through interferometry, a strong light wave can boost a weak light wave by increasing its detected energy.
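
To see why a strong reference helps, consider the detected intensity of two superimposed light fields: the interference cross term scales with the square root of the reference power, so a bright reference beam lifts a very weak sample signal far above what the sample alone would deliver to the detector. A minimal numerical sketch of that scaling (illustrative only, with assumed intensities, not the authors' implementation):

```python
import numpy as np

# Heterodyne-style boost: |E_ref + E_sig|^2 = I_ref + I_sig
#   + 2*sqrt(I_ref * I_sig)*cos(phase).
# The cross term carries the sample information and grows with I_ref.
I_sig = 1e-6   # weak sample intensity (arbitrary units, assumed)
I_ref = 1.0    # strong reference intensity (assumed)

cross_term = 2 * np.sqrt(I_ref * I_sig)
print(f"Sample alone:      {I_sig:.1e}")
print(f"Interference term: {cross_term:.1e} (~{cross_term / I_sig:,.0f}x larger)")
```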

Prefrontal cortex activation

They first split the laser beam into "sample" and "reference" paths. The sample beam goes into the patient's head and the reference beam is routed so that it reconnects with the sample beam before going to the detector. Through interferometry, the stronger reference beam boosts the weak sample signal. This allowed the team to measure the output with the type of light-detecting chip found in digital cameras, instead of expensive photon counting detectors. They then use software to calculate a blood flow index for different locations in the brain.
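
The flow information itself comes from how quickly the detected light decorrelates: moving red blood cells scramble the scattered light, and faster flow means a faster decay of the intensity autocorrelation. The sketch below illustrates that general principle on a synthetic signal; it is not the fiDWS processing pipeline itself:

```python
import numpy as np

def normalized_autocorrelation(trace, max_lag):
    """Normalized autocorrelation of an intensity trace for lags 0..max_lag-1."""
    x = trace - trace.mean()
    var = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - lag] * x[lag:]) / var
                     for lag in range(max_lag)])

# Synthetic fluctuation trace: a shorter correlation time stands in for faster flow.
rng = np.random.default_rng(0)
n_samples, tau_corr = 100_000, 20
kernel = np.exp(-np.arange(5 * tau_corr) / tau_corr)
trace = np.convolve(rng.normal(size=n_samples), kernel, mode="same")

g = normalized_autocorrelation(trace, max_lag=200)
decay_lag = int(np.argmax(g < np.exp(-1)))   # lag where correlation drops to 1/e
print(f"Decorrelation time ~{decay_lag} samples; faster decay -> higher flow index")
```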

Srinivasan and Zhou worked with Dr. Lara Zimmerman, Dr. Ryan Martin and Dr. Bruce Lyeth at the UC Davis Department of Neurological Surgery to test the technology. They found that with this new technology, they could measure blood flow more rapidly and deeper below the surface than with current light-based technology. They could measure pulsating cerebral blood flow and could also detect changes when volunteers were given a mild increase in carbon dioxide.

When volunteers were given a simple math problem, the researchers were able to measure activation of the prefrontal cortex through the forehead.

Credit: 
University of California - Davis