Tech

How will seafarers fare once automated ships take over? Scientists predict the future

image: Disrupting technologies in the shipping industry: How will MASS development affect the maritime workforce in Korea

Image: 
Korea Maritime and Ocean University

Artificial intelligence and automation are changing the world, one industry at a time. Much of what humans can do, machines are learning to do effectively, at lower cost and with fewer errors. The maritime shipping industry is no different. Increasingly automated vessels, known as maritime autonomous surface ships (MASSs), are reducing the need for human input. While this bodes well for labor and fuel costs, it naturally raises a question: what happens to the jobs of seafarers, the chief workforce of the shipping industry, once MASSs take over?

To find out, researchers from Korea used mathematical models and simulations to determine the effect of MASS technology on jobs lost and gained over time. In their study published in Marine Policy, Assistant Professor Sohyun Jo of Korea Maritime and Ocean University, the lead scientist on the study and a former navigation officer, and her colleagues simulated four scenarios depicting different speeds of MASS adoption. The projected outcomes were consistent across scenarios: the number of seafarer jobs decreased, but at least fifty times as many new shore-based jobs were created as seafarer jobs were lost.
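
The article does not spell out the model's parameters, but the scenario logic can be illustrated with a toy projection. In the sketch below, the workforce size and adoption fractions are invented placeholders; only the "at least fifty times as many shore-based jobs" ratio is taken from the findings as reported here.

```python
# Toy scenario projection, NOT the study's model: adoption fractions and the
# baseline workforce are hypothetical placeholders for illustration only.
SCENARIOS = {            # assumed share of the fleet that becomes autonomous
    "slow": 0.10,
    "moderate": 0.25,
    "fast": 0.50,
    "very fast": 0.75,
}

SEAFARER_JOBS_TODAY = 60_000          # hypothetical national seafarer workforce
SHORE_JOBS_PER_SEAFARER_LOST = 50     # lower bound reported in the article

for name, adoption in SCENARIOS.items():
    seafarer_jobs_lost = SEAFARER_JOBS_TODAY * adoption
    shore_jobs_created = seafarer_jobs_lost * SHORE_JOBS_PER_SEAFARER_LOST
    net_change = shore_jobs_created - seafarer_jobs_lost
    print(f"{name:>9}: -{seafarer_jobs_lost:,.0f} seafarer jobs, "
          f"+{shore_jobs_created:,.0f} shore jobs (net {net_change:+,.0f})")
```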

These findings are encouraging, but not the endpoint, believes Dr. Jo. "This indicates an overall increase in the number of jobs, but we nevertheless need to be prepared; specific and dynamic education, training, and human resource development policies for skills development should be introduced," she says. Other countries that supply manpower to the maritime industry can benefit from this study by developing the "political willingness and technical ability" to adapt to the changing employment landscape. Developing technology and training internationally competitive human resources in a timely manner are also essential.

"Moreover, to ensure that the marine industry grows sustainably in a new business ecosystem, preemptive efforts to create new business opportunities incorporating ICT technologies are needed," Dr. Jo suggests.

Technology is changing the marine industry for the better, and people can grow with it by finding their place in this new automated world.

Credit: 
National Korea Maritime and Ocean University

Turbulence model could enhance rotorcraft, munitions performance

image: A new modeling approach allows engineers to simulate an entire vortex collision without needing to do extensive data processing on a supercomputer.

Image: 
Purdue University/Carlo Scalo

RESEARCH TRIANGLE PARK, N.C. -- Design of Army aerial vehicles and weapon systems relies on the ability to predict aerodynamic behavior, often aided by advanced computer simulations of the flow of air over the body. High-fidelity simulations assist engineers in maximizing how much load a rotorcraft can lift or how far a missile can fly, but these simulations aren't cheap.

The simulations that designers currently use require extensive data processing on supercomputers and capture only a portion of vortex collision events - which can cause significant performance degradation, from loss of lift on a rotor to complete loss of control of a munition. A new turbulence model could change that.

The Army Research Office, an element of the U.S. Army Combat Capabilities Development Command (now known as DEVCOM) Army Research Laboratory, funded researchers at Purdue University to advance a turbulence model known as Coherent-vorticity-Preserving Large-Eddy Simulation (CvP LES). Published in the Journal of Fluid Mechanics, the new methodology simulates the entire process of a vortex collision event up to 100 times faster than current state-of-the-art simulation techniques.

"The thing that's really clever about Purdue's approach is that it uses information about the flow physics to decide the best tactic for computing the flow physics," said Dr. Matthew Munson, Program Manager for Fluid Dynamics at ARO. "There is enormous potential for this to have a real impact on the design of vehicle platforms and weapons systems that will allow our Soldiers to successfully accomplish their missions."

The fluid dynamics of aircraft turbulence are complex, and simulating them accurately on a computer is extremely difficult. Prof. Carlo Scalo's group has taken a leap forward by modeling the collision of vortices in two ways: with direct numerical simulation and with large-eddy simulation. Engineers can now use the model to design better aircraft without having to wait months for supercomputer calculations. (Carlo Scalo's Compressible Flow and Acoustics Lab: https://engineering.purdue.edu/~scalo/; Purdue Mechanical Engineering: https://purdue.edu/ME)

The model can be used to simulate vortices over any length of time to best resemble what happens around an aircraft. For instance, as a rotor blade moves through the air, it generates a complex system of vortices that are encountered by the next blade passage. The interaction between the blade and the vortices can lead to vibration, noise, and degraded aerodynamic performance. Understanding these interactions is the first step to modifying designs to reduce their impact on the vehicle's capabilities.

In this study, researchers simulated the collision events of two vortex tubes called trefoil knotted vortices. This interaction shares many features with the vortex interactions often present in Army applications. Simulating the evolution of the collision requires extremely fine resolution, substantially increasing the computational cost.

The methodology relies on clever techniques that balance cost and accuracy. It is capable of rapidly detecting regions of the flow characterized by fine turbulent scales and then determining, on the fly, the appropriate numerical scheme and turbulence model to apply locally. This also allows computational power to be applied only where it is most needed, achieving a solution with the highest possible fidelity for a given budgeted amount of computational resources.
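
The article describes this on-the-fly selection only in general terms; a minimal sketch of the idea, with a hypothetical sensor field and made-up thresholds (not the CvP-LES algorithm itself), might look like this:

```python
# Illustrative sketch of sensor-driven, per-region scheme selection.
# The "sensor" (local vorticity magnitude) and the thresholds are hypothetical.
import numpy as np

def select_local_scheme(vorticity, fine=5.0, coarse=1.0):
    """Label each cell with the tactic used to compute the flow there."""
    schemes = np.empty(vorticity.shape, dtype=object)
    schemes[vorticity >= fine] = "fine-scale: high-order scheme + subgrid turbulence model"
    schemes[(vorticity < fine) & (vorticity >= coarse)] = "intermediate: standard LES"
    schemes[vorticity < coarse] = "smooth: low-dissipation, vortex-preserving scheme"
    return schemes

# Toy field in which only a small region is strongly turbulent (a localized "collision").
field = np.random.rand(16, 16, 16) * 2.0
field[6:10, 6:10, 6:10] += 8.0
labels = select_local_scheme(field)
print(np.unique(labels, return_counts=True))
```

This mirrors the idea of concentrating computational effort only where fine turbulent scales appear.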

"When vortices collide, there's a clash that creates a lot of turbulence," said Carlo Scalo, a Purdue associate professor of mechanical engineering with a courtesy appointment in aeronautics and astronautics. "It's very hard computationally to simulate because you have an intense localized event that happens between two structures that look pretty innocent and uneventful until they collide."

Using the Brown supercomputer at Purdue University for mid-size computations and Department of Defense facilities for large-scale computations, the team simulated an entire collision event, capturing the thousands of smaller interactions that take place when these vortices collide.

The team is now working with the Department of Defense to apply the model to large-scale test cases pertaining to Army vehicle and weapons systems.

"If you're able to accurately simulate the thousands of events in flow like those coming from a helicopter blade, you could engineer much more complex systems," Scalo said.

Credit: 
U.S. Army Research Laboratory

Drink and drug risk is lower among optimistic pupils with 'happy' memories, says study

Teenagers with happy childhood memories are likely to drink less, take fewer drugs and enjoy learning, according to research published in the peer-reviewed journal Addiction Research & Theory.

The findings, based on data from nearly 2,000 US high school students, show a link between how pupils feel about the past, present and future and their classroom behavior. This in turn influences their grades and risk of substance misuse, according to the study.

The authors say action is needed now because Covid-19 has left many teenagers struggling with online study, suffering mentally and turning to drink and drugs.

They are calling on teachers - and parents - to help students develop more positive mindsets and become motivated to learn so they are less likely to binge drink or use marijuana.

"School often seems a source of stress and anxiety to students," says John Mark Froiland from Purdue University in Indiana, US.

"This puts them at greater risk of not participating in lessons, getting lower grades and of substance misuse.

"Many teenagers also aren't engaging with online learning during Covid or have lower engagement levels.

"But they're more likely to be enthusiastic learners and not use drink and drugs if teachers take time to build more positive relationships with them. They can help students see that everything they're learning is truly valuable. Parents have a role to play too."

Previous studies have shown that teenagers with a balanced attitude towards their childhood and other periods of their lives are more likely than those with a pessimistic outlook to abstain from drink and drugs and to achieve academically.

The aim of this study was to establish how substance misuse and behaviors towards learning are affected by students' feelings about the past, present and future.

The data was based on assessments and questionnaires completed by 1,961 students at a high school in the San Francisco Bay Area. More than half (53%) of the pupils included in the study were female.

The study authors looked at responses from pupils where they rated how nostalgic they were towards their childhood, current happiness levels in life and how much they look forward to future happiness.

They also analysed the students' marijuana and alcohol habits over the previous 30 days, including binge drinking, and their average academic grades. They analysed motivation levels and classroom behavior, such as how much the teenagers paid attention and listened.

The researchers used statistical techniques to assess the associations between these different factors and to establish the key predictors of alcohol and marijuana misuse.
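
The article does not detail those techniques; one common way to probe such associations is a two-stage regression, sketched below with synthetic data. All variable names, coefficients, and the data itself are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: synthetic data wired so that positive time attitudes
# raise engagement and engagement lowers the chance of binge drinking.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1961  # same order of magnitude as the number of surveyed students

df = pd.DataFrame({
    "past_positive": rng.normal(0, 1, n),        # nostalgia toward childhood
    "present_happiness": rng.normal(0, 1, n),
    "future_expectation": rng.normal(0, 1, n),
})
df["engagement"] = (0.5 * df.past_positive + 0.3 * df.present_happiness
                    + 0.3 * df.future_expectation + rng.normal(0, 1, n))
p_binge = 1.0 / (1.0 + np.exp(0.8 * df.engagement - 0.5))   # higher engagement -> lower risk
df["binge_drinking"] = rng.binomial(1, p_binge)

# Stage 1: do time attitudes predict behavioral engagement?
engagement_model = smf.ols(
    "engagement ~ past_positive + present_happiness + future_expectation", data=df).fit()
# Stage 2: does engagement, in turn, predict binge drinking?
drinking_model = smf.logit("binge_drinking ~ engagement", data=df).fit()

print(engagement_model.params)
print(drinking_model.params)
```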

In general, the study found that positive attitudes towards the past, present and future put adolescents at lower risk for alcohol use, binge drinking and marijuana use.

The opposite was true for those displaying pessimistic or negative ways of thinking or feeling about their life in the past, now or ahead of them.

The reason, the authors suggest, is that a content and optimistic outlook increases the likelihood that students will be motivated and will engage in a focused way with opportunities to learn.

Other findings include girls having stronger levels of behavioral engagement than boys, and students who drank being most likely to use cannabis.

The study did not examine the long term relationship between positive attitudes, levels of student engagement and their substance misuse. The authors say this is an area for future research.

Credit: 
Taylor & Francis Group

Epilepsy research focused on astrocytes

image: Mariko Onodera and Jan Meyer perform an experiment with potassium-sensitive microelectrodes in the Institute of Neurobiology at HHU.

Image: 
HHU / Institute of Neurobiology

During epileptic seizures, a large number of nerve cells in the brain fire excessively and in synchrony. This hyperactivity may lead to uncontrolled shaking of the body and can involve periods of loss of consciousness. While about two thirds of patients respond to anti-epileptic medication, the remaining third does not respond to medical treatment and is considered drug-resistant. These patients are in urgent need of new therapeutic strategies.

Together with colleagues in Japan, Prof. Dr. Christine Rose and her doctoral student Jan Meyer from the Institute of Neurobiology at HHU have performed a study to address the cellular mechanisms that promote the development of epilepsy. While most studies and anti-epileptic drugs have so far targeted nerve cells (neurons), this research team focused on a class of glial cells known as astrocytes.

Glial cells account for approximately half of all cells in the brain. There are different types of glial cells, which perform different functions. Astrocytes control the local environment and are responsible for the ion balance in the brain, but also play an important role in signal transmission between neurons.

In their recent paper, the researchers show that epileptic discharges lead to a rise in the intracellular pH of astrocytes, that is, to their 'alkalisation'. This change in pH disrupts communication within the astrocyte networks, and the reduced communication between astrocytes appears to exacerbate epileptic activity of neurons.

This finding points towards a potential new target for suppressing epileptogenesis at a very early stage, namely by using drugs to suppress changes in astrocytic pH accompanying neuronal activity.

The researchers were able to confirm this option by showing that animals which were given such drugs suffered less severely from epileptic hyperexcitability than untreated animals.

Prof. Rose said: "This observation is very intriguing. But it still needs to be established whether or not it can be transferred to humans. And it will take a very long time before any potential drug can be developed and be really used in the clinics."

Credit: 
Heinrich-Heine University Duesseldorf

Missing protein helps small cell lung cancer evade immune defenses

image: Small cell lung cancer (SCLC) is a highly metastatic cancer. The liver is one of the common sites of metastases, as seen in this image of mouse liver with metastatic SCLC lesions. SCLC tumors are composed of tightly packed epithelial cells with few immune cells infiltrating inside the tumor.

Image: 
UT Southwestern Medical Center

DALLAS - Jan. 25, 2021 - Small cell lung cancer (SCLC) cells are missing a surface protein that triggers an immune response, allowing them to hide from one of the body's key cancer defenses, a new study led by UT Southwestern researchers suggests. The findings, reported online today in Cancer Research, a journal of the American Association for Cancer Research, could lead to new treatments for SCLC, which has no effective therapies.

Despite decades of study, SCLC - a subset of lung cancer that makes up about 13 percent of lung cancer diagnoses - has a very poor prognosis, with only about 6 percent of patients surviving five years after diagnosis. For the past 30 years, this disease has been treated with a combination of chemotherapies. Although most SCLC tumors initially respond to treatment, the majority of patients relapse within a year.

These tumors tend to carry many genetic mutations - often a good predictor of a strong immunotherapy response. However, says Esra Akbay, Ph.D., assistant professor of pathology and a member of the Harold C. Simmons Comprehensive Cancer Center at UTSW, immunotherapy drugs tend to not work well for SCLC patients, typically extending survival by just a few months.

"SCLC's inability to respond to immunotherapy made us think that there might be something about these tumors that allowed them to evolve to hide from the immune system," Akbay says. "We thought there might be defects in how these tumors communicate with immune cells that are supposed to recognize them as cancer."

To investigate this idea, Akbay and her colleagues looked at publicly available cancer datasets from patient tumors and data gathered from human tumor cell lines at UTSW to compare proteins on the surfaces of SCLC cells against non-small cell lung cancer (NSCLC) cells, which tend to respond better to immunotherapy. They quickly noticed that SCLC cells were missing the surface protein NKG2DL, which is known to interact with natural killer (NK) cells. NK cells make up a key part of the innate immune system, an evolutionarily ancient part of the body's natural defense system that continually monitors for foreign invaders and launches attacks against them.

Data from mouse models of SCLC confirmed that the rodent version of NKG2DL was also missing from the surfaces of their cancer cells. When the researchers examined the animals' tumors, they found far fewer immune cells compared with those from mouse models of NSCLC. Additionally, the immune cells in the SCLC tumors weren't activated and therefore were unprepared to fight.

To better understand what role NKG2DL plays in SCLC immunity, Akbay and her colleagues genetically manipulated SCLC cell lines to force them to produce this protein on their surfaces. When these cells were implanted in mice, they grew into smaller tumors and were less likely to spread. These tumors had a significantly higher population of immune cells than SCLC tumors that didn't express NKG2DL, and far more of the immune cells in tumors with NKG2DL were activated and ready to fight.

Akbay explains that some chemotherapy drugs can induce surface expression of NKG2DL; however, when she and her colleagues dosed SCLC cell lines with these medicines, they didn't prompt the cells to make this protein. Further investigation showed that the gene for NKG2DL wasn't mutated, suggesting that this protein was missing due to a problem with turning the gene on, rather than a faulty gene. Sure enough, further experiments showed that in SCLC cells, the gene responsible for making NKG2DL is hidden behind tightly coiled DNA, making it inaccessible to the cellular machinery that reads the gene and produces the protein. When the researchers dosed animal models of SCLC with drugs called histone deacetylase (HDAC) inhibitors, which loosen DNA coils, the SCLC cells began expressing NKG2DL on their surfaces, translating into significantly smaller tumors that had more activated immune cells.

Turning again to a public cancer dataset, the researchers saw that neuroblastoma - one of the most common childhood cancers - is also typically missing NKG2DL on its cell surfaces. When the researchers dosed neuroblastoma cell lines with HDAC inhibitors, they also began expressing surface NKG2DL.

Together, Akbay says, these findings could lead to new ways to more accurately predict a patient's prognosis and guide better treatment choices for SCLC, neuroblastoma, and potentially other cancers. Patients whose tumor cell surfaces lack NKG2DL may have a more aggressive disease that is unlikely to respond to immunotherapy drugs, she explains. But the hope is that treatment with HDAC inhibitors may spur patients' immune systems to fight these tumors, enhancing immunotherapy effectiveness.

"The more we know about how the immune system interacts with cancer," Akbay says, "the more we can take advantage of the body's inherent defense system to fight this disease."

Credit: 
UT Southwestern Medical Center

Simulating 800,000 years of California earthquake history to pinpoint risks

image: A randomly selected 3,000-year segment of the physics-based simulated catalog of earthquakes in California, created on Frontera.

Image: 
Kevin Milner, University of Southern California

Massive earthquakes are, fortunately, rare events. But that scarcity of information blinds us in some ways to their risks, especially when it comes to determining the risk for a specific location or structure.

"We haven't observed most of the possible events that could cause large damage," explained Kevin Milner, a computer scientist and seismology researcher at the Southern California Earthquake Center (SCEC) at the University of Southern California. "Using Southern California as an example, we haven't had a truly big earthquake since 1857 -- that was the last time the southern San Andreas broke into a massive magnitude 7.9 earthquake. A San Andreas earthquake could impact a much larger area than the 1994 Northridge earthquake, and other large earthquakes can occur too. That's what we're worried about."

The traditional way of getting around this lack of data involves digging trenches to learn more about past ruptures, collating information from lots of earthquakes all around the world and creating a statistical model of hazard, or using supercomputers to simulate a specific earthquake in a specific place with a high degree of fidelity.

However, a new framework for predicting the likelihood and impact of earthquakes over an entire region, developed by a team of researchers associated with SCEC over the past decade, has found a middle ground and perhaps a better way to ascertain risk.

A new study led by Milner and Bruce Shaw of Columbia University, published in the Bulletin of the Seismological Society of America in January 2021, presents results from a prototype Rate-State earthquake simulator, or RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. Their results compare well with historical earthquakes and the results of other methods, and display a realistic distribution of earthquake probabilities.

According to the developers, the new approach improves the ability to pinpoint how big an earthquake might occur in a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

"For the first time, we have a whole pipeline from start to finish where earthquake occurrence and ground-motion simulation are physics-based," Milner said. "It can simulate up to 100,000s of years on a really complicated fault system."

Applying massive computer power to big problems

RSQSim transforms mathematical representations of the geophysical forces at play in earthquakes -- the standard model of how ruptures nucleate and propagate -- into algorithms, and then solves them on some of the most powerful supercomputers on the planet. The computationally-intensive research was enabled over several years by government-sponsored supercomputers at the Texas Advanced Computing Center, including Frontera -- the most powerful system at any university in the world -- Blue Waters at the National Center for Supercomputing Applications, and Summit at the Oak Ridge Leadership Computing Facility.

"One way we might be able to do better in predicting risk is through physics-based modeling, by harnessing the power of systems like Frontera to run simulations," said Milner. "Instead of an empirical statistical distribution, we simulate the occurrence of earthquakes and the propagation of its waves."

"We've made a lot of progress on Frontera in determining what kind of earthquakes we can expect, on which fault, and how often," said Christine Goulet, Executive Director for Applied Science at SCEC, also involved in the work. "We don't prescribe or tell the code when the earthquakes are going to happen. We launch a simulation of hundreds of thousands of years, and just let the code transfer the stress from one fault to another."

The simulations began with the geological topography of California and simulated, over 800,000 virtual years, how stresses form and dissipate as tectonic forces act on the Earth. From these simulations, the framework generated a catalogue -- a record of each simulated earthquake: where it occurred, with what magnitude and attributes, and at what time. The catalog that the SCEC team produced on Frontera and Blue Waters was among the largest ever made, Goulet said. The outputs of RSQSim were then fed into CyberShake, which again used computer models of geophysics to predict how much shaking (in terms of ground acceleration, velocity, and duration) would occur as a result of each quake.
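
Conceptually, the two-stage pipeline can be sketched as follows; this is not RSQSim or CyberShake code, and every name and data structure below is a placeholder standing in for the physics-based computations described above.

```python
# Conceptual sketch of the catalog -> ground-motion pipeline (placeholders only).
from dataclasses import dataclass, field

@dataclass
class CatalogEvent:
    year: float                  # virtual year within the simulated history
    fault: str                   # fault segment that ruptured
    magnitude: float
    slip_time_history: list = field(default_factory=list)  # where the rupture occurred and how it grew

def simulate_catalog(years: int) -> list:
    """Stage 1 (RSQSim-like): evolve stresses on the fault system and record events."""
    events = []
    # ... physics-based nucleation, rupture propagation and stress transfer would go here ...
    return events

def ground_motion(event: CatalogEvent, site: str) -> dict:
    """Stage 2 (CyberShake-like): estimate shaking at one site for one catalog event."""
    # ... wave-propagation modeling would go here ...
    return {"site": site, "peak_acceleration": None, "duration": None}

catalog = simulate_catalog(years=800_000)
site_hazard_inputs = [ground_motion(ev, site="San Bernardino") for ev in catalog]
```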

"The framework outputs a full slip-time history: where a rupture occurs and how it grew," Milner explained. "We found it produces realistic ground motions, which tells us that the physics implemented in the model is working as intended." They have more work planned for validation of the results, which is critical before acceptance for design applications.

The researchers found that the RSQSim framework produces rich, variable earthquakes overall - a sign it is producing reasonable results - while also generating repeatable source and path effects.

"For lots of sites, the shaking hazard goes down, relative to state-of-practice estimates" Milner said. "But for a couple of sites that have special configurations of nearby faults or local geological features, like near San Bernardino, the hazard went up. We are working to better understand these results and to define approaches to verify them."

The work is helping to determine the probability of an earthquake occurring along any of California's hundreds of earthquake-producing faults, the scale of earthquake that could be expected, and how it may trigger other quakes.

Support for the project comes from the U.S. Geological Survey (USGS), National Science Foundation (NSF), and the W.M. Keck Foundation. Frontera is NSF's leadership-class national resource. Compute time on Frontera was provided through a Large-Scale Community Partnership (LSCP) award to SCEC that allows hundreds of U.S. scholars access to the machine to study many aspects of earthquake science. LSCP awards provide extended allocations of up to three years to support long-lived research efforts. SCEC - which was founded in 1991 and has computed on TACC systems for over a decade -- is a premier example of such an effort.

The creation of the catalog required eight days of continuous computing on Frontera and used more than 3,500 processors in parallel. Simulating the ground shaking at 10 sites across California required a comparable amount of computing on Summit, the second fastest supercomputer in the world.

"Adoption by the broader community will be understandably slow," said Milner. "Because such results will impact safety, it is part of our due diligence to make sure these results are technically defensible by the broader community," added Goulet. But research results such as these are important in order to move beyond generalized building codes that in some cases may be inadequately representing the risk a region face while in other cases being too conservative.

"The hope is that these types of models will help us better characterize seismic hazard so we're spending our resources to build strong, safe, resilient buildings where they are needed the most," Milner said.

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

With new design, stretchable electronics perform better under strain

Our bodies send out hosts of signals - chemicals, electrical pulses, mechanical shifts - that can provide a wealth of information about our health.

But electronic sensors that can detect these signals are often made of brittle, inorganic material that prevents them from stretching and bending on our skin or within our bodies.

Recent technological advances have made stretchable sensors possible, but their changes in shape can affect the data produced, and many sensors cannot collect and process the body's faintest signals.

A new sensor design from the Pritzker School of Molecular Engineering (PME) at the University of Chicago helps solve that problem. By incorporating a patterned material that optimizes strain distribution among transistors, researchers have created stretchable electronics that are less compromised by deformation. They also created several circuit elements with the design, which could lead to even more types of stretchable electronics.

The results were published in the journal Nature Electronics. Asst. Prof. Sihong Wang, who led the research, is already testing his design as a diagnostic tool for amyotrophic lateral sclerosis, a nervous system disease that causes loss of muscle control.

"We want to develop new kinds of electronics that can integrate with the human body," he said. "This new design allows electronics to stretch without compromising data and could ultimately help lead us to an out-of-clinic approach for monitoring our health."

Designing a pattern of stiffness

To design the electronics, the researchers used a patterned strain-distribution concept. When creating the transistor, they used substrates made of an elastomer, an elastic polymer. They varied the density of the elastomer layers, so that some remained softer while others were stiffer yet still elastic. The stiffer layers - termed "elastiff" by the researchers - were used for the active electronic areas.

The result was transistor arrays that had nearly the same electrical performance when they were stretched and bent as when they were undeformed. In fact, they had less than 5 percent performance variation when stretched with up to 100 percent strain.

They also used the concept to design and fabricate other circuit parts, including NOR gates, ring oscillators, and amplifiers. NOR gates are used in digital circuits, while ring oscillators are used in radio-frequency identification (RFID) technology. By making these parts successfully stretchable, the researchers could make even more complex electronics.

The stretchable amplifier they developed is among the first skin-like circuits capable of amplifying weak electrophysiological signals - down to a few millivolts. That's important for sensing the body's weakest signals, like those from muscles.

"Now we can not only collect signals, we can also process and amplify them right on the skin," Wang said. "That's a very important step for the future of electrophysiological sensing, when we can sense signals continuously."

A potential new diagnostic tool

Wang is already collaborating with a physician to test his design as a diagnostic tool for ALS. By measuring signals from muscles, the researchers hope to better diagnose the disease while gaining knowledge about how the disease affects the body.

They also hope to test their design in electronics that can be implanted within the body and create sensors for all kinds of bodily signals.

"With advancing designs, a lot of things that were previously impossible can now be done," Wang said. "We hope to not only help those in need, but also to take health monitoring out of the clinic, so patients can monitor their own signals in their everyday lives."

Credit: 
University of Chicago

Watching decision making in the brain

image: Stanford neuroscientists and engineers used neural implants to track decision making in the brain, in real time.

Image: 
Gil Costa

In the course of deciding whether to keep reading this article, you may change your mind several times. While your final choice will be obvious to an observer - you'll continue to scroll and read, or you'll click on another article - any internal deliberations you had along the way will most likely be inscrutable to anyone but you. That clandestine hesitation is the focus of research, published Jan. 20 in Nature, by Stanford University researchers who study how cognitive deliberations are reflected in neural activity.

These scientists and engineers developed a system that read and decoded the activity of monkeys' brain cells while the animals were asked to identify whether an animation of moving dots was shifting slightly left or right. The system successfully revealed the monkeys' ongoing decision-making process in real time, complete with the ebb and flow of indecision along the way.

"I was just looking at the decoded activity trace on the screen, not knowing which way the dots were moving or what the monkey was doing, and I could tell Sania [Fong], the lab manager, 'He's going to choose right,' seconds before the monkey initiated the movement to report that same choice," recalled Diogo Peixoto, a former postdoctoral scholar in neurobiology and co-lead author of the paper. "I would get it right 80 to 90 percent of the time, and that really cemented that this was working."

In subsequent experiments, the researchers were even able to influence the monkeys' final decisions through subliminal manipulations of the dot motion.

"Fundamentally, much of our cognition is due to ongoing neural activity that is not reflected overtly in behavior, so what's exciting about this research is that we've shown that we can now identify and interpret some of these covert, internal neural states," said study senior author William Newsome, the Harman Family Provostial Professor in the Department of Neurobiology at Stanford University School of Medicine.

"We're opening up a window onto a world of cognition that has been opaque to science until now," added Newsome, who is also the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute.

One decision at a time

Neuroscience studies of decision making have generally involved estimating the average activity of populations of brain cells across hundreds of trials. But this process overlooks the intricacies of a single decision and the fact that every instance of decision making is slightly different: The myriad factors influencing whether you choose to read this article today will differ from those that would affect you if you were to make the same decision tomorrow.

"Cognition is really complex and, when you average across a bunch of trials, you miss important details about how we come to our perceptions and how we make our choices," said Jessica Verhein, MD/PhD student in neuroscience and co-lead author of the paper.

For these experiments, the monkeys were outfitted with a neural implant about the size of a pinky fingernail that reported the activity of 100 to 200 individual neurons every 10 milliseconds as they were shown digital dots parading on a screen. The researchers placed this implant in the dorsal premotor cortex and the primary motor cortex because, in previous research, they found that neural signals from these brain areas convey the animals' decisions and their confidence in those decisions.

Each video of moving dots was unique and lasted less than two seconds, and the monkeys reported their decisions about whether the dots were moving right or left only when prompted - a correct answer given at the correct time earned a juice reward. The monkeys signaled their choice clearly, by pressing a right or left button on the display.

Inside the monkeys' brains, however, the decision process was less obvious. Neurons communicate through rapid bursts of noisy electrical signals, which occur alongside a flurry of other activity in the brain. But Peixoto was able to predict the monkeys' choices easily, in part because the activity measurements he saw were first fed through a signal processing and decoding pipeline based on years of work by the lab of Krishna Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering and a professor, by courtesy, of neurobiology and of bioengineering, and a Howard Hughes Medical Institute Investigator.

Shenoy's team had been using their real-time neural decoding technique for other purposes. "We are always trying to help people with paralysis by reading out their intentions. For example, they can think about how they want to move their arms and then that intention is run through the decoder to move a computer cursor on the screen to type out messages," said Shenoy, who is co-author of the paper. "So, we're constantly measuring neural activity, decoding it millisecond by millisecond, and then rapidly acting on this information accordingly."

In this particular study, instead of predicting the immediate movement of the arm, the researchers wanted to predict the intention about an upcoming choice as reported by an arm movement - which required a new algorithm. Inspired by the work of Roozbeh Kiani, a former postdoctoral scholar in the Newsome lab, Peixoto and colleagues perfected an algorithm that takes in the noisy signals from groups of neurons in the dorsal premotor cortex and the primary motor cortex and reinterprets them as a "decision variable." This variable describes the activity happening in the brain preceding a decision to move.
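
The decoding details are beyond the scope of this article, but the general idea of projecting noisy population activity onto a single "decision variable" can be sketched as follows. The weights, bin size, and smoothing below are hypothetical; this is not the study's algorithm.

```python
# Illustrative sketch: map binned population spiking onto a running decision variable.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_bins = 150, 200                      # ~150 neurons, 10-ms bins over 2 s
spike_counts = rng.poisson(1.0, size=(n_neurons, n_bins)).astype(float)

# Weights mapping each neuron's activity to evidence for "right" (+) vs "left" (-);
# in practice these would be fit from training trials.
weights = rng.normal(0.0, 1.0, size=n_neurons)

def decision_variable(counts, w, smooth_bins=5):
    """Project population activity onto the choice axis and smooth it over time."""
    evidence_per_bin = w @ counts                            # one value per 10-ms bin
    kernel = np.ones(smooth_bins) / smooth_bins
    return np.convolve(evidence_per_bin, kernel, mode="same")

dv = decision_variable(spike_counts, weights)                # the running decision variable
predicted_choice = "right" if dv[-1] > 0 else "left"         # sign of the final value
print(predicted_choice)
```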

"With this algorithm, we can decode the ultimate decision of the of the monkey way before he moves his finger, let alone his arm," said Peixoto.

Three experiments

The researchers speculated that more positive values of the decision variable indicated increased confidence by the monkey that the dots were moving right, whereas more negative values indicated confidence that the dots were shifting left. To test this hypothesis, they conducted two experiments: one where they would halt the test as soon as the decision variable hit a certain threshold and another where they stopped it when the variable seemed to indicate a sharp reversal of the monkey's decision.

During the first experiment, the researchers stopped the tests at five randomly chosen levels and, at the highest positive or negative decision variable levels, the variable predicted the monkey's final decision with about 98 percent accuracy. Predictions in the second experiment, in which the monkey had likely undergone a change of mind, were almost as accurate.

In advance of the third experiment, the researchers checked how many dots they could add during the test before the monkey became distracted by the change in the stimulus. Then, in the experiment, the researchers added dots below the noticeable threshold to see if it would sway the monkey's decision subliminally. And, even though the new dots were very subtle, they did sometimes bias the monkey's choices toward whatever direction they were moving. The influence of the new dots was stronger if they were added early in the trial and at any point where the monkey's decision variable was low - which indicates a weak level of certainty.

"This last experiment, led by Jessie [Verhein], really allowed us to rule out some of the common models of decision making," said Newsome. According to one such model, people and animals make decisions based on the cumulative sum of evidence during a trial. But if this were true, then the bias the researchers introduced with the new dots should have had the same effect no matter when it was introduced. Instead, the results seemed to support an alternative model, which states that if a subject has enough confidence in a decision building in their mind, or has spent too long deliberating, they are less inclined to consider new evidence.

New questions, new opportunities

Already, Shenoy's lab is repeating these experiments with human participants with neural dysfunctions who use these same neural implants. Due to differences between human and nonhuman primate brains, the results could be surprising.

Potential applications of this system beyond the study of decision making include investigations of visual attention, working memory or emotion. The researchers believe that their key technological advance - monitoring and interpreting covert cognitive states through real-time neural recordings - should prove valuable for cognitive neuroscience in general, and they are excited to see how other researchers build on their work.

"The hope is that this research captures some undergraduate's or new graduate student's interest and they get involved in these questions and carry the ball forward for the next 40 years," said Shenoy.

Credit: 
Stanford University

Dairy calves benefit from higher-protein starter feed, Illinois study says

URBANA, Ill. - Dairy producers know early nutrition for young calves has far-reaching impacts, both for the long-term health and productivity of the animals and for farm profitability. With the goal of increasing not just body weight but also lean tissue gain, a new University of Illinois study finds enhanced milk replacer with high crude-protein dry starter feed is the winning combination.

"Calves fed more protein with the starter had less fat in their body weight gain, and more protein was devoted to the development of the gastrointestinal system, compared with the lower starter protein," says James Drackley, professor in the Department of Animal Sciences at Illinois and co-author on the study. "Our results say producers who are feeding calves a more aggressive amount of milk for greater rates of gain should be feeding a higher protein starter along with that."

Producers typically feed milk replacer along with a grain-based starter feed to kick-start development of the rumen ahead of forage consumption. Yet the Journal of Dairy Science study is the first to specifically examine body composition changes, versus simple body weight, in response to milk replacer and high-protein starter feed.

Understanding where the nutrients go in the body makes a big difference.

"If producers aren't providing enough protein in the starter as the calves go through the weaning process, they might be limiting development of the gastrointestinal system, which is needed to provide nutrients for the rest of the body," Drackley says.

Drackley and his co-authors started two-to-three-day-old calves on one of three experimental diets: a low rate of milk replacer + conventional starter (18% crude protein, as-fed basis); a high rate of milk replacer + conventional starter; and a high rate of milk replacer + high crude-protein starter (22% crude protein, as-fed basis). Additional protein in the high-protein starter was provided by soybean meal, compared with conventional starter, which was a mixture of wheat middlings, soybean meal, and corn, among other ingredients. The calves were weaned at six weeks of age, and were harvested at five or 10 weeks to determine body composition.

"After weaning, the weights of the digestive system and liver were greater with the higher protein starter," Drackley says. "It might be part of the reason why a slump in growth is often seen right around the time of weaning when calves are fed a conventional starter. The calves just don't have the developed digestive system to be able to keep things going as they change from the milk diet to the dry feed diet."

He adds that calves fed the higher rate of milk replacer grew more rapidly and had more lean tissue, with less fat.

"The low rate of milk replacer has been fairly standard, historically. It's designed to provide the maintenance needs and a small rate of growth, and to encourage calves to consume the dry feed at an earlier age. But research has supported the use of higher rates, so we're trying to shift the industry towards rates of milk feeding we think are more appropriate," Drackley says. "Now we have good reason to point producers to high-protein starter, as well."

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

COVID-19 cases, deaths in US increase with higher income inequality

image: Tim Liao, a University of Illinois Urbana-Champaign sociology professor, led a study that examined the association between social and economic inequalities in U.S. counties and COVID-19 infections and deaths. Economic inequality has been a focus of his research for more than 15 years.

Image: 
Photo courtesy Tim Liao

CHAMPAIGN, Ill. -- U.S. counties with higher income inequality faced higher rates of COVID-19 infections and deaths in the first 200 days of the pandemic, according to a new study. Counties with higher proportions of Black or Hispanic residents also had higher rates, the study found, reinforcing earlier research showing the disparate effects of the virus on those communities.

The findings, published last week by JAMA Network Open, were based on county-level data for all 50 states and Washington, D.C. Data sources included the Centers for Disease Control and Prevention, USAFacts and the U.S. Census Bureau.

The lead author of the study, Tim Liao, head of the sociology department at the University of Illinois Urbana-Champaign, initiated the study last summer after noticing that economic inequality - a focus of his research for more than 15 years - was getting little attention as a potential factor in how the virus was being experienced.

"We needed actual data to really fully understand the social dimensions of the pandemic," he said. "We knew all along that racial inequality was important, but most of the time people were missing the more complete picture, which includes economic inequality."

Fernando De Maio, a DePaul University sociology professor and the director of research and data use at the American Medical Association's Center for Health Equity, was a co-author of the study.

The researchers' analysis included 3,141 of 3,142 counties in the U.S. with available data, with the remaining county excluded due to incomplete information. The 200 days for which they collected data spanned Jan. 22, 2020, when the first U.S. case was confirmed, to Aug. 8.

Controlling for other variables, the researchers found that a 1.0% increase in a county's Black population corresponded to an average 1.9% increase in infections and a 2.6% increase in mortality due to COVID-19. A 1.0% increase in a county's Hispanic population corresponded to an average 2.4% increase in incidence and a 1.9% increase in mortality.

A 1.0% rise in a county's income inequality, as determined by a research measure called the Gini index, corresponded to an average 2.0% rise in COVID-19 incidence and a 3.0% rise in mortality. The researchers noted that the average Gini index in U.S. counties was 44.5 and ranged from 25.7 to 66.5, based on a 100-point scale.
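
As a rough back-of-the-envelope reading of those figures (not a calculation from the study, and assuming the reported associations are per one-percent relative change and compound multiplicatively), the differences add up quickly across counties:

```python
# Hypothetical illustration: compare two counties whose Gini indices differ by ~10%
# (e.g., 49.0 vs 44.5), holding everything else equal.
incidence_ratio = 1.02 ** 10    # ~2.0% higher incidence per 1.0% rise in inequality
mortality_ratio = 1.03 ** 10    # ~3.0% higher mortality per 1.0% rise in inequality
print(round(incidence_ratio, 2))   # ~1.22, i.e. roughly 22% higher incidence
print(round(mortality_ratio, 2))   # ~1.34, i.e. roughly 34% higher mortality
```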

Among other study results, the researchers found that the rate of virus infection was lower by an average of 32% in counties that were part of states covered by the Medicaid expansion under the Affordable Care Act, though they found no association with mortality rates.

The findings suggested, according to the researchers, that "High levels of income inequality harm population health ... irrespective of racial/ethnic composition."

No matter how they analyzed the data, Liao said, "two things emerged. One is the racial and ethnic dimension, the other is the income inequality dimension. They're always there, always strong."

"Many studies have concluded that COVID-19 has revealed the fault lines of inequality in the United States," the researchers wrote. "This study expands that picture by illustrating how county-level income inequality matters, in itself and through its interaction with racial/ethnic composition, to systematically disadvantage Black and Hispanic communities."

They suggested that income inequality, a measure not typically included in county-level public health surveillance, may need to be considered in identifying the places most affected by the virus.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

3-D printed Biomesh minimizes hernia repair complications

image: 3D-printed Biomesh demonstrating its mechanical strength and flexibility.

Image: 
Drs. C. Shin/G. Acharya/Baylor College of Medicine.

Hernias are one of the most common soft tissue injuries. Hernias form when intra-abdominal content, such as a loop of the intestine, squeezes through weak, defective or injured areas of the abdominal wall.

The condition may lead to serious complications, so hernia repair may be recommended. Repair consists of surgically implanting a prosthetic mesh to support and reinforce the damaged abdominal wall and facilitate the healing process. However, currently used mesh implants are associated with potentially adverse postsurgical complications.

"Although hernia mesh implants are mechanically strong and support abdominal tissue, making the patient feel comfortable initially, it is a common problem that about three days after surgery the implant can drive inflammation that in two to three weeks will affect organs nearby," said Dr. Crystal Shin, assistant professor of surgery at Baylor College of Medicine and lead author of this study looking to find a solution to postsurgical hernia complications.

Mesh implants mostly fail because they promote the adhesion of the intestine, liver or other visceral organs to the mesh. As the adhesions grow, the mesh shrinks and hardens, potentially leading to chronic pain, bowel obstruction, bleeding and poor quality of life. Some patients may require a second surgery to repair the unsuccessful first. "Inflammation is also a serious concern," said Dr. Ghanashyam Acharya, associate professor of surgery at Baylor. "Currently, inflammation is controlled with medication or anti-inflammatory drugs, but these drugs also disturb the healing process because they block the migration of immune cells to the injury site."

"To address these complications, we developed a non-pharmacological approach by designing a novel mesh that, in addition to providing mechanical support to the injury site, also acts as an inflammation modulating system," Shin said.

Opposites attract

"A major innovation to our design is the development of a Biomesh that can reduce inflammation and, as a result, minimize tissue adhesion to the mesh that leads to pain and failure of the surgery," Shin said.

Inflammatory mediators called cytokines appear where the mesh is implanted a few days after the surgery. Some of the main cytokines at the implant site, IL-1β, IL-6 and TNF-α, have a positive surface charge due to the presence of the amino acids lysine and arginine.

"We hypothesized that Biomesh with a negative surface charge would capture the positively charged cytokines, as opposite electrical charges are attracted to each other," Acharya said. "We expected that trapping the cytokines in the mesh would reduce their inflammatory effect and improve hernia repair and the healing process."

To test their new idea, the researchers used a 3-D bioprinter to fabricate Biomesh from a phosphate-crosslinked poly(vinyl alcohol) polymer (X-PVA). Through extensive experimentation, they optimized its mechanical properties so the mesh would withstand maximal abdominal pressure repeatedly, without any deterioration of its mechanical strength over several months. They also showed that their Biomesh did not degrade or lose its elastic properties over time and was not toxic to human cells.

Shin, Acharya and their colleagues have confirmed in the lab that this Biomesh can capture positively charged cytokines. Encouraged by these results, the researchers tested their Biomesh in a rat model of hernia repair, comparing it with a type of mesh extensively used clinically for surgical hernia repair.

Newly designed 3-D printed Biomesh minimizes postsurgical complications of hernia repair in an animal model

The newly designed Biomesh effectively minimized postsurgical complications of hernia repair in an animal model. The researchers examined the Biomesh for four weeks after it was implanted. They found that the newly designed Biomesh had captured about three times the amount of cytokines captured by the commonly used mesh. Cytokines are short-lived in the body. As they degrade, they enable the mesh to capture more cytokines.

Importantly, no visceral tissues had adhered to the newly designed Biomesh, while the level of tissue adhesion was extreme in the case of the commonly used mesh. These results confirmed that the new Biomesh is effective at reducing the effects of the inflammatory response and in preventing visceral adhesions. In addition, the new mesh did not hinder abdominal wall healing after surgical hernia repair in animal models.

"This Biomesh is unique and designed to improve outcomes and reduce acute and long-term complications and symptoms associated with hernia repair. With more than 400,000 hernia repair surgeries conducted every year in the U.S., the new Biomesh would fulfill a major unmet need," Shin said. "There is no such multifunctional composite surgical mesh available, and development of a broadly applicable Biomesh would be a major advancement in the surgical repair of hernia and other soft tissue defects. We are conducting further preclinical studies before our approach can be translated to the clinic. Fabricating the Biomesh is highly reproducible, scalable and modifiable."

"This concept of controlling inflammation through the physicochemical properties of the materials is new. The mesh was originally designed for mechanical strength. We asked ourselves, can we create a new kind of mesh by making use of the physical and chemical properties of materials?" said Acharya. "In the 1950s, Dr. Francis C. Usher at Baylor's Department of Surgery developed the first polypropylene mesh for hernia repair. We have developed a next-generation mesh that not only provides mechanical support but also plays a physiological role of reducing the inflammatory response that causes significant clinical problems." Read the complete study in the journal Advanced Materials.

Credit: 
Baylor College of Medicine

What's in a name? A new class of superconductors

image: Qimiao Si is the Harry C. and Olga K. Wiess Professor of Physics and Astronomy at Rice University and director of the Rice Center for Quantum Materials.

Image: 
Photo by Jeff Fitlow/Rice University

HOUSTON - (Jan. 25, 2021) - A new theory that could explain how unconventional superconductivity arises in a diverse set of compounds might never have happened if physicists Qimiao Si and Emilian Nica had chosen a different name for their 2017 model of orbital-selective superconductivity.

In a study published this month in npj Quantum Materials, Si of Rice University and Nica of Arizona State University argue that unconventional superconductivity in some iron-based and heavy-fermion materials arises from a general phenomenon called "multiorbital singlet pairing."

In superconductors, electrons form pairs and flow without resistance. Physicists cannot fully explain how pairs form in unconventional superconductors, where quantum forces give rise to strange behavior. Heavy-fermion compounds, another class of quantum materials, feature electrons that appear to be thousands of times more massive than ordinary electrons.

Si and Nica proposed the idea of selective pairing within atomic orbitals in 2017 to explain unconventional superconductivity in alkaline iron selenides. The following year, they applied the orbital-selective model to the heavy fermion material in which unconventional superconductivity was first demonstrated in 1979.

They considered naming the model after a related mathematical expression made famous by quantum pioneer Wolfgang Pauli, but opted to call it d+d. The name refers to mathematical wave functions that describe quantum states.

"It's like you have a pair of electrons that dance with each other," said Si, Rice's Harry C. and Olga K. Wiess Professor of Physics and Astronomy. "You can characterize that dance by s- wave, p-wave and d-wave channels, and d+d refers to two different kinds of d-waves that fuse together into one."

In the year after publishing the d+d model, Si gave many lectures about the work and found audience members frequently got the name confused with "d+id," the name of another pairing state that physicists have discussed for more than a quarter century.

"People would approach me after a lecture and say, 'Your theory of d+id is really interesting,' and they meant it as a compliment, but it happened so often it got annoying," said Si, who also directs the Rice Center for Quantum Materials (RCQM).

In mid-2019, Si and Nica met over lunch while visiting Los Alamos National Laboratory, and began sharing stories about the d+d versus d+id confusion.

"That led to a discussion of whether d+d might be connected with d+id in a meaningful way, and we realized it was not a joke," Nica said.

The connection involved d+d pairing states and those made famous by the Nobel Prize-winning discovery of helium-3 superfluidity.

"There are two types of superfluid pairing states of liquid helium-3, one called the B phase and the other the A phase," Nica said. "Empirically, the B phase is similar to our d+d, while the A phase is almost like a d+id."

The analogy got more intriguing when they discussed mathematics. Physicists use matrix calculations to describe quantum pairing states in helium-3, and that is also the case for the d+d model.

"You have a number of different ways of organizing that matrix, and we realized our d+d matrix for the orbital space was like a different form of the d+id matrix that describes helium-3 pairing in spin space," Nica said.

Si said the associations with superfluid helium-3 pairing states have helped him and Nica advance a more complete description of pairing states in both iron-based and heavy-fermion superconductors.

"As Emil and I talked more, we realized the periodic table for superconducting pairing was incomplete," Si said, referring to the chart physicists use to organize superconducting pairing states.

"We use symmetries -- like lattice or spin arrangements, or whether time moving forward versus backward is equivalent, which is time-reversal symmetry -- to organize possible pairing states," he said. "Our revelation was that d+id can be found in the existing list. You can use the periodic table to construct it. But d+d, you cannot. It's beyond the periodic table, because the table doesn't include orbitals."

Si said orbitals are important for describing the behavior of materials like iron-based superconductors and heavy fermions, where "very strong electron-electron correlations play a crucial role."

"Based on our work, the table needs to be expanded to include orbital indices," Si said.

The research was supported by a startup grant from Arizona State University, the Department of Energy (DE-SC0018197), the Welch Foundation (C-1411) and the National Science Foundation (PHY-1607611).

RCQM is a multidisciplinary research effort that leverages the strengths and global partnerships of more than 20 Rice research groups.

Credit: 
Rice University

Climate change increases coastal blue carbon sequestration

image: The spatial distribution of tidal wetlands and the observed C accumulation rate (CAR) points. The right panel indicates the arithmetic average CAR for every 10-degree band of latitude.

Image: 
Science China Press

"Coastal blue carbon (BC)" is a term coined a decade ago to describe the disproportionately large contribution of coastal vegetated ecosystems, including mangrove and saltmarsh tidal wetlands, to global carbon sequestration. The role of BC in climate change mitigation and adaptation has now reached international prominence. Recent studies have reported BC's unique role in mitigating climate change, projected changes in coastal wetland area, carbon stocks in response to historical sea-level fluctuations, and a future roadmap for carbon sequestration studies. However, several questions remain unanswered:

Q1. What is the global extent and spatial distribution of BC systems?
Q2. What factors influence BC burial rates?
Q3. How does climate change impact carbon accumulation in mature BC ecosystems?

In a recent publication in National Science Review, Prof. Wang and Prof. Sanders led an international group that goes beyond recent soil C stock estimates to reveal global tidal wetland C accumulation and predict changes under relative sea-level rise, temperature and precipitation. They used data from literature study sites (n=563) and new observations (n=49) spanning wide latitudinal gradients and 20 countries (Figure 1). They found that global tidal wetlands accumulate ~54 Tg C yr-1, which is ~30% of the organic C buried on the ocean floor (Figure 1). Modelling based on current climatic drivers and projected emissions scenarios revealed an increase of up to ~300 g C m-2 yr-1 in the average global C accumulation rate by 2100 (Figure 2). This rapid increase was found to be driven by sea-level rise in tidal marshes, and by higher temperature and precipitation in mangroves. Their results highlight the feedbacks between climate change and C sequestration in tidal wetlands (Figure 2). The findings show that even though these tidal wetlands occupy only a small fraction of the global ocean area, they make a disproportionately large contribution to marine carbon sequestration.
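As a rough consistency check on the figures above (our arithmetic, not a calculation from the paper): if tidal wetlands bury about 54 Tg C yr-1 and that represents roughly 30% of the organic carbon buried on the ocean floor, the implied total seafloor organic carbon burial is on the order of

    54\ \mathrm{Tg\,C\,yr^{-1}} / 0.30 \;\approx\; 180\ \mathrm{Tg\,C\,yr^{-1}}

which underscores the disproportionate role of these small-area ecosystems described above.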

Credit: 
Science China Press

Boosting the efficiency of carbon capture and conversion systems

image: Dyes are used to reveal the concentration levels of carbon dioxide in the water. On the left side is a gas-attracting material, and the dye shows the carbon dioxide stays concentrated next to the catalyst.

Image: 
Varanasi Research Group

Systems for capturing and converting carbon dioxide from power plant emissions could be important tools for curbing climate change, but most are relatively inefficient and expensive. Now, researchers at MIT have developed a method that could significantly boost the performance of systems that use catalytic surfaces to enhance the rates of carbon-sequestering electrochemical reactions.

Such catalytic systems are an attractive option for carbon capture because they can produce useful, valuable products, such as transportation fuels or chemical feedstocks. This output can help to subsidize the process, offsetting the costs of reducing greenhouse gas emissions.

In these systems, typically a stream of gas containing carbon dioxide passes through water to deliver carbon dioxide for the electrochemical reaction. The movement through water is sluggish, which slows the rate of conversion of the carbon dioxide. The new design ensures that the carbon dioxide stream stays concentrated in the water right next to the catalyst surface. This concentration, the researchers have shown, can nearly double the performance of the system.

The results are described today in the journal Cell Reports Physical Science in a paper by MIT postdoc Sami Khan PhD '19, who is now an assistant professor at Simon Fraser University, along with MIT professors of mechanical engineering Kripa Varanasi and Yang Shao-Horn, and recent graduate Jonathan Hwang PhD '19.

"Carbon dioxide sequestration is the challenge of our times," Varanasi says. There are a number of approaches, including geological sequestration, ocean storage, mineralization, and chemical conversion. When it comes to making useful, saleable products out of this greenhouse gas, electrochemical conversion is particularly promising, but it still needs improvements to become economically viable. "The goal of our work was to understand what's the big bottleneck in this process, and to improve or mitigate that bottleneck," he says.

The bottleneck turned out to involve the delivery of the carbon dioxide to the catalytic surface that promotes the desired chemical transformations, the researchers found. In these electrochemical systems, the stream of carbon dioxide-containing gases is mixed with water, either under pressure or by bubbling it through a container outfitted with electrodes of a catalyst material such as copper. A voltage is then applied to promote chemical reactions producing carbon compounds that can be transformed into fuels or other products.

There are two challenges in such systems: The reaction can proceed so fast that it uses up the supply of carbon dioxide reaching the catalyst more quickly than it can be replenished; and if that happens, a competing reaction -- the splitting of water into hydrogen and oxygen -- can take over and sap much of the energy being put into the reaction.

Previous efforts to optimize these reactions by texturing the catalyst surfaces to increase the surface area for reactions had failed to deliver on their expectations, because the carbon dioxide supply to the surface couldn't keep up with the increased reaction rate, so the reaction switched over to hydrogen production over time.
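To make the bottleneck concrete, the toy calculation below is our own illustrative sketch, not the MIT team's analysis. It treats carbon dioxide delivery to the electrode as one-dimensional diffusion across a thin boundary layer and compares that supply with the carbon dioxide consumed at a given current density; the diffusivity, solubility, boundary-layer thickness and electron count are rough assumed values, chosen only to show how a faster reaction can outrun the diffusive supply.

    # Toy estimate of diffusion-limited CO2 supply to a catalyst surface.
    # Illustrative only: parameter values are rough assumptions, not from the paper.

    F = 96485.0          # Faraday constant, C per mol of electrons
    D_co2 = 1.9e-9       # CO2 diffusivity in water, m^2/s (typical literature value)
    c_bulk = 34.0        # dissolved CO2 under 1 atm CO2, mol/m^3 (rough solubility)
    delta = 50e-6        # assumed diffusion boundary-layer thickness, m
    n_electrons = 2      # electrons per CO2 for a simple product such as CO (assumption)

    # Maximum flux that diffusion can supply (Fick's first law across the layer)
    flux_max = D_co2 * c_bulk / delta            # mol CO2 per m^2 per s
    i_limit = n_electrons * F * flux_max         # A/m^2

    print(f"Diffusion-limited current density ~ {i_limit / 10:.0f} mA/cm^2")

    for i_mA_cm2 in (5, 10, 20, 50):
        demand = (i_mA_cm2 * 10.0) / (n_electrons * F)   # mol CO2 per m^2 per s consumed
        if demand > flux_max:
            status = "CO2-starved: surplus current goes to hydrogen evolution"
        else:
            c_surface = c_bulk - demand * delta / D_co2  # CO2 left at the surface
            status = f"surface CO2 ~ {c_surface:.0f} mol/m^3"
        print(f"{i_mA_cm2:>3} mA/cm^2 -> {status}")

With these rough numbers the crossover falls at a few tens of milliamps per square centimeter; the point is only the shape of the problem: once the applied current outruns the diffusion limit, the surplus goes to the competing hydrogen reaction rather than to carbon dioxide conversion.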

The researchers addressed these problems by placing a gas-attracting surface in close proximity to the catalyst material. This surface is a specially textured, "gasphilic," superhydrophobic material that repels water but allows a smooth layer of gas, called a plastron, to stay close along its surface. It keeps the incoming flow of carbon dioxide right up against the catalyst so that the desired carbon dioxide conversion reactions can be maximized. Using dye-based pH indicators, the researchers were able to visualize carbon dioxide concentration gradients in the test cell and show that the enhanced concentration of carbon dioxide emanates from the plastron.

In a series of lab experiments using this setup, the rate of the carbon conversion reaction nearly doubled. It was also sustained over time, whereas in previous experiments the reaction quickly faded out. The system produced high rates of ethylene, propanol, and ethanol -- a potential automotive fuel. Meanwhile, the competing hydrogen evolution was sharply curtailed. The new work also makes it possible to fine-tune the system toward a desired mix of products; in some applications, optimizing for hydrogen production as a fuel might be the desired result, and that can be done as well.

"The important metric is selectivity," Khan says, referring to the ability to generate valuable compounds that will be produced by a given mix of materials, textures, and voltages, and to adjust the configuration according to the desired output.

By concentrating the carbon dioxide next to the catalyst surface, the new system also produced two new potentially useful carbon compounds, acetone and acetate, that had not previously been detected in any such electrochemical systems at appreciable rates.

In this initial laboratory work, a single strip of the hydrophobic, gas-attracting material was placed next to a single copper electrode, but in future work a practical device might be made using a dense set of interleaved pairs of plates, Varanasi suggests.

Compared to previous work on electrochemical carbon reduction with nanostructure catalysts, Varanasi says, "we significantly outperform them all, because even though it's the same catalyst, it's how we are delivering the carbon dioxide that changes the game."

Credit: 
Massachusetts Institute of Technology

Reactive halogen from domestic coal burning aggravates winter air pollution

image: Coal burning from rural households emits reactive bromine gases and particulate halogens. Daytime sunlight-assisted processes, possibly involving nitrate, activate particulate Br to produce HOBr and BrCl. BrCl is also produced by the reaction of HOBr with particulate Cl during day and night. BrCl is photolyzed to Cl and Br atoms in the daytime. VOCs are oxidized by Cl atoms (mainly on alkanes) and Br atoms (mainly on aldehydes) to produce ozone and secondary aerosols. Moreover, Br atoms significantly accelerate the mercury deposition near the source. The background photo shows the nearby village and the location of the measurement site. (Photo Credit: Chenglong Zhang and Pengfei Liu; RCEES, CAS).

Image: 
Science China Press

Halogen atoms (Cl and Br) strongly influence the atmospheric chemical composition. Beginning in the 1970s, scientists discovered that these atoms were responsible for the depletion of ozone in the stratosphere and of ground-level ozone in the Arctic. In the past decade, there has been emerging recognition that halogen atoms also play important roles in tropospheric chemistry and air quality. However, knowledge of halogen atoms in continental regions is still incomplete.

"In the troposphere, halogen atoms can kick start hydrocarbon oxidation that makes ozone, modify the oxidative capacity, perturb mercury recycling by oxidizing elementary mercury (Hg0) to a highly toxic form (HgII). Moreover, Cl atoms can remove methane, a climate-forcing agent. Most of the previous studies in the continental regions focused on two Cl precursors, ClNO2 and Cl2. However, little is known on the abundance and the role of bromine compounds and other forms of photoliable halogens in the polluted continental troposphere", said Tao Wang, a chair professor at The Hong Kong Polytechnic University (HKPU).

A team of Chinese researchers from HKPU, Fudan University, the Research Center for Eco-Environmental Sciences of the Chinese Academy of Sciences, Shandong University and Shandong Jianzhu University measured a suite of reactive halogen gases and other chemicals in winter 2017 at a polluted rural site in Hebei province, which frequently suffers from severe air pollution in winter. The data were analyzed by the team in collaboration with scientists from Colorado State University in the US, the Institute of Physical Chemistry Rocasolano of CSIC in Spain, and Univ Lyon, Université Claude Bernard Lyon 1, CNRS, IRCELYON in France.

They found surprisingly high concentrations of bromine chloride (BrCl) and other reactive halogen gases. The maximum concentration of BrCl was 10 times higher than values previously measured in the Arctic. "To our knowledge, only one prior study had observed BrCl, in one out of 50 coal-fired power plant plumes sampled in the United States. Apart from that, BrCl had not been reported outside of the polar regions," said Xiang Peng, a graduate student at HKPU. "Accurate measurement of reactive halogens is very challenging," added Weihao Wang, another graduate student at HKPU at the time. "These compounds are present at low concentrations, which requires a sensitive instrument to detect their signals; they are also hard to quantify because of the difficulty of making calibration standards and of reducing potential interferences from other co-existing chemicals in the atmosphere and potential artifacts in the sampling inlet." Nonetheless, the team overcame these challenges via various on-site and post-measurement tests.

The research team found strong evidence that rural coal burning was a major source of the detected reactive halogens by analyzing their relationship with two tracers of coal burning (sulfur dioxide and selenium), the diurnal pattern of air pollutants, and villagers' energy-use practices. They also found an important daytime chemical process, driven partly by photolysis of nitrate, that could convert inert halides into reactive halogen gases (HOBr and BrCl) and thus sustain their high daytime concentrations despite the short lifetime of BrCl under sunlight photodissociation.

The team then built a model containing the most up-to-date halogen gas-phase chemistry and simulated the impact of the observed photolabile BrCl, Cl2, ClNO2 and Br2 on the oxidative capacity that drives the production of pollutants such as ozone and particulates. Their results show that BrCl contributed about 55% of both the bromine (Br) and chlorine (Cl) atoms. The halogen atoms (from BrCl and other photolabile halogens) increased the abundance of "conventional" tropospheric oxidants (OH, HO2, and RO2) by 26-73%, enhanced hydrocarbon oxidation by nearly a factor of two, and increased net ozone production by 55%. "Such a large increase in oxidation could boost the production of secondary organic and inorganic aerosols, which are the major components of haze-causing PM2.5 in northern China. Br atoms from BrCl could also accelerate the production and deposition of the toxic form of mercury near the source regions," added Tao Wang.
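To illustrate the kind of bookkeeping such a model does, the minimal box-model sketch below is our own construction, not the team's model, and every number in it (photolysis frequency, rate constants, assumed BrCl and hydrocarbon levels) is an order-of-magnitude placeholder. It only shows how daytime photolysis of BrCl sustains a standing population of Cl and Br atoms that then attack hydrocarbons; the published model couples many more species and the full oxidant and ozone chemistry to arrive at the enhancements quoted above.

    # Minimal daytime box-model sketch: BrCl photolysis sustaining Cl and Br atoms.
    # Illustrative only -- all values are assumed placeholders, not those used in the study.

    J_brcl   = 7.0e-3    # assumed noon photolysis frequency of BrCl, 1/s
    k_cl_voc = 1.0e-10   # assumed Cl + alkane rate constant, cm^3 molecule^-1 s^-1
    k_br_ald = 1.0e-12   # assumed Br + aldehyde rate constant, cm^3 molecule^-1 s^-1
    voc      = 2.5e11    # assumed alkane burden, molecules/cm^3 (roughly 10 ppb)
    ald      = 1.2e11    # assumed aldehyde burden, molecules/cm^3 (roughly 5 ppb)
    brcl     = 1.5e9     # assumed daytime BrCl level, molecules/cm^3 (roughly 60 ppt)

    # Photolysis of one BrCl releases one Cl atom and one Br atom
    atom_production = J_brcl * brcl                  # molecules cm^-3 s^-1

    # Steady state: photolytic production balanced by loss to hydrocarbons
    cl_ss = atom_production / (k_cl_voc * voc)
    br_ss = atom_production / (k_br_ald * ald)

    print(f"Halogen-atom production: {atom_production:.1e} molecules cm^-3 s^-1")
    print(f"Steady-state [Cl] ~ {cl_ss:.1e}, [Br] ~ {br_ss:.1e} molecules cm^-3")

With these placeholder values the Br atoms settle at a much higher steady-state level than the Cl atoms simply because their assumed loss to aldehydes is slower; the real partitioning depends on the full chemistry.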

The researchers believe that the significant impact of halogens demonstrated at their site may exist in other areas where uncontrolled coal burning is prevalent, such as other parts of northern China and countries like India and Russia that have a large share of coal in their energy mix. They call for more research to better understand the source(s) of reactive halogens and the spatial extent of their role in the chemistry of polluted continental regions. They also suggest the need to control halogen emissions from coal burning, in addition to the well-recognized CO2, sulfur, nitrogen, particulate, and mercury emissions.

Credit: 
Science China Press