Tech

Climate-smart ag strategies may cut nitrous oxide emissions from corn production

image: Lead researcher Maria Ponce de Leon, a former graduate student in plant science, carries a nitrous oxide emissions chamber into a corn field at Penn State's Russell E. Larson Agricultural Research Center to measure the amount of the potent greenhouse gas given off by the crop.

Image: 
Heather Karsten, Penn State

For corn, using dairy manure and legume cover crops in crop rotations can reduce the need for inorganic nitrogen fertilizer and protect water quality, but these practices also can contribute to emissions of nitrous oxide -- a potent greenhouse gas.

That is the conclusion of Penn State researchers, who measured nitrous oxide emissions from the corn phases of two crop rotations -- a corn-soybean rotation and a dairy forage rotation -- under three different management regimens. The results of the study offer clues about how dairy farmers might reduce the amount of nitrogen fertilizer they apply to corn crops, saving money and contributing less to climate change.

The results are important because although nitrous oxide accounts for just 7% of U.S. greenhouse gas emissions, it is significantly more potent than carbon dioxide or methane when it comes to driving climate change, according to Heather Karsten, associate professor of crop production/ecology in the College of Agricultural Sciences. Nitrous oxide is almost 300 times more powerful than carbon dioxide and remains in the atmosphere for more than 100 years.
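To put that multiplier in concrete terms, the short sketch below converts a mass of nitrous oxide into carbon dioxide equivalents using a 100-year global warming potential. The value of 298 is a commonly cited IPCC figure standing in for the "almost 300 times" above; it is an assumption for illustration, not a number from the Penn State study.

# Illustrative only: converting N2O emissions to CO2-equivalents using an
# assumed 100-year global warming potential (GWP) of 298, a commonly cited
# IPCC value. Not taken from the Penn State study.

GWP_N2O_100YR = 298  # assumed 100-year GWP for nitrous oxide

def n2o_to_co2_equivalent(n2o_kg: float) -> float:
    """Return the CO2-equivalent mass (kg) of a given mass of N2O."""
    return n2o_kg * GWP_N2O_100YR

# Example: 1 kg of N2O emitted from a corn field
print(n2o_to_co2_equivalent(1.0))  # 298 kg CO2-equivalent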

"This research suggests that all nitrogen inputs -- manure, legumes and fertilizer -- contribute to nitrous oxide emissions," she said. "But farmers could reduce nitrous oxide emissions if they could apply manure after the crop is planted, closer to when the corn begins to take up nitrogen.

"And if they could apply manure only when the crop needs it by "side-dressing," she added, "they likely could use less inorganic nitrogen fertilizer. But equipment for side-dressing manure into a growing corn crop is not yet widely available."

Researchers compared the effects of three management treatments for no-till corn and measured nitrous oxide emissions throughout the corn growing season. In the corn-soybean rotation, the team compared nitrous oxide emissions from broadcasting dairy manure, shallow disk manure injection, and the application of inorganic fertilizer in the form of liquid urea ammonium nitrate.

Manure was applied before the corn was planted, as is typical on most farms, while in the inorganic fertilizer treatment, fertilizer was applied according to recommended practices -- when the corn was growing and taking up nitrogen.

This better timing allowed for a lower total nitrogen application rate, and nitrous oxide emissions were lower than with the injected manure treatment. Injecting manure increased nitrous oxide emissions compared to the broadcast manure treatment in one year of the study, indicating that the environmental and nitrogen-conservation benefits of injection should be weighed against the additional emissions when selecting the practice.

The researchers also compared nitrous oxide emissions from corn grown for silage or grain in the no-till, six-year dairy forage rotation, in which corn followed a two-year mixed alfalfa and orchardgrass forage crop and a crimson clover cover crop. Manure also was broadcast before corn planting, and nitrous oxide emissions were compared to the rotation in which corn was planted after soybean with broadcast manure. Nitrous oxide emissions during the corn season did not differ among the three prior legume treatments.

In both experiments, nitrous oxide emissions peaked a few weeks after manure was applied and for a short period after fertilizer was applied. Because nitrous oxide emissions are driven by microbial processes, the researchers examined which environmental and nitrogen-availability factors were most predictive of emissions. Rising temperatures that spur corn growth and factors affecting soil nitrogen availability were important in both comparisons.

The study shows that nitrogen availability from organic inputs such as manure and legume cover crops can contribute to nitrous oxide emissions from corn, noted lead researcher Maria Ponce de Leon, former graduate student in Karsten's research group, now a doctoral candidate at the University of California, Davis. Identifying how to time organic nitrogen amendments with corn uptake represents an opportunity, she said, to reduce nitrous oxide emissions from dairy production systems.

Currently, dairy farmers apply most manure before planting corn, and as the manure and the legume biomass from the cover crop decompose, nitrogen builds up in the soil. Some of it can be lost as nitrous oxide emissions or leach into groundwater.

"Until the corn is rapidly taking up nitrogen from the soil, there's potential for both of those environmental losses," Ponce de Leon said. "If we could better synchronize the timing of the manure application to when the corn is growing and taking up nitrogen, we could reduce nitrous oxide missions. That also would help the crop and the farmer better capture the nitrogen that's available in that manure."

Credit: 
Penn State

Uniquely sharp X-ray view

image: Cristian Svetina at the experiment station of the X-ray free-electron laser SwissFEL.

Image: 
Paul Scherrer Institute/Mahir Dzambegovic

Researchers at the Paul Scherrer Institute PSI have succeeded for the first time in looking inside materials using the method of transient grating spectroscopy with ultrafast X-rays at SwissFEL. The experiment at PSI is a milestone in observing processes in the world of atoms. The researchers are publishing their research results today in the journal Nature Photonics.

The structures on microchips are becoming ever tinier; hard disks write entire encyclopedias on magnetic disks the size of a fingernail. Many technologies are currently breaking through the boundaries of classical physics. But in the nanoworld, other laws apply - those of quantum physics. And there are still many unanswered questions: How does heat actually travel through a semiconductor material at the nanoscale? What exactly happens when individual bits are magnetised in a computer hard disk, and how fast can data be written? There are still no answers to these and many more questions, mainly because current experimental techniques cannot look deeply and precisely enough into the materials and because some processes take place far too quickly for conventional experimental methods. But if we want to push ahead with technical miniaturisation, we need to understand such phenomena at the atomic level.

The mix of methods makes the difference

Fresh impetus is now being brought to the matter thanks to a new method devised by PSI researcher Cristian Svetina, together with Jeremy Rouxel and Majed Chergui at EPFL in Lausanne, Keith Nelson at MIT in the USA, Claudio Masciovecchio at Fermi FEL in Italy, and other international partners. "The method is actually not new, though, and it has been used for decades in the optical regime with exceptional results," says Svetina, who is currently setting up the new Furka experiment station on the SwissFEL beamline Athos at PSI. What is special, he says, is the combination and extension of known methods from nonlinear laser physics, but using X-ray light from the new X-ray free-electron laser SwissFEL. This combination is both new and surprising. Several attempts have been made in the past by many groups around the world but without success. It has even been questioned whether such novel experiments could be successfully conducted at all at the high energies of X-rays. The team at PSI has proven: Yes, it can be done.

At its core, this is a method called transient grating spectroscopy. Spectroscopy is a proven set of methods used by physicists to obtain information about a material, such as the chemical elements and compounds it consists of, its magnetic properties, and how atoms move within it. In the particular variant called transient grating spectroscopy, the sample is bombarded with two laser beams that create an interference pattern. A third laser beam is diffracted at this pattern, creating a fourth beam that contains the information about the sample's properties.

Looking beneath the surface

Until now, the lasers used for transient grating spectroscopy have produced light in the visible or infrared range of the spectrum, so the method could probe a sample only with a resolution limited to hundreds of nanometres. To go beyond this, X-rays are needed. Researchers at PSI have now succeeded for the first time in making transient grating spectroscopy accessible to an X-ray laser, using very hard X-rays with an energy of 7.1 kiloelectronvolts, which corresponds to a wavelength of 0.17 nanometres - about the diameter of a medium-sized atom. The advantage: for the first time, it is possible to look inside materials with a resolution down to individual atoms as well as with ultrashort exposure times of fractions of femtoseconds (one millionth of a billionth of a second), which even allows videos of atomic processes to be recorded. In addition, the method is element-selective, meaning that specific chemical elements in a mixture of substances can be measured selectively. The method complements well-established techniques such as inelastic neutron and X-ray scattering, adding better resolution in terms of both time and energy.
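As a quick check on those numbers, a photon's wavelength follows from its energy via lambda = h*c/E. The short sketch below, using standard physical constants, reproduces the roughly 0.17-nanometre figure quoted above for 7.1 kiloelectronvolts.

# Sanity check of the figure quoted above: lambda = h*c / E.
# Constants are standard CODATA values.

H_EV_S = 4.135667696e-15   # Planck constant in eV*s
C_M_S = 2.99792458e8       # speed of light in m/s

def photon_wavelength_nm(energy_ev: float) -> float:
    """Wavelength in nanometres of a photon with the given energy in eV."""
    return H_EV_S * C_M_S / energy_ev * 1e9

print(photon_wavelength_nm(7100.0))  # ~0.175 nm, matching the ~0.17 nm in the text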

In practice, the experimental setup looks like this: SwissFEL sends a beam with a diameter of 0.2 millimetres, consisting of ultrashort X-ray pulses, onto a transmission phase grating made of diamond, which looks like a fine comb under the microscope. Diamond is used because it is not destroyed even by high-energy X-rays. The grating was made especially for this experiment by Christian David of the Laboratory for Micro and Nanotechnology at PSI. The spacing between the teeth of the comb is two micrometres, though this can go down to nanometres if needed. The teeth break the X-ray beam into fine partial beams that overlap behind the grating, creating the transient grating diffraction pattern. Behind the grating, one-to-one images of the grating can be observed, repeated at regular intervals - so-called Talbot planes. If a sample is placed in one of these planes, some atoms within it become excited, just as if it were sitting at the location of the grating. Only the atoms that "see" the X-rays in this periodic modulation are excited, while neighbours that don't experience the irradiation remain in the ground state. This is the chief attraction of the method, since it enables researchers to selectively excite characteristic domains of interest.
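For a rough feel for where those Talbot planes sit, the textbook Talbot length for a grating of period d illuminated at wavelength lambda is z_T = 2*d^2/lambda. The sketch below applies that standard relation to the two-micrometre pitch and 0.17-nanometre wavelength mentioned here; the exact plane positions for a phase grating can differ by simple fractions, so this is an order-of-magnitude estimate, not a value from the experiment.

# Not from the article: a standard estimate of where Talbot self-images lie
# behind a grating of period d illuminated at wavelength lambda.

def talbot_length_m(period_m: float, wavelength_m: float) -> float:
    """Classical Talbot length z_T = 2*d**2 / lambda, in metres."""
    return 2.0 * period_m**2 / wavelength_m

# 2-micrometre grating pitch, ~0.17 nm X-ray wavelength
print(talbot_length_m(2e-6, 0.17e-9))  # ~0.047 m, i.e. on the order of centimetres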

Camera with flash

Excitation of the atoms alone, however, does not provide any information. For this, a kind of camera with a flash is needed to briefly expose the sample. In transient grating spectroscopy, this is done by a laser that targets the sample at an angle and takes images with a minimal time delay relative to the X-ray pulses from SwissFEL. The information comes out of the back of the sample and hits a detector that records the image. Initial experiments have shown one advantage of the method: it does not produce any unwanted background signal. "If the atoms are excited, you see a signal; if they are not excited, you see nothing," Svetina explains. This is extremely valuable when measuring samples that emit only weak signals and that cannot be seen with other techniques, where a background obscures the signal.

The fact that Cristian Svetina and his team have managed to do what other researchers have not is due to the creativity and patience of the protagonists. "We proceeded step by step and did not want to try everything at once," says the physicist. Five years ago the researchers started experimenting at FERMI FEL with optical light and extended it to extreme ultraviolet light before moving on to X-rays at PSI. Here, instead of examining "real" samples right away, they used gold foils to test whether the energy was sufficient to excite atoms. They succeeded in burning the lattice pattern from a Talbot plane into the foil. Svetina: "That's when we knew: If we can even print structures, we can excite atoms with lower intensity." With this the way was clear for the now successful experiment. Using a sample of bismuth germanate, the researchers were able to show that the method fulfilled all their hopes in terms of spatial and temporal resolution, measurement speed, and element selectivity.

Next goal: everything with X-rays

However, the researchers have not yet taken the final step. So far, only the beam that excites the sample is an X-ray beam. The flash of the camera still comes from a laser, so it is visible light. The pinnacle would be reached if that too were an X-ray beam. Svetina: "We want to take this final step in the course of the year." And they have additional support: SLAC's LCLS and the PULSE Institute, both at Stanford in California, the RIKEN SPring-8 centre in Japan, and DESY's FLASH in Germany have joined the collaboration team.

Credit: 
Paul Scherrer Institute

Minimally invasive retinal reattachment procedure leads to superior photoreceptor integrity

image: Dr. Rajeev H. Muni, a vitreoretinal surgeon at St. Michael's Hospital of Unity Health Toronto and researcher at the Li Ka Shing Knowledge Institute.

Image: 
Unity Health Toronto

A minimally invasive retinal reattachment procedure that can be done in an ophthalmologist's office leads to better long-term integrity and structure of the retina's photoreceptors - cells that allow us to see - compared with more invasive operating room procedures, according to new research published April 22.

The study, published in JAMA Ophthalmology and led by researchers at St. Michael's Hospital of Unity Health Toronto, contributes to a growing body of evidence pointing towards pneumatic retinopexy (PnR) as the better first-line retinal reattachment technique to achieve the best visual outcomes.

Retinal detachment is the most common surgical ocular emergency, progressing to loss of vision within hours or days. While there are a number of different treatment options, there is limited high-quality data guiding the choice of surgery.

"Our study shows there is a difference in the long-term integrity of the photoreceptors between different surgical techniques, and these anatomic differences were associated with visual outcomes," said co-principal investigator Dr. Rajeev H. Muni, a vitreoretinal surgeon at St. Michael's and researcher at the Li Ka Shing Knowledge Institute.

Previous studies carried out at St. Michael's Hospital demonstrated that patients had better visual results following the less invasive PnR compared to pars plana vitrectomy (PPV), an alternative operating room procedure. In this study, the researchers determined there was an actual difference in photoreceptor anatomic recovery between the two retinal reattachment techniques.

Using data from a randomized trial conducted at St. Michael's Hospital, researchers compared the retinal scans of 72 patients who had retinal reattachment using PPV and 73 patients who had retinal reattachment using the minimally invasive PnR at 12 months post-operatively.

PnR is a less invasive and less expensive retinal reattachment technique that can be done in an ophthalmologist's office. In PnR, a small gas bubble is used to close the retinal tear and allow the fluid to reabsorb naturally and slowly. PPV is a surgical technique where the fluid under the retina is rapidly aspirated and removed, forcefully bringing the retina back in position.

The imaging showed that discontinuity - an absence of a part of the photoreceptor layer - was more common at 12 months post-operatively among patients who had the PPV surgical reattachment than in patients who had the minimally invasive PnR procedure. Discontinuity indicates damage to cells that are critical for vision and this damage was found to be associated with worse visual outcomes.

"This data provides an objectively determined anatomic basis for the superior functional outcomes seen with pneumatic retinopexy that we have previously reported," said Dr. Roxane J. Hillier, the trial's other co-principal investigator who is now based in the United Kingdom.

The researchers say their findings highlight that closing the retinal tear, doing as little else as possible, and allowing the retina to reattach naturally leads to the best outcomes both from an anatomic and visual perspective.

"This was previously unknown, and in my opinion will be a game-changer in our field," said Dr. Muni.

Credit: 
St. Michael's Hospital

Among COVID-19 survivors, an increased risk of death, serious illness

image: A new study from Washington University School of Medicine in St. Louis shows that even mild cases of COVID-19 increase the risk of death in the six months following diagnosis and that this risk increases with disease severity. The comprehensive study also catalogues the wide-ranging and long-term health problems often triggered by the infection, even among those not hospitalized.

Image: 
Sara Moser

As the COVID-19 pandemic has progressed, it has become clear that many survivors -- even those who had mild cases -- continue to manage a variety of health problems long after the initial infection should have resolved. In what is believed to be the largest comprehensive study of long COVID-19 to date, researchers at Washington University School of Medicine in St. Louis showed that COVID-19 survivors -- including those not sick enough to be hospitalized -- have an increased risk of death in the six months following diagnosis with the virus.

The researchers also have catalogued the numerous diseases associated with COVID-19, providing a big-picture overview of the long-term complications of COVID-19 and revealing the massive burden this disease is likely to place on the world's population in the coming years.

The study, involving more than 87,000 COVID-19 patients and nearly 5 million control patients in a federal database, appears online April 22 in the journal Nature.

"Our study demonstrates that up to six months after diagnosis, the risk of death following even a mild case of COVID-19 is not trivial and increases with disease severity," said senior author Ziyad Al-Aly, MD, an assistant professor of medicine. "It is not an exaggeration to say that long COVID-19 -- the long-term health consequences of COVID-19 -- is America's next big health crisis. Given that more than 30 million Americans have been infected with this virus, and given that the burden of long COVID-19 is substantial, the lingering effects of this disease will reverberate for many years and even decades. Physicians must be vigilant in evaluating people who have had COVID-19. These patients will need integrated, multidisciplinary care."

In the new study, the researchers were able to calculate the potential scale of the problems first glimpsed from anecdotal accounts and smaller studies that hinted at the wide-ranging side effects of surviving COVID-19, from breathing problems and irregular heart rhythms to mental health issues and hair loss.

"This study differs from others that have looked at long COVID-19 because, rather than focusing on just the neurologic or cardiovascular complications, for example, we took a broad view and used the vast databases of the Veterans Health Administration (VHA) to comprehensively catalog all diseases that may be attributable to COVID-19," said Al-Aly, also director of the Clinical Epidemiology Center and chief of the Research and Education Service at the Veterans Affairs St. Louis Health Care System.

The investigators showed that, after surviving the initial infection (beyond the first 30 days of illness), COVID-19 survivors had an almost 60% increased risk of death over the following six months compared with the general population. At the six-month mark, excess deaths among all COVID-19 survivors were estimated at eight people per 1,000 patients. Among patients who were ill enough to be hospitalized with COVID-19 and who survived beyond the first 30 days of illness, there were 29 excess deaths per 1,000 patients over the following six months.
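For readers wanting to see how a figure like "eight excess deaths per 1,000 patients" arises, the sketch below shows the underlying arithmetic: an absolute difference in six-month death rates scaled to 1,000 people. The rates used are hypothetical placeholders, not values from the VA dataset; only the form of the calculation mirrors the study's reported metric.

# Illustrative arithmetic only: "excess deaths per 1,000 patients" is the
# absolute difference in death rates between two groups, scaled to 1,000.
# The rates below are hypothetical, not taken from the study.

def excess_deaths_per_1000(rate_exposed: float, rate_control: float) -> float:
    """Absolute risk difference expressed per 1,000 patients."""
    return (rate_exposed - rate_control) * 1000.0

# e.g. a 2.8% vs 2.0% six-month death rate would imply 8 excess deaths
# per 1,000 survivors -- the order of magnitude reported above.
print(excess_deaths_per_1000(0.028, 0.020))  # 8.0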

"These later deaths due to long-term complications of the infection are not necessarily recorded as deaths due to COVID-19," Al-Aly said. "As far as total pandemic death toll, these numbers suggest that the deaths we're counting due to the immediate viral infection are only the tip of the iceberg."

The researchers analyzed data from the national health-care databases of the U.S. Department of Veterans Affairs. The dataset included 73,435 VHA patients with confirmed COVID-19 but who were not hospitalized and, for comparison, almost 5 million VHA patients who did not have a COVID-19 diagnosis and were not hospitalized during this time frame. The veterans in the study were primarily men (almost 88%), but the large sample size meant that the study still included 8,880 women with confirmed cases.

To help understand the long-term effects of more severe COVID-19, the researchers harnessed VHA data to conduct a separate analysis of 13,654 patients hospitalized with COVID-19 compared with 13,997 patients hospitalized with seasonal flu. All patients survived at least 30 days after hospital admission, and the analysis included six months of follow-up data.

The researchers confirmed that, although SARS-CoV-2 is primarily a respiratory virus, long COVID-19 can affect nearly every organ system in the body. Evaluating 379 diagnoses of diseases possibly related to COVID-19, 380 classes of medications prescribed and 62 laboratory tests administered, the researchers identified newly diagnosed major health issues that persisted in COVID-19 patients over at least six months and that affected nearly every organ and regulatory system in the body, including:

Respiratory system: persistent cough, shortness of breath and low oxygen levels in the blood.

Nervous system: stroke, headaches, memory problems and problems with senses of taste and smell.

Mental health: anxiety, depression, sleep problems and substance abuse.

Metabolism: new onset of diabetes, obesity and high cholesterol.

Cardiovascular system: acute coronary disease, heart failure, heart palpitations and irregular heart rhythms.

Gastrointestinal system: constipation, diarrhea and acid reflux.

Kidney: acute kidney injury and chronic kidney disease that can, in severe cases, require dialysis.

Coagulation regulation: blood clots in the legs and lungs.

Skin: rash and hair loss.

Musculoskeletal system: joint pain and muscle weakness.

General health: malaise, fatigue and anemia.

While no survivor suffered from all of these problems, many developed a cluster of several issues that have a significant impact on health and quality of life.

Among hospitalized patients, those who had COVID-19 fared considerably worse than those who had influenza, according to the analysis. COVID-19 survivors had a 50% increased risk of death compared with flu survivors, with about 29 excess deaths per 1,000 patients at six months. Survivors of COVID-19 also had a substantially higher risk of long-term medical problems.

"Compared with flu, COVID-19 showed remarkably higher burden of disease, both in the magnitude of risk and the breadth of organ system involvement," Al-Aly said. "Long COVID-19 is more than a typical postviral syndrome. The size of the risk of disease and death and the extent of organ system involvement is far higher than what we see with other respiratory viruses, such as influenza."

In addition, the researchers found that the health risks from surviving COVID-19 increased with the severity of disease, with hospitalized patients who required intensive care being at highest risk of long COVID-19 complications and death.

"Some of these problems may improve with time -- for example, shortness of breath and cough may get better -- and some problems may get worse," Al-Aly added. "We will continue following these patients to help us understand the ongoing impacts of the virus beyond the first six months after infection. We're only a little over a year into this pandemic, so there may be consequences of long COVID-19 that are not yet visible."

In future analyses of these same datasets, Al-Aly and his colleagues also plan to look at whether patients fared differently based on age, race and gender to gain a deeper understanding of the risk of death in people with long COVID-19.

Credit: 
Washington University School of Medicine

Can machine learning improve debris flow warning?

Machine learning could provide up to an extra hour of warning time for debris flows along the Illgraben torrent in Switzerland, researchers report at the Seismological Society of America (SSA)'s 2021 Annual Meeting.

Debris flows are mixtures of water, sediment and rock that move rapidly down steep hills, triggered by heavy precipitation and often containing tens of thousands of cubic meters of material. Their destructive potential makes it important to have monitoring and warning systems in place to protect nearby people and infrastructure.

In her presentation at SSA, Małgorzata Chmiel of ETH Zürich described a machine learning approach to detecting and alerting against debris flows for the Illgraben torrent, a site in the European Alps that experiences significant debris flows and torrential events each year.

Seismic records of 20 previous debris flow events, collected by stations in the Illgraben catchment, were used to train an algorithm to recognize the seismic signals of debris flow formation; the algorithm accurately detected early flows 90% of the time.

The machine learning system was able to detect all 13 debris flows and torrential events that occurred during a three-month period in 2020. The alarm triggered by the system occurred between 20 minutes and an hour and a half earlier than the estimated arrival time of the flow at the torrent's first check dam, depending on the flow's velocity.
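As a rough illustration of how such a detector can be built, short windows of seismic data can be reduced to simple spectral features and fed to a supervised classifier that flags windows resembling an emerging debris flow. The window length, feature set and random-forest model below are assumptions for illustration, not details taken from Chmiel's presentation.

# A minimal sketch of a supervised debris-flow detector on seismic windows.
# Feature choices, sampling rate and classifier are illustrative assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 100.0  # assumed sampling rate in Hz

def window_features(trace: np.ndarray) -> np.ndarray:
    """Band-averaged power and overall amplitude for one seismic window."""
    freqs, psd = welch(trace, fs=FS, nperseg=256)
    bands = [(1, 5), (5, 15), (15, 45)]
    band_power = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]
    return np.array(band_power + [trace.std()])

def train_detector(windows: list[np.ndarray], labels: list[int]) -> RandomForestClassifier:
    """Fit a classifier on labeled windows (1 = debris flow, 0 = background)."""
    X = np.vstack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf

# Usage with real labeled data: clf = train_detector(windows, labels), then
# clf.predict(window_features(new_window).reshape(1, -1)) on incoming data.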

Debris flow alerts for the Illgraben torrent come from geophones at three check dams and sensors measuring flow height. Thirty check dams were installed in the lower part of the channel after a disastrous event in 1961 that overflowed the channel and destroyed a bridge.

The current system limits debris flow detection to a dam located below the torrent's upper catchment. "However, debris flows usually form in the upper catchment, above check dam one," Chmiel explained. "To improve the current warning system, we would need to detect the torrential events in their initial forming phase before they arrive at check dam one."

The regularity and variability of Illgraben debris flows convinced the researchers that the torrent would be a good place to test out their machine learning model as an alternative warning system.

"We thought that the size of the dataset should be enough to train a machine learning model for robust detection. Moreover, what makes machine learning particularly appealing for Illgraben is that the detector can be improved every year, with data from new events, something that is not possible to obtain with traditional approaches," said Chmiel.

Chmiel said the system works well at distinguishing torrential events from seismic signals produced by human activity, rainfall and earthquakes. The next step, she noted, will be to explore whether the machine learning model can also distinguish between small and larger and potentially more damaging debris flows.

Most debris flows in Illgraben are activated by heavy summer rainstorms, although snowmelt can condition the slope and potentially may trigger some flows in the late spring or early summer, said Chmiel. A large debris flow might threaten the village of Susten, next to the lower part of the torrent, or the area's popular hiking trails around the channel.

Credit: 
Seismological Society of America

3D printed models provide clearer understanding of ground motion

image: 3D printed seismic model in stainless steel.

Image: 
Sunyoung Park

It seems like a smooth slab of stainless steel, but look a little closer, and you'll see a simplified cross-section of the Los Angeles sedimentary basin.

Caltech researcher Sunyoung Park and her colleagues are printing 3D models like the metal Los Angeles proxy to provide a novel platform for seismic experiments. By printing a model that replicates a basin's edge or the rise and fall of a topographic feature and directing laser light at it, Park can simulate and record how seismic waves might pass through the real Earth.

In her presentation at the Seismological Society of America (SSA)'s 2021 Annual Meeting, Park explained how these physical models can address some of the drawbacks of numerical modeling of ground motion.

Small-scale, complex structures in a landscape can amplify and alter ground motion after an earthquake, but seismologists have a difficult time modeling these impacts, said Park. "Even though we know that these things are very important to ground shaking, the effects of topography, interfaces and edges are hard problems to study numerically."

Incorporating these features in ground motion simulations requires a lot of computational power, and it can be hard to verify these numerical calculations, she added.

To address these challenges, Park began creating 3D models of simple topographical and basin features to explore their effects on ground shaking. Metal is her preferred printing material, "because it can be as rigid as the conditions at the Earth's lower crust," she said.

By controlling the printing parameters, Park can also control the density of the metal as it is laid down by the printer, creating a material with different seismic velocities. The result, in the case of the Los Angeles basin example that she showed at the meeting, is a 20 by 4-centimeter model that represents a 50-kilometer cross-section through the basin.

At a scale of about 1:250,000 for the printed landscape, Park needed to scale down the wavelengths that she used to simulate seismic waves as well, which is where the laser-based source and receiver system comes in. A laser shot at the model mimics a seismic source event, and laser Doppler receivers sense the resulting vibrations as the seismic waves interact with the model's features.
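The scaling itself is straightforward arithmetic, sketched below: shrinking a 50-kilometer cross-section by roughly 1:250,000 gives the 20-centimeter print, and probing wavelengths (and hence frequencies, for a given wave speed) must be rescaled by the same factor. The one-hertz field frequency used here is purely illustrative.

# Scaling arithmetic for the printed model. The 1 Hz field frequency is an
# illustrative choice, and the frequency scaling assumes a comparable wave
# speed in the model material.

SCALE = 1.0 / 250_000.0

field_length_m = 50_000.0
model_length_m = field_length_m * SCALE
print(model_length_m)  # 0.2 m, i.e. the 20 cm model

field_frequency_hz = 1.0                  # illustrative field-scale frequency
model_frequency_hz = field_frequency_hz / SCALE
print(model_frequency_hz)                 # 250,000 Hz at the model scale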

Experiments with the models have yielded some intriguing findings. With a shallow basin cross-section, for instance, Park found that some of the high-frequency waves were blocked from traveling across the basin.

"We know that basins are usually amplifying ground motions," she said, "but this suggests we should be thinking about that in terms of different frequency contents as well."

Park said the models might also be useful for studying wave propagation through other seismologically complex features, such as highly damaged rock near a fault, rock layers injected with fluids and gases during oil and gas extraction or carbon sequestration, and features in the deep Earth.

Park will join the Department of Geophysical Sciences at The University of Chicago in June 2021.

Credit: 
Seismological Society of America

Researchers identify predictive factors of delirium in Sub-Saharan Africa

image: Douglas Heimburger, MD, MS, professor of Medicine and core faculty at the Vanderbilt Institute for Global Health.

Image: 
Vanderbilt University Medical Center

Severity of illness, history of stroke, and being divorced or widowed were independently predictive of delirium in hospitalized patients in Zambia, according to a study published in PLOS ONE.

A collaborative team of researchers from Vanderbilt University Medical Center and the University of Zambia Teaching Hospital published the risk factors as a follow-up look at the prevalence and impact of delirium, a form of acute brain dysfunction, in lower-resourced hospitals. Findings published in February showed delirium is widespread in patients admitted to the University Teaching Hospital, and the duration of delirium predicted both mortality and disability at six months after discharge.

The studies represent novel research in lower-resourced hospitals, and the findings highlight the breadth of a serious health problem that has existed off the radar, said Kondwelani Mateyo, MBChB, MMed, the hospital's chief pulmonary and critical care physician.

The next step is to explore interventional therapies while raising awareness about the prevalence and risks of delirium -- especially given that nearly 50% of people had delirium upon admission to the hospital. In the U.S., for instance, delirium more often develops in patients after admission and while in the ICU.

"We have to start the conversation, and you can't do that with the absence of data or evidence. We are making people aware that delirium is here and it's a widespread problem that has, up to this point, not been quantified. With these data we're able to see it's in our city, and we think this is representative of hospitals in the country and our region. Delirium is a factor in mortality and cognitive impairment," said Mateyo.

The findings on risk factors, coupled with the recent data on delirium prevalence, are a critical step toward finding ways to screen for and treat patients at high risk for delirium in order to drive improvements in long-term survival and functional status, said Justin Banerdt, MD, MPH, internal medicine resident at Yale School of Medicine, and corresponding author who led the study on the ground in Zambia while an MD/MPH student at Vanderbilt University School of Medicine. For example, findings from the study suggest that widespread use of validated severity of illness scores may allow health care providers in low-income countries to triage patients at high risk of delirium for further assessment and care.

"One of the striking findings in our study is that nearly 50% of patients in this acutely ill non-ICU population had delirium at admission. Not only was severity of illness at admission a powerful, independent predictor of delirium, but there was also a substantial burden of critical illness at hospital presentation, suggesting that severe illness is a significant contributor to the high prevalence of delirium seen in this cohort. This speaks to how in the future we must endeavor to understand what is putting these patients at risk in the community before they even arrive at the hospital so that delirium-- and critical illness more generally-- can be identified earlier and managed more effectively across all levels of the health care system. This is an opportunity to look beyond the walls of the hospital to address inequities in a poor and very vulnerable population through health system strengthening and critical care capacity-building in Zambia," said Banerdt.

Delirium is an independent predictor of long-term mortality as well as cognitive and functional disability. Health care costs attributable to delirium have been estimated to range from $143 billion to $152 billion annually in the U.S.

"Dr. Mateyo is a path-breaker in Zambia and a huge advocate for his patients. Our team's results show that health systems need to be expanded and broadened. Since we found that people are coming to the hospital with delirium, we need to find ways to develop a higher consciousness in communities about what delirium is. Dr. Mateyo's expertise and leadership, and that of his colleagues at the University of Zambia and the Zambia Ministry of Health, will be key in showing the global public health community how to positively impact the lives of patients," said Douglas Heimburger, MD, MS, professor of Medicine and core faculty at the Vanderbilt Institute for Global Health. Heimburger leads projects with grant funding from the Fogarty International Center of the National Institutes of Health (NIH).

Credit: 
Vanderbilt University Medical Center

Scientists uncover structure of light-driven enzyme with potential biofuel applications

image: A study using SLAC's LCLS X-ray laser captured how light drives a series of complex structural changes in an enzyme called FAP, which catalyzes the transformation of fatty acids into starting ingredients for solvents and fuels. This drawing captures the starting state of the catalytic reaction. The dark green background represents the protein scaffold. The enzyme's light-sensing part, called the FAD cofactor, is shown at center right with its three rings absorbing a photon coming from bottom left. A fatty acid at upper left awaits transformation. The amino acid shown at middle left plays an important role in the catalytic cycle, and the red dot near the center is a water molecule.

Image: 
Damien Sorigue/Universite Aix-Marseille

Although many organisms capture and respond to sunlight, enzymes - proteins that catalyze biochemical reactions - are rarely driven by light. Scientists have identified only three types of natural photoenzymes so far. The newest one, discovered in 2017, is fatty acid photodecarboxylase (FAP). Derived from microscopic algae, it uses blue light to catalyze the conversion of fatty acids, found in fats and oils, into alkanes and alkenes.

"A growing number of labs envision using FAPs for green chemistry applications, because alkanes and alkenes are important components of solvents and fuels, including gasoline and jet fuels. And the transformation of fatty acids into alkanes or alkenes happens in a single step within the enzyme," says Martin Weik, the leader of a research group at the Institute of Biologie Structurale at the Universite Grenoble Alpes.

Weik is a primary investigator of a new study that has captured the complex sequence of structural changes FAP undergoes in response to light, called a photocycle, which drives this fatty acid transformation. Although researchers previously proposed a FAP photocycle, the fundamental mechanism was not understood. The scientists didn't know how long it took a fatty acid to lose its carboxylate, the chemical group attached to the end of its long chain of hydrocarbons, a critical step in forming alkenes or alkanes.

Experiments at the Linac Coherent Light Source (LCLS) at the Department of Energy's SLAC National Accelerator Laboratory, carried out in collaboration with SLAC scientists, helped answer many of these outstanding questions. The researchers describe their results in Science.

All the tools in a toolbox

To understand a light-sensitive enzyme like FAP, scientists use many different techniques to study processes that take place over a broad range of time scales - because photon absorption happens in femtoseconds, or millionths of a billionth of a second, while biological responses on the molecular level often happen in thousandths of a second.

"Our international, interdisciplinary consortium, led by Frederic Beisson at the Universite Aix-Marseille, used a wealth of techniques, including spectroscopy, crystallography and computational approaches," Weik says. "It's the sum of these different results that enabled us to get a first glimpse of how this unique enzyme works as a function of time and in space."

The consortium first studied the complex steps of the catalytic process at their home labs using optical spectroscopy methods, which investigate the electronic and geometric structure of atoms in the samples, including chemical bonding and charge. Spectroscopic experiments identified the enzyme's intermediate states accompanying each step, measured their lifetimes and provided information on their chemical nature. These results motivated the need for the ultrafast capabilities of the LCLS.

Next, a structural view of the catalytic process was provided by serial femtosecond crystallography (SFX) with the LCLS X-ray free-electron laser (XFEL). During these experiments, a jet of tiny FAP microcrystals was hit with optical laser pulses to kick off the catalytic reaction, followed by extremely short, ultrabright X-ray pulses to measure the resulting changes in the enzyme's structure.

By integrating thousands of these measurements - acquired using various time delays between the optical and X-ray pulses - the researchers were able to follow structural changes in the enzyme over time. They also determined the structure of the enzyme's resting state by probing without the optical laser.

Surprisingly, the researchers found that in the resting state, the enzyme's light-sensing part, called the FAD cofactor, has a bent shape. "This cofactor acts like an antenna to capture photons. It absorbs blue light and initiates the catalytic process," Weik says. "We thought the starting point of the FAD cofactor was planar, so this bent configuration was unexpected."

The bent shape of the FAD cofactor was actually first discovered by X-ray crystallography at the European Synchrotron Radiation Facility, but the scientists suspected this bend was an artifact of radiation damage, a common problem for crystallographic data collected at synchrotron light sources. Only SFX experiments could confirm this unusual configuration because of their unique ability to capture structural information before damaging the sample, Weik says.

"These experiments were complemented by computations," he adds, "Without the high-level quantum calculations performed by Tatiana Domratcheva of Moscow State University, we wouldn't have understood our experimental results."

Next steps

Despite the improved understanding of FAP's photocycle, unanswered questions remain. For example, researchers know carbon dioxide is formed during a certain step of the catalytic process at a specific time and location, but they don't know its state as it leaves the enzyme.

"In future XFEL work, we want to identify the nature of the products and to take pictures of the process with a much smaller step size so as to resolve the process in much finer detail," says Weik. "This is important for fundamental research, but it can also help scientists modify the enzyme to do a task for a specific application."

Credit: 
DOE/SLAC National Accelerator Laboratory

A new method for fighting 'cold' tumors

Not all cancerous tumors are created equal. Some tumors, known as "hot" tumors, show signs of inflammation, which means they are infiltrated with T cells working to fight the cancer. Those tumors are easier to treat, as immunotherapy drugs can then amp up the immune response.

"Cold" tumors, on the other hand, have no T-cell infiltration, which means the immune system is not stepping in to help. With these tumors, immunotherapy is of little use.

It's the latter type of tumor that researchers Michael Knitz and radiation oncologist and University of Colorado Cancer Center member Sana Karam, MD, PhD, address in new research published this week in the Journal for ImmunoTherapy of Cancer. Working with mouse models in Karam's specialty area of head and neck cancers, Knitz and Karam studied the role of T cells in tumor treatment.

"What we found is that the cells that normally tell the T cell, 'Hey, here's a tumor -- come and attack it,' are being silenced," Karam says.

She and her team found that regulatory T cells (Tregs), a specialized T cell type that suppresses immune response, are essentially telling the T cells to stop fighting the cancer.

"Tregs normally serve as an important balance in a healthy immune system," Knitz says. "They prevent autoimmune disease and put the brakes on the T cells when needed. However, in many tumors, Tregs are too numerous or overly suppressive, bringing the T cell response to a halt."

Using medication that deactivates the Tregs can help boost the immune response in patients with cold tumors, the researchers found, as can radiation treatment that causes enough injury that the immune cells known as dendritic cells work to put the regular T cells into fight mode.

But this is only part of the story. The T cells need to know what to attack. "You need the radiation to create injury and bring in the immune cells so that the tumor can be recognized and targeted," says Karam, also an associate professor of radiation oncology at the University of Colorado School of Medicine. "That way, the dendritic cells trigger the immune system to produce a lot of T cells, similar to what a vaccine does. Those T cells then go back to the tumor to kill cancer cells. The pieces are already in place; they just need the proper signals. Activating the dendritic cells is a crucial step in allowing radiation to heat up these cold tumors."

Importantly, Karam and her team, which includes postdoctoral fellow Thomas Bickett, found that the radiation must be administered in a specific way.

"A specific dosing is needed," Karam says. "You have to pulse it. You can't just give one dose. You have to give it again and combine it with things that remove the suppression -- the Tregs -- while simultaneously keeping those antigen-presenting dendritic cells active and on board."

Karam says the next step in her research is clinical trials she hopes will eventually change the treatment paradigm from surgery and weeks of chemotherapy and radiation to just three sessions of radiation and immunotherapy, then surgery. She is driven to change the standard of care for cold tumors, she explains, because of the horrendous effects they have on patients.

"These tumors resemble those in patients who are heavy smokers," she says. "They're very destructive to bone and muscle, infiltrating the tongue, jaw, gum, and lymph nodes. It's horrible. We have very high failure rates with them, and the treatment often involves removing the tongue and weeks of radiation and chemotherapy, only for the patient to fail. I'm confident that we can do better for our patients."

Credit: 
University of Colorado Anschutz Medical Campus

Scientists unmask new neutralizing antibody target on SARS-CoV-2 spike protein

Researchers have identified another potential target for neutralizing antibodies on the SARS-CoV-2 Spike protein that is masked by metabolites in the blood. As a result of this masking, the target may be inaccessible to antibodies, because they must compete with metabolite molecules to bind to the otherwise open region, the study authors speculate. This competitive binding activity may represent another method of immune evasion by the SARS-CoV-2 virus. Although further validation work is needed, the findings suggest that strategies to unmask this region - thus making it more visible and accessible to antibodies - may help lead to new vaccine designs.

To date, the majority of neutralizing antibodies characterized in COVID-19 patients are those that bind the receptor binding domain of the SARS-CoV-2 Spike protein, while much less is known about the structure of - and antibody interactions with - the Spike's N-terminal domain (NTD). Using cryo-electron microscopy and X-ray crystallography, Annachiara Rosa and colleagues mapped a deep cleft of the N-terminal domain, showing that a specific pocket in the cleft binds the blood metabolite biliverdin with high affinity. This activity leads to stabilization of the NTD structure and "hides" the Spike protein site from binding and neutralization by a subset of human anti-Spike protein antibodies.

Successful binding of antibodies to this Spike region requires conformational changes in the NTD that are inhibited by biliverdin binding. Addition of excess biliverdin to sera isolated from SARS-CoV-2-infected and convalescent individuals reduced the reactivity of the immune sera by as much as 50%. The results highlight the importance of this small pocket in the NTD for the stimulation of antibody immunity against SARS-CoV-2.

Credit: 
American Association for the Advancement of Science (AAAS)

Machine learning model generates realistic seismic waveforms

image: SeismoGen, a machine learning technique developed at the Laboratory, is capable of generating high-quality synthetic seismic waveforms. The technique could save tedious and intensive manual labeling effort and help improve earthquake detection.

Image: 
Los Alamos National Laboratory

LOS ALAMOS, N.M., April 22, 2021--A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth.

"To verify the e?cacy of our generative model, we applied it to seismic ?eld data collected in Oklahoma," said Youzuo Lin, a computational scientist in Los Alamos National Laboratory's Geophysics group and principal investigator of the project. "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms."

Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high-quality, labeled training data, often tens of thousands of records or more.

To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), which is a type of deep generative model that can generate high-quality synthetic samples in multiple domains. In other words, deep generative models train machines to do things and create new data that could pass as real.

Once trained, the SeismoGen model is capable of producing realistic seismic waveforms of multiple labels. When applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation from SeismoGen-generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
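For readers curious what a waveform-generating GAN looks like in outline, the sketch below pairs a conditional generator and discriminator for one-dimensional traces. The layer sizes, waveform length and label handling are illustrative assumptions and do not reproduce the SeismoGen architecture described in the paper.

# A minimal conditional GAN sketch for 1-D waveforms (PyTorch). All sizes and
# the label scheme are assumptions for illustration, not the SeismoGen design.

import torch
import torch.nn as nn

WAVE_LEN = 1024   # assumed number of samples per synthetic waveform
LATENT = 64       # assumed latent-noise dimension
N_LABELS = 2      # e.g. 0 = noise, 1 = earthquake

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_LABELS, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, WAVE_LEN), nn.Tanh(),  # waveform scaled to [-1, 1]
        )

    def forward(self, z, labels):
        onehot = torch.nn.functional.one_hot(labels, N_LABELS).float()
        return self.net(torch.cat([z, onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WAVE_LEN + N_LABELS, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, wave, labels):
        onehot = torch.nn.functional.one_hot(labels, N_LABELS).float()
        return self.net(torch.cat([wave, onehot], dim=1))

# After adversarial training, synthetic "earthquake" waveforms could be drawn as:
# g = Generator(); fake = g(torch.randn(8, LATENT), torch.ones(8, dtype=torch.long))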

Credit: 
DOE/Los Alamos National Laboratory

Study paves the way for new photosensitive materials

image: Scott G. Sayres is a researcher in the Biodesign Center for Applied Structural Discovery and ASU's School of Molecular Sciences.

Image: 
The Biodesign Institute at Arizona State University

Photocatalysts are useful materials, with a myriad of environmental and energy applications, including air purification, water treatment, self-cleaning surfaces, pollution-fighting paints and coatings, hydrogen production and CO2 conversion to sustainable fuels.

An efficient photocatalyst converts light energy into chemical energy and provides this energy to a reacting substance, to help chemical reactions occur.

One of the most useful such materials is known as titanium dioxide, or titania, much sought after for its stability, effectiveness as a photocatalyst and non-toxicity to humans and other biological organisms.

In new research appearing in the Journal of Physical Chemistry Letters, Scott Sayres and his research group describe their investigations into the molecular dynamics of titania clusters.

Such research is a basic step toward the development of more efficient photocatalysts.

The key to such advances is the ability to extend the time that electrons within the material persist in an excited state, as this fleeting duration is when titania can act as an efficient photocatalyst.

Probing the behavior of a photocatalyst in fine detail, however, is a tricky endeavor. The clusters are a nanometer or less in size (or 1/100,000th the width of a human hair) and the movements of electrons within the molecules under study take place on astonishingly brief time scales, measured in femtoseconds (or one millionth of a billionth of a second).

The new study explores neutral (uncharged) clusters of titania for the first time, tracking the subtle movements of energy using a femtosecond laser and a technique known as pump-probe spectroscopy. "We treat our lasers like cameras," Sayres says. "We take pictures of where the energy is flowing over time."
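In pump-probe measurements of this kind, the excited-state lifetime is typically extracted by fitting a decay model to the signal as a function of pump-probe delay. The sketch below fits a single exponential to synthetic data as an illustration; the single-exponential form and the numbers are assumptions, not the analysis used by Sayres' group.

# Illustrative pump-probe analysis: fit an exponential decay to signal vs.
# delay to estimate an excited-state lifetime. Synthetic data only.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    """Single-exponential excited-state decay with lifetime tau (fs)."""
    return amplitude * np.exp(-t / tau) + offset

# synthetic "measurement": a 150 fs lifetime with some noise
delays_fs = np.linspace(0, 1000, 200)
signal = decay(delays_fs, 1.0, 150.0, 0.05)
signal += np.random.default_rng(0).normal(0, 0.02, delays_fs.size)

params, _ = curve_fit(decay, delays_fs, signal, p0=(1.0, 100.0, 0.0))
print(round(params[1]))  # fitted lifetime, ~150 fs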

Sayres, a researcher in the Biodesign Center for Applied Structural Discovery, describes the significance of the current study:

"We've examined the smallest possible building blocks of titania to understand the relationship of how small changes in the material's atomic structure influences the excited state lifetimes and flow of energy. Learning about how this happens can help redesign better photocatalysts in the future."

Credit: 
Arizona State University

Silver ions hurry up, then wait as they disperse

image: Chemists at Rice University and the University of Duisburg-Essen, Germany quantified the release of silver ions from gold-silver nanoparticle alloys. At top, transmission electron microscope images show the change in color as silver (in blue) leaches out of a nanoparticle over several hours, leaving gold atoms behind. The bottom hyperspectral images show how much a nanoparticle of silver and gold shrank over four hours as the silver leached away.

Image: 
Rice University

HOUSTON - (April 22, 2021) - There's gold in them thar nanoparticles, and there used to be a lot of silver, too. But much of the silver has leached away, and researchers want to know how.

Gold-silver alloys are useful catalysts that degrade environmental pollutants, facilitate the production of plastics and chemicals and kill bacteria on surfaces, among other applications. In nanoparticle form, these alloys could be useful as optical sensors or to catalyze hydrogen evolution reactions.

But there's an issue: Silver doesn't always stay put.

A new study by scientists at Rice University and the University of Duisburg-Essen, Germany, reveals a two-step mechanism behind silver's dissipation, a discovery that could help industry fine-tune nanoparticle alloys for specific uses.

The team led by Rice chemists Christy Landes and Stephan Link and graduate student Alexander Al-Zubeidi and Duisburg-Essen chemist Stephan Barcikowski employed sophisticated microscopy to show how gold might retain enough silver to stabilize the nanoparticle.

Their study appears in the American Chemical Society journal ACS Nano.

The researchers used a hyperspectral dark-field imaging microscope to study gold-silver alloy nanoparticles containing an excess of silver in an acidic solution. The technique allowed them to trigger plasmons, ripples of energy that flow across the surface of particles when lit. These plasmons scatter light that changes with the alloy's composition.

"The dependence of the plasmon on alloy composition allowed us to record silver ion leaching kinetics in real time," said Al-Zubeidi, lead author of the study.

Al-Zubeidi noted films of gold and silver alloy have been in use for decades, often as antibacterial coatings, because silver ions are toxic to bacteria. "I think the silver release mechanism has been implied from studies of alloy films, but it's never been proven in a quantitative way," he said.

Initially, silver ions leach quickly from nanoparticles, which literally shrink as a result. As the process continues, the gold lattice in most instances releases all the silver over time, but about 25% of particles behave differently and silver leaching is incomplete.

Al-Zubeidi said what they observed suggests gold could be manipulated to stabilize the alloy nanoparticles.

"Usually silver leaching would last about two hours under our conditions," he said. "Then in the second stage, the reaction no longer happens on the surface. Instead, as the gold lattice rearranges, the silver ions have to diffuse through this gold-rich lattice to reach the surface, where they can be oxidized. That slows the reaction rate a lot.

"At some point, the particles passivate and no more leaching can happen," Al-Zubeidi said. "The particles become stable. So far, we've only looked at particles with a silver content of 80%-90%, and we found that a lot of the particles stop leaching silver when they reach a silver content of about 50%.

"That could be an interesting composition for applications like catalysis and electrocatalysis," he said. "We'd like to find a sweet spot around 50%, where the particles are stable but still have a lot of their silver-like properties."

Understanding such reactions could help researchers build a library of gold-silver catalysts and electrocatalysts for various applications.

Link said the Rice team welcomed the opportunity to work with Barcikowski, a leader in the field of nanoparticle synthesis via laser ablation. "This makes it possible to create alloy nanoparticles with various compositions and free of stabilizing ligands," he said.

"From our end, we had the perfect technique to study the process of silver ion leaching from many single-alloy nanoparticles in parallel via hyperspectral imaging," Landes added. "Only a single-particle approach was able to resolve the intra- and interparticle geometry."

"This effort will enable a new approach to generate nanostructured catalysts and new materials with unique electrochemical, optical and electronic properties," said Robert Mantz, program manager for electrochemistry at the Army Research Office, an element of the U.S. Army Combat Capabilities Command's Army Research Laboratory. "The ability to tailor catalysts is important to achieve the goal of reducing soldier-borne weight associated with power storage and generation and enable novel material synthesis."

Credit: 
Rice University

Army-funded research paves way for improved lasers, communications

image: Army-funded researchers designed and built two-dimensional arrays of closely packed micro-lasers that have the stability of a single micro-laser but can collectively achieve power density orders of magnitude higher, paving the way for improved lasers, high-speed computing and optical communications for the Army.

Image: 
Courtesy University of Pennsylvania

RESEARCH TRIANGLE PARK, N.C. -- New photonics research paves the way for improved lasers, high-speed computing and optical communications for the Army.

Photonics has the potential to transform all manner of electronic devices by storing and transmitting information in the form of light rather than electricity. Exploiting light's speed, and the way information can be layered into its various physical properties, can increase the speed of communication while reducing wasted energy; however, to achieve that, light sources such as lasers need to be smaller, stronger and more stable, researchers said.

"Single-mode, high power lasing is used in a wide range of applications that are important to the Army and help support the warfighter including optical communications, optical sensing and LIDAR ranging," said Dr. James Joseph, program manager, ARO, an element of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. "The research results out of UPenn mark a significant step towards creating more efficient and fieldable laser sources."

The way information can be layered with this technology could also have important implications for photonic computers and communication systems.

In order to preserve the information manipulated by a photonic device, its lasers must be exceptionally stable and coherent. So-called single-mode lasers eliminate noisy variations within their beams and improve their coherence, but as a result, are dimmer and less powerful than lasers that contain multiple simultaneous modes.

Researchers from the University of Pennsylvania and Duke University, with Army funding, designed and built two-dimensional arrays of closely packed micro-lasers that have the stability of a single micro-laser but can collectively achieve power density orders of magnitude higher. They published a study in the peer-reviewed journal Science demonstrating the super-symmetric micro-laser array.

Robots and autonomous vehicles that use LiDAR for optical sensing and ranging, as well as manufacturing and material-processing techniques that rely on lasers, are among the many other potential applications of this research.

"One seemingly straightforward method to achieve a high-power, single-mode laser is to couple multiple identical single-mode lasers together to form a laser array," said Dr. Liang Feng, associate professor in the departments of Materials Science and Engineering and Electrical and Systems Engineering at University of Pennsylvania. "Intuitively, this laser array would have an enhanced emission power, but because of the nature of complexity associated with a coupled system, it will also have multiple super-modes. Unfortunately, the competition between modes makes the laser array less coherent."

Coupling two lasers produces two super-modes, and in general the number of super-modes equals the number of coupled lasers, so it grows quadratically with the side length of the two-dimensional grids eyed for photonic sensing and LiDAR applications.
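That scaling can be illustrated with a toy model: treat the array as a set of coupled oscillators, so the supermodes are the eigenmodes of its coupling matrix and an n-by-n grid of nearest-neighbor-coupled lasers supports n-squared supermodes. The uniform coupling strength below is an arbitrary placeholder, and the model is a sketch rather than the study's actual device description.

```python
# Illustrative sketch: count the supermodes of an n x n grid of identical,
# nearest-neighbor-coupled micro-lasers by diagonalizing its coupling matrix.
# The uniform coupling strength is an arbitrary placeholder.
import numpy as np

def supermodes(n: int, coupling: float = 1.0) -> np.ndarray:
    """Return the eigenvalues (supermode detunings) of an n x n coupled array."""
    size = n * n
    h = np.zeros((size, size))
    for row in range(n):
        for col in range(n):
            i = row * n + col
            if col + 1 < n:                      # couple to the right-hand neighbor
                h[i, i + 1] = h[i + 1, i] = coupling
            if row + 1 < n:                      # couple to the neighbor below
                h[i, i + n] = h[i + n, i] = coupling
    return np.linalg.eigvalsh(h)

for n in (2, 3, 5):
    print(f"{n} x {n} array -> {len(supermodes(n))} supermodes")
# A 5 x 5 array like the one in the study supports 25 supermodes; without a
# dissipative super-partner, those modes compete and degrade coherence.
```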

"Single mode operation is critical because the radiance and brightness of the laser array increase with number of lasers only if they are all phase-locked into a single super-mode," said Xingdu Qiao, doctoral candidate at University of Pennsylvania. "Inspired by the concept of supersymmetry from physics, we can achieve this kind of phase-locked single-mode lasing in a laser array by adding a dissipative super-partner."

In particle physics, super-symmetry is the theory that all elementary particles of the two main classes, bosons and fermions, have a yet undiscovered super-partner in the other class. The mathematical tools that predict the properties of each particle's hypothetical super-partner can also be applied to the properties of lasers.

Compared with finding an elementary particle's super-partner, fabricating a single micro-laser's super-partner is relatively simple. The complexity lies in adapting super-symmetry's mathematical transformations to produce an entire super-partner array that has the correct energy levels to cancel out all but the desired single mode of the original.

Prior to this research, super-partner laser arrays could only have been one-dimensional, with each of the laser elements aligned in a row. By solving the mathematical relationships that govern the directions in which the individual elements couple to one another, this new study demonstrates an array with five rows and five columns of micro-lasers.
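What "correct energy levels" means can be sketched numerically. The toy example below uses plain spectral deflation as a stand-in for the authors' supersymmetric transformation: starting from the coupling matrix of a small one-dimensional chain, it builds a lossy partner whose mode spectrum matches every supermode of the original except the fundamental one. The chain size and coupling strength are arbitrary placeholders.

```python
# Toy sketch of the "partner spectrum" idea, using spectral deflation as a
# conceptual stand-in for the supersymmetric transformation in the study.
# The partner is built to share every eigenvalue of the original coupling
# matrix except the fundamental (extremal) one.
import numpy as np

n, coupling = 5, 1.0                                 # arbitrary chain size and coupling
h = coupling * (np.eye(n, k=1) + np.eye(n, k=-1))    # nearest-neighbor coupling matrix

evals = np.linalg.eigvalsh(h)                # supermodes of the original chain
fundamental, higher = evals[0], evals[1:]    # extremal eigenvalue taken as the "fundamental" mode of this toy

# Re-synthesize a partner matrix with exactly the higher-order spectrum,
# expressed in an arbitrary orthonormal basis.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(n - 1, n - 1)))
partner = q @ np.diag(higher) @ q.T

print("original supermodes:", np.round(evals, 3))
print("partner modes:      ", np.round(np.linalg.eigvalsh(partner), 3))
# Coupling a lossy partner like this to the original array dissipates every
# shared mode, leaving only the unmatched fundamental mode to lase.
```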

"When the lossy super-symmetric partner array and the original laser array are coupled together, all the super-modes except for the fundamental mode are dissipated, resulting in single-mode lasing with 25 times the power and more than 100 times the power density of the original array," said Dr. Zihe Gao, a post-doctoral fellow in Feng's program, "We envision a much more dramatic power scaling by applying our generic scheme for a much larger array even in three dimensions. The engineering behind it is the same."

The study also shows that the technique is compatible with the team's earlier research on vortex lasers, which can precisely control orbital angular momentum, or how a laser beam spirals around its axis of travel. The ability to manipulate this property of light could enable photonic systems encoded at even higher densities than previously imagined.

"Bringing super-symmetry to two-dimensional laser arrays constitutes a powerful toolbox for potential large-scale integrated photonic systems," Feng said.

Credit: 
U.S. Army Research Laboratory

Law professor argues for removing police from traffic enforcement

image: Jordan Blair Woods, University of Arkansas

Image: 
University of Arkansas

University of Arkansas law professor Jordan Blair Woods challenges the conventional wisdom that only police can enforce traffic laws.

In "Traffic Without Police," to be published in Stanford Law Review, Woods articulates a new legal framework for traffic enforcement, one that separates it from critical police functions, such as preventing and deterring crime, conducting criminal investigations and responding to emergencies.

If not the police, who then would enforce traffic laws? As Woods explains, jurisdictions would delegate most traffic enforcement to newly created traffic agencies. These public offices would operate independently from police departments and would hire their own traffic monitors to conduct and oversee traffic enforcement, including stops. Police officers would become involved in traffic stops only for serious violations that are a criminal offense or public threat.

"Traffic stops are the most frequent interaction between police and civilians today," Woods said. "And because we know traffic enforcement is a common gateway for funneling over-policed and marginalized communities into the criminal justice system, these stops are a persistent source of racial and economic injustice."

Previous research has shown that Black and Latinx motorists are disproportionately stopped by police for traffic violations. Compared to white motorists, these minority groups are also disproportionately questioned, frisked, searched, cited and arrested during traffic stops.

Many of these stops and intrusions are considered "pretextual" in the legal sense, meaning they enable officers to initiate contact with motorists and then search for evidence of non-traffic crime without reasonable suspicion or probable cause. In this sense, the traffic stop has functioned as a gateway that unfairly targets Black and Latinx motorists. Pretextual stops sometimes also lead to police mistreatment and abuse.

So far, there is one example of the reorganization that Woods articulates. In July 2020, as part of a comprehensive plan to make structural police reforms, the city of Berkeley, California, voted in favor of a proposal that removes police from conducting traffic stops. The proposal directs the city to create a transportation department staffed by unarmed civil servants who would be in charge of enforcing traffic laws. Other municipalities are considering similar reforms that would remove police from traffic enforcement to varying degrees.

In addition to the social benefits mentioned above, especially for minority communities, removing police from traffic enforcement and adopting the traffic law reforms Woods proposes could end the unfair and often arbitrary reliance on traffic ticket revenue to fund state and local budgets. Such reform could also reduce or eliminate the financial and professional incentives that contribute to aggressive and biased traffic enforcement, for example by prohibiting the use of traffic tickets issued as a measure of professional performance.

Credit: 
University of Arkansas