Tech

Microbiome diversity builds a better mouse model

The path to building a better mouse model starts with the microorganisms that colonize it. According to a new study, lab mice born with natural microbiota and pathogens may provide greater translational research value for immunology than widely used, traditional laboratory animals - something the study authors demonstrated in two preclinical drug studies involving their engineered mice.

In many ways, the tiny laboratory mouse is the quintessential hero of biomedical research; generations of mice have served as invaluable models in the quest to better understand human health and immunology, and untold millions owe their health to medical treatments that began as a simple test in a tiny mouse. However, as a translational model, the laboratory mouse is far from perfect. Most preclinical studies that succeed in mice fail to transition into human bedside practice, Stephan Rosshart and colleagues emphasize. Recent studies have shown that inbred strains of laboratory mice lack the "wild" range of microbiome diversity, which could greatly distort an accurate portrayal of natural immune system function.

To address these shortcomings and build a more relevant mouse model, Rosshart and colleagues implanted lab-strain embryos into wild mice to create "wildlings" - mice that harbor the same diverse bacterial, viral and fungal communities as wild mice, but also maintain the genetic uniformity of laboratory animals. To evaluate the translational potential of the wildling model, Rosshart et al. recreated two preclinical drug studies using wildling mice where lab mice had previously failed. The results show that wildling mice more accurately predicted human response to the drugs and could have prevented failed clinical trials in both cases. "Due to the resemblance of the wildling microbiome to that of humans, wildling mice will likely provide an elegant and more robust preclinical model to predict success in human clinical trials," write Samuel Nobs and Eran Elinav in a related Perspective.

Credit: 
American Association for the Advancement of Science (AAAS)

Light for the nanoworld

image: By bombarding thin molybdenum sulfide layers with helium ions, physicists at the Technical University of Munich (TUM) succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies.

Image: 
Christoph Hohmann / MCQST

An international team headed up by Alexander Holleitner and Jonathan Finley, physicists at the Technical University of Munich (TUM), has succeeded in placing light sources in atomically thin material layers with an accuracy of just a few nanometers. The new method allows for a multitude of applications in quantum technologies, from quantum sensors and transistors in smartphones through to new encryption technologies for data transmission.

Today's circuits on chips rely on electrons as the information carriers. In the future, photons, which transmit information at the speed of light, will be able to take on this task in optical circuits. Quantum light sources, which are then connected with quantum fiber optic cables and detectors, are needed as basic building blocks for such new chips.

An international team headed up by TUM physicists Alexander Holleitner and Jonathan Finley has now succeeded in creating such quantum light sources in atomically thin material layers and placing them with nanometer accuracy.

First step towards optical quantum computers

"This constitutes a first key step towards optical quantum computers," says Julian Klein, lead author of the study. "Because for future applications the light sources must be coupled with photon circuits, waveguides for example, in order to make light-based quantum calculations possible."

The critical point here is the exact and precisely controllable placement of the light sources. It is possible to create quantum light sources in conventional three-dimensional materials such as diamond or silicon, but they cannot be precisely placed in these materials.

Deterministic defects

The physicists used a layer of the semiconductor molybdenum disulfide (MoS2), just three atoms thick, as the starting material. They irradiated this with a helium ion beam which they focused on a surface area of less than one nanometer.

In order to generate optically active defects, the desired quantum light sources, molybdenum or sulfur atoms are precisely hammered out of the layer. The imperfections are traps for so-called excitons, electron-hole pairs, which then emit the desired photons.

Technically, the new helium ion microscope at the Walter Schottky Institute's Center for Nanotechnology and Nanomaterials, which can be used to irradiate such material with an unparalleled lateral resolution, was of central importance for this.

On the road to new light sources

Together with theorists at TUM, the Max Planck Society, and the University of Bremen, the team developed a model that also theoretically describes the energy states observed at the imperfections.

In the future, the researchers also want to create more complex light source patterns, in lateral two-dimensional lattice structures for example, in order to thus also research multi-exciton phenomena or exotic material properties.

This is the experimental gateway to a world that has long been described only in theory, within the context of the so-called Bose-Hubbard model, which seeks to account for complex processes in solids.

Quantum sensors, transistors and secure encryption

And there may be progress not only in theory, but also with regard to possible technological developments. Since the light sources always have the same underlying defect in the material, they are theoretically indistinguishable. This allows for applications which are based on the quantum-mechanical principle of entanglement.

"It is possible to integrate our quantum light sources very elegantly into photon circuits," says Klein. "Owing to the high sensitivity, for example, it is possible to build quantum sensors for smartphones and develop extremely secure encryption technologies for data transmission."

Credit: 
Technical University of Munich (TUM)

Physicists make graphene discovery that could help develop superconductors

image: Left: This image, taken with a scanning tunneling microscope, shows a moiré pattern in "magic angle" twisted bilayer graphene. Right: Scanning tunneling charge spectroscopy, a technique invented by Professor Eva Andrei's group, reveals correlated electrons as shown by the alternating positive (blue) and negative (red) charge stripes that formed in the "magic angle" twisted bilayer graphene seen in the image at left.

Image: 
Yuhang Jiang/Rutgers University-New Brunswick

When two mesh screens are overlaid and one is offset, beautiful patterns appear. These "moiré patterns" have long intrigued artists, scientists and mathematicians and have found applications in printing, fashion and banknotes.

Now, a Rutgers-led team has paved the way to solving one of the most enduring mysteries in materials physics by discovering that in the presence of a moiré pattern in graphene, electrons organize themselves into stripes, like soldiers in formation.

Their findings, published in the journal Nature, could help in the search for quantum materials, such as superconductors, that would work at room temperature. Such materials would dramatically reduce energy consumption by making power transmission and electronic devices more efficient.

"Our findings provide an essential clue to the mystery connecting a form of graphene, called twisted bilayer graphene, to superconductors that could work at room temperature," said senior author Eva Y. Andrei, Board of Governors professor in Rutgers' Department of Physics and Astronomy in the School of Arts and Sciences at Rutgers University-New Brunswick.

Graphene - an atomically thin layer of the graphite used in pencils - is a mesh made of carbon atoms that looks like a honeycomb. It's a great conductor of electricity and much stronger than steel.

The Rutgers-led team studied twisted bilayer graphene, created by superimposing two layers of graphene and slightly misaligning them. This creates a "twist angle" that results in a moiré pattern which changes rapidly when the twist angle changes.

In 2010, Andrei's team discovered that in addition to being pretty, moiré patterns formed with twisted bilayer graphene have a dramatic effect on the electronic properties of the material. This is because the moiré pattern slows down the electrons that conduct electricity in graphene and zip past each other at great speeds.

At a twist angle of about 1.1 degrees - the so-called magic angle - these electrons come to an almost dead stop. The sluggish electrons start seeing each other and interact with their neighbors to move in lock-step. As a result, the material acquires amazing properties such as superconductivity or magnetism.
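For a feel for the length scales involved, the period of the moiré superlattice in a twisted bilayer follows the standard geometric relation λ ≈ a / (2 sin(θ/2)), where a is graphene's lattice constant (about 0.246 nm). This is textbook geometry rather than a result of the Rutgers study; a minimal sketch:

```python
import math

# Standard moire-superlattice relation for two identical lattices twisted by
# an angle theta: wavelength = a / (2 * sin(theta / 2)).
# (Not from the press release; a = 0.246 nm is graphene's lattice constant.)
def moire_wavelength_nm(twist_deg, a_nm=0.246):
    return a_nm / (2.0 * math.sin(math.radians(twist_deg) / 2.0))

print(f"Moire period at the magic angle (1.1 deg): {moire_wavelength_nm(1.1):.1f} nm")
# -> roughly 13 nm, about 50 times larger than the atomic spacing
```

At the 1.1-degree magic angle this gives a superlattice period of roughly 13 nanometers, about 50 times the atomic spacing, which is why the moiré pattern so strongly reshapes the material's electronic behavior.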

Using a technique invented by Andrei's group to study twisted bilayer graphene, the team discovered a state where the electrons organize themselves into stripes that are robust and difficult to break.

"Our team found a close resemblance between this feature and similar observations in high-temperature superconductors, providing new evidence of the deep link underlying these systems and opening the way to unraveling their enduring mystery," Andrei said.

Credit: 
Rutgers University

Storytelling bots learn to punch up their last lines

PITTSBURGH--Nothing disappoints quite like a good story with a lousy finish. So researchers at Carnegie Mellon University who work in the young field of automated storytelling don't think they're getting ahead of themselves by devising better endings.

The problem is that most algorithms for generating the end of a story tend to favor generic sentences, such as "They had a great time," or "He was sad." Those may be boring, but Alan Black, a professor in CMU's Language Technologies Institute, said they aren't necessarily worse than a non sequitur such as "The UFO came and took them all away."

In a paper presented Thursday, Aug. 1, at the Second Workshop on Storytelling in Florence, Italy, Black and students Prakhar Gupta, Vinayshekhar Bannihatti Kumar and Mukul Bhutani presented a model for generating endings that are both relevant to the story and diverse enough to be interesting.

One trick to balancing these goals, Black said, is to require the model to incorporate some key words into the ending that are related to those used early in the story. At the same time, the model is rewarded for using some rare words in the ending, in hopes of choosing an ending that is not totally predictable.
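The CMU system itself is a neural sequence model, but the two pressures Black describes - reuse key content words from the story, while rewarding rarer word choices - can be illustrated with a toy scoring function. The corpus frequencies and weights below are invented for illustration and are not taken from the paper:

```python
import math
from collections import Counter

# Toy illustration (not the CMU model) of the two pressures described above:
# reward endings that reuse content words from the story (relevance) and that
# contain rarer words (diversity). Word frequencies here are made up.
CORPUS_FREQ = Counter({"the": 1000, "she": 400, "was": 380, "happy": 50,
                       "pageant": 3, "won": 20, "friend": 40, "megan": 2})

def score_ending(story, ending, alpha=1.0, beta=0.5):
    story_words = set(story.lower().split())
    ending_words = ending.lower().replace(".", "").split()
    overlap = sum(w in story_words for w in ending_words) / len(ending_words)
    # Rarity bonus: inverse log frequency, higher for uncommon words.
    rarity = sum(1.0 / math.log(CORPUS_FREQ.get(w, 1) + math.e)
                 for w in ending_words) / len(ending_words)
    return alpha * overlap + beta * rarity

story = "Megan was new to the pageant world and this was her very first one"
for ending in ["She was happy to have a new friend.",
               "Megan won the pageant competition."]:
    print(f"{score_ending(story, ending):.2f}  {ending}")
```

In this toy scoring, an ending that names the pageant and its outcome beats a generic "new friend" ending, mirroring the trade-off described above.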

Consider this bot-generated story: "Megan was new to the pageant world. In fact, this was her very first one. She was really enjoying herself, but was also quite nervous. The results were in and she and the other contestants walked out." Existing algorithms generated these possible endings: "She was disappointed the she couldn't have to learn how to win," and "The next day, she was happy to have a new friend." The CMU algorithm produced this ending: "Megan won the pageant competition."

None of the selections represent deathless prose, Black acknowledged, but the endings generated by the CMU model scored higher than those from the older models, both in automatic evaluation and in ratings by three human reviewers.

Researchers have worked on conversational agents for years, but automated storytelling presents new technical challenges.

"In a conversation, the human's questions and responses can help keep the computer's responses on track," Black said. "When the bot is telling a story, however, that means it has to remain coherent for far longer than it does in a conversation."

Automated storytelling might be used for generating substories in videogames, Black said, or for generating stories that summarize presentations at a conference. Another application might be to generate instructions for repairing something or using complicated equipment that can be customized to a user's skill or knowledge level, or to the exact tools or equipment available to the user.

Credit: 
Carnegie Mellon University

BU study: Youth empowerment program can prevent childhood obesity

First-of-its-kind study had Worcester youths create their own narratives about reducing sugary drink consumption, successfully leading to behavior changes and preventing excess weight gain.

A new pilot study led by Boston University School of Public Health (BUSPH) researchers is the first to use youth-produced narratives to empower youth to reduce sugary drink consumption and obesity risk. In the study, published in the International Journal of Behavioral Nutrition and Physical Activity, participants in the pilot program at the Boys and Girls Club (BGC) of Worcester and their parents consumed fewer sugary drinks and more water over a six-month period than children and parents at a demographically-similar BGC in a nearby city.

"Youth created their own narratives around why it was important for them--not their parents, teachers, or researchers like myself--to change the types of beverages they were drinking," says study lead author Dr. Monica Wang, assistant professor of community health sciences at BUSPH. "This type of empowerment strategy recognizes youth as experts in their own lives, and may be particularly engaging for youth of color."

After a training from Wang and her colleagues, BGC staff in the pilot study led an ethnically diverse group of nine- to twelve-year-old youths in activities that promoted replacing sugar-sweetened beverages with water, including blind taste tests of flavored water, a corner store scavenger hunt, and role play skits about ways to drink water and what to do when tempted by sugary drinks. The staff also guided the participants in creating written, audio, and video narratives to promote replacing sugar-sweetened beverages with water and provide strategies for doing so. The youths then taught their parents or guardians what they had learned each week, shared their narratives, and led a culminating BGC community event at the end of the six-week program.

"Most obesity prevention programs target multiple behaviors, but we found that a youth empowerment program targeting one dietary behavior could prevent obesity risk among youth," Wang says. "Reducing sugary drinks through youth empowerment may be a promising starting point for families to engage in additional healthy eating efforts down the road."

Wang notes that 12 BGCs have expressed interest in the program for a future, larger-scale study.

Credit: 
Boston University School of Medicine

Artificial intelligence could help air travelers save a bundle

image: Industrial and enterprise systems engineering professor Lavanya Marla and collaborators used artificial intelligence to design a customized pricing model for airline customers.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Researchers are using artificial intelligence to help airlines price ancillary services such as checked bags and seat reservations in a way that is beneficial to customers' budget and privacy, as well as to the airline industry's bottom line.

When airlines began unbundling the costs of flights and ancillary services in 2008, many customers saw it as a tactic to quote a low base fare and then add extras to boost profits, the researchers said. In a new study, the researchers use unbundling to meet customer needs while also maximizing airline revenue with intelligent, individualized pricing models offered in real time as a customer shops.

The results of the study will be presented at the 2019 Conference on Knowledge Discovery and Data Mining on Aug. 6 in Anchorage, Alaska.

Airlines operate on very slim margins, the researchers said. While they earn a considerable portion of their revenue on ancillary purchases, unbundling can provide cost-saving opportunities to customers, as well. Customers don't have to pay for things they don't need, and discounts offered to customers who may otherwise pass on the extras can help convert a "no sale" into a purchase.

"Most airlines offer every customer the same price for a checked bag," said Lavanya Marla, a professor of industrial and enterprise systems engineering and study co-author. "However, not every customer has the same travel and budget needs. With AI, we can use information gathered while they shop to predict a price point at which they will be comfortable."

To hit that sweet spot, the pricing models use a combination of AI techniques - machine learning and deep neural networks - to track an individual customer's flight preferences and assign a level of demand to them, the researchers said. The models consider factors such as flight origin, destination, the timing of travel and trip duration to assign that demand value.
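As a rough illustration of how such individualized pricing might work - this is a hypothetical sketch, not Deepair's production models, and every weight, feature name and price in it is invented - a demand model can map trip features to a purchase probability at each candidate price, and the offer engine can then quote the price with the highest expected revenue:

```python
import math

# Hypothetical sketch (not Deepair's system): map trip features to an estimated
# willingness to buy a checked bag, then quote the revenue-maximizing price
# from a small, capped discount ladder. All weights are invented.
def purchase_prob(price, features, w_trip_days=-0.35, w_intl=1.2, base=1.5,
                  price_sens=-0.12):
    # Logistic model of P(buy ancillary | price, trip features).
    z = base + w_trip_days * features["trip_days"] \
             + w_intl * features["international"] + price_sens * price
    return 1.0 / (1.0 + math.exp(-z))

def best_offer(features, price_ladder=(30, 25, 20, 15)):
    # Expected revenue = price * purchase probability, over a capped ladder.
    return max(price_ladder, key=lambda p: p * purchase_prob(p, features))

shopper = {"trip_days": 3, "international": 0}
print(best_offer(shopper))  # short domestic trip -> the deepest discount wins
```

In this toy example a short domestic trip gets the deepest discount, converting a likely "no sale" into a purchase, which matches the intuition in Marla's example below.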

"For example, a customer who is traveling for a few days may not be motivated to pay for a checked bag," Marla said. "But, if you discount it to them at the right price - where convenience outweighs cost - you can complete that sales conversion. That is good for the customer and good for the airline."

In the study, the University of Illinois and Deepair Solutions team collaborated with a European airline over a period of approximately six months to gather data and test their models. While shopping, customers logged in to a pricing page where a predetermined percentage of customers were offered discounts on ancillary services.

"We started by offering the AI-modeled discounts to 5% of the customers who logged in," said Kartik Yellepeddi, a co-founder of Deepair Solutions and study co-author. "The airline then allowed us to adjust this percentage, as well as to experiment with various AI techniques used in our models, to obtain a robust data set."

The airline began to see an uptick in ancillary sales conversions and revenue per customer, and allowed the researchers to offer discounts to all of the customers who logged in.

"Because of the unique nature of personalized pricing, we built a high level of equity and privacy into our models," Yellepeddi said. "There is a maximum price not to be exceeded, and we do not track customer demographics information like income, race, gender, etc., nor do we track a single customer during multiple visits to a sale site. Each repeat visit is viewed as a separate customer."

With an increase seen in ancillary sales conversions and ancillary revenue per offer - up by 17% and 25%, respectively, according to the study - the team said AI can help the airline industry move away from the concept of the "average customer" and tailor their offers to "individual travelers."

"In recent years, the airline industry has felt that it has been losing touch with its customer base," Marla said. "The industry is eager to find new ways to meet customer needs and to retain customer loyalty."

Deepair Solutions is an artificial intelligence company serving the travel industry. The company is headquartered in London and has an office in Dallas. Lavanya Marla reports no conflicts of interest related to this research at the time of publication.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Hubble uncovers a 'heavy metal' exoplanet shaped like a football

image: This artist's illustration shows an alien world that is losing magnesium and iron gas from its atmosphere. The observations represent the first time that so-called "heavy metals" -- elements more massive than hydrogen and helium -- have been detected escaping from a hot Jupiter, a large gaseous exoplanet orbiting very close to its star. The planet, known as WASP-121b, orbits a star brighter and hotter than the Sun. The planet is so dangerously close to its star that its upper atmosphere reaches a blazing 4,600 degrees Fahrenheit, about 10 times greater than that of any known planetary atmosphere. A torrent of ultraviolet light from the host star is heating the planet's upper atmosphere, which is causing the magnesium and iron gas to escape into space. Observations by Hubble's Space Telescope Imaging Spectrograph have detected the spectral signatures of magnesium and iron far away from the planet. The planet's "hugging" distance from the star means that it is on the verge of being ripped apart by the star's gravitational tidal forces. The powerful gravitational forces have altered the planet's shape so that it appears more football shaped. The WASP-121 system is about 900 light-years from Earth.

Image: 
NASA, ESA, and J. Olmsted (STScI)

How can a planet be "hotter than hot?" The answer is when heavy metals are detected escaping from the planet's atmosphere, instead of condensing into clouds.

Observations by NASA's Hubble Space Telescope reveal magnesium and iron gas streaming from the strange world outside our solar system known as WASP-121b. The observations represent the first time that so-called "heavy metals"--elements heavier than hydrogen and helium--have been spotted escaping from a hot Jupiter, a large, gaseous exoplanet very close to its star.

Normally, hot Jupiter-sized planets are still cool enough inside to condense heavier elements such as magnesium and iron into clouds.

But that's not the case with WASP-121b, which is orbiting so dangerously close to its star that its upper atmosphere reaches a blazing 4,600 degrees Fahrenheit. The temperature in WASP-121b's upper atmosphere is about 10 times greater than that of any known planetary atmosphere. The WASP-121 system resides about 900 light-years from Earth.

"Heavy metals have been seen in other hot Jupiters before, but only in the lower atmosphere," explained lead researcher David Sing of the Johns Hopkins University in Baltimore, Maryland. "So you don't know if they are escaping or not. With WASP-121b, we see magnesium and iron gas so far away from the planet that they're not gravitationally bound."

Ultraviolet light from the host star, which is brighter and hotter than the Sun, heats the upper atmosphere and helps lead to its escape. In addition, the escaping magnesium and iron gas may contribute to the temperature spike, Sing said. "These metals will make the atmosphere more opaque in the ultraviolet, which could be contributing to the heating of the upper atmosphere," he explained.

The sizzling planet is so close to its star that it is on the cusp of being ripped apart by the star's gravity. This hugging distance means that the planet is football shaped due to gravitational tidal forces.

"We picked this planet because it is so extreme," Sing said. "We thought we had a chance of seeing heavier elements escaping. It's so hot and so favorable to observe, it's the best shot at finding the presence of heavy metals. We were mainly looking for magnesium, but there have been hints of iron in the atmospheres of other exoplanets. It was a surprise, though, to see it so clearly in the data and at such great altitudes so far away from the planet. The heavy metals are escaping partly because the planet is so big and puffy that its gravity is relatively weak. This is a planet being actively stripped of its atmosphere."

The researchers used the observatory's Space Telescope Imaging Spectrograph to search in ultraviolet light for the spectral signatures of magnesium and iron imprinted on starlight filtering through WASP-121b's atmosphere as the planet passed in front of, or transited, the face of its home star.
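The underlying measurement rests on the standard transit relation: the fraction of starlight blocked is roughly (Rp/Rs)^2, so at ultraviolet wavelengths where escaping magnesium or iron absorbs, the planet's effective radius, and therefore the transit depth, grows. The numbers below are illustrative assumptions, not values from the study:

```python
# Illustrative transit-depth arithmetic (standard relation, not study values):
# depth ~ (R_planet / R_star)^2, so an atmosphere that is opaque at a given
# wavelength enlarges the effective planet radius and deepens the transit there.
R_SUN_KM = 695_700.0
R_JUP_KM = 71_492.0

r_star = 1.46 * R_SUN_KM      # assumed stellar radius (illustrative)
r_planet = 1.81 * R_JUP_KM    # assumed planetary radius (illustrative)

depth_continuum = (r_planet / r_star) ** 2
depth_absorbing = ((1.3 * r_planet) / r_star) ** 2  # 30% larger effective radius

print(f"continuum depth: {depth_continuum:.2%}")
print(f"depth in a strong UV metal line: {depth_absorbing:.2%}")
```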

This exoplanet is also a perfect target for NASA's upcoming James Webb Space Telescope to search in infrared light for water and carbon dioxide, which can be detected at longer, redder wavelengths. The combination of Hubble and Webb observations would give astronomers a more complete inventory of the chemical elements that make up the planet's atmosphere.

The WASP-121b study is part of the Panchromatic Comparative Exoplanet Treasury (PanCET) survey, a Hubble program to look at 20 exoplanets, ranging in size from super-Earths (several times Earth's mass) to Jupiters (which are over 100 times Earth's mass), in the first large-scale ultraviolet, visible, and infrared comparative study of distant worlds.

The observations of WASP-121b add to the developing story of how planets lose their primordial atmospheres. When planets form, they gather an atmosphere containing gas from the disk in which the planet and star formed. These atmospheres consist mostly of the primordial, lighter-weight gases hydrogen and helium, the most plentiful elements in the universe. This atmosphere dissipates as a planet moves closer to its star.

"The hot Jupiters are mostly made of hydrogen, and Hubble is very sensitive to hydrogen, so we know these planets can lose the gas relatively easily," Sing said. "But in the case of WASP-121b, the hydrogen and helium gas is outflowing, almost like a river, and is dragging these metals with them. It's a very efficient mechanism for mass loss."

Credit: 
NASA/Goddard Space Flight Center

US infrastructure unprepared for increasing frequency of extreme storms

WASHINGTON - Current design standards for United States hydrologic infrastructure are unprepared for the increasing frequency and severity of extreme rainstorms, meaning structures like retention ponds and dams will face more frequent and severe flooding, according to a new study.

Extreme weather events are on the rise, but U.S. water management systems use outdated design guidelines. New research, published in the AGU journal Geophysical Research Letters, analyzed data from multiple regions throughout the U.S. and found the rising number of extreme storms combined with outdated building criteria could overwhelm hydrologic structures like stormwater systems.

The new study is particularly timely in light of recent storms and flash floods along the East Coast.

"The take-home message is that infrastructure in most parts of the country is no longer performing at the level that it's supposed to, because of the big changes that we've seen in extreme rainfall," said Daniel Wright, a hydrologist at the University of Wisconsin-Madison and lead author of the new study.

Engineers often use statistical estimates called IDF curves to describe the intensity, duration, and frequency of rainfall in each area. The curves, published by the National Oceanic and Atmospheric Administration (NOAA), are created using statistical methods that assume weather patterns remain static over time.

"Design engineers at cities, consulting companies, and counties use this for different purposes, like infrastructure design management, infrastructure risk assessment and so forth. It has a lot of engineering applications," said Amir Aghakouchak, a hydrologist at the University of California, Irvine who was not involved with the new study.

But climate change is causing extreme rainfall events to occur more often in many regions of the world, something IDF curves don't take into account. One measure of extreme rainfall is the 100-year storm, a storm that has a one percent chance of happening in a given year, or a statistical likelihood of happening once in 100 years on average.
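That one-percent-per-year definition translates directly into risk over an infrastructure design life: the chance of seeing at least one such storm in n years is 1 - (1 - p)^n. The short calculation below uses this standard relation; it is not a result from the new study:

```python
# Standard exceedance arithmetic implied by the "100-year storm" definition
# (1% chance in any given year), not results from the study itself.
def prob_at_least_one(annual_prob, years):
    return 1.0 - (1.0 - annual_prob) ** years

for return_period in (100, 40):   # 40 yr reflects the shift described below
    p = 1.0 / return_period
    print(f"{return_period:>3}-yr storm over a 50-yr design life: "
          f"{prob_at_least_one(p, 50):.0%} chance of at least one")
```

Over a 50-year design life, a true 100-year storm has about a 40 percent chance of occurring at least once; if the effective return period shrinks to 40 years, as the study suggests for some regions, that chance rises to roughly 70 percent.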

Wright and his colleagues wanted to know how existing IDF curves compare with recent changes in extreme rainfall. They analyzed records from more than 900 weather stations across the U.S. from 1950 to 2017 and recorded the number of times extreme storms, like 100-year storms, exceeded design standards. For example, in the eastern United States, extreme rainstorm events happened 85 percent more often in 2017 than they did in 1950. In the western U.S., these storms are appearing 51 percent more often now than they once did.

The scientists found that in most of the country the growing number of extreme rainstorms can be linked to warming temperatures from climate change, although natural events, such as El Niño, also occasionally affect the Southeast's climate.

By comparing the number of storms that actually happened against the number predicted by IDF curves, the researchers also showed the potential consequences for U.S. infrastructure. In some regions, for example, infrastructure designed to withstand extreme rainstorms could face these storms every 40 years instead of every 100 years.

"Infrastructure that has been designed to these commonly-used standards is likely to be overwhelmed more often than it is supposed to be," Wright said.

The researchers hope the findings will encourage climate scientists, hydrologists, and engineers to collaborate and improve U.S. hydrologic infrastructure guidelines.

"We really need to get the word out about just how far behind our design standards are from there they should be," Wright said.

Credit: 
American Geophysical Union

Police officers' exposure to peers accused of misconduct shapes their subsequent behavior

Police officers who had a greater proportion of colleagues previously named in use-of-force complaints were more likely to be named in subsequent complaints

Authors recommend police departments consider how assigning officers with histories of using excessive force affects the behavior of other officers

EVANSTON, Ill. --- A new Northwestern University study investigated how Chicago police officers' exposure to peers who had been accused of misconduct shaped their involvement in subsequent excessive force cases.

"We found that officers who were involved in complaints related to this type of force were more likely to work with officers with a history of such behaviors, suggesting that officers' peers may serve as social conduits through which misconduct is learned and transmitted," said Andrew V. Papachristos, senior author of the study and professor of sociology in the Weinberg College of Arts and Sciences at Northwestern and faculty fellow at the University's Institute for Policy Research.

Researchers examined the records of more than 8,000 Chicago police officers named in multiple complaints from 2007 to 2015 to determine the role played by social networks in officers' misconduct.

Prior research on this topic has analyzed individual or departmental factors that may be associated with police officers' problematic behaviors.

This study is one of the first to analyze police officers' work networks -- specifically, their involvement with other officers in misconduct complaints -- to determine how misconduct may be socially transmitted across deviant officers.

The study classified complaints as use of force if they entailed excessive force (use of a firearm, use of a conductive energy device) or unnecessary physical contact, or if they involved an act that resulted in injury or death.

The researchers found that police officers who had a greater proportion of colleagues previously named in use-of-force complaints were more likely to be named in subsequent use-of-force complaints.

"These findings held even after controlling for officers' characteristics and for the opportunity of being named in future use-of-force complaints," said Papachristos, also director of the Northwestern Neighborhood & Network Initiative.

The study's authors suggest that exposure to such behavior in their networks may lessen officers' perceptions of the risks associated with engaging in misconduct. Thus, not only do officers learn patterns of deviance from their colleagues, but the networks in which they associate with those colleagues alter their perceptions of the risks connected to misconduct, normalizing behaviors that otherwise would be considered deviant or against training and regulations.

The authors recommend that police departments consider how assigning officers with histories of using force excessively affects the behavior of other officers.

"Temporarily removing officers named in complaints of this kind from the field until problematic behaviors are addressed might limit the negative consequences of exposure," Papachristos said.

The authors acknowledge the study's limitations.

First, the number of complaints analyzed likely underestimates the full scope of deviant behavior in a police department because of under-reporting. Second, because the researchers did not have information on officers' beats or other geographic assignments, they could not determine whether complaints varied based on assignments to different kinds of neighborhoods (e.g., high-crime areas). Third, the study captured officers' networks of others involved in prior misconduct, so it likely underestimated the broader social structure in which they operated by emphasizing only relationships with a potentially negative influence. And finally, the study's findings are based on one agency, the Chicago Police Department, so they cannot be applied to other police departments.

"Network exposure and excessive use of force: Investigating the social transmission of police misconduct" will publish Aug. 1 in the journal Criminology & Public Policy. In addition to Papachristos, co-authors include Marie Quellet, Georgia State University; Sadaf Hashimi, Rutgers University; and Jason Gravel, University of Pennsylvania Injury Science Center.

'Misconduct networks'

In another related working paper, Papachristos and co-authors George Wood of Northwestern and Daria Roithmayr of the University of Southern California Gould School of Law, recreate police "misconduct networks" -- defining the demographics of misconduct and examining what contributes to misconduct such as gender, race and age/tenure.

"Perhaps one of the biggest takeaways from our 'Network Structure' study is the concentration of misconduct is a major feature of police networks," Papachristos said. "The modal number of civilian complaints is zero and average is something like 1.3. This means that, on average, cops get less than two complaints over a 10-year period. And less than 3% of all officers are named in about 27% of all complaints."

Additional findings of "The Network Structure of Police Misconduct" include:

Officers receiving complaints tend to be males, 25 to 45 years old

76% of officers named in at least one civilian complaint have been co-named alongside another officer

Pairing more experienced officers with less experienced officers makes them less likely to engage in misconduct

The researchers said there are actionable steps to improve complaints, including:

Pay attention to staffing for officers assigned to high-crime communities (race, gender and age); civilians prefer "mixed race" pairings of officers

Increase the percentage of officers from underrepresented groups

Credit: 
Northwestern University

Study casts doubt on evidence for 'gold standard' psychological treatments

LAWRENCE -- A paper appearing today in a special edition of the Journal of Abnormal Psychology questions much of the statistical evidence underpinning therapies designated as "Empirically Supported Treatments," or ESTs, by Division 12 of the American Psychological Association.

For years, ESTs have represented a "gold standard" in research-supported psychotherapies for conditions like depression, schizophrenia, eating disorders, substance abuse, generalized anxiety and post-traumatic stress disorder. But recent concerns about the replicability of research findings in clinical psychology prompted the re-examination of their evidence.

The new study, led by researchers at the University of Kansas and University of Victoria, concluded that while underlying evidence for a small number of empirically supported treatments is strong, "power and replicability estimates were concerningly low across almost all ESTs, and individually, some ESTs scored poorly across multiple metrics."

"By some accounts, there are over 600 approaches to psychotherapy, and some are going to be more effective than others," said co-lead author Alexander Williams, program director of psychology and director of the Psychological Clinic for KU's Edwards Campus. "Since the 1970s, people have been trying to figure out which are most effective using clinical trials just like in medicine, where some subjects are assigned to a therapy and some to a control group. Division 12 of the APA has a list of therapies with strong scientific evidence from these trials, called ESTs. Ours is the first attempt anyone has made using this broad suite of statistical tools to evaluate the EST literature."

The researchers analyzed 78 ESTs with "strong" or "modest" research support, as determined by the APA's Society of Clinical Psychology Division 12, from more than 450 published articles. Four types of evidential value were assessed -- rates of misreported statistics, power, R-index and Bayes factors. Among the key conclusions:

56% (44 of 78) of ESTs fared poorly across most metric scores.

19% (15 of 78) of ESTs fared strongly across most metric scores.

52% (26 of 50) of ESTs deemed by Division 12 of the APA as having Strong Research Support fared poorly across most metric scores.

22% (11 of 50) of ESTs deemed by Division 12 of the APA as having Strong Research Support fared strongly across most metric scores.

64% (18 of 28) of ESTs deemed by Division 12 of the APA as having Modest Research Support fared poorly across most metric scores.

14% (4 of 28) of ESTs deemed by Division 12 of the APA as having Modest Research Support fared strongly across most metric scores.
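As one concrete example of the metrics assessed above, statistical power can be computed for a simple two-arm trial once an effect size and sample size are assumed. The sketch below illustrates that kind of calculation with the statsmodels library; the effect size and sample size are invented, and this is not the authors' actual analysis pipeline:

```python
from statsmodels.stats.power import TTestIndPower

# Illustration of the "power" metric above (not the authors' actual pipeline):
# achieved power of a two-arm trial for an assumed effect size and sample size.
effect_size = 0.5   # Cohen's d, invented for illustration
n_per_arm = 25      # invented trial size

power = TTestIndPower().power(effect_size=effect_size, nobs1=n_per_arm,
                              alpha=0.05, ratio=1.0)
print(f"Achieved power: {power:.2f}")   # ~0.41, well short of the usual 0.80 target
```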

"Our findings don't mean that therapy doesn't work, they don't mean that anything goes or everything is the same," said co-lead author John Sakaluk, assistant professor in the University of Victoria's Department of Psychology, who earned his doctorate at KU. "But based on this evidence, we don't know if most therapies designated as ESTs do actually have better science on their side compared to alternative, research-supported forms of therapy."

According to Williams, the field of clinical psychology may be ripe for a broad-scale reassessment of therapies that were thought to be supported by rigorous scientific evidence until now.

"Medical researchers coined a term called 'medical reversal,'" the KU researcher said. "Sometimes these are medical practices that doctors use across the country, but they are discontinued after it's found they don't work or aren't more effective than less-costly alternatives -- or they're actually harmful. Pending replications of our results, we may need broad systems-level psychotherapy reversals. Some of these ESTs are widely implemented in big systems like the Veterans Health Administration. If we find evidence for them isn't as strong as believed, it may be worth looking at. Let's say, hypothetically, there are two therapies for depression, and people have said, 'Well, Therapy A has stronger evidence for it than Therapy B.' But we know Therapy B works, too, and it's less costly. Today, if we find the evidence for Therapy A isn't actually stronger, it may be time to promote Therapy B."

Further, Williams advised clinicians and patients to continually evaluate progress in therapy and adjust therapeutic approaches based more on patient progress than research evidence of a given therapy's effectiveness.

"For clinicians and clients, this speaks to the importance of frequently assessing how well a client is doing in therapy," he said. "Routine outcome monitoring is always a good thing to be doing, but it may be a particularly good idea based on new evidence that we don't know if some therapies are effective. So, if I'm a patient, I want to assess how I'm doing -- and there are different measures for doing that. This study suggests it's even more important than previously believed."

For the research community, the authors recommended a reassessment of the size and power of clinical trials and more collaborations between labs to increase the precision of analyses, along with fresh approaches to how research is appraised, published and evaluated.

"One of the things that becomes really obvious when you look at the literature is researchers are collecting and analyzing their data in ways that are extremely flexible," Sakaluk said. "If you don't follow certain rules of statistical inference, you can inadvertently trick yourself into claiming effects that aren't really there. For EST research, it may become important to define in advance what researchers are going to do -- like how they'll analyze data -- and go on record in a way that restricts what they're going to do. This would coincide with a movement to encourage researchers to propose what they'd like to do and get reviewers and journal editors to weigh in before -- not after -- scientists do research, and to publish it irrespective of what they find."

Williams said studies supporting the power of clinical treatments should improve over time with more exacting approaches to statistical data.

"This is a system-level issue that will get better as our field begins to grapple with replication," he said. "We think you'll see improvement in study design going forward. There wasn't a fieldwide appreciation for these problems until a decade ago. It takes time for the field to improve. We think our results will complement ongoing efforts by Division 12 to increase the quality of EST research and evaluation."

Credit: 
University of Kansas

From greenhouse gas to fuel

image: From left to right: Former UD postdoctoral associate Qi Lu, current assistant professor Bingjun Xu, and current postdoctoral associate Xiaoxia Chang recently revealed a new approach that utilizes a series of catalytic reactions to electrochemically reduce carbon dioxide to methane.

Image: 
Joy Smoker

A growing number of scientists are looking for fast, cost-effective ways to convert carbon dioxide gas into valuable chemicals and fuels.

Now, an international team of researchers has revealed a new approach that utilizes a series of catalytic reactions to electrochemically reduce carbon dioxide to methane, the main ingredient in natural gas, eliminating an intermediate step usually needed in the reduction process.

"We want to supply renewable electricity and take carbon dioxide from the atmosphere and convert it to something else in one step," said Bingjun Xu , a University of Delaware assistant professor of chemical and biomolecular engineering. "This is a key contribution to this vision."

The team's results were published in the journal Nature Communications on July 26, 2019. Two of the study authors are based at UD: Xu and postdoctoral associate Xiaoxia Chang. Another study author, Qi Lu of Tsinghua University in China, was formerly a postdoctoral associate in the Department of Chemical and Biomolecular Engineering at UD.

The paper's authors also include Haochen Zhen from Tsinghua University, Jingguang Chen from Columbia University, William Goddard III from the California Institute of Technology and Mu-Jeng Cheng from National Cheng Kung University in Taiwan.

A one-pot system

To convert carbon dioxide into valuable fuels, you have to start with a surface made of copper, the metal famous for its use in pennies and electrical wiring. Copper can be used to reduce carbon dioxide into carbon monoxide, which can then be further transformed into substances such as methane. This process is relatively simple, but it requires two reactors and costly separation and purification steps.

The research team used computations and experiments to design a one-pot catalysis system. Add carbon dioxide, and a series of chemical reactions will happen without the need to stop and add more chemicals.

To do this, the team added special nanostructured silver surfaces, which were developed by Lu when he was a postdoctoral associate at UD from 2012 to 2015, to the copper surfaces. The silver portion attracts carbon monoxide molecules, which then migrate to the copper portion, where they are further reduced to methane. The system yields a higher concentration of methane than copper-only systems.

"In this work the primary novelty is to combine these two in a configuration so that several steps of reaction could occur in one system," said Xu. "We systematically modified the composition, the silver-to-copper ratio in the structure. Those are key to the selectivity and ability to combine the reactions."

Previous attempts to combine copper with precious metal in this way have failed, but the group developed a special type of electrode structure that enabled the system. The research was the result of a collaborative effort with research groups contributing spectroscopy, computation, and studies of the reactivity of materials.

Credit: 
University of Delaware

Toxic chemicals hindering the recovery of Britain's rivers

Toxic chemicals from past decades could be hindering the recovery of Britain's urban rivers, concludes a recent study by scientists from Cardiff University, the University of Exeter, and the Centre for Ecology and Hydrology.

During the 1970s, over 70% of the rivers in the South Wales valleys were classified as grossly polluted, by a combination of poor sewage treatment, colliery waste and industrial discharge. Since then, industry has declined, deep mining has ceased and sewage treatment has improved to the point that clean water species such as salmon and otters have returned to rivers such as the Taff.

However, Welsh rivers in urban locations still have damaged food chains and fewer species of invertebrates in comparison to more rural rivers. According to the researchers, these effects might be explained by the higher concentrations of former industrial pollutants such as PCBs (Polychlorinated Biphenyls) and flame-retardant chemicals (PBDEs) that persist in these rivers despite being phased out.

Dr Fred Windsor, a doctoral student at Cardiff University, explained: "Despite major success in controlling sewage pollution in South Wales' rivers over the last three decades, something appears to be holding back biological recovery. Our investigations show that persistent contaminants might be responsible as they still occur widely in invertebrates, particularly in urban river environments."

Professor Charles Tyler, from the University of Exeter's School of Biosciences, added: "These apparent effects of what we call 'legacy' pollutants - PCBs, flame retardants, organochlorine pesticides and other complex organic chemicals that have now been largely discontinued from production and use - are yet another reminder that we continue to live with problems caused by toxic chemicals from past decades. These chemicals still occur widely in rivers, lakes and seas in Britain and beyond, and still affect a wide range of animals."

Professor Steve Ormerod of Cardiff University's School of Biosciences and Water Research Institute concluded: "Urban river ecosystems in Britain have been on an improving trajectory since at least 1990, but there is still a way to go before we can say that they've wholly recovered from well over a century of industrial and urban degradation.

"The ecological pressures on our rivers are multiple, ranging from combined sewer overflows to engineering modifications, and this research adds a new dimension to understanding why they're not yet at their best.

"The slow degradation of some pollutants means that we may have to wait a long time before these chemicals disappear. Perhaps one of the lessons is that we should avoid ecosystem damage in the first place rather than try to solve problems after they occur."

Credit: 
Cardiff University

A novel graphene-matrix-assisted stabilization method will help unique 2D materials to become a part of quantum computers

image: 2D copper oxide material inside the two-layer graphene matrix

Image: 
Skoltech

Scientists from Russia and Japan found a way of stabilizing two-dimensional copper oxide (CuO) materials by using graphene. Along with being the main candidates for spintronics applications, these materials may be used in forthcoming quantum computers. The results of the study were published in The Journal of Physical Chemistry C.

The family of 2D materials has recently been joined by a new class: monolayers of transition metal oxides and carbides, which have become the subject of extensive theoretical and experimental research. These new materials are of great interest to scientists due to their unusual atomic structures and their chemical and physical properties. In particular, 2D copper oxide forms a unique rectangular cell that does not exist in crystalline (3D) form, whereas most 2D materials, whether well known or recently discovered, have a lattice similar to that of their crystalline (3D) counterparts. The main hindrance to the practical use of monolayers is their low stability.

A group of scientists from MISiS, the Institute of Biochemical Physics of RAS (IBCP), Skoltech, and the National Institute for Materials Science in Japan (NIMS) discovered 2D copper oxide materials with an unusual crystal structure inside the two-layer graphene matrix using experimental methods.

"Finding that a rectangular-lattice copper-oxide monolayer can be stable under given conditions is as important as showing how the binding of copper oxide and a graphene nanopore and formation of a common boundary can lead to creation of a small stable 2D copper oxide cluster with a rectangular lattice. In contrast to the monolayer, the small copper oxide cluster's stability is driven to a large extent by the edge effects (boundaries) that lead to its distortion and, subsequently, destruction of the flat 2D structure. Moreover, we demonstrated that binding bilayered graphene with pure copper, which never exists in the form of a flat cluster, makes the 2D metal layer more stable," says Skoltech Senior Research Scientist Alexander Kvashnin.

The preferability of the copper oxide rectangular lattice forming in a bigraphene nanopore was confirmed by calculations performed with the USPEX evolutionary algorithm developed by Artem Oganov, a professor at Skoltech and MIPT.

The studies of the physical properties of the stable 2D materials indicate that they are good candidates for spintronics applications.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Distant 'heavy metal' gas planet is shaped like a football

image: This artist's illustration shows WASP-121b, a distant world that is losing magnesium and iron gas from its atmosphere. The observations represent the first time that heavy metals have been detected escaping from a "hot Jupiter," which is a large, gaseous exoplanet that orbits very close to its host star. WASP-121b's orbit is so close that the star's gravity is nearly ripping the planet apart, giving the planet an oblique football shape.

Image: 
NASA, ESA, and J. Olmsted (STScI)

The scorching hot exoplanet WASP-121b may not be shredding any heavy metal guitar riffs, but it is sending heavy metals such as iron and magnesium into space. The distant planet's atmosphere is so hot that metal is vaporizing and escaping the planet's gravitational pull. The intense gravity of the planet's host star has also deformed the sizzling planet into a football shape.

The new observations, made by an international team of astronomers using NASA's Hubble Space Telescope, describe the first known instance of heavy metal gas streaming away from a "hot Jupiter," which is a nickname for large, gaseous exoplanets that orbit very close to their host stars. A research paper describing the results, co-authored by University of Maryland Astronomy Professor Drake Deming, was published in the August 1, 2019 issue of the Astronomical Journal.

"This planet is a prototype for ultra-hot Jupiters. These planets are so heavily irradiated by their host stars, they're almost like stars themselves," Deming said. "The planet is being evaporated by its host star to the point that we can see metal atoms escaping the upper atmosphere where they can interact with the planet's magnetic field. This presents an opportunity to observe and understand some very interesting physics."

Normally, hot Jupiter planets are still cool enough inside to condense heavier elements such as magnesium and iron into clouds that remain in the planet's atmosphere. But that's not the case with WASP-121b, which is orbiting so close to its host star that the planet's upper atmosphere reaches a blazing 4,600 degrees Fahrenheit. The planet is so close, in fact, that it is being ripped apart by the star's gravity, giving the planet an oblique football shape. The WASP-121 star system resides about 900 light-years from Earth. 

"Heavy metals have been seen in other hot Jupiters before, but only in the lower atmosphere," explained lead researcher David Sing of Johns Hopkins University. "With WASP-121b, we see magnesium and iron gas so far away from the planet that they're not gravitationally bound. The heavy metals are escaping partly because the planet is so big and puffy that its gravity is relatively weak. This is a planet being actively stripped of its atmosphere."

The researchers used Hubble's Space Telescope Imaging Spectrograph to search for ultraviolet light signatures of magnesium and iron. These signatures can be observed in starlight filtering through WASP-121b's atmosphere, as the planet passes in front of its host star.

The observations of WASP-121b add to the developing story of how planets lose their primordial atmospheres. When planets form, they gather an atmosphere made of gas from the disk that gave rise to both the planet and its host star. These young atmospheres consist mostly of hydrogen and helium, the most plentiful elements in the universe. As the planet moves closer to its star, much of this early atmosphere burns off and escapes to space.

"The hot Jupiters are mostly made of hydrogen, and Hubble is very sensitive to hydrogen, so we know these planets can lose the gas relatively easily," Sing said. "But in the case of WASP-121b, the hydrogen and helium gas is outflowing, almost like a river, and is dragging these metals with them. It's a very efficient mechanism for mass loss."

According to the researchers, WASP-121b will be a perfect target for NASA's James Webb Space Telescope, scheduled for launch in 2021. The Webb telescope will enable researchers to search for water and carbon dioxide, which can be detected at longer, redder wavelengths of infrared light. The combination of Hubble and Webb observations should give astronomers a more complete inventory of the chemical elements that make up the planet's atmosphere.

"Hot Jupiters this close to their host star are very rare. Ones that are this hot are even rarer still," Deming added. "Although they're rare, they really stand out once you've found them. We look forward to learning even more about this strange planet."

Credit: 
University of Maryland

From Japanese basket weaving art to nanotechnology with ion beams

image: The traditional Japanese basket weaving pattern (kago-mé: basket with eyes) served as an inspiration for an array of fluxon traps produced with a helium-ion microscope in a high-temperature superconductor. The anchored fluxons are represented by blue figures (based on the symbol Φ0 for the flux quantum), the purple fluxons are trapped by their neighbors like in a cage.

Image: 
© Bernd Aichner, University of Vienna

The properties of high-temperature superconductors can be tailored by the introduction of artificial defects. An international research team led by physicist Wolfgang Lang at the University of Vienna has succeeded in producing the world's densest complex nano arrays for anchoring flux quanta, the fluxons. This was achieved by irradiating the superconductor with a helium-ion microscope at the University of Tübingen, a technology that has only recently become available. The researchers were inspired by a traditional Japanese basket weaving art. The results were recently published in "ACS Applied Nano Materials", a journal of the American Chemical Society.

Superconductors can carry electricity without loss if they are cooled below a certain critical temperature. However, pure superconductors are not suitable for most technical applications; they become useful only after the controlled introduction of defects. Mostly, these defects are randomly distributed, but tailored periodic arrangements are becoming more and more important.

Traps and cages for magnetic quantum objects in superconductors

A magnetic field can penetrate a superconductor only in quantized portions, the so-called fluxons. If superconductivity is destroyed in very small regions, the fluxons are anchored at exactly these places. With periodic arrays of such defects, two-dimensional "fluxon crystals" can be generated, which are a model system for many interesting investigations. The defects serve as traps for the fluxons, and by varying easily accessible parameters, numerous effects can be investigated. "However, it is necessary to realize very dense defect arrangements so that the fluxons can interact with each other, ideally at distances below 100 nanometers, which is a thousand times smaller than the diameter of a hair," explains Bernd Aichner from the University of Vienna.

Particularly interesting for the researchers are complex periodic arrangements, such as the quasi-kagomé defect pattern investigated in the current study, which was inspired by a traditional Japanese basket weaving art. The bamboo stripes of the kagomé pattern are replaced by chains of defects with 70-nanometer spacings. The peculiarity of this artificial nanostructure is that it does not merely anchor one fluxon per defect: approximately circular chains of anchored fluxons form, which in turn hold a still-free fluxon trapped in their midst. Such fluxon cages are based on the mutual repulsion of fluxons and can be opened or locked by changing the external magnetic field. They are therefore regarded as a promising concept for the realization of low-loss and fast superconducting circuits with fluxons.
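For readers who want to picture the geometry, a plain kagomé lattice is a triangular lattice with a three-site basis; the sketch below generates such an array of trap coordinates scaled so that neighboring sites sit 70 nanometers apart, as quoted above. It is only an idealized stand-in for the quasi-kagomé chain pattern actually written with the helium-ion microscope:

```python
import numpy as np

# Plain kagome geometry as a sketch of the defect layout described above
# (the paper's actual pattern is a "quasi-kagome" of defect chains; the 70 nm
# spacing between neighboring trap sites is taken from the text).
spacing_nm = 70.0
a = 2.0 * spacing_nm                                   # triangular lattice constant
a1 = np.array([a, 0.0])
a2 = np.array([a / 2.0, a * np.sqrt(3.0) / 2.0])
basis = [np.zeros(2), a1 / 2.0, a2 / 2.0]              # three sites per unit cell

sites = np.array([i * a1 + j * a2 + b
                  for i in range(10) for j in range(10) for b in basis])
print(sites.shape)        # (300, 2) defect coordinates in nanometers
print(sites[:3])
```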

Nanostructuring of high-temperature superconductors with the helium-ion microscope

This research has been made possible by a novel device at the University of Tübingen - the helium-ion microscope. Although its operating principle is similar to that of the scanning electron microscope, the helium-ion microscope offers previously unmatched resolution and depth of field because of the much smaller wavelength of the helium ions. "With a helium-ion microscope, the superconducting properties can be tailored without removing or destroying the material, which enables us to produce fluxon arrays in high-temperature superconductors with a density that is unrivalled worldwide," emphasizes Dieter Koelle from the Eberhard Karls University of Tübingen. The scientists are now planning to further develop the method for even smaller structures and to test various theoretically proposed concepts for fluxon circuits.

Credit: 
University of Vienna