Tech

Exposure to air pollutants from power plants varies by race, income and geography

Many people take electricity for granted -- the power to turn on a light with the flip of a switch, or to keep food from spoiling with refrigeration. But generating electricity from fossil fuels exacts a toll on health and the environment. Scientists estimate that power plant emissions cause tens of thousands of premature deaths in the U.S. each year. Now, researchers report in ACS' Environmental Science & Technology that pollutant exposure varies with certain demographic factors.

According to the U.S. Environmental Protection Agency, fine particulate matter -- also known as PM2.5 -- is the largest environmental health risk in the U.S. Scientists have linked these air pollutants to increased death rates from cardiovascular disease, chronic obstructive pulmonary disease and lung cancer. Power plants, particularly coal-fired ones, emit PM2.5 directly, as well as sulfur dioxide and nitrogen oxides that can react with ammonia in the atmosphere to form additional PM2.5. Julian Marshall and colleagues wondered if certain demographic groups showed disparities in exposure to the harmful air pollutants.

To find out, the researchers used an air quality model to predict PM2.5 concentrations in different geographical areas across the U.S. in 2014. They correlated the air quality data with location-specific information on race, income and mortality. The team estimated that in 2014, about 16,700 people in the U.S. died prematurely because of power plant emissions. At the national level, African Americans had the highest mortality rates from power-plant PM2.5 (6.6 deaths per 100,000 people), followed by white non-Latinos (5.9 deaths per 100,000 people), with lower rates for Asian Americans, Native Americans and Latinos. However, exposures for each racial/ethnic group varied by state. Estimated mortality rates from PM2.5 were higher for lower-income than higher-income households, but the differences by race were larger than those by income. Because wind can carry PM2.5 long distances from the source, health risks in many states could be attributed to out-of-state emissions. The researchers say they hope these findings will help communities, scientists and policymakers better understand and address air pollution exposures and their disparities.

Credit: 
American Chemical Society

Prior exposure to pollutants could underlie increased diabetes risk of Indian immigrants

In 2004, the United Nations Stockholm Convention banned the production and use of many persistent organic pollutants (POPs), such as dichlorodiphenyltrichloroethane (DDT) and polychlorinated biphenyls (PCBs). However, POP production and use continue in some nations that did not ratify the treaty, including India and other South Asian countries. Now, researchers reporting in ACS' Environmental Science & Technology have linked high levels of DDT in Indian immigrants in the U.S. with risk factors for diabetes.

Asian Indians have a higher risk of diabetes than other populations, and this risk extends to Indian immigrants in the U.S., Europe and elsewhere. Previous studies have found DDT in samples taken from the environment, food and people of the Indian subcontinent. Michele La Merrill and colleagues wondered whether prior exposure to DDT and other POPs could influence Asian Indians' diabetes risk, even after they had immigrated to the U.S. Based on results from animal studies, the researchers hypothesized that POPs could contribute to diabetes by causing excess fat deposition in the liver, which in turn can lead to insulin resistance.

To test their hypothesis, the researchers examined the levels of 30 environmental pollutants in blood plasma samples from 147 Asian Indian participants, 45 to 84 years old, living in the San Francisco Bay Area. The researchers detected levels of numerous POPs that were much higher than levels previously found in other U.S. populations. In particular, people with higher levels of DDT in their blood were more likely to be obese, have excess fat in their livers and show increased insulin resistance compared to people with lower levels. Although more research is needed to establish a causal relationship, these findings could help explain the increased diabetes risk for Indian immigrants and have public health implications for the approximately 1.8 billion South Asians throughout the world, the researchers say.

Credit: 
American Chemical Society

Ensembling improves machine learning model performance

OAK BROOK, Ill. - Ensembles created using models submitted to the RSNA Pediatric Bone Age Machine Learning Challenge convincingly outperformed single-model prediction of bone age, according to a study published in the journal Radiology: Artificial Intelligence.

Ensemble learning is a method in machine learning in which different models designed to accomplish the same task are combined into a single model.

Model heterogeneity is an important aspect of ensemble learning. Ensembles tend to perform best when each of the individual models performs well in its own right and the correlation among individual model predictions is relatively low.

Because ensembles benefit from low correlation between model predictions, the greater the underlying differences in approach, the greater the improvement, as long as the individual models achieve similar performance. In this respect, a competition, in which participants are encouraged to submit their best models, provides an ideal setting for ensembling high-performing models that use different techniques.
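
To illustrate why low inter-model correlation helps, here is a minimal sketch in Python with synthetic bone ages and model errors (illustrative only, not the challenge data or the authors' code): averaging a few individually decent models whose errors are weakly correlated yields a lower mean absolute deviation than any single model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_age = rng.uniform(12, 180, size=200)  # hypothetical bone ages, in months

# Three hypothetical models: each is individually decent, but their
# errors are drawn independently, i.e., weakly correlated "approaches".
errors = rng.normal(0.0, 5.0, size=(3, 200))
preds = true_age + errors

def mad(pred, truth):
    """Mean absolute deviation, in months."""
    return float(np.mean(np.abs(pred - truth)))

single_mads = [mad(p, true_age) for p in preds]
ensemble_mad = mad(preds.mean(axis=0), true_age)

# Averaging independent errors shrinks them by roughly 1/sqrt(3),
# so the ensemble MAD beats even the best single model.
print(min(single_mads), ensemble_mad)
```

If the three error vectors were perfectly correlated, averaging would change nothing; the benefit comes entirely from the errors partially cancelling.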

"Competitions provide a unique opportunity to study the effects of combining predictions from heterogeneous models," said study author Ian Pan, a medical student at The Warren Alpert Medical School of Brown University in Providence, R.I.

To investigate improvements in performance for automatic bone age estimation that can be gained through model ensembling, Pan and colleagues used 48 submissions from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge.

Participants were provided with 12,611 pediatric hand X-rays with bone ages determined by a pediatric radiologist to develop models for bone age determination. The final results were determined using a test set of 200 X-rays labeled with the weighted average of 6 ratings. The researchers evaluated the mean pairwise model correlation and performance of all possible model combinations for ensembles of up to 10 models using the mean absolute deviation (MAD). To estimate the true generalization MAD, they conducted a bootstrap analysis using the 200 test X-rays.
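
The bootstrap step described above can be sketched as follows. This is a generic resampling estimate on a small test set (a hypothetical illustration, not the authors' code): resample the cases with replacement and recompute the score each time, to gauge how the MAD would vary on unseen data of the same size.

```python
import numpy as np

def bootstrap_mad(pred, truth, n_boot=1000, seed=0):
    """Bootstrap estimate of generalization MAD: resample the test
    cases with replacement, recompute the mean absolute deviation
    each time, and report the mean and spread of those scores."""
    rng = np.random.default_rng(seed)
    n = len(truth)
    scores = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        scores[b] = np.mean(np.abs(pred[idx] - truth[idx]))
    return float(scores.mean()), float(scores.std())

# Toy usage: a model off by ~4.5 months on a 200-case test set.
rng = np.random.default_rng(1)
truth = rng.uniform(12, 180, size=200)
pred = truth + rng.normal(0, 5.7, size=200)
mean_mad, sd_mad = bootstrap_mad(pred, truth)
```

The standard deviation of the bootstrap scores doubles as a rough uncertainty on the reported MAD, which matters when comparing ensembles whose scores differ by fractions of a month.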

The estimated generalization MAD of a single model was 4.55 months. The best performing ensemble consisted of four models with a MAD of 3.79 months. The mean pairwise correlation of models within this ensemble was 0.47. In comparison, the lowest achievable MAD by combining the highest-ranking models based on individual scores was 3.93 months using eight models with a mean pairwise model correlation of 0.67.

"Our results call attention to a concept that has substantial practical implications, as computer vision and other machine learning algorithms begin to move from research to the clinical environment," Pan said. "Namely, that the best results are likely to be achieved by combining multiple accurate and diverse models rather than from single models alone."

Thus, practitioners aiming to incorporate machine learning algorithms into their workflow would benefit from having predictions obtained from different models, similar to how the accuracy of a radiological interpretation can be bolstered with multiple readers.

Pan added that the findings also highlight the importance of open competitions like the 2017 RSNA Pediatric Bone Age Machine Learning Challenge, as they provide a standardized use case, a common training set, and an objective assessment method applied equally to all models.

"Machine learning competitions within radiology should be encouraged to spur development of heterogeneous models whose predictions can be combined to achieve optimal performance," he said.

For the 2019 RSNA Intracranial Hemorrhage Detection and Classification Challenge, researchers worked to develop algorithms that can identify and classify subtypes of hemorrhages on head CT scans. The data set, which comprises more than 25,000 head CT scans contributed by several research institutions, is the first multiplanar dataset used in an RSNA artificial intelligence challenge.

Credit: 
Radiological Society of North America

Obesity embargo alert for December 2019 issue

Editors' Choice 1 - Anti-obesity Drug Prescriptions: Updated Analysis of Patterns, David R. Saxon, david.saxon@cuanschutz.edu, Sean J. Iwamoto, Christie J. Mettenbrink, Emily McCormick, David Arterburn, Matthew F. Daley, Caryn E. Oshiro, Corinna Koebnick, Michael Horberg, Deborah R. Young, and Daniel H. Bessesen
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22581) - online now, not embargoed

Also see accompanying Commentary by William H. Dietz (http://onlinelibrary.wiley.com/doi/10.1002/oby.22641) - online now, not embargoed

Editors' Choice 2 - IBT Plus Liraglutide Improves Eating Disorder Psychopathology, Ariana M. Chao, arichao@nursing.upenn.edu, Thomas A. Wadden, Olivia A. Walsh, Kathryn A. Gruber, Naji Alamuddin, Robert I. Berkowitz, and Jena S. Tronieri
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22653)

Editors' Choice 3 - Novel Adipokine May Regulate Tissue Remodeling in Weight Loss, Robert M. Jackson, Beth A. Griesel, Kevin R. Short, David Sparling, Willard M. Freeman, and Ann Louise Olson, ann-olson@ouhsc.edu
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22652)

Editors' Choice 4 - School Programs Promoting Intake of Water: Cheap and Effective?, Erica L. Kenney, ekenney@hsph.harvard.edu, Angie L. Cradock, Michael W. Long, Jessica L. Barrett, Catherine M. Giles, Zachary J. Ward, and Steven L. Gortmaker
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22615)

ADDITIONAL EMBARGOED RESEARCH

Population-Based Study of Traffic-Related Air Pollution and Obesity in Mexican Americans, Xueying Zhang, Hua Zhao, Wong-Ho Chow, Moira Bixby, Casey Durand, Christine Markham, Kai Zhang, kai.zhang@uth.tmc.edu
(http://onlinelibrary.wiley.com/doi/10.1002/oby.22697) - Embargo lifts Dec. 4, 2019, at 3:00 a.m. (EST).

Scroll down to find abstracts for each of the above papers. To request the full text of any of these studies and agree to the embargo policy, or to arrange an interview with a study's author or an obesity expert, please contact communications@obesity.org.

Editors' Choice Abstracts

Editors' Choice 1 - Anti-obesity Medication Use in 2.2 Million Adults Across Eight Large Health Care Organizations: 2009-2015

Objective: The aim of this study was to examine the prescribing patterns and use of anti-obesity medications in a large cohort of patients using data from electronic health records.

Methods: Pharmacy- and patient-level electronic health record data were obtained on 2,248,407 adults eligible for weight-loss medications from eight geographically dispersed health care organizations.

Results: A total of 29,964 patients (1.3% of total cohort) filled at least one weight-loss medication prescription. This cohort was 82.3% female, with median age 44.9 years and median BMI 37.2 kg/m2. Phentermine accounted for 76.6% of all prescriptions, with 51.7% of prescriptions being filled for ≥ 120 days and 33.8% filled for ≥ 360 days. There was an increase of 32.9% in medication days for all medications in 2015 compared with 2009. Higher prescription rates were observed in women, black patients, and patients in higher BMI classes. Of 3,919 providers who wrote at least one filled prescription, 23.8% (n = 863) were "frequent prescribers" who wrote 89.6% of all filled prescriptions.

Conclusions: Weight-loss medications are rarely prescribed to eligible patients. Phentermine accounted for > 75% of all medication days, with a majority of patients filling it for more than 4 months. Less than one-quarter of prescribing providers accounted for approximately 90% of all prescriptions.

Editors' Choice 2 - Effects of Liraglutide and Behavioral Weight Loss on Food Cravings, Eating Behaviors, and Eating Disorder Psychopathology

Objective: This exploratory analysis examined the effects of intensive behavioral therapy (IBT) for obesity ("IBT-alone"), IBT plus liraglutide 3.0 mg/d ("IBT-liraglutide"), and IBT plus liraglutide 3.0 mg/d plus 12 weeks of a portion-controlled diet that provided 1,000 to 1,200 kcal/d ("Multicomponent") on changes in food cravings, eating behaviors, and eating disorder psychopathology at 24 and 52 weeks post randomization.

Methods: Adults with obesity (mean age = 47.6 ± 11.8 years and BMI = 38.4 ± 4.9 kg/m2; 79.3% female; 54.0% non-Hispanic white; 44.7% black) were randomized to IBT-alone (n = 50), IBT-liraglutide (n = 50), or Multicomponent (n = 50).

Results: At weeks 24 and 52, liraglutide-treated groups reported significantly larger declines in weight concern relative to the IBT-alone group. At week 24, compared with IBT-alone, liraglutide-treated groups reported significantly greater reductions in dietary disinhibition, global eating disorder psychopathology, and shape concern. The Multicomponent group had significantly greater reductions in binge eating at week 24 relative to the IBT-alone group. However, differences among groups were no longer significant at week 52. Groups did not differ in total food cravings at week 24 or 52.

Conclusions: The combination of liraglutide and IBT was associated with greater short-term improvements in dietary disinhibition, global eating disorder psychopathology, and shape concern than IBT alone.

Editors' Choice 3 - Weight Loss Results in Increased Expression of Anti-Inflammatory Protein CRISPLD2 in Mouse Adipose Tissue

Objective: Obesity is a major risk factor for cardiovascular disease, metabolic syndrome, and type 2 diabetes mellitus, whereas weight loss is associated with improved health outcomes. It is therefore important to learn how adipose contraction during weight loss contributes to improved health. It was hypothesized that adipose tissue undergoing weight loss would have a unique transcriptomic profile, expressing specific genes that might improve health.

Methods: This study conducted an RNA-sequencing analysis of the epididymal adipose tissue of mice fed either a high-fat diet (HFD) or a regular rodent chow diet (RD) ad libitum for 10 weeks versus a cohort of mice fed HFD for the first 5 weeks before being swapped to an RD for the remainder of the study (swapped diet [SWAP]).

Results: The swapped diet resulted in weight loss, with a parallel improvement in insulin sensitivity. RNA sequencing revealed several transcriptomic signatures distinct to adipose tissue in SWAP mice, distinguished from both RD and HFD adipose tissue. The analysis found a unique upregulated mRNA that encodes a secreted lipopolysaccharide-binding glycoprotein (CRISPLD2) in adipose tissue. Whereas cellular CRISPLD2 protein levels were unchanged, plasma CRISPLD2 levels increased in SWAP mice following weight loss and could correlate with insulin sensitivity.

Conclusions: Taken together, these data demonstrate that CRISPLD2 is a circulating adipokine that may regulate adipocyte remodeling during weight loss.

Editors' Choice 4 - Cost-Effectiveness of Water Promotion Strategies in Schools for Preventing Childhood Obesity and Increasing Water Intake

Objective: This study aimed to estimate the cost-effectiveness and impact on childhood obesity of installation of chilled water dispensers ("water jets") on school lunch lines and to compare water jets' cost, reach, and impact on water consumption with three additional strategies.

Methods: The Childhood Obesity Intervention Cost Effectiveness Study (CHOICES) microsimulation model estimated the cost-effectiveness of water jets on US childhood obesity cases prevented in 2025. Also estimated were the cost, number of children reached, and impact on water consumption of the installation of water jets and three other strategies.

Results: Installing water jets on school lunch lines was projected to reach 29.6 million children (95% uncertainty interval [UI]: 29.4 million-29.8 million), cost $4.25 (95% UI: $2.74-$5.69) per child, prevent 179,550 cases of childhood obesity in 2025 (95% UI: 101,970-257,870), and save $0.31 in health care costs per dollar invested (95% UI: $0.15-$0.55). In the secondary analysis, installing cup dispensers next to existing water fountains was the least costly but also had the lowest population reach.

Conclusions: Installing water jet dispensers on school lunch lines could also save almost half of the dollars needed for implementation via a reduction in obesity-related health care costs. School-based interventions to promote drinking water may be relatively inexpensive strategies for improving child health.

ADDITIONAL EMBARGOED RESEARCH

Population-Based Study of Traffic-Related Air Pollution and Obesity in Mexican Americans

Objective: The purpose of this study was to assess the cross-sectional association between residential exposure to traffic-related air pollution and obesity in Mexican American adults.

Methods: A total of 7,826 self-reported Mexican Americans aged 20 to 60 years were selected from the baseline survey of the MD Anderson Mano a Mano Cohort. Concentrations of traffic-related particulate matter with a diameter of ≤ 2.5 μm (PM2.5) were modeled at geocoded residential addresses using dispersion models. Residential proximity to the nearest major road was calculated using a Geographic Information System. Linear and logistic regression models were used to estimate the adjusted associations between exposure and obesity, defined as a BMI over 30.

Results: More than half (53.6%) of the study participants had a BMI over 30, with a higher prevalence in women (55.0%) than in men (48.8%). Overall, higher traffic-related air pollution exposures were associated with lower BMI in men but higher BMI in women. Among those who lived within a 0- to 1,500-m road buffer, a one-interquartile-range (685.1 m) increase in distance to a major road was significantly associated with a 0.58 kg/m2 lower BMI (95% CI: −0.92 to −0.24) in women.

Conclusions: Exposure to intensive traffic is associated with increased risk of obesity in Mexican American women.

Credit: 
The Obesity Society

DDT linked to higher risk of diabetes among Asian Indian immigrants to US

Previous exposure to the pollutant DDT may contribute to the risk of diabetes among Asian Indian immigrants to the United States, according to a study from the University of California, Davis.

The study, published today in the American Chemical Society's journal Environmental Science & Technology, linked high levels of DDT, or dichlorodiphenyltrichloroethane, in Indian immigrants with risk factors for metabolic disease.

"Our findings evoke a new interpretation of Rachel Carson's famous book Silent Spring, in that the high DDT exposures of South Asian immigrants in the U.S. currently fall on deaf ears in the U.S.," said lead author Michele La Merrill, an associate professor in the UC Davis Department of Environmental Toxicology. "Although DDT remains in use in other nations and migration globalizes these exposures, people in the U.S. often mistakenly regard DDT exposure as no longer relevant to our society due to its ban in this country nearly 50 years ago."

La Merrill said that high exposure levels in these immigrants may be causing their increased risk of obesity and other metabolic diseases, but medical doctors are often not aware of that possible link.

DIABETES AND DDT

Asian Indians have a higher risk of diabetes than other populations, and this risk extends to Indian immigrants in the U.S., Europe and elsewhere.

In 2004, the United Nations Stockholm Convention banned the production and use of many persistent organic pollutants, or POPs, such as DDT and polychlorinated biphenyls, or PCBs. However, POP production and use continue in some nations that did not ratify the treaty, including India and other South Asian countries. Previous studies have found DDT in samples taken from the environment, food and people of the Indian subcontinent.

La Merrill and colleagues wondered whether prior exposure to DDT and other POPs could influence Asian Indians' diabetes risk, even after they had immigrated to the U.S. Based on results from animal studies, the researchers hypothesized that POPs could contribute to diabetes by causing excess fat deposition in the liver, which in turn can lead to insulin resistance.

POP TEST

To test their hypothesis, the researchers examined the levels of 30 environmental pollutants in blood plasma samples from 147 Asian Indian participants, 45 to 84 years old, living in the San Francisco Bay Area. The researchers detected levels of numerous POPs that were much higher than levels previously found in other U.S. populations.

In particular, people with higher levels of DDT in their blood were more likely to be obese, store excess fat in their livers and show increased insulin resistance compared to people with lower levels.

Although more research is needed to establish a causal relationship, these findings could help explain the increased diabetes risk for Indian immigrants and have public health implications for the approximately 1.8 billion South Asians throughout the world, the researchers said.

Credit: 
University of California - Davis

Breast cancer recurrence score has different implications for men

image: Senior author is Xiao-Ou Shu, MD, PhD, MPH, Ingram Professor of Cancer Research and associate director for Global Health and co-leader of the Cancer Epidemiology Research Program at VICC

Image: 
Vanderbilt University Medical Center

The TAILORx study published last year offered good news for women with early-stage ER-positive breast cancer who scored at intermediate risk for recurrence according to a genetic assay. The study indicated that chemotherapy after surgery provided little advantage in overall survival for these women, so they could forgo the treatment.

This conclusion may not directly apply to male patients with the same type of breast cancer. A new study by Vanderbilt-Ingram Cancer Center (VICC) researchers published in Clinical Cancer Research, a journal of the American Association for Cancer Research, indicates that a lower threshold is needed for male patients to predict mortality using the genetic assay, Oncotype DX®, a commercial diagnostic test. The study's lead author is Fei Wang, MD, PhD, a visiting research fellow at Vanderbilt University, and its senior author is Xiao-Ou Shu, MD, PhD, MPH, Ingram Professor of Cancer Research and associate director for Global Health and co-leader of the Cancer Epidemiology Research Program at VICC.

"The recurrence score is associated with overall mortality in male breast cancer patients at a much lower threshold than that for female patients," the article stated. "Studies are needed to establish specific guidelines for recurrence scores for male breast cancer patients."

The researchers analyzed data of 848 male and 110,898 female breast cancer patients from the National Cancer Database, comparing overall mortality associated with recurrence scores. Breast cancer is rare in men, accounting for approximately 1% of all breast cancers.

"The observed differences in distribution as well as the prognosis predictive utility of recurrence score between men and women suggest that male breast cancer may have distinct biology and different prognostic factors compared to female patients," the authors wrote. "Studies have suggested that pathogenic mutations and epigenetic alterations involved in male breast carcinogenesis do not exactly overlap with those of women."

Credit: 
Vanderbilt University Medical Center

New water-based optical device revolutionizes the field of optics research

image: Extracting light modulation using the interfacial Pockels effect.

Image: 
Prof Eiji Tokunaga, Tokyo University of Science

Light is versatile in nature: it shows different characteristics when traveling through different types of materials. This property has been exploited in various technologies, but the way in which light interacts with materials must be manipulated to get the desired effect. This is done using special devices called light modulators, which can modify the properties of light. One such phenomenon, called the Pockels effect, occurs when an electric field is applied to the medium through which light travels. Normally, light "bends" when it enters any medium, but under the Pockels effect, the refractive index of the medium (a measure of how much the light bends) changes in proportion to the applied electric field. This effect has various applications in optical engineering, for example in optical communication, displays, and electric sensors. But exactly how this effect arises in different materials is not yet well understood, making it difficult to fully exploit its potential.
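
For reference, the textbook form of the bulk Pockels relation (a standard sketch, not the paper's interfacial treatment) writes the field-induced index change, with r the Pockels coefficient and E the applied field, as:

```latex
\Delta\!\left(\frac{1}{n^{2}}\right) = rE
\qquad\Longrightarrow\qquad
\Delta n \approx -\tfrac{1}{2}\, n^{3}\, r\, E
```

That is, the index change is linear in E, which distinguishes the Pockels effect from the Kerr effect, where the change scales with the square of the field.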

In a breakthrough study published in OSA Continuum, a team of scientists led by Prof Eiji Tokunaga at the Tokyo University of Science and including Daisuke Hayama, Keisuke Seto, Kyohei Yamashita, Shunpei Yukita (all Tokyo University of Science), and Takayoshi Kobayashi (The University of Electro-Communications and National Chiao-Tung University) shed light on the mechanism of the Pockels effect in a new type of light modulator. Until recently, this effect had been observed only in special types of crystals, which are costly and hence difficult to use. Twelve years ago, Prof Tokunaga and his team observed this effect for the first time in the top layer (also called the interfacial layer) of water in contact with an electrode--something not seen in the bulk of water--offering a glimmer of hope for scientists trying to create simple optical devices. Although the Pockels coefficient (a measure of the strength of the effect) was an order of magnitude greater there, the effect was generated only in the thin interfacial layer, so detecting it called for a highly sensitive detector. What's more, its mechanism was not clearly understood, further complicating the process. Prof Tokunaga and his team wanted to find a solution, and after much trial and error, they finally succeeded. Discussing his motivation for the study, Prof Tokunaga says, "It is difficult to measure the electro-optic signal using water as the medium because it occurs in only a thin layer. Therefore, we wanted to find a way to extract a large signal from the medium, which would not require high-sensitivity measurements and would be easier to use."

To do this, the scientists created a setup with a transparent electrode on a glass surface in water and applied an electric field to it. The interfacial layer (also called the electric double layer, or EDL) is only a few nanometers thick and shows different electrochemical properties from the rest of the water. It is also the only part of the water where the Pockels effect can be observed under an electric field. The scientists used total reflection, creating a large angle of incidence at the interface between the water and the electrode. They observed that when light travels through the electrode and enters the EDL, changes in the refractive index of both layers can modify the reflected signal. Since the refractive index of the transparent electrode is larger than that of both water and glass (1.33 and 1.52, respectively), the amount of light reflected at both ends increases, thereby enhancing the Pockels signal. This was important because a large, enhanced signal means that even low-sensitivity devices can measure it. Moreover, because the experimental setup is simple, consisting only of a transparent electrode dipped in water containing electrolytes, the method is much easier to use. Not to mention, water is an inexpensive medium, resulting in a low-cost process overall. Elaborating on these findings, Prof Tokunaga says, "Through our technique, we observed light modulation with a maximum intensity change of 50% proportional to the applied AC voltage."

Encouraged by these observations, Prof Tokunaga and his team wanted to verify these results using mathematical calculations. They were surprised to find that the theoretical calculations matched the experimental results. Moreover, they observed that, theoretically, a 100% light intensity modulation could be achieved, which was exciting because it confirmed their findings. Prof Tokunaga says, "The results were surprising, but it was even more surprising when our theoretical analysis showed that they could be perfectly explained by existing optical knowledge." The scientists also say, "The results of this research not only have applicability to unique light modulation elements and interface sensors using water, but the discovered enhancement principle opens up the possibility of using any interface that exists universally."

This new method of modulating light serves as a better alternative to existing ones, especially owing to advantages like low cost and easier detection. Not only this, Prof Tokunaga and his team believe that by uncovering new mechanisms of light modulation, their study will open doors for more advanced research in this field. Prof Tokunaga concludes by saying, "Our unique light modulation technology is unprecedented and has many possible applications because it shows a general way to extract a large Pockels signal from a universally existing interface. In addition, we hope that our study will give birth to a new area of research in optics, thereby revolutionizing the field."

Credit: 
Tokyo University of Science

Carnegie Mellon system locates shooters using smartphone video

video: Researchers at Carnegie Mellon University have developed a system that can accurately locate a shooter based on video recordings from as few as three smartphones.

Image: 
Carnegie Mellon University

PITTSBURGH--Researchers at Carnegie Mellon University have developed a system that can accurately locate a shooter based on video recordings from as few as three smartphones.

When demonstrated using three video recordings from the 2017 mass shooting in Las Vegas that left 58 people dead and hundreds wounded, the system correctly estimated the shooter's actual location -- the north wing of the Mandalay Bay hotel. The estimate was based on three gunshots fired within the first minute of what would be a prolonged massacre.

Alexander Hauptmann, research professor in CMU's Language Technologies Institute, said the system, called Video Event Reconstruction and Analysis (VERA), won't necessarily replace the commercial microphone arrays for locating shooters that public safety officials already use, although it may be a useful supplement for public safety when commercial arrays aren't available.

One key motivation for assembling VERA was to create a tool that could be used by human rights workers and journalists who investigate war crimes, terrorist acts and human rights violations, Hauptmann said.

"Military and intelligence agencies are already developing these types of technologies," said fellow researcher Jay D. Aronson, a professor of history at CMU and director of the Center for Human Rights Science. "We think it's crucial for the human rights community to have the same types of tools. It provides a necessary check on state power."

The researchers presented VERA and released it as open-source code last month at the Association for Computing Machinery's International Conference on Multimedia in Nice, France.

Hauptmann said he has used his expertise in video analysis to help investigators analyze events such as the 2014 Maidan massacre in Ukraine, which left at least 50 antigovernment protesters dead. Inspired by that work -- and the insight of ballistics experts and architecture colleagues from the firm SITU Research -- Hauptmann, Aronson and Junwei Liang, a Ph.D. student in language and information technology, have pulled together several technologies for processing video, while automating their use as much as possible.

VERA uses machine learning techniques to synchronize the video feeds and calculate the position of each camera based on what that camera is seeing. But it's the audio from the video feeds that's pivotal in localizing the source of the gunshots, Hauptmann said. Specifically, the system looks at the time delay between the crack caused by a supersonic bullet's shock wave and the muzzle blast, which travels at the speed of sound. It also uses audio to identify the type of gun used, which determines bullet speed. VERA can then calculate the shooter's distance from the smartphone.
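As a rough illustration of how that time delay yields a distance, consider a shot fired roughly toward the listener: the shock-wave crack arrives at about d/v_bullet, while the muzzle blast arrives at d/c. The sketch below is a toy calculation under those simplifying assumptions (straight-line geometry, constant bullet speed, 343 m/s sound speed); it is not VERA's actual model.

```python
# Toy shooter-distance estimate from the crack-to-blast time delay.
# Assumes the shot travels roughly toward the microphone at a constant
# supersonic speed -- a simplification of what VERA actually models.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def shooter_distance(delay_s, bullet_speed):
    """Distance in meters from the crack-to-blast delay (seconds).

    Crack arrives at ~d / bullet_speed, blast at d / c, so
    delay = d * (1/c - 1/bullet_speed).
    """
    if bullet_speed <= SPEED_OF_SOUND:
        raise ValueError("no shock-wave crack from a subsonic bullet")
    return delay_s / (1.0 / SPEED_OF_SOUND - 1.0 / bullet_speed)

# A rifle round at ~900 m/s with a 0.6 s crack-to-blast gap:
d = shooter_distance(0.6, 900.0)  # roughly 330 m
```

This is also why identifying the gun type matters: the same delay maps to very different distances for different muzzle velocities.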

"When we began, we didn't think you could detect the crack with a smartphone because it's really short," Hauptmann said. "But it turns out today's cell phone microphones are pretty good."

By using video from three or more smartphones, the direction from which the shots were fired -- and the shooter's location -- can be calculated based on the differences in how long it takes the muzzle blast to reach each camera.
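The multi-camera step is a classic time-difference-of-arrival (TDOA) problem. Below is a minimal 2-D sketch that finds the source position by brute-force grid search; the geometry and function names are illustrative only, and the real system must additionally synchronize the feeds and estimate each camera's position first.

```python
# Minimal 2-D TDOA localization sketch: find the point whose predicted
# muzzle-blast arrival-time differences best match the measured ones.
import numpy as np

C = 343.0  # speed of sound, m/s

def locate(mics, tdoas, extent=500.0, step=1.0):
    """mics: (3, 2) microphone positions; tdoas: arrival times minus mic 0's."""
    xs = np.arange(-extent, extent, step)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)            # candidate points
    dists = np.linalg.norm(pts[:, None, :] - mics[None, :, :], axis=2)
    pred = (dists - dists[:, :1]) / C                            # predicted TDOAs
    err = np.sum((pred - tdoas) ** 2, axis=1)
    return pts[np.argmin(err)]

# Synthetic check: a source at (120, -80) m and three known mic positions.
mics = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])
true = np.array([120.0, -80.0])
toa = np.linalg.norm(mics - true, axis=1) / C                    # arrival times
est = locate(mics, toa - toa[0])
```

In practice a least-squares solver would replace the grid search, and the uncertainty in each phone's estimated position feeds into the final error.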

With the proliferation of mass protests occurring in places such as Hong Kong, Egypt and Iraq, identifying where a shot originated can be critical to determining whether protesters, police or other groups might be responsible when a shooting takes place, Aronson said.

But VERA is not limited to detecting gunshots. It is an event analysis system that can be used to locate a variety of other sounds relevant to human rights and war crimes investigations, he said. He and Hauptmann hope that other groups will add functionalities to the open-source software.

"Once it's open source, the journalism and human rights communities can build on it in ways we don't have the imagination for or time to do," Aronson added.

Credit: 
Carnegie Mellon University

Many patients with anorexia nervosa get better, but complete recovery elusive for most

Three in four patients with anorexia nervosa -- including many with challenging illness -- make a partial recovery. But just 21 percent make a full recovery, a milestone that is most likely to signal permanent remission.

These results, and more, are drawn from an online survey of 387 parents, of whom 83 percent had children with anorexia nervosa, 6 percent with atypical anorexia nervosa -- a variant occurring in patients who are not underweight -- and the remainder with other eating disorders. The findings are reported in a study led by UC San Francisco and publishing in the International Journal of Eating Disorders on Nov. 19, 2019.

"This study reminds us that we need to work harder to help individuals with anorexia nervosa who are not responding to standard treatment," said first author Erin C. Accurso, PhD, clinical director of the UCSF Eating Disorders Program and assistant professor in the Department of Psychiatry. "Full recovery means that patients can find joy in their daily life, free from the physical and psychological effects caused by restrictive dieting."

Partial recovery, she said, was defined as some improvement but continued symptoms in at least one area: physical health, eating disorder thoughts and behaviors, social functioning or mood.

Full Recovery Predictive of Permanent Recovery

Among the 21 percent (81 patients) who made a complete recovery, 94 percent had managed to maintain their recovery two years later. "Unfortunately, patients who only achieved partial recovery continued to struggle and were much more susceptible to relapse," Accurso noted.

Previous studies have found that around 50 percent of patients with anorexia nervosa made complete recoveries, but this study had a preponderance of patients with refractory illness. In the current study, approximately half had undergone residential therapy, partial hospitalization or intensive outpatient treatment, and two-thirds received three or more types of psychological treatments. More than 60 percent reportedly received family-based treatment, which is recognized as most effective for adolescent anorexia nervosa.

"Anorexia nervosa is a complex condition with the highest mortality rate of any psychiatric disorder," said Accurso. "We know that families are the most important resource in recovery, which is why family-based treatment is the gold standard for adolescent anorexia nervosa.

"However, treatment doesn't work for everyone. Parents are telling us that recovery needs to be approached more holistically, with treatments that extend beyond eating disorder symptoms to target emotional well-being, cognitive flexibility and establishment of a meaningful life."

The authors also noted that parents are challenging the field's definition of recovery.

"Parents are schooling us on how it should be defined," said Accurso, who is affiliated with the UCSF Weill Institute for Neurosciences. "We found that parents have a much broader view of recovery, which included psychological wellbeing and building a life worth living. Researchers are missing the mark in defining recovery by weight and/or eating disorder symptoms in the absence of these other factors."

Parents reinforced clinicians' observations that physical and behavioral recovery, which includes resuming regular eating habits, precede cognitive recovery, in which patients are no longer plagued by extreme fear of weight gain and body image distortion.

Among the patients -- whose average age was 18, with a five-year history of the disorder -- 90 percent were female, 94 percent were white, and 90 percent lived in the United States, Canada, the United Kingdom or Australia.

In a follow-up study, Accurso and colleagues will look at how weight restoration, including the goal weight set by a patient's clinician, impacts the recovery process.

Credit: 
University of California - San Francisco

Non-invasive microscopy detects activation state and distinguishes between cell types

image: Methods of extracting features from label-free immune cell analysis. Multivariate label-free data, composed of both morphological and spectral parameters, are used to identify high-level features at the single-cell level such as cellular type, response to drugs, as well as response differences between specimens.

Image: 
Osaka University

Most analytical methods in biology require invasive procedures to analyze samples, which cause irreversible changes to the samples or even destroy them. Furthermore, the sensitivity of such approaches often stems from averaging the signals generated by a large number of cells, making it impossible to study the underlying heterogeneity of responses.

In the medical field, X-ray and MRI imaging are highly useful since they allow diagnosis through non-invasive imaging. Similarly, label-free techniques are becoming increasingly popular in microscopy thanks to their non-invasiveness. Quantitative phase microscopy and Raman spectroscopy are the label-free techniques used in this study to extract biomarkers based on cellular morphology and intracellular content. These approaches have previously been used to characterize specimens and identify cells of different origins; however, measuring features finer than the cell type with these techniques has proven challenging.

Assistant Professor Nicolas Pavillon and Associate Professor Nicholas I. Smith of the Immunology Frontier Research Center (IFReC) at Osaka University developed a label-free multimodal imaging platform that enables the study of cell cultures non-invasively without the need for any contrast agent. The pair of researchers showed how the label-free signals can be employed to create models that detect the activation state of macrophage cells and distinguish between different cell types, even in the case of highly heterogeneous populations of primary cells. "We devised specific statistical tools that allow for the identification of the best methods for detecting responses at the single-cell level, and show how these models can also identify different specimens, even within identical experimental conditions, allowing for the detection of outlier behaviors," says Associate Professor Smith.

The findings of this study show that a non-invasive optical approach, which enables the study of live samples without requiring contrast agents, can also achieve high sensitivity at the single-cell level. "In particular," says Assistant Professor Pavillon, "our results show that this method can identify different cell sub-types and their molecular changes during the immune response, as well as outlier behaviors between specimens."

Credit: 
Osaka University

Harvesting energy from the walking human body: lightweight smart materials-based energy harvester developed

image: The energy harvester device is extremely light, weighing only 307 grams.

Image: 
The Chinese University of Hong Kong

The device can capture biomechanical energy from the motion of the human knee and convert it to electricity, which can be used to power wearable electronics such as pedometers, health monitors, and GPS units. This work has been published in Applied Physics Letters and was recommended as a featured article by the editors.

So far, researchers have developed large devices that use human motion to generate electricity, such as electromagnetic generator-based energy harvesters that capture energy while people walk on treadmills or ride bicycles. However, these bulky devices hamper the user's movement and add to their burden because of their considerable weight and the large interaction forces between harvester and body, which greatly restricts their wide use. To overcome this, a research team led by Professor Liao proposed and developed a lightweight energy harvester employing piezoelectric macro fiber composites integrated with novel mechanical structures.

Piezoelectric macro fiber composites are lightweight materials, which can produce electricity under deformation. The proposed energy harvester employs a bending beam and a slider-crank mechanism to capture the motion of the human knee when walking. Then, the captured motion is used to deform piezoelectric macro fiber composites pieces bonded to the bending beam so that electricity is produced when the human knee flexes or extends.

Professor Liao said, "The human knee joint has a larger range of motion than other lower limb joints such as the ankle and hip, which enables energy harvesters to capture the motion more easily and generate more electricity." The prototype harvester, made with piezoelectric macro fiber composites, generates an average power of 1.6 mW when the wearer walks at about 2-6.5 km/h. The generated electricity is sufficient to power common wearable electronic devices such as smart bands. Furthermore, the prototype weighs only 307 grams; when walking with it, the wearer's metabolic cost is almost the same as when walking without the device. Unlike existing electromagnetic generator-based energy harvesters, this lightweight smart materials-based harvester can capture energy from human motion without increasing the wearer's burden, and it is expected to significantly promote the use of biomechanical energy harvesters.
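A quick back-of-envelope check shows what the reported 1.6 mW average buys. The 1 mW smart-band drain below is an assumed, illustrative figure, not a number from the paper:

```python
# Energy budget for an hour of walking with the knee harvester.
harvest_power_w = 1.6e-3          # average output reported in the article
walk_time_s = 3600.0              # one hour of walking
energy_j = harvest_power_w * walk_time_s      # joules harvested

band_drain_w = 1.0e-3             # assumed draw of a simple smart band
runtime_s = energy_j / band_drain_w           # operation time funded
# One hour of walking (~5.8 J) runs the assumed 1 mW band for ~1.6 hours.
```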

Professor Liao stated, "This apparatus will attract much attention from mountaineers and hikers. If they get lost in remote mountains or a wilderness where the power grid is unavailable, the device can derive energy from their motion and convert it to electricity, enabling wearers to continuously monitor their vital signs, know their position, or even send out an SOS signal at any time when they need help. At present, we are focusing on improving the performance of the harvester by reducing the weight of the device and increasing its energy-harvesting efficiency. We plan to commercialise the harvester and bring it to market by cooperating with garment manufacturers to embed the device in sportswear."

Credit: 
The Chinese University of Hong Kong

A super-fast 'light switch' for future cars and computers

image: An optical network including the electro-opto-mechanical switches: Depending on the voltage, the switches either deflect a light beam by 90 degrees (front left) or let it pass through the waveguide (front right).

Image: 
Haffner C, et al 2019

Self-driving cars have become better and more reliable in recent years. Before they can be allowed to drive completely autonomously on our roads, however, a few hurdles still have to be cleared. Above all, the need to assess the surroundings at lightning speed and to recognize people and obstacles pushes current technologies to their limits. A team of scientists led by Juerg Leuthold at the Institute for Electromagnetic Fields at ETH Zurich, together with colleagues at the National Institute of Standards and Technology (NIST) in the USA and at Chalmers University in Gothenburg (Sweden), has now developed a novel electro-opto-mechanical switch that might elegantly solve both problems in the future.

Plasmonics as a magic ingredient

To achieve this, the researchers used a magic ingredient known as "plasmonics". In this technology, light waves are squeezed into structures that are much smaller than the wavelength of the light - which, according to the laws of optics, should be impossible to do. It can be made possible, however, by guiding the light along the boundary between a metal and a dielectric - a substance, such as air or glass, that hardly conducts electric current.

The electromagnetic waves of the light partially penetrate the metal and cause the electrons inside it to oscillate, which results in a hybrid creature made of a light wave and an electronic excitation - the plasmon. More than ten years ago, some well-known physicists already predicted that optical switches based on plasmons could lead to a revolution in data transmission and data processing, as both can be done much faster with photons than with traditional electronics.

So far, however, real-life commercial applications have failed because of the large losses encountered when transporting photons through plasmonic devices, and because of the high switching voltages needed.

Exploiting the strengths of plasmonics

"We have now solved those problems by exploiting the good properties of plasmonics while minimizing the bad ones", says postdoc Christian Haffner, who led the project and is also first author of the recently published Science paper. The central feature of the electro-opto-mechanical switch developed by Haffner and his colleagues is a gold membrane that is only 40 nanometres thick and a few micrometres wide, and which is separated from a silicon substrate by an aluminium oxide disk.

In this configuration, the size of the gap between the gold membrane and the substrate can be controlled through mechanical forces. When a voltage is applied, the membrane bends slightly and, as a result, the gap becomes smaller.

The size of the gap, in turn, decides whether a light wave simply passes by the gold membrane or is deflected around it. This is where the plasmons come in. In fact, for a certain width of the gap only plasmons having a particular wavelength can be excited on the gold membrane. If the light has a different wavelength, it doesn't couple to the membrane but simply propagates in a straight line inside the silicon waveguide.

Small losses and switching voltage

"Because we only use the plasmons for the short trip around the switching membrane, we have substantially lower losses than those of current electro-optic switches", Haffner explains. "Also, we made the gold membrane very small and thin, so that we can switch it very fast and with a small voltage."

The scientists have already demonstrated that their new switch can be flicked on and off several million times per second with an electric voltage of little more than one volt. This makes the bulky and power-hungry amplifiers typically used for electro-optical switches superfluous. In the future, the scientists plan to improve their switch further by making the gap between gold and silicon smaller still. This will make it possible to significantly reduce both the light losses and the switching voltage.

Applications from cars to quantum technologies

Possible applications for the new switch are plentiful. For instance, LIDAR systems ("Light Detection and Ranging") for self-driving cars, in which the intensity and direction of propagation of light beams needs to be varied extremely quickly, could benefit from the fast and compact switches.

Moreover, the pattern recognition necessary for steering the cars could also be accelerated with such switches. To that end, the switches could be used in optical neural networks that mimic the human brain. There, they would be employed as weighting elements with which the network "learns" to recognize certain objects - practically at the speed of light.

Such optical implementations of circuits that normally work with electric current are also hot topics in other areas. Optical quantum circuits are also intensively studied, for instance, for the realization of quantum technologies. Until now, optical quantum circuits have been supported by classical optical switches. Those switches are typically based on a variation in the refractive index of a material when it is heated, which changes the degree to which light beams are bent by it.

However, this is a slow process and, in the long run, incompatible with the low temperatures at which other quantum elements such as the quantum bits or "qubits" of a quantum computer (corresponding to the classical bits that represent "0" and "1") typically work. A fast switch that practically doesn't heat up at all should, therefore, be a welcome addition to such applications, too. 

Credit: 
ETH Zurich

Artyom Yurov, IKBFU physicist: 'Can quantum effects occur at mega-scale?'

image: Prof. Artyom Yurov

Image: 
Immanuel Kant Baltic Federal University

An article by Artyom Yurov, Director of the IKBFU Institute of Physics, Mathematics and Information Technology, and the Institute's Associate Professor Valerian Yurov was recently published in the European Physical Journal. In it, the scientists present calculations according to which the Universe may have quantum properties.

Artyom Yurov explained:

"To begin with, let's remember what quantum physics is. Perhaps this is the most amazing phenomenon known to people. When scientists started studying atoms for the first time, they noticed that everything works "upside down" in the microcosm. For example, according to quantum theory, an electron may present in several places simultaneously.

Try to imagine your cat simultaneously lying on the sofa and eating from its bowl in the other corner of the room. The cat is not either here or there, but in both places at once. But the cat is there only BEFORE you look at it. The moment you look, it settles into EITHER the bowl OR the sofa. You may ask: if the cat acts so strangely only when we are not observing it, how do we know it acts this way at all? The answer is simple: math! If we gather statistics from our observations of the cat (counting how often it was on the sofa and how often near the bowl), the numbers cannot be explained by assuming the cat was always EITHER near the bowl OR on the sofa. Well, it doesn't work like that with cats, but it works fine for electrons.

When we observe this particle, it really does appear in one place, and we can record that; but when we do not observe it, it must be in several places at once. This, for example, is what chemistry classes mean when they talk about electron clouds. No wonder poor children never understand it. They just memorize..."

Decoherence Effect

Yes, a cat is not an electron, but why should that matter? Cats consist of elementary particles such as electrons, protons and neutrons, and all these particles behave the same way when measured at the quantum level. So why can't a cat be in two places simultaneously?

And the other question is: what is so magical about our ability to "observe"? When we don't "observe", the object is "smeared" all over the universe, but the moment we look at it, it is gathered in one place! Well, physicists don't say "gathered"; they say "the wave function collapsed", but those fancy words simply mean "gathered in one place as a result of observation". How are we able to do that?

"Firstly, the answer to these complex questions appeared at the end of the last century, when such a phenomenon as decoherence was discovered. It turns out that indeed, any object is located in several places at once, in very many places. It seems to be spread throughout the universe. But if the object comes into interaction with the environment, even collides with one atom of a photon, he immediately "collapses". So there is no mystical abilities to cause quantum collapse by observation - this is due to interaction with the environment, and we are simply part of this environment.

Secondly, there is no absolute collapse as such. The collapse happens in the following way: before interacting with the environment, the object is "smeared" over two places (we say "two places" to simplify; in reality it might be smeared over hundreds of thousands of places). After the interaction, the object is found in one place 99.9999% (and many, many nines after) of the time, and in the second place only for the tiny remaining fraction. So we observe it as being in one and only one place! This happens almost instantly, and the bigger the object, the faster the "collapse". We cannot perceive the residual spread or register it with instruments, as such devices simply do not exist. And they cannot be created."

According to Artyom Yurov, the idea that the Universe has quantum properties was put to him long ago by his friend and co-author from Madrid, Professor Pedro Gonzalez Diaz (who has sadly since passed away).

Prof. Yurov said: "Back in those days I was skeptical about the idea, because it is known that the bigger an object is, the faster it collapses. Even a bacterium collapses extremely fast, and here we are talking about the Universe. But then Pedro asked me: "What does the Universe interact with?" and I answered: nothing. There is nothing but the Universe, and there is nothing for it to interact with. That, theoretically, allows us to think of it as a quantum object."

A human being and the facsimiles.

However, the impetus for writing a scientific article about the quantum nature of the Universe was not so much Pedro Gonzalez Diaz's idea as a 2007 scientific publication by Hall, Deckert and Wiseman, who described those quantum miracles in the language of classical mechanics by adding certain "quantum forces" to it.

That is, each location of the object is described as a separate "world", and these "worlds" are assumed to act on each other with real "forces". The idea of "many worlds" has existed for a long time and belongs to Hugh Everett; the idea of describing quantum effects by introducing additional forces has also existed for a long time and belongs to David Bohm. But Hall, Deckert and Wiseman were able to combine these ideas and build a meaningful mathematical model.

"And when Valerian and I saw this work in 2007," says Artyom Yurov, "it seemed to us that the mathematical formalism used in it allows us to look very differently at what Pedro said at the time. The essence of our work is that we took the equation that cosmologists use to describe the Friedmann-Einstein Universe, added "quantum forces" according to the HDV scheme, and investigated the solutions obtained. We managed to get some amazing results, in particular, it is possible that some puzzles of cosmology can receive unexpected coverage from this side. But the most important thing is that such a model is testable. "

It is too early to say where this formulation of the question may lead. The theory must still be confirmed by experiments (that is, by observations). But it is already clear that the scientists have come close to something that could fundamentally change our understanding of the universe.

Credit: 
Immanuel Kant Baltic Federal University

Designer lens helps see the big picture

image: Quantitative phase images reveal more details than classical microscopy images. The KAUST technique captures both bright-field images (top) and phase images (bottom) in a single measurement.

Image: 
© 2019 KAUST

Microscopes have been at the center of many of the most important advances in biology for many centuries. Now, KAUST researchers have shown how a standard microscope can be adapted to provide even more information.

In its simplest form, microscopy creates an image of an object by measuring the intensity of light passing through it. This requires a sample that scatters and absorbs light in different ways. Many living cells, however, absorb very little visible light, meaning that there is only a small difference between light and dark regions, known as the contrast. This makes it difficult to see the finer detail.

But the light passing through the sample changes not only its intensity, but also its phase: the relative timing of the peaks in the optical wave. "Phase-contrast microscopy converts phase into larger amplitude variations and hence allows the viewing of fine, detailed transparent structures," explains KAUST Ph.D. student Congli Wang.

Measuring the phase of light is trickier than measuring its intensity. Most phase-contrast microscopes must include a component that converts the phase change to a measurable intensity change. But this conversion is not precise; it only approximates the phase information.

Wang and his colleagues from the KAUST Visual Computing Center, under the supervision of Wolfgang Heidrich, a professor of computer science, have now developed a new method for quantitative phase and intensity imaging. Crucial to the performance of their microscope was an element known as a wavefront sensor. Wavefront sensors are custom-designed optical sensors that can encode the wavefront, or phase, information into intensity images.

The team designed an innovative high-resolution wavefront sensor, which they are incorporating into a commercial microscope to improve its imaging performance. They then reconstructed the phase-contrast image using a computer algorithm they developed to numerically retrieve the quantitative phase from an image pair: a calibration image obtained without the sample and a measurement image obtained with the sample in place.
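The calibration/measurement pairing can be illustrated with a toy computation. Sensors of this general kind infer how the sample displaces a reference intensity pattern; the sketch below recovers a single global shift by FFT cross-correlation, whereas the KAUST algorithm works locally and reconstructs a full quantitative phase map, so this only conveys the principle:

```python
# Toy calibration/measurement comparison: recover the displacement of a
# reference pattern via FFT cross-correlation. A real wavefront sensor
# estimates such shifts region by region and integrates them into a phase map.
import numpy as np

rng = np.random.default_rng(0)
calib = rng.random((128, 128))                 # calibration image (no sample)
meas = np.roll(calib, (5, -3), axis=(0, 1))    # measurement: pattern shifted

# Circular cross-correlation via the FFT; the peak marks the shift.
corr = np.fft.ifft2(np.fft.fft2(meas) * np.conj(np.fft.fft2(calib))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Wrap indices in [0, N) into signed shifts in [-N/2, N/2).
dy -= corr.shape[0] * (dy >= corr.shape[0] // 2)
dx -= corr.shape[1] * (dx >= corr.shape[1] // 2)
# (dy, dx) recovers the (5, -3) pixel displacement.
```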

This approach streamlines several aspects of microscopy. While other methods have achieved quantitative phase imaging in the past, they have required expensive or complicated setups, specialized light sources or a long time to generate the image. "Our method allows snapshot acquisition of high-resolution amplitude bright-field and accurate quantitative phase images via affordable simple optics, common white-light source and fast computations at video rates in real time," says Heidrich. "It is the first time, to our knowledge, that all these advantages are combined into one technique."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Suicides reduced by 17 per cent in new collaborative prevention programme

A new suicide prevention programme, which includes swift access to specialist care and 12 months of telephone follow-ups, has been shown to reduce deaths by 17 per cent.

The programme, called Suicide Prevention by Monitoring and Collaborative Care (SUPREMOCOL), brought together care services and key community agencies to create a cohesive network that worked together with the aim of reducing preventable deaths by suicide.

Researchers say it has the potential to provide a new effective suicide prevention intervention for people with a high to moderate suicide risk as well as to improve the chain of care.

The study was led by researchers at the University of York in collaboration with Tilburg University in the Netherlands. The network of partners, based in the Noord-Brabant region of the Netherlands, concentrated on four elements:

recognition of people at risk of suicide through the development and implementation of a monitoring system

swift access to specialist care for those at risk

ensuring adequate nurse care managers for collaborative care case management between the different health care agencies

12 months of telephone healthcare follow-ups

Lead author, Professor Christina van der Feltz-Cornelis from Hull York Medical School said: "An important issue in effective suicide prevention is that in Noord-Brabant and in the Netherlands as a whole, approximately two-thirds of suicide victims were not receiving mental health care, while they were probably in need of it, as suicide occurs mostly in the context of mental health disorders.

"This was due to a lack of visibility of people at risk, as help-seeking behaviour for suicidality is low, possibly due to stigma and poor suicide literacy.

"A lack of communication between health care providers of different institutions or lack of swift exchange into specialist care due to logistical barriers and waiting lists was also a problem.

"Before we started our research collaboration, there was no single tool, questionnaire or instrument that can predict suicide. Clinical assessment can also be very hard, given the fact that about 45% of patients who died by suicide did meet with a primary care provider in the preceding month."

"Our project looked at all of these issues and tried to tackle them in a more cohesive manner. We were pleased to see a reduction of 17 per cent in suicides. Compared to other studies, this is a large improvement and we believe that rolling out the programme in other parts of Europe could see similar successes."

The long term aim of the project is to provide specialist mental health institutions and chain partners in general health care and in the community with a sustainable and adoptable intervention for suicide prevention.

Figures from the World Health Organisation show that more than 800,000 suicides occur worldwide every year with 56,000 being reported in the European Union. There are indications that for each adult who died by suicide there may have been more than 20 others attempting suicide.

Credit: 
University of York