Tech

Surprising NYC ridesharing study findings have implications for policymakers

Researchers have limited access to information about how people use popular ridesharing services like Uber and Lyft. But recent analysis of aggregate data about ridesharing trips in New York City, conducted by researchers at UConn and published last month in Transportation Research Record: Journal of the Transportation Research Board, sheds new light on use of the service by people in the city's outer borough neighborhoods.

Analyzing available data from New York City's Taxi and Limousine Commission, a local regulator that requires some limited reporting from ridesharing companies about the trips they provide within the city, the researchers found that for-hire trips in New York's five boroughs increased by 46 percent - 82 million rides annually - from 2014 to 2017.

What surprised the researchers, however, was that, even within the limits of available data, the surge in rideshare trips that originated outside of Manhattan was apparent and significant. Rideshare trips starting in the outer boroughs have exploded, increasing to 56 percent of the market in neighborhoods that are typically home to minority and low-income households that do not own vehicles of their own.

"These are really important things that are happening, and they're changing the city," said Carol Atkinson-Palombo, a professor in UConn's Department of Geography, co-director of the Transportation Technology & Society Research Group and the lead author of the study. "We really can't afford to not have more transparency about what's going on, because policymakers can't respond if they don't have a sense of what's happening, and we can't rely on the companies to optimize the public good."

These neighborhoods have typically been underserved by public transit as well as traditional taxi services, Atkinson-Palombo said, and while companies like Uber and Lyft may well be serving a mobility need, and doing so in a way that is convenient to users, the fact that they are companies primarily driven by profit raises significant equity concerns.

"From one side, the service is filling a gap, and that's a really positive thing," said Atkinson-Palombo. "But I think we have some concerns that they are for-profit entities and, at some point, especially now that they've gone public, they might need to charge market rates."

Riders also have no control over weather or traffic conditions that can enable the companies to enact surge pricing, she said, which raises a real vulnerability for users who come to rely on the service.

"Mobility is so important," she said, "and you can't be held to ransom ... they're not accountable to anybody and, at the end of the day, their remit is not to provide public transit. Their remit is to make profit."

The increase in ridership also has implications for cities trying to address greenhouse gas emission targets and enact climate action plans, Atkinson-Palombo said.

"All of these trips that are being taken are probably something called 'induced travel,' so it's like extra on top," she said, "and there are going to be greenhouse gas emission implications from that. Very few of the cars are electric vehicles."

Increased usage of single-ride vehicles and the practice known as "deadheading" - where most Uber and Lyft drivers spend a significant portion of their time operating without a passenger as they travel from drop-off to pick-up points - both contribute to increased roadway congestion and vehicle tailpipe emissions.

"This has potentially profound impact on climate change and greenhouse gas emission policies," Atkinson-Palombo said. "Especially if you're thinking about the amount of emissions that would be incurred if everybody was moving by Uber and Lyft, because they're lower occupancy."

While Atkinson-Palombo said the study was preliminary work, she said it represented "a really good starting point for asking questions" about the impact of increased ridesharing usage, particularly in areas that are not as densely populated as Manhattan. She said that regulators in New York and in other cities, including Chicago, are starting to tighten up reporting requirements for ridesharing companies because they understand the need for better data and transparency in the industry.

The researchers intend to more closely examine the advertising and marketing being used by ridesharing companies to determine if the surge in outer borough usage can be attributed to business strategies targeting so-called "transit deserts."

"Uber and Lyft, they don't break even," Atkinson-Palombo said. "They're subsidized trips, and so they might be really massively marketing their services at a really heavy discount, but we don't know because we can't see any of the pricing data. But we'll be able to find out from people."

She said her research group is also working to partner with an advocacy organization, the Tri-State Transportation Campaign, to conduct interviews and focus groups in outer borough neighborhoods - starting in the Bronx and then moving to Queens - to learn directly from ridesharing drivers and users why and under what circumstances riders choose to use the services and how policymakers might best meet their mobility needs.

"These patterns have revealed that there's demand in this particular corridor," Atkinson-Palombo said. "Now, can we either do something that's kind of a partnership with transit, or can transit come in and see whether they want to fill that gap."

Credit: 
University of Connecticut

Development of 3D particle model for single particles in battery electrodes

image: Electrochemical phenomena inside a single particle predicted using the model developed.
a) Charged state; b) Electrical potential; c) Lithium concentration; d) Voltage.

Image: 
DGIST

A model that enables 3D observation of the micrometer-sized particles inside a battery cell has been developed. By supporting detailed analysis of these particles, the model is expected to help enhance the energy efficiency of cells.

DGIST announced that Professor Yong Min Lee's team in the Department of Energy Science & Engineering has developed a 'micron single-particle electrochemical model' that can estimate the electrochemical properties of a single particle of electrode active material in 3D. This 3D view of single particles of electrode active materials, which are difficult to examine experimentally, is expected to be applied to research on electrochemical phenomena and on particle designs that enhance cell efficiency.

Although secondary cells are commonly used as the power source of electric vehicles, they are still not as efficient as internal combustion engines. While their efficiency can be improved by increasing the energy density of the cells, such R&D has not been actively pursued because of limitations in precise analysis technology.

Professor Lee's team reasoned that the energy density of a cell could be enhanced by optimizing the design of the electrode active materials inside it. They therefore sought a way to examine the micrometer-sized single particles of electrode active materials and developed an electrochemical model that can conduct 3D analysis of those single particles.

Unlike existing models, which focus on the cell electrode as a whole, the model developed by Professor Yong Min Lee's team focuses on the single particles of active material that make up the electrode. In doing so, the team moved a step closer to fundamentally increasing cell efficiency through accurate analysis of the properties and characteristics of single particles in 3D. Because it enables 3D analysis of particles, the model is expected to be applied widely in research on designing the single particles of electrode active materials in a cell.
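The team's model itself is not spelled out in the announcement, but the textbook starting point for single-particle electrochemical modeling is Fickian diffusion of lithium inside a spherical particle. The sketch below is only an illustration of that idea, not the DGIST model: it advances the spherical diffusion equation with an explicit finite-difference scheme under a constant lithiation flux at the particle surface, using made-up parameter values.

```python
# Illustrative sketch only: Fickian lithium diffusion in a spherical
# particle, dc/dt = D * (1/r^2) * d/dr(r^2 * dc/dr), solved with an
# explicit finite-difference scheme. Parameter values are invented,
# not taken from the DGIST study.

N = 50                    # radial grid points
R = 5e-6                  # particle radius in meters (micron scale)
D = 1e-14                 # solid-phase lithium diffusivity, m^2/s
j = 1e-6                  # constant lithiation flux at the surface, mol/(m^2 s)
dr = R / (N - 1)          # radial grid spacing
dt = 0.2 * dr * dr / D    # time step, safely below the explicit stability limit

c = [1000.0] * N          # uniform initial concentration, mol/m^3

def step(c):
    """Advance the radial concentration profile by one time step."""
    new = c[:]
    for i in range(1, N - 1):
        r = i * dr
        # discretized spherical Laplacian of c at radius r
        lap = ((c[i + 1] - 2 * c[i] + c[i - 1]) / dr**2
               + (2.0 / r) * (c[i + 1] - c[i - 1]) / (2 * dr))
        new[i] = c[i] + dt * D * lap
    new[0] = new[1]                   # zero-gradient (symmetry) at the centre
    new[-1] = new[-2] + j * dr / D    # imposed insertion flux at the surface
    return new

for _ in range(2000):                 # roughly 400 s of simulated lithiation
    c = step(c)

surface, centre = c[-1], c[0]         # surface concentration leads the centre
```

In a full single-particle model this diffusion step would be coupled to reaction kinetics at the particle surface to recover quantities such as the potential and voltage shown in the figure.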

Regarding this research, Professor Yong Min Lee of the Department of Energy Science and Engineering said, "Compared to previous works, our model can look into what happens within a single particle. As a result, it provides an innovative way of designing micrometer-sized particles. Our next goal is to apply this electrochemical model to improve the cell efficiency of electric vehicles."

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

Journal of Dental Research Centennial July 2019: Fluoride Revolution and Dental Caries

Alexandria, VA, USA - 2019 marks the Centennial of the Journal of Dental Research (JDR). Over the last century the JDR has been dedicated to the dissemination of new knowledge and information on all sciences relevant to dentistry and to the oral cavity and associated structures in health and disease. To celebrate, the JDR is featuring a yearlong, monthly commemorative article and podcast series that highlights topics that have transformed dental, oral and craniofacial research over the past 100 years.

While the global epidemic of dental caries that began about 140 years ago was very largely caused by the rise in sugar consumption, the more recent decline in caries during the last 50 years has been due largely to the use of fluoride. In the second July 2019 issue of the JDR, the article "Fluoride Revolution and Dental Caries: Evolution of Policies for Global Use," by IADR past president Helen Whelton, University College Cork, Ireland, John Spencer, University of Adelaide, Australia, Loc Do, University of Adelaide, Australia and Andrew Rugg-Gunn, Newcastle University, Newcastle upon Tyne, England, focuses on population-level interventions, which have been predominantly through fluoridation of water supplies and the widespread use of fluoride toothpaste.

"Epidemiological studies over 70 years ago provided the basis for the use of fluorides in caries prevention and revealed the clear relation between fluoride exposure in drinking water and the prevalence and severity of dental fluorosis and dental caries," said Spencer. "The proposition that cities with water supplies deficient in fluoride might have their level brought up to inhibit caries emerged in 1943 and 1944. This hypothesis was tested in four community fluoridation trials in the United States and Canada. Those findings showed a marked reduction in caries experience, around 50%, in children and adolescents in the fluoridated cities compared to non-fluoridated control cities or the levels of caries in cities before fluoridation."

"Previously, drinking water had been the only significant source of fluoride. Now there are additional sources, most notably fluoridated toothpaste. These two fluoride sources have an additive effect. In many countries in the early 1960s, 12-year-olds had on average five decayed permanent teeth, and by age 15 the number had risen to nine; that figure is now less than one for 12-year-olds," said Whelton. "Policy has remained supportive of both community fluoridation where practicable and affordable fluoridated toothpaste. Fluoride policies have radically improved oral health, enhancing general health and quality of life for populations across the world."

The seventh JDR Centennial podcast, titled "Fluoride Revolution and Dental Caries: Evolution of Policies for Global Use" features a conversation between Spencer, Whelton and Joy Richman, University of British Columbia, Vancouver, Canada.

Throughout 2019 JDR Associate Editor, Nicholas Jakubovics, Newcastle University, England, shares 'Historical Highlights' and archival excerpts from the rich history of research findings published in the JDR. In the second July issue, Jakubovics highlights the article "Bovine Teeth as Possible Substitutes in the Adhesion Test" (Nakamichi I., Iwaku M., Fusayama T. 1983. Bovine Teeth as Possible Substitutes in the Adhesion Test. J Dent Res. 62:1076-1081). Through the 1950s and 1960s, the consumption of sugar rose to peak levels around the world, while a shortage of dentists left many people without adequate dental care. Intact human enamel was becoming more difficult to obtain for scientific research studies and alternatives needed to be found. This influential paper directly compared human and bovine enamel as substrates for adhesion and found essentially no significant differences between the two. This work paved the way for the widespread use of bovine enamel as a substitute for human enamel and enabled research that otherwise would not have been possible.

Along with the article and podcast series, the legacy of the JDR was honored during a celebration at the 97th General Session of the IADR, held in conjunction with the 48th Annual Meeting of the AADR and the 43rd Annual Meeting of the Canadian Association for Dental Research, in Vancouver, British Columbia, Canada on June 19-22, 2019. For more information on the JDR Centennial, please visit: http://www.iadr.org/JDRcentennial.

Credit: 
International Association for Dental, Oral, and Craniofacial Research

UK researchers develop ultrafast semiconductors

image: Professor Diana Huffaker, Institute for Compound Semiconductors, Cardiff University.

Image: 
Mike Hall Photography

UK researchers have developed world-leading Compound Semiconductor (CS) technology that can drive future high-speed data communications.

A team from Cardiff University's Institute for Compound Semiconductors (ICS) worked with collaborators to develop an ultrafast and highly sensitive 'avalanche photodiode' (APD) that generates less electronic 'noise' than its silicon rivals.

APDs are highly sensitive semiconductor devices that exploit the 'photoelectric effect' - the release of charge carriers when light strikes a material - to convert light into electricity.

Faster, supersensitive APDs are in demand worldwide for use in high-speed data communications and light detection and ranging (LIDAR) systems for autonomous vehicles.

A paper outlining the breakthrough in creating extremely low excess noise and high sensitivity APDs is published today in Nature Photonics.

Cardiff researchers led by Ser Cymru Professor Diana Huffaker, Scientific Director of ICS and Ser Cymru Chair in Advanced Engineering and Materials, partnered with the University of Sheffield and the California NanoSystems Institute, University of California, Los Angeles (UCLA) to develop the technology.

Professor Huffaker said: "Our work to develop extremely low excess noise and high sensitivity avalanche photodiodes has the potential to yield a new class of high-performance receivers for applications in networking and sensing.

"The innovation lies in the advanced materials development using molecular beam epitaxy (MBE) to "grow" the compound semiconductor crystal in an atom-by-atom regime. This particular material is rather complex and challenging to synthesize as it combines four different atoms requiring a new MBE methodology. The Ser Cymru MBE facility is designed specifically to realize an entire family of challenging materials targeting future sensing solutions."

Dr. Shiyu Xie, Ser Cymru Cofund Fellow, said: "The results we are reporting are significant as they operate in a very low-signal environment, at room temperature, and, very importantly, are compatible with the current InP optoelectronic platform used by most commercial communication vendors.

"These APDs have a wide range of applications. In LIDAR, or 3D laser mapping, they are used to produce high-resolution maps, with applications in geomorphology, seismology and in the control and navigation of some autonomous cars.

"Our findings can change the global field of research in APDs. The material we have developed can be a direct substitute in the current existing APDs, yielding a higher data transmission rate or enabling a much longer transmission distance."

The Ser Cymru Group within ICS is now preparing a proposal with collaborators at Sheffield for funding from UK Research and Innovation to support further work.

Cardiff University Vice-Chancellor, Professor Colin Riordan, added: "The work of Professor Huffaker's Ser Cymru Group plays a vital role in supporting the ongoing success of the wider Compound Semiconductor cluster, CS Connected, which brings together ten industry and academic partners in South Wales to develop 21st Century technologies that create economic prosperity."

Professor Huffaker added: "Our research produces direct benefits for industry. We are working closely with Airbus and the Compound Semiconductor Applications Catapult to apply this technology to future free-space optical communication systems."

Credit: 
Cardiff University

Genomic 'map' reveals not all fat is equal

image: Garvan and CSIRO researchers have uncovered key differences underlying harmful and non-harmful fat.

Image: 
Garvan Institute of Medical Research

It's not just about how much - where fat is stored in the body can have significant implications for human health.

A new study compared fat cells from under the skin and from the harmful fat inside the abdomen, creating the first comprehensive genomic map that reveals unique features, which appear to 'hard-wire' different types of fat early in cell development.

Led by researchers from the Garvan Institute of Medical Research and the CSIRO, the findings may guide future research to uncover the drivers of harm arising from fat build-up in different parts of the body. The team's findings are published in the journal Scientific Reports.

"Our findings tell us that a cell's epigenome - the secondary code that controls how genes are read - can give us significant insight into how fat cells develop," says co-senior author Professor Susan Clark, Genomics Research Director at the Garvan Institute. "The study gives us a completely new look at the underlying factors that contribute to the development of cells that can present significant health risks."

New insights into fat development

Fat can be harmful or largely benign, depending on where in the body it is located. 'Subcutaneous' fat sits underneath the skin to store energy, and is generally harmless. Meanwhile, accumulation of 'visceral' fat, located in the abdomen, including around the liver, stomach and intestines, promotes inflammation and metabolic disturbances, and is associated with health complications such as type 2 diabetes and heart disease.

"It has been unclear why fat cells, which appear so similar, are associated with such different health outcomes," says author Professor Katherine Samaras, Head of the Clinical Obesity, Nutrition and Adipose Biology lab at Garvan and endocrinologist at St Vincent's Hospital Sydney. "Now we start to understand that the different fat cells are wired differently right from the start."

In their study, the researchers isolated the fat-storing cells from visceral and subcutaneous fat biopsies from three individuals. The team compared the fat cells' epigenomes, the chemical tags attached to DNA that control how genes are read, and their transcriptomes, the genetic output of the cell.

By creating a comprehensive genomic map, the researchers discovered a number of fundamental epigenetic differences linked to changed genetic output, between the cells in subcutaneous and visceral fat.

Further, the team discovered these differences arise early in cell development, and are likely present in the precursor cells from which fat cells arise. This indicates that despite their similar appearance, fat cells become 'hard-wired' early to be harmful or non-harmful.

"When compared with other cell types in the body, visceral and subcutaneous fat cells are very similar to each other in their function," says lead author Dr Stephen Bradford. "Our analysis revealed epigenetic differences that may control different genes being turned on in subcutaneous and visceral fat cells that could contribute to their different properties and health effects."

"This comprehensive study demonstrated that the epigenome can provide an unprecedented view into the differences of cells that seem apparently very similar," says co-senior author Dr Peter Molloy from the CSIRO. "We believe that such analyses will provide us with further crucial insight not only into the development of fat, but also for other cell types in future."

Credit: 
Garvan Institute of Medical Research

Novel method identifies patients at risk for HIV who may benefit from PrEP strategies

Researchers have demonstrated the effectiveness of using algorithms that analyze electronic health records (EHRs) to help physicians identify patients at risk for HIV who may benefit from preexposure prophylaxis (PrEP), which significantly reduces the risk of getting HIV. The studies, which were supported by the National Institute of Mental Health (NIMH) and the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, advance a novel method that can help clinicians identify individuals most in need of PrEP. The two studies were published today in The Lancet HIV.

"The development of innovative tools to increase PrEP use and adherence in the United States is crucial to our efforts to end the HIV epidemic," stated Dianne Rausch, Ph.D., director of the NIMH Division of AIDS Research. "Identifying individuals who may benefit from PrEP is a major challenge for clinicians, and this is an important advance that could help improve PrEP delivery and use."

PrEP is a strategy in which healthy people routinely take one or more antiretroviral drugs to reduce their risk of getting HIV. It is highly effective for reducing the risk of HIV acquisition, yet it remains greatly underutilized. The Centers for Disease Control and Prevention estimates that as many as 1.1 million Americans may be candidates for PrEP use - but, in 2016, only an estimated 78,360 (about 7%) were prescribed PrEP medication.

Physicians may underprescribe PrEP due to a lack of time or skills to adequately assess patients for HIV risk. In other instances, physicians may be unfamiliar with PrEP or consider it outside their purview to prescribe.

"The incorporation of automatic screening algorithms into EHRs could help busy clinicians identify and assess patients who may benefit from PrEP more efficiently, and empower them to prescribe PrEP more frequently," said study author Douglas Krakower, M.D., of Beth Israel Deaconess Medical Center and Harvard Medical School.

In two large-scale studies, which used EHRs from large health systems in Massachusetts and California, researchers created and tested algorithms that analyze a rich array of health data and patient information to help clinicians automatically identify those at highest risk for HIV infection and therefore most likely to benefit from PrEP medications.

In the first study, Krakower and colleagues used machine learning to create an HIV prediction algorithm using 2007-2015 EHR data from more than 1 million patients attending Atrius Health, a large healthcare system in Massachusetts. The model used variables in the EHRs such as diagnosis codes for HIV counseling or sexually transmitted infections (STIs), laboratory tests for HIV or STIs, and prescriptions for medications related to treating STIs. The model was subsequently validated using data from 537,257 patients seen by Atrius Health in 2016, as well as 33,404 patients seen by Fenway Health, a community health center in Boston that specializes in providing healthcare for sexual and gender minorities, between 2011 and 2016. In these validation studies, the prediction algorithm was able to successfully distinguish between patients who did or did not acquire HIV, and between patients who did or did not receive a PrEP prescription, with high precision.
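As an illustration of the general approach (not the published pipeline - the feature names and synthetic data below are hypothetical), an EHR-based risk score of this kind can be sketched as a logistic regression over binary patient indicators:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit a logistic-regression risk model with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log loss
            for k in range(len(w)):
                w[k] -= lr * err * xi[k]
            b -= lr * err
    return w, b

def risk_score(w, b, x):
    """Probability-like score in [0, 1] for one patient's indicator vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical binary EHR indicators per patient, e.g.
# [STI diagnosis code, HIV test ordered, STI-related prescription].
random.seed(0)
X, y = [], []
for _ in range(200):
    x = [random.randint(0, 1) for _ in range(3)]
    # synthetic outcome: each indicator raises the chance of the label
    y.append(1 if random.random() < 0.1 + 0.25 * sum(x) else 0)
    X.append(x)

w, b = train_logistic(X, y)
high = risk_score(w, b, [1, 1, 1])   # patient with every indicator present
low = risk_score(w, b, [0, 0, 0])    # patient with none
```

In the studies themselves the feature set was far richer and the models were validated on held-out data; ranking patients by such a score is what lets a system flag, say, the top 2% of scores for a PrEP conversation.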

The researchers discovered many potential missed opportunities to prescribe PrEP. For example, more than 9,500 people in the 2016 dataset had particularly high risk scores from the prediction algorithm and lacked prior PrEP prescriptions.

According to Krakower, "A striking outcome is that our analysis suggests nearly 40% of new HIV cases could potentially have been averted had clinicians received alerts to discuss and offer PrEP to their patients with the highest 2% of risk scores."

The second study, led by Julia Marcus, Ph.D., of Harvard Medical School and Harvard Pilgrim Health Care Institute, with Krakower and colleagues, scaled up this prediction approach by using the EHRs of more than 3.7 million patients receiving outpatient services from Kaiser Permanente Northern California. They developed a model to predict HIV incidence using data from patients who entered the Kaiser Permanente system between 2007 and 2014, and they validated the model on data from patients who had entered the Kaiser Permanente system between 2015 and 2017. The model used variables in the EHRs such as high-risk sexual behavior indications, HIV and STI testing frequency, and STI diagnoses and treatments.

"Our model was able to identify nearly half of the incident HIV cases among males by flagging only 2% of the general patient population," Marcus said. "Embedding our algorithm into the Kaiser Permanente EHR could prompt providers to discuss PrEP with patients who are most likely to benefit."

Both studies are among the first to demonstrate that EHR-based prediction algorithms can effectively identify individuals in general populations who are at high risk for HIV and potential candidates for PrEP. These models offer clinicians an important new tool to reduce new HIV infections. Future research will continue the development of these predictive models and discover the best ways to integrate them with healthcare systems to improve PrEP use and prevent HIV infections.

Credit: 
NIH/National Institute of Mental Health

Study: Some stereotypes seem to be universally applied to biracial groups in the US

EVANSTON, Ill. --- Stereotypes often guide our perceptions of members of social groups. However, research has yet to document what stereotypes may exist for the fastest growing youth demographic in the U.S. -- biracial individuals.

Are biracial Black-White individuals perceived to be more similar to Black people or White people? And when people stereotype biracial Black-White individuals, do they stereotype them more like Black people or more like White people?

A new Northwestern University study has found evidence that there are some stereotypes that seem to be universally applied to biracial groups in the U.S.

"When people think of biracial individuals, regardless of their specific racial background, they tend to stereotype them as being attractive or beautiful," said Sylvia Perry, co-lead author of the study and assistant professor of psychology in the Weinberg College of Arts and Sciences at Northwestern. "We also found that biracial individuals tend to be stereotyped as not fitting in or belonging, suggesting that, because they represent a mixture of two racial groups, people perceive that they do not really belong in either racial group."

When the researchers began the study, they thought, for example, Black-White biracial people might just be attributed some of the same stereotypes that are attributed to White people and some of the same stereotypes that are attributed to Black people.

"A lot of the stereotypes of Black-White biracial people were completely different from the ones people have about White people and Black people," Perry said. "This suggests that people might actually think of biracial people as their own racial group, rather than just a combination of their parents' racial groups."

To more fully understand the stereotypes that people hold about biracial people, the researchers asked about six different biracial groups: biracial Black/White individuals; biracial Asian/White individuals; biracial Black/Hispanic individuals; biracial Black/Asian individuals; biracial Hispanic/Asian individuals; and biracial Hispanic/White individuals. They also examined the extent to which having personal contact with biracial individuals is related to how they are stereotyped.

"When we looked at whether biracial people were stereotyped as being more similar to one of their parents' racial groups than the other, we did not find any systematic patterns," Perry said. "Black-White biracial individuals were seen as equally similar to White people and Black people, Hispanic-Asian biracial people were seen as equally similar to Hispanic people and Asian people, and so on.

"And it did not matter whether people had had a lot of personal contact with that specific biracial group or no personal contact with that biracial group -- either way, biracial individuals were generally seen to be an equal mixture of their parents' racial groups."

This finding came as a surprise to the researchers.

"When we think about how people in the U.S. racially categorize biracial individuals, we often see that biracial Black-White individuals, for example, people like President Barack Obama, tend to be categorized as Black," Perry added. "So we thought that people might also stereotype biracial Black-White individuals more in line with the stereotypes of Black people in the U.S. But our findings indicate that overall that is not the case."

Previous research has examined whether biracial individuals tend to be racially categorized into one or the other of their parents' racial groups. This work has shown that in the U.S., biracial Black-White people tend to be categorized as Black and biracial Asian-White people tend to be categorized as Asian.

"Our studies built upon this work to examine whether stereotypes show these same patterns, and our results suggest that they do not," Perry said. "Another ripe area for future investigation is how different racial groups in the U.S. stereotype biracial individuals. Our work largely represents the societal stereotypes of White people in the U.S. and it is possible that people from other racial groups have stereotypes of biracial individuals that are quite different."

Credit: 
Northwestern University

Natural antioxidant helps improve immune-based therapies by modulating T-cells

image: Long-time Medical University of South Carolina collaborators Dr. Shikhar Mehrotra (left) and Dr. Xue-Zhong Yu (right) authored papers showing that a natural antioxidant can modulate T-cell activity in cancer immunotherapy and graft-vs.-host disease, respectively.

Image: 
Emma Vought, Medical University of South Carolina

Shikhar Mehrotra, Ph.D. and Xue-Zhong Yu, M.D., National Institutes of Health-funded researchers at the Medical University of South Carolina (MUSC), have discovered a way to improve immune-based treatments, such as adoptive T-cell therapy (ACT) and hematopoietic stem cell transplantation (HSCT), by modulating T-cells with thioredoxin, a powerful, naturally occurring antioxidant molecule.

ACT is a cancer immunotherapy in which the patient's own immune cells (T-cells) are engineered to recognize cancer cell-specific markers. First, the patient's blood is collected, then T-cells are removed and genetically modified to attack cancer cells. Finally, the modified T-cells are re-administered to the patient.

ACT is currently used for patients with leukemia and lymphoma. However, a major downside to the treatment is that the re-administered T cells do not live long, leading to relapse.

HSCT is a classic immune-based treatment that requires a donor to supply stem cells, which are then administered to the patient to help them produce more immune cells to fight blood-related diseases, including blood cancers. A severe side effect of HSCT is graft-versus-host disease (GVHD), which occurs when the donor T-cells attack the recipient's healthy tissues instead of diseased cells.

Though they study different models, Mehrotra and Yu are long-time collaborators. Both are dedicated to understanding T-cell function.

"Our collaboration is a common interest in the biology of T-cells and how to manipulate them to benefit different disease conditions," Yu explains.

Mehrotra is an associate professor in the College of Medicine and co-scientific director of the Center for Cellular Therapy at MUSC Hollings Cancer Center. He and his team recently published a study in the Journal of Biological Chemistry that showed that thioredoxin extends the life of adoptive T-cells by neutralizing toxic reactive oxygen molecules (ROS).

Tumor environments have high concentrations of ROS. Without antioxidants such as thioredoxin, ROS will damage the cell and eventually cause cell death.

"Treating anti-tumor T cells with recombinant thioredoxin before adoptive transfer not only imparted high anti-oxidant capacity," explained Mehrotra.

"It also metabolically programmed these cells to withstand nutrient competition with the tumor - which resulted in better tumor control."

The team at MUSC used a strain of mice that overexpress thioredoxin and performed a standard ACT procedure. They observed increased T-cell viability and antitumor activity from mice overexpressing thioredoxin.

They confirmed the findings by engineering human T-cells to overexpress thioredoxin and again observed prolonged T-cell lifespan at the site of the tumor. The results suggest that treating human T-cells with thioredoxin before administration will increase cell viability and improve the anti-tumor effect of ACT in patients.

Yu is a professor in the College of Medicine and S.C. SmartState Endowed Chair in Cancer Stem Cell Biology and Therapy. Yu and his team at MUSC study the development of graft-versus-host disease (GVHD) in recipients of HSCT.

Using a mouse model, Yu's lab tested the effect of thioredoxin on donor T-cells, and the results were published in the Journal of Clinical Investigation. Like Mehrotra's study with adoptive T-cells, Yu's study found that thioredoxin's antioxidant effect decreased toxic ROS in donor T-cells, made them less reactive to the patient's healthy tissues, and thereby prevented development of GVHD.

"Thioredoxin is a natural product with no toxicity. We can use it to fine-tune T-cell activation in a way that will reduce graft-versus-host disease but maintain the anti-tumor effect," Yu says of the new finding.

Mehrotra and Yu plan to continue working closely to develop this advance in T-cell immunotherapy.

The next step for both projects is to implant human tumors in mice and test the effect of thioredoxin-treated T-cells in both the ACT and HSCT models. This will determine whether the approach can move to the clinic to be tested in patients.

Credit: 
Medical University of South Carolina

X-rays reveal monolayer phase in organic semiconductor

image: These are structural formulas of the thiophene and dihexyl-quarterthiophene molecules.

Image: 
Elena Khavina/MIPT Press Office

A team of researchers from Russia, Germany, and France featuring materials scientists from the Moscow Institute of Physics and Technology has investigated how the electrical properties of dihexyl-quarterthiophene thin films depend on their structure. This material is an organic semiconductor with prospects for flexible electronics. It turned out that once the thin films undergo a transition from the crystal to the liquid-crystal state, they lose some of their electrical conductivity. The team also discovered a "third phase" that does not occur in bulk material and corresponds to a monomolecular layer of the semiconductor. This structure could be favorable for charge transport across the films, with potential implications for microelectronics design. The research findings were published in Nanoscale Research Letters.

Oligothiophenes are promising organic semiconductors. Their rod-shaped molecules can orient themselves on the surface on which they are deposited, with their sulfur-containing hydrocarbon rings, known as thiophenes, piling up like stacks of coins. The "coin edges" in neighboring stacks form a herringbone pattern. This molecular arrangement enables charge transfer from one molecule to the next.

As the number of thiophenes in the molecule increases, so does the electrical conductivity, at the cost of the compound's solubility. Four is believed to be the optimal number of these so-called thiophene moieties. To increase solubility, hexyl fragments are grafted to the ends of the conjugated molecular fragment (fig. 1).

The researchers dissolved and evaporated dihexyl-quarterthiophene (DH4T) in a vacuum reactor and deposited the material as thin films on a silicon substrate. They went on to study the crystal structure of the samples using grazing-incidence X-ray diffraction. This technique involves exposing a film to X-rays at a very small glancing angle to maximize the distance the X-ray beam travels in the film, undergoing numerous reflections. Otherwise, the signal from the thin film would be too faint to be distinguishable from the substrate signal. The diffraction measurements allowed the team to identify the molecular arrangement in the material deposited on the substrate.
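The geometric advantage of the grazing-incidence setup is straightforward: the beam's path through a film of thickness t scales as t/sin(α), so a small glancing angle α multiplies the interaction length enormously. A minimal sketch illustrates the effect (the specific angle and film thickness here are hypothetical, not values from the study):

```python
import math

def path_length_nm(thickness_nm: float, glancing_deg: float) -> float:
    """Straight-line path of a beam crossing a film entered at a glancing angle."""
    return thickness_nm / math.sin(math.radians(glancing_deg))

# A hypothetical 100 nm film probed at a 0.2-degree glancing angle:
# the beam traverses ~28,600 nm of material, a ~286x gain over normal incidence,
# which is why the thin-film signal becomes distinguishable from the substrate.
print(round(path_length_nm(100, 0.2)))
```

At normal incidence (90 degrees) the path length simply equals the film thickness, which is the baseline the enhancement factor is measured against.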

Initially, DH4T was highly crystalline. Its molecules formed a herringbone pattern and were positioned almost perpendicular to the substrate. However, once heated to 85 degrees Celsius, the material underwent a phase transition: The molecular arrangement changed, forming a liquid crystal phase, and the electrical conductivity of the films dropped.

The sample was further heated to 130 degrees Celsius and subsequently cooled to room temperature. This partly restored the material's crystallinity, and therefore conductivity.

Over the course of heating, a third structure emerged in the X-ray diffraction profile, indicated by weak diffraction maxima not corresponding to the liquid crystal phase. Prior research has correlated such maxima with monolayers of compounds like DH4T. Interestingly, this "third phase" was also observed at 70 degrees Celsius.

The structure of the monolayer discovered by the team is favorable for charge transport along the plane of the film, making it significant for flexible electronics applications. Besides that, the newly observed phase could also occur in the thin films of other compounds whose structure is similar to that of DH4T. Such materials are used in microelectronics. Since charge is predominantly transferred in a very thin layer near the substrate, the researchers' findings point to the need to consider how the material's nanostructure affects its conductivity.

Professor Dimitri Ivanov heads the Laboratory of Functional Organic and Hybrid Materials at MIPT and is also the director of research at the French National Center for Scientific Research (CNRS). He co-authored the study reported in this story and commented on its findings: "Using in situ methods, such as structural analysis, and at the same time measuring sample electrical properties enables us to gain insights into the nature of complex phase transitions in the material and assess its potential for practical applications in organic electronics."

Credit: 
Moscow Institute of Physics and Technology

Structure of brain networks is not fixed, study finds

ATLANTA--The shape and connectivity of brain networks -- discrete areas of the brain that work together to perform complex cognitive tasks -- can change in fundamental and recurring ways over time, according to a study led by Georgia State University.

The interaction and communication among neurons, known as "functional connectivity," gives rise to brain networks. Researchers have long assumed these networks are spatially static, with a fixed set of brain regions contributing to each network. But in a new study published in Human Brain Mapping, Georgia State researchers find evidence that brain networks are spatially and functionally fluid.

The researchers collected functional magnetic resonance imaging (fMRI) brain imaging data to create snapshots of network activity at a granular level over the course of several minutes, and observed rapid changes in the function, size and location of the networks.

"Assuming each brain region is interacting with the rest of the brain in the same way over time is oversimplified," said Armin Iraji, research scientist in the Center for Translational Research in Neuroimaging and Data Science (TReNDS), and lead author of the study. The study's co-authors include Vince Calhoun, Distinguished University Professor of Psychology and director of TReNDS, and Jessica Turner, associate professor of psychology.

Rather, a given brain network's spatial properties change over time as does its relationship with other brain networks, the researchers found.

"You can think of the brain like an organization where employees work together to make the whole system run," said Iraji. "For a long time, we thought brain networks were like departments or offices, where the same people were doing the same job every day. But it turns out that they may be more like coworking spaces, where people move in and out and there are different jobs being performed at any given time."

Ignoring these spatial and functional variations could result in an incorrect and incomplete understanding of the brain, Iraji added.

"Let's say we measure functional connectivity between two regions at different times, and we see some variability," he said. "One view is to say that the strength of connectivity associated with a specific task changes over time. But what if that region is responsible for different tasks at different times? Maybe there are different people in these two offices on different days, so that's why we're seeing the difference in communication."

The researchers' findings build on the concept of the chronnectome, a model of the brain in which functional connectivity patterns change over time, first proposed by Calhoun in 2014; the present work elucidates the "spatial" chronnectome.

The scientists also looked at whether brain networks differ between patients with schizophrenia and healthy control subjects. While they found contrasts between the two groups, these differences were not consistently present, making it important to capture such transient changes.

"Most previous studies have looked at average network activity over time," said Iraji. "But when you look at the average, you remove all those tiny fluctuations that could be a differentiator between healthy individuals and those with brain disorders."

Credit: 
Georgia State University

Pairing 'glue' for electrons in iron-based high-temp superconductors studied

Newly published research from a team of scientists led by the U.S. Department of Energy's Ames Laboratory sheds more light on the nature of high-temperature iron-based superconductivity.

Current theories suggest that magnetic fluctuations play a very significant role in determining superconducting properties and even act as a "pairing glue" in iron-based superconductors.

"A metal becomes a superconductor when normal electrons form what physicists call Cooper pairs. The interactions responsible for this binding are often referred to as 'pairing glue.' Determining the nature of this glue is the key to understanding, optimizing and controlling superconducting materials," said Ruslan Prozorov, an Ames Laboratory physicist who is an expert in superconductivity and magnetism.

The scientists, from Ames Laboratory, Nanjing University, the University of Minnesota, and L'École Polytechnique, focused their attention on high-quality single-crystal samples of one widely studied family of iron-arsenide high-temperature superconductors. They sought an experimental approach that would systematically disrupt the magnetic, electronic, and superconducting ordered states while keeping the magnetic field, temperature, and pressure unchanged.

They chose a not-so-obvious direction: deliberately inducing disorder in the crystal lattice, but in a controlled and quantifiable way. This was performed at the SIRIUS electron accelerator at École Polytechnique. The scientists bombarded their samples with swift electrons moving at ten percent of the speed of light, creating collisions that displaced atoms and produced the desired "point-like" defects. The method, adopted at Ames Laboratory in the early stages of iron-superconductivity research, is a way to poke or nudge the system and measure its response. "Think about it as another 'knob' that we can turn, leaving other important parameters unchanged," said Prozorov.

In previous related research, published in Nature Communications in 2018 and using a similar approach of probing the system through disorder, the team looked at the coexistence and interplay of superconductivity and charge-density wave (CDW) order, another quantum order competing with superconductivity. There they found an intricate relationship in which CDW competes for the same electronic states but also helps superconductivity by softening the phonon modes that play the role of the superconducting glue in that material (the superconductor NbSe2).

In the present work, itinerant magnetism (a spin-density wave) also competes with superconductivity for electronic states, but offers magnetic fluctuations as the glue.

The team found that the added disorder resulted in a substantial suppression of both magnetic order and superconductivity, pointing to a nontrivial role of magnetism in high-temperature superconductivity.

The research is further discussed in the paper, "Interplay between superconductivity and itinerant magnetism in underdoped Ba1-xKxFe2As2 (x = 0.2) probed by the response to controlled point-like disorder," authored by R. Prozorov, M. Kończykowski, M.A. Tanatar, H. H. Wen, R. M. Fernandes, and P. C. Canfield; and published in npj Quantum Materials.

Credit: 
DOE/Ames National Laboratory

Just the tonic! How an afternoon tipple made from peas could help save the rainforest

image: The process of creating gin from peas confers many environmental benefits.

Image: 
Professor Williams, Trinity College Dublin

It's the season for a cold, refreshing gin and tonic. We may question the health impact of one too many, but what is the environmental footprint of that classically delicious aperitif?

An international team of researchers teamed up with a pioneering distillery manager to answer this very question in a study recently published in the scientific journal Environment International. What they discovered may lead to a new method for producing gin and other alcoholic drinks, as well as creating greener biofuels. Their findings may even do their bit in the fight to save the world's rainforests.

The footprint of gin production

Processes arising throughout the life cycle of gin production - including cultivation of wheat, production of enzymes, heat, electricity, packaging materials and transport - give rise to greenhouse gas (GHG) emissions of 2.3 kg CO2 equivalent (eq.) per 70 cl bottle of gin, or 160 g CO2 eq. per large measure (50 ml).

Lead author of the study, Theophile Lienhardt, puts this into context. He said: "In terms of climate change impact, sipping a large measure of gin is similar to consuming a small serving (150 ml) of milk, or to driving one km in a petrol car."
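As a quick sanity check, the per-measure figure follows directly from the per-bottle one. The emission factors below are taken from the article; the script itself is only illustrative:

```python
# Per-bottle footprint reported in the study: 2.3 kg CO2-eq per 70 cl bottle of gin.
BOTTLE_CO2_G = 2.3 * 1000   # grams CO2-eq per bottle
BOTTLE_ML = 700             # 70 cl in millilitres

def footprint_per_measure(measure_ml: float) -> float:
    """Scale the per-bottle footprint down to a single serving, in grams CO2-eq."""
    return BOTTLE_CO2_G * measure_ml / BOTTLE_ML

# A large measure (50 ml) works out to roughly 164 g CO2-eq,
# consistent with the ~160 g figure quoted in the article.
print(round(footprint_per_measure(50)))  # -> 164
```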

But what if that gin was made from peas? Working with the team of researchers, experts at Arbikie Distillery in Scotland have run trials in which the kernels of dried, de-hulled peas (Pisum sativum L.) are milled and fermented in place of mashed wheat grain.

The study was part of the pan-European project, TRansition paths to sUstainable legume based systems in Europe: TRUE, led by Dr Pietro Iannetta, who is a molecular ecologist at the James Hutton Institute.

Dr Iannetta said: "We found that the environmental footprint of pea gin was significantly lower than for wheat gin across 12 of 14 environmental impacts evaluated, from climate change, through water and air pollution, to fossil energy consumption."

Professor Mike Williams, a botanist from Trinity College Dublin's School of Natural Sciences, was part of the research team.

He added: "Peas - working with specialised bacteria in their roots - are able to convert nitrogen from the atmosphere into biological fertiliser. As a result, they don't require applications of polluting synthetic nitrogen fertilisers, which are widely and heavily used in industrial agriculture. Furthermore, pea hulls and distillery co-products provide protein-rich animal feeds that can replace soybean imported from Latin America, where cultivation is driving deforestation."

Co-products from one litre of pea gin substitute up to 0.66 kg of soybean animal feed, which is twice as much as can be gained from the production of wheat gin. In fact, when the potential avoidance of GHG emissions from substituted soybean cultivation, deforestation, processing and transport are also taken into account, the environmental footprint saving can exceed the GHG emissions arising from production - effectively making pea gin carbon neutral.
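The co-product arithmetic above can be sketched in the same spirit. The 0.66 kg-per-litre substitution rate and the twofold advantage over wheat gin are from the study; the per-bottle conversion is added here purely for illustration:

```python
# Soybean animal feed displaced by distillery co-products (figures from the study):
SOY_PER_L_PEA_GIN = 0.66                      # kg soybean feed per litre of pea gin
SOY_PER_L_WHEAT_GIN = SOY_PER_L_PEA_GIN / 2   # wheat gin yields half as much

def soy_displaced(litres_of_gin: float, per_litre: float) -> float:
    """Soybean feed substituted by co-products, in kg."""
    return litres_of_gin * per_litre

# A 70 cl bottle of pea gin displaces about 0.46 kg of imported soybean feed.
print(round(soy_displaced(0.7, SOY_PER_L_PEA_GIN), 2))  # -> 0.46
```

The avoided emissions from that displaced soybean (cultivation, deforestation, processing, transport) are what the article credits with pushing pea gin toward carbon neutrality.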

And if we were to make another adjustment to our gastronomic decision-making, we could do even more for the planet.

Lecturer in life cycle assessment at Bangor University and NUI Galway, Dr David Styles, added: "Of course, if we wanted to more dramatically shrink our environmental footprint and reduce deforestation, we could eat those peas directly to provide our protein and fibre requirements - instead of drinking gin and eating beef fed on the co-products."

But for those moments when we simply can't resist an afternoon G&T, the combined efforts of the research team and Arbikie Distillery mean a responsible tipple needn't cost the earth. Nor must those partaking make any sacrifices in flavour.

Manager of the Arbikie Distillery, Kirsty Black, said: "Following two distillations plus an infusion with botanical ingredients including juniper and coriander, the final gin retains the same sumptuous, aromatic flavour as if made from cereal grain."

Credit: 
Trinity College Dublin

Scientists discover origin of cell mask that hides stomach cancer

image: The red dotted line indicates epithelium of low grade atypia (ELA) covering the surface of gastric cancer tissue in upper image. ELA (Red) and cancerous tissues (Blue) extracted by Laser Microdissection in lower image.

Image: 
Hiroshima University

A layer of cells that look like normal stomach lining on top of sites of stomach cancer can make it difficult to spot after removal of a Helicobacter pylori infection. In a recent study, researchers from Hiroshima University have uncovered the origin of this layer of cells: it is produced by the cancer tissue itself.

Helicobacter pylori (H. pylori) is a type of bacteria that lives in people's stomachs. To survive the harsh environment, these bacteria can neutralize stomach acid. H. pylori is the leading cause of stomach cancer, one of the most common types of cancer and one that can have a low survival rate. The bacteria cause inflammation by injecting a toxin-like substance into the mucosal cells that line the stomach. The repeated destruction and regeneration of these cells can lead to the development of stomach cancer.

In this study Professor Kazuaki Chayama, from Hiroshima University Hospital, and his team found the origins of a strange layer of cells that was present on stomach cancer sites after treatment of H. pylori. This layer, called ELA (epithelium with low-grade atypia), resembled normal mucosal cells that line the stomach and acted like a mask to hide stomach cancer. Up to now, researchers were not sure where this layer came from.

"It was very interesting scientifically to find that the cancer recurs even after eradicating the causal bacteria," says Chayama.

An H. pylori infection is cured with a course of antibiotics, which leaves reddish depressions in the stomach.

"H. pylori eradication affects the regeneration of gastric mucosa. After eradication there are many reddish depressions in the stomach; most of them are not cancer. It is difficult to identify the ELA mucosa from amongst the regular mucosa," explains Chayama.

The research group conducted a preliminary study on 10 patients after gastric operations and looked for this layer of cells. The ELA cells' DNA was studied intensively and found to be identical to that of stomach cancer cells. ELA was concluded to come from the stomach cancer tissue itself.

These findings could mean that even after getting rid of H. pylori there is still a risk of stomach cancer for some patients. Stomach cancer can be difficult to spot because of its location and because the disease can progress slowly. This is not helped by ELA, which masks the cancer after the causal factor is removed.

Chayama stresses that clinicians should be aware of this layer, so they don't miss potential sites of stomach cancer and that it is important for patients to continue having check-ups even after finishing treatment for H. pylori.

Credit: 
Hiroshima University

Most powerful and mildest reagents obtained based on eco-friendly iodine

image: Young scientists from the Research School of Chemistry & Applied Biomedical Sciences, Tomsk Polytechnic University.

Image: 
Tomsk Polytechnic University

An international collaboration of chemists from Tomsk Polytechnic University and institutions in the USA, Great Britain, Canada, Belgium, and France has developed a line of polyvalent iodine-based reagents for organic synthesis. These are an eco-friendly replacement for conventional reagents based on toxic compounds such as vanadium oxide and nitric oxide. The line includes both the most powerful reagent and the mildest one. They are promising for the synthesis of new polymers and, to an even greater extent, for the pharmaceutical industry, which uses heavy-metal-based reagents in producing medications. As reported by the Press Office of the Ministry of Science and Higher Education of the Russian Federation, the latest results were published in the journal Chemical Communications of the Royal Society of Chemistry.

The polyvalent iodine proposed by TPU scientists and their foreign colleagues can replace toxic heavy and transition metals, such as platinum, in these reagents. In its normal state, iodine forms a bond with only one carbon atom in organic synthesis; in a polyvalent state, it can bond with several atoms, i.e. it becomes more reactive.

Project supervisor Mekhman Yusubov, who is also TPU First Vice-Rector for Science, says:

'Chemical Communications published a whole series of articles authored by scientists of our collaboration. Moreover, they were featured as an independent entry in Chemistry World of the Royal Society of Chemistry.

'To further expand the prospects for applying reagents based on polyvalent iodine, we purposefully derived a whole line of reagents with different activity, ranging from the mildest and most selective to the most powerful. In our opinion, they have the unmatched advantage that they are non-toxic when taken separately, do not produce harmful by-products, and allow reactions to take place under very simple conditions. While synthesis with common reagents needs high temperatures of about 350-500 °C, and therefore special conditions, polyvalent iodine makes it possible to work at room temperature.'

The mildest reagent in the series is called tosylate derivative of 2-iodoxybenzoic acid, and the most powerful is 2-iodoxybenzoic acid ditriflate.

'It was a non-trivial challenge to synthesize them. In the first case, polyvalent iodine was combined with a tosylate group, and in the second with a triflate group. This was difficult to do because these groups themselves are very powerful acids. When we managed to combine them with iodine, they became 'mild' and do not cause any side processes during the reaction,' explains the scientist.

As a result, the most powerful reagent allows the synthesis, for example, of fluorinated alcohols. They are widely used to obtain biologically active compounds that are the basis for perfluorinated polymers. Previously they could only be synthesized with the use of agents based on toxic vanadium oxide and nitric oxide.

According to the authors, theoretically, it is possible to create an even more powerful reagent. The international collaboration will develop this direction as well.

'The mildest reagent is suitable for oxidizing natural compounds, the complex organic molecules found in living organisms. The reagent neither damages the initial compounds nor causes any side processes.

In addition, the entire reaction takes no more than 5 minutes at room temperature. This is a high rate for organic synthesis,' notes Professor Mekhman Yusubov.

Credit: 
Tomsk Polytechnic University

Molecular energy machine as a movie star

image: Tobias Weinert, biochemist at PSI, with the experimental set-up for "pump-probe" crystallography at the SLS: An injector produces a stream, 50 micrometers thin (about the width of a hair), of a toothpaste-like mass with the protein crystals grown in it. The beam of a small laser diode, comparable to a conventional laser pointer, is guided over mirrors and lenses and focused on the same point where the X-ray beam of the SLS hits (not in the picture). For the photo, the laser was made visible with liquid nitrogen. In the experiment, the laser is activated for a short moment, followed by the X-rays for the molecular movie.

Image: 
Paul Scherrer Institute/Markus Fischer

Researchers at the Paul Scherrer Institute PSI have used the Swiss Light Source SLS to record a molecular energy machine in action and thus to reveal how energy production at cell membranes works. For this purpose they developed a new investigative method that could make the analysis of cellular processes significantly more effective than before. They have now published their results in the journal Science.

In all living things, structural changes in proteins are responsible for many biochemically controlled functions, for example energy production at cell membranes. The protein bacteriorhodopsin occurs in microorganisms that live on the surface of lakes, streams, and other bodies of water. Activated by sunlight, this molecule pumps positively charged particles, protons, from the inside to the outside through the cell membrane. While doing this, it is constantly changing its structure.

PSI researchers were already able to elucidate one part of this process at free-electron X-ray lasers (FELs) such as SwissFEL. Now they have also managed to record the previously unknown part of the process in a kind of molecular movie. For this, they took a method that had previously been usable only at FELs and further developed it for use at the Swiss Light Source SLS. The study underlines the synergy between the analytical options at these two large-scale research facilities at PSI. "With the new method at SLS, we can now follow the last part of the movement of bacteriorhodopsin, where the steps are in the millisecond range", explains Tobias Weinert, first author of the paper. "With measurements at FELs in the USA and Japan, we had already measured the first two sub-processes before SwissFEL was commissioned", Weinert says. "These take place very fast, within femtoseconds to microseconds." A femtosecond is one quadrillionth (10^-15) of a second.

To be able to observe such processes, the researchers use so-called "pump-probe" crystallography. With this method, they can take snapshots of protein movements that can then be assembled into movies. For the experiments, proteins are brought into crystal form. A laser beam, imitating sunlight, triggers the sequence of movements in the protein. X-rays that hit the sample afterwards produce diffraction images, which are recorded by a high-resolution detector. From these, computers generate an image of the protein structure at each point in time.
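The pump-probe sequence described above can be sketched as a simple measurement loop. Everything here is a schematic stand-in, not the actual beamline control software: one laser "pump" per time point, then an X-ray "probe" after the chosen delay, with the resulting frames assembled into the movie in time order.

```python
# Schematic pump-probe loop (illustrative only; not a real beamline API).

def pump(sample: dict) -> None:
    """Laser flash triggers the protein's light-driven photocycle."""
    sample["excited"] = True

def probe(sample: dict, delay_ms: float) -> dict:
    """X-ray pulse after the chosen delay yields one diffraction snapshot."""
    return {"delay_ms": delay_ms, "frame": f"diffraction@{delay_ms}ms"}

def record_movie(sample: dict, delays_ms: list) -> list:
    """One snapshot per pump-probe delay, assembled in time order."""
    movie = []
    for d in sorted(delays_ms):
        pump(sample)                   # fresh excitation for every time point
        movie.append(probe(sample, d))
    return movie

# Millisecond-range delays of the kind accessible at SLS (values hypothetical):
movie = record_movie({}, [1, 5, 20, 50, 100, 200])
print([f["delay_ms"] for f in movie])  # -> [1, 5, 20, 50, 100, 200]
```

In the real experiment each snapshot comes from fresh crystals delivered by the injector, and the computed structures, not raw frames, form the movie; the loop above only captures the timing logic.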

The movie created from the measurements at SLS shows how the structure of the bacteriorhodopsin molecule changes in the next 200 milliseconds after it is activated by light. With that, a complete so-called "photocycle" of the molecule has now been elucidated.

Bacteriorhodopsin functions as a biological machine that pumps protons from inside the cell through the membrane to the outside. This creates a concentration gradient at the cell membrane. On its outer side, there are more protons than on its inner side. The cell uses this gradient to gain energy for its metabolism by allowing protons elsewhere to balance out the externally and internally different concentrations. In doing so, the cell produces ATP, a universal energy source in living things. Subsequently, bacteriorhodopsin restores the concentration gradient.

"In the new study, we were now able to see the largest real-time structural changes in a molecule ever", says Weinert. By "large" the scientist means nine angstroms, roughly one hundred-thousandth of the thickness of a human hair. Through these structural changes, a gap opens up in the protein in which a chain of water molecules forms, and this chain is responsible for the proton transport through the cell membrane. "Before us, no one had ever observed this water chain directly", the biochemist notes happily.

These observations were made possible only by adapting the method previously employed at SwissFEL for use at SLS, and thanks to the new high-resolution, fast "Eiger" detector at SLS. Weinert is certain that the new method of investigation at synchrotrons like SLS will inspire research worldwide. "Researchers can use the new method and become much more efficient, since worldwide there are many more synchrotrons than free-electron lasers. Besides that, you need fewer protein crystals than are required for experiments at FELs", Weinert adds.

However, for the very fast molecular processes, and to get especially sharp images and precise results, the researchers rely on SwissFEL. "The processes at the beginning of the photocycle take place in a matter of femtoseconds. It is only possible to observe such rapid chemical reactions at FELs." In addition, structures can be recorded with higher resolution at FELs. Because so many photons hit the sample at once at the linear accelerator, the detector can capture an extremely sharp image.

Weinert emphasises the synergy between the two large-scale research facilities: "At SwissFEL, only a small amount of beamtime is available. With the measurements at SLS, we can make sure in advance that our experiment at SwissFEL will be successful. This boosts efficiency."

Credit: 
Paul Scherrer Institute