Tech

Study: Your home's water quality could vary by the room -- and the season

video: Purdue engineers led the most comprehensive study of water quality in a single home.

Image: 
Purdue University/Erin Easterling

WEST LAFAYETTE, Ind. -- Is the water in your home actually safe, given that water utility companies in the U.S. aren't required by law to monitor the water that specifically enters a building at its service line?

A study has found that the water quality of a home can differ in each room and change between seasons, challenging the assumption that the water in a public water system is the same as the water that passes through a building's plumbing at any time of the year.

"This study reveals that drinking water in the service line water is clearly not the same quality at your faucet," said Andrew Whelton, a Purdue University associate professor of civil engineering and environmental and ecological engineering. Researchers from Purdue, the University of Memphis and Michigan State University conducted the study, published in the journal Building and Environment.

The study is so far the largest and most intensive investigation of water quality over time and throughout a single house. The researchers collected data 58 times at the house over the course of a year, logging more than 222,000 hours and 2.4 billion records. A YouTube video of the work is available at https://youtu.be/U1dZ2HkrobE.

While more studies at this scale are needed before generalizing findings to other American homes, the results are concerning: 10% of the time, disinfectant was not found in the water entering the house studied, meaning that the water was not properly protected from bacteria growth once it entered the home.

There were also increases in water pH inside the house and large fluctuations in organic carbon. Either can indicate drastic changes in drinking water chemistry.

The study took place at a three-bedroom house in West Lafayette, Indiana, that also functions as a living laboratory for developing technologies that would make homes more sustainable. The home, called the Retrofitted Net-zero Energy, Water and Waste (ReNEWW) House, is funded by Whirlpool Corp. and is the first lived-in retrofitted net-zero energy, water and waste home. Whirlpool engineers also were actively involved in the study.

"Environmental sustainability is an essential part of Whirlpool Corporation's heritage of innovative, efficient products, and our long-term commitment to our communities," said Ron Voglewede, global sustainability director of Whirlpool Corp. "ReNEWW House leverages world class facilities in collaboration with Purdue researchers to accelerate the development of sustainable innovations, which advance our constant pursuit of improving life at home."

The Environmental Protection Agency (EPA) also supported this study.

Federal and state law does require water utilities to report their drinking water's chemical quality where it enters a buried water distribution network and at select locations throughout that network. But that water may not be representative of the water quality in a building.

And even when utility companies check for lead and copper at a building faucet, they are not required to check faucets throughout the entire house or between seasons.

"We found that the water chemical quality varied significantly through water fixtures due to water temperature, plumbing fixture and different water uses," said Maryam Salehi, an assistant professor of civil engineering at the University of Memphis, who was previously a postdoctoral research associate at Purdue.

Spatially, no disinfectant was present in the water exiting the house's water heater more than 85% of the time. While levels of some other chemicals were higher in certain seasons or at certain locations in the house, most of these levels aren't harmful according to national guidelines.

The level of lead in the ReNEWW house did, however, exceed the American Academy of Pediatrics recommended exposure limit for children. No children were living in the home.

Some of the house's lead-free plumbing components also were found to leach lead in a bench-scale test.

These findings call for a closer look at both how often utility companies should monitor lead concentrations in a home and whether governments should more broadly provide financial support to homeowners for testing their own water quality.

Climate change may also exacerbate seasonal variability in water quality.

"It's known that warmer temperatures allow microorganisms to persist in source water for longer periods of time. Heavier precipitation can also result in combined sewer overflows in some locations, while droughts affect water quantity and available source water," said Jade Mitchell, an associate professor of biosystems and agricultural engineering at Michigan State University.

Some of the study's findings could already have implications for other houses in the U.S., given that the ReNEWW house has a similar square footage and plumbing design compared to the average American home.

"After water enters a home it continues to age. Older water is more likely to have contaminants that are problematic. Because the quality of water delivered to a single home can vary significantly, and building plumbing can change the water too, predicting drinking water safety at every building faucet is currently not possible," Whelton said.

The researchers note that different plumbing materials, a varying number of house occupants and other factors could affect the water quality of a home. These factors warrant future study.

More studies like this one could inform the development of new technology for preventing people from encountering unsafe water in their home.

In the meantime, there are several precautions consumers could take to improve their home's water quality.

"Choose plumbing designs that minimize the amount of water and time that water sits still. This should help limit microbial growth and lessen the chance that chemicals leaching from the plumbing exceed unacceptable levels. If you have an existing home, flush the faucet before taking a drink to get rid of the old water. Flushing can help bring in new, fresher water from the building entry point," Whelton said.

Credit: 
Purdue University

New machine learning method could supercharge battery development for electric vehicles

image: Using machine learning, a Stanford-led research team has slashed battery testing times - a key barrier to longer-lasting, faster-charging batteries for electric vehicles. (Image credit: Cube3D)

Image: 
Cube3D

Battery performance can make or break the electric vehicle experience, from driving range to charging time to the lifetime of the car. Now, artificial intelligence has made dreams like recharging an EV in the time it takes to stop at a gas station a more likely reality, and could help improve other aspects of battery technology.

For decades, advances in electric vehicle batteries have been limited by a major bottleneck: evaluation times. At every stage of the battery development process, new technologies must be tested for months or even years to determine how long they will last. But now, a team led by Stanford professors Stefano Ermon and William Chueh has developed a machine learning-based method that slashes these testing times by 98 percent. Although the group tested their method on battery charge speed, they said it can be applied to numerous other parts of the battery development pipeline and even to non-energy technologies.

"In battery testing, you have to try a massive number of things, because the performance you get will vary drastically," said Ermon, an assistant professor of computer science. "With AI, we're able to quickly identify the most promising approaches and cut out a lot of unnecessary experiments."

The study, published by Nature on Feb. 19, was part of a larger collaboration among scientists from Stanford, MIT and the Toyota Research Institute that bridges foundational academic research and real-world industry applications. The goal: finding the best method for charging an EV battery in 10 minutes that maximizes the battery's overall lifetime. The researchers wrote a program that, based on only a few charging cycles, predicted how batteries would respond to different charging approaches. The software also decided in real time what charging approaches to focus on or ignore. By reducing both the length and number of trials, the researchers cut the testing process from almost two years to 16 days.

"We figured out how to greatly accelerate the testing process for extreme fast charging," said Peter Attia, who co-led the study while he was a graduate student. "What's really exciting, though, is the method. We can apply this approach to many other problems that, right now, are holding back battery development for months or years."

A smarter approach to battery testing

Designing ultra-fast-charging batteries is a major challenge, mainly because it is difficult to make them last. The intensity of the faster charge puts greater strain on the battery, which often causes it to fail early. To prevent this damage to the battery pack, a component that accounts for a large chunk of an electric car's total cost, battery engineers must test an exhaustive series of charging methods to find the ones that work best.

The new research sought to optimize this process. At the outset, the team saw that fast-charging optimization amounted to many trial-and-error tests - something that is inefficient for humans, but the perfect problem for a machine.

"Machine learning is trial-and-error, but in a smarter way," said Aditya Grover, a graduate student in computer science who co-led the study. "Computers are far better than us at figuring out when to explore - try new and different approaches - and when to exploit, or zero in, on the most promising ones."

The team used this power to their advantage in two key ways. First, they used it to reduce the time per cycling experiment. In a previous study, the researchers found that instead of charging and discharging every battery until it failed - the usual way of testing a battery's lifetime - they could predict how long a battery would last after only its first 100 charging cycles. This is because the machine learning system, after being trained on a few batteries cycled to failure, could find patterns in the early data that presaged how long a battery would last.

Second, machine learning reduced the number of methods they had to test. Instead of testing every possible charging method equally, or relying on intuition, the computer learned from its experiences to quickly find the best protocols to test.
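
The closed-loop idea is straightforward to sketch in code. The toy script below is a minimal illustration of the explore-and-exploit loop described above, using a generic upper-confidence-bound rule as a stand-in for the study's optimization method; the protocol names, the "true" cycle-life function and all numbers are invented assumptions, not the Stanford team's software or data.

```python
# Hedged sketch: a toy closed-loop search over charging protocols, in the spirit
# of the study's explore/exploit strategy. Protocols, the lifetime model and all
# numbers here are illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Candidate fast-charging protocols (hypothetical current steps, in C-rate).
protocols = {f"P{i}": np.array(steps) for i, steps in enumerate(
    [(8, 6, 4), (6, 6, 6), (4, 8, 4), (5, 7, 6), (7, 5, 5)])}

def true_cycle_life(steps):
    # Unknown ground truth the search is trying to discover (toy model:
    # penalize high peak current early in the charge).
    return 1200 - 40 * steps[0] ** 1.5 - 10 * steps.sum()

def early_prediction(steps):
    # Stand-in for the ML model that predicts lifetime from the first ~100
    # cycles; noisy, but much cheaper than cycling each battery to failure.
    return true_cycle_life(steps) + rng.normal(0, 30)

# Upper-confidence-bound loop: repeatedly test the protocol whose estimate plus
# an exploration bonus is highest, updating a running mean per protocol.
counts = {k: 0 for k in protocols}
means = {k: 0.0 for k in protocols}
for t in range(1, 26):
    ucb = {k: (means[k] + 200 / np.sqrt(counts[k]) if counts[k] else np.inf)
           for k in protocols}
    pick = max(ucb, key=ucb.get)
    est = early_prediction(protocols[pick])
    counts[pick] += 1
    means[pick] += (est - means[pick]) / counts[pick]

best = max(means, key=means.get)
print("best protocol:", best, "estimated cycle life:", round(means[best]))
```

The exploration bonus shrinks as a protocol accumulates tests, so the loop gradually concentrates the remaining experiments on the most promising candidates, which is the behavior the passage above describes.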

By testing fewer methods for fewer cycles, the study's authors quickly found an optimal ultra-fast-charging protocol for their battery. In addition to dramatically speeding up the testing process, the computer's solution was also better - and much more unusual - than what a battery scientist would likely have devised, said Ermon.

"It gave us this surprisingly simple charging protocol - something we didn't expect," Ermon said. Instead of charging at the highest current at the beginning of the charge, the algorithm's solution uses the highest current in the middle of the charge. "That's the difference between a human and a machine: The machine is not biased by human intuition, which is powerful but sometimes misleading."

Wider applications

The researchers said their approach could accelerate nearly every piece of the battery development pipeline: from designing the chemistry of a battery to determining its size and shape, to finding better systems for manufacturing and storage. This would have broad implications not only for electric vehicles but for other types of energy storage, a key requirement for making the switch to wind and solar power on a global scale.

"This is a new way of doing battery development," said Patrick Herring, co-author of the study and a scientist at the Toyota Research Institute. "Having data that you can share among a large number of people in academia and industry, and that is automatically analyzed, enables much faster innovation."

The study's machine learning and data collection system will be made available for future battery scientists to freely use, Herring added. By using this system to optimize other parts of the process with machine learning, battery development - and the arrival of newer, better technologies - could accelerate by an order of magnitude or more, he said.

The potential of the study's method extends even beyond the world of batteries, Ermon said. Other big data testing problems, from drug development to optimizing the performance of X-rays and lasers, could also be revolutionized by the use of machine learning optimization. And ultimately, he said, it could even help to optimize one of the most fundamental processes of all.

"The bigger hope is to help the process of scientific discovery itself," Ermon said. "We're asking: Can we design these methods to come up with hypotheses automatically? Can they help us extract knowledge that humans could not? As we get better and better algorithms, we hope the whole scientific discovery process may drastically speed up."

Credit: 
Stanford University

Future soldiers may get improved helmet padding

image: Sgt. Johnny Bonilla, a gunner and cannon crewmember with the 173rd Airborne Brigade, wears a combat helmet first fielded in the 1980s. Army researchers are exploring 3-D printing and new materials with the goal of even greater performance in reducing blunt impact injury.

Image: 
U.S. Army photo by Sgt. Thomas Mort

Army researchers and industry partners recently published a study showing how they developed new materials and manufacturing methods to create higher performing helmet padding that reduces the likelihood of head injury in combat and recreational helmets.

A team from the U.S. Army Combat Capabilities Development Command's Army Research Laboratory and its HRL Laboratories partners used advances in 3-D printing to create new helmet padding that consists of highly-tuned open-cell lattice structures.

"Careful control of the lattice design imparts novel compression characteristics to the padding that reduce peak head acceleration during blunt impact events compared to existing state-of-the-art foam padding," said Dr. Thomas Plaisted, the lab's project lead. "Testing demonstrated a 27% increase in energy attenuation efficiency when inserted into a combat helmet compared to current best-performing foam pads."

A significant challenge for the design of protective padding is providing the highest level of impact protection while minimizing the weight and the space it occupies inside the helmet, Plaisted said. The padding must also be comfortable enough to allow a Soldier to wear a combat helmet for extended periods.

"Typical multi-impact attenuating materials include expanded polypropylene and vinyl nitrile closed-cell foams, which absorb impact energy through the collapse of internal pores when compressed," he said. "The material is carefully tuned to yield at a threshold force, or acceleration, specific to the tolerance of the head, thereby mitigating injury."

Recent advances in additive manufacturing techniques have enabled the fabrication of cellular materials with architected lattice topology.

"We demonstrated, via design of the cellular architecture, improved control over the collapse process in elastomeric lattices that enables impact-attenuation performance exceeding state-of-the-art foams for both single- and multi-hit scenarios," Plaisted said. "An improvement over state-of-the-art vinyl-nitrile foam helmet pads was achieved during a standard helmet test, leading to lower head acceleration. This breakthrough could pave the way to helmets with improved injury protection. The open cell design of the lattice further aids in comfort and breathability to dissipate heat away from the head."

Researchers recently briefed their transition partners at CCDC Soldier Center on the performance of the new padding materials and helmet suspension technologies to mitigate blunt impact head injury. The laboratory is transitioning this technology to the center for further evaluation and implementation in future helmet systems.

"Building on this work, CCDC SC has initiated its own research efforts to develop and evaluate additive manufactured helmet pads," Plaisted said. "Concurrently, we provided updates on an alternative helmet suspension technology, rate-activated tethers, invented at ARL that have demonstrated even greater performance for reducing blunt impact injury. We are working with the center to identify helmet industry partners to integrate the new rate-activated tether technology."

The Army's fundamental responsibility is to equip, train and field Soldiers with the tools and resources to engage with and destroy the enemy, while providing world-class protection, according to Army officials. As an Army Modernization priority, Soldier Lethality narrows the capability gaps to enhance a Soldier's ability to fight, win and survive through increased lethality, mobility, protection and situational awareness in order to facilitate rapid acquisition of increased capabilities.

Credit: 
U.S. Army Research Laboratory

Scientists develop safer lead-based perovskite solar cell

image: Two laboratory solar cell samples, one (right) with a protective lead-absorbing film applied to the backside.

Image: 
Photo Northern Illinois University

DeKalb, Ill. -- Researchers at Northern Illinois University and the U.S. Department of Energy's (DOE) National Renewable Energy Laboratory (NREL) in Golden, Colorado, are reporting today (Feb. 19) in the journal Nature on a potential breakthrough in the development of hybrid perovskite solar cells.

Considered rising stars in the field of solar energy, perovskite solar cells convert light into electricity. They're potentially cheaper and simpler to produce than traditional silicon-based solar cells and, on a small scale in laboratory settings at least, have demonstrated comparable efficiency levels. But key challenges remain before they can become a competitive commercial technology.

One major challenge is the use of lead. Most top-performing hybrid perovskite solar cells contain water-dissolvable lead, raising concerns over potential leakage from damaged cells.

Led by Tao Xu of NIU and Kai Zhu of NREL, a team of scientists has developed a technique to sequester the lead used to make perovskite solar cells and minimize potential toxic leakage by applying lead-absorbing films to the front and back of the solar cell.

"The lead toxicity issue has been one of the most vexing, last-mile challenges in the perovskite solar cell field," said Xu, an NIU professor of chemistry. "We think we have a highly promising remedy to this problem--and it could be a game-changer.

"In the event of a damaged cell, our device captures the great majority of the lead, preventing it from leaching into groundwater and soils. The films that we use are insoluble in water."

Under conditions of severe solar cell damage in a lab setting, the lead-absorbing films sequestered 96% of lead leakage, the scientists said. Their experiments further indicate the lead-absorbing layers do not negatively impact cell performance or long-term operation stability.

Perovskite solar cells are so named because they use a class of crystal structures similar to that found in the mineral known as perovskite. The perovskite-structured compound within these solar cells is most commonly a hybrid organic-inorganic lead halide-based material.

Scientists began to study these crystal structures for use in solar cells only about a decade ago and have rapidly increased their solar energy conversion efficiency. Whereas traditional silicon solar cells are produced with precise processes using high temperatures, perovskites can be made using room-temperature chemical solutions.

The newly developed "on-device sequestration approach" can be readily incorporated into current perovskite solar cell configurations, Xu said.

A transparent lead-absorbing film is applied to a conducting glass on the front of the solar cell. The sequestration film contains strong lead-binding phosphonic acid groups but does not hinder cell capture of light. A less expensive polymer film blended with lead-chelating agents is used on the back metal electrode, which has no need for transparency.

"The materials are off-the-shelf, but they were never used for this purpose," Xu said. "Light must enter the cell to be absorbed by the perovskite layer, and the front-side film actually acts as an anti-reflection agent, improving transparency just a bit."

Tests for lead leakage included hammering and shattering the front-side glass of 2.5-x-2.5 cm cells, and scratching the backside of the solar cells with a razor blade, before submerging them into water. The films can absorb the vast majority of the lead in severely damaged cells due to water ingress.

"It is worth noting that the demonstrated lead-sequestration approach is also applicable to other perovskite-based technologies such as solid-state lighting, display and sensor applications," said Zhu, a senior scientist at NREL.

Credit: 
Northern Illinois University

Tart cherry juice concentrate found to help improve endurance exercise performance

image: Now, a new first-of-its-kind analysis published in the Journal of the American College of Nutrition found that tart cherries improved endurance exercise performance among study participants.

Image: 
Cherry Industry Administrative Board

Montmorency tart cherry juice has gained a reputation as a recovery drink among elite and recreational exercisers, with research suggesting benefits for reducing strength loss and improving muscle recovery after intensive exercise. Now, a new first-of-its-kind analysis published in the Journal of the American College of Nutrition found that tart cherries improved endurance exercise performance among study participants.

This new meta-analysis examined 10 previously published studies on tart cherries and exercise recovery. The sample sizes ranged from 8 to 27 participants, and the average ages of study participants ranged from 18.6 to 34.6 years. Most of the participants were endurance-trained individuals, including cyclists, runners and triathletes. The 10 studies totaled 127 males and 20 females.

After pooling results from the 10 published studies, the meta-analysis concluded that tart cherry concentrate in juice or powdered form significantly improved endurance exercise performance when consumed over periods ranging from seven days before exercise down to 1.5 hours before cycling, swimming or running.

"The recovery benefits of tart cherry concentrate are well researched, yet evidence on performance enhancement is scarce and results have been mixed," said co-author Philip Chilibeck, PhD, professor in the College of Kinesiology at the University of Saskatchewan. "The results of this meta-analysis found that tart cherries did help improve performance, and we gained greater insight into the potential mechanism responsible for this benefit."

Research Methodology

Researchers reviewed existing research related to tart cherries and aerobic endurance sport performance and identified 10 studies that fit the inclusion criteria. To qualify, studies were required to be randomized controlled trials conducted in a healthy adult population and use a placebo as a comparison for tart cherry supplementation (including tart cherry juice, tart cherry concentrate, tart cherry powder and tart cherry powder capsules).

Nine of the 10 studies involved longer-term tart cherry consumption (around two to seven days prior to exercise) and one involved same-day supplementation. Tart cherry dosages varied across studies and included 200 to 500 mg/day in capsule or powder form, 60 to 90 mL/day of tart cherry juice concentrate diluted with 100 to 510 mL of water, and 300 to 473 mL/day of tart cherry juice. The total amount of anthocyanins consumed daily ranged from 66 to 2,760 mg.

Methods of measuring performance differed across studies, and included distance on a shuttle swimming test, time to exhaustion during high-intensity cycling, total work performed during cycling, cycling time trials (time it took to cover 10 km, 15 km and 20 km) and time to complete a full or half marathon. To account for these variations, researchers calculated standardized mean differences and 95% confidence intervals to assess performance changes.
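
As a concrete illustration of that last step, the sketch below computes Hedges' g (a standardized mean difference) with 95% confidence intervals for three invented studies and pools them with inverse-variance weights. The numbers are hypothetical and the fixed-effect weighting is a generic choice for illustration, not necessarily the exact model the authors used.

```python
# Hedged sketch: standardized mean differences (Hedges' g) with 95% CIs and a
# simple inverse-variance pooled estimate, as used in meta-analyses like this
# one. The three studies below are invented numbers for illustration only.
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    # Pooled SD, Cohen's d, small-sample correction, and the SMD's variance.
    sd_pool = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pool
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # Hedges' small-sample correction
    g = j * d
    var = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var

# (treatment mean, control mean, sd_t, sd_c, n_t, n_c) -- hypothetical data
studies = [(25.1, 24.3, 1.8, 1.9, 12, 12),
           (61.0, 63.5, 5.0, 5.5, 10, 10),
           (38.2, 37.9, 2.2, 2.0, 20, 20)]

results = [hedges_g(*s) for s in studies]
for i, (g, var) in enumerate(results, 1):
    lo, hi = g - 1.96 * math.sqrt(var), g + 1.96 * math.sqrt(var)
    print(f"study {i}: g = {g:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")

# Fixed-effect pooled SMD: weight each study by the inverse of its variance.
w = [1 / var for _, var in results]
pooled = sum(wi * g for wi, (g, _) in zip(w, results)) / sum(w)
se = math.sqrt(1 / sum(w))
print(f"pooled g = {pooled:+.2f} (95% CI {pooled-1.96*se:+.2f} to {pooled+1.96*se:+.2f})")
```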

Results

Pooled results across these 10 studies indicated a significant improvement in endurance performance with tart cherry concentrate, with two of the 10 studies reporting significant performance-enhancing effects on their own. While the pooled results in the meta-analysis found significant benefits, eight of the 10 individual studies did not demonstrate a performance benefit when comparing tart cherry to placebo. This could be related to participant demographics and fitness levels, diet and exercise control, supplementation protocol and measurements of performance. Not all studies used well-trained athletes, and the meta-analysis found the lowest improvement when tart cherry juice was consumed by the least trained participants. No dose-response relationship was found between tart cherry concentrate and performance, and further studies are warranted to find an optimal dosing strategy.

Nearly all of the studies on cherries and recovery or performance have been conducted with Montmorency tart cherries, the most common variety of tart cherries grown in the U.S. These home-grown tart cherries are available year-round in dried, frozen, canned, juice and juice concentrate forms. Other varieties of tart cherries may be imported and not grown locally.

Credit: 
Weber Shandwick Chicago

Different tick, same repellents: Study shows how to avoid Asian longhorned tick

image: While the invasive Asian longhorned tick (Haemaphysalis longicornis) has now appeared in 12 states since its first detection in the United States in 2017, new research offers some good news about its potential as a public health threat. The same insect repellents and other personal protective measures recommended to prevent bites from native tick species also appear to be equally effective against the Asian longhorned tick (adult female shown here).

Image: 
James Gathany, CDC Public Health Image Library

Annapolis, MD; February 19, 2020--While the invasive Asian longhorned tick (Haemaphysalis longicornis) has now appeared in 12 states since its first detection in the United States in 2017, new research offers some good news about its potential as a public health threat. The same insect repellents and other personal protective measures recommended to prevent bites from native tick species also appear to be equally effective against the Asian longhorned tick.

Researchers at the U.S. Centers for Disease Control and Prevention (CDC) measured the reaction of Asian longhorned tick nymphs (immatures) to the six kinds of repellents that it generally recommends for tick bite prevention, as well as to permethrin-treated clothing. All six repellents and the clothing were deemed highly effective, repelling more than 92 percent of ticks in laboratory tests. Their findings are published today in the Journal of Medical Entomology.

Little research exists on the Asian longhorned tick compared with native ticks in the U.S., so this new study is an important first step, says Lars Eisen, Ph.D., research entomologist at the CDC's National Center for Emerging and Zoonotic Infectious Diseases and senior author on the study. "This is the first publication about the effects of commercially available repellents on H. longicornis. We suspected that repellents recommended by CDC to prevent tick bites would also be effective against this invasive tick species, but we needed to confirm this in order to provide evidence-based recommendations."

Cases of human bites by the Asian longhorned tick have been reported in the U.S. since its arrival. Although it has not yet been found to transmit human disease-causing pathogens in the U.S., in Asia it is known to transmit severe fever with thrombocytopenia syndrome virus and the Japanese spotted fever bacteria Rickettsia japonica.

In lab tests, Eisen and colleagues at CDC exposed H. longicornis nymphs to six over-the-counter repellent products. They placed the ticks under a petri dish lid over filter paper treated with repellent on one half of the circular area and untreated on the other, and they recorded the ticks' locations every five minutes for 30 minutes. At every observation, no less than 92 percent of the ticks were found on the nontreated side, avoiding the side with repellent. The repellent products tested each contained a different EPA-approved active ingredient:

DEET

picaridin

IR3535

oil of lemon eucalyptus

p-menthane-3,8-diol

2-undecanone

The team also tested fabric treated with permethrin, an insecticide that can be factory-impregnated into garments or applied to clothing by consumers as a spray. Asian longhorned tick nymphs were placed on the treated fabric held at a 45-degree angle, and nearly all were quick to let go and tumble off the fabric rather than cling on--72 percent letting go within 1 minute and 96 percent within 4 minutes.
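
A minimal sketch of how readings from these two assays reduce to the percentages quoted above is shown below; the tick counts are invented for illustration and are not the CDC's data.

```python
# Hedged sketch: summarizing a filter-paper repellency assay like the one
# described above. Repellency at each 5-minute check is simply the share of
# ticks found on the untreated half of the paper. Counts are invented.
observed_on_untreated = [24, 24, 23, 24, 25, 23]   # of 25 ticks, every 5 min
n_ticks = 25

for minute, count in zip(range(5, 35, 5), observed_on_untreated):
    print(f"t = {minute:2d} min: {100 * count / n_ticks:.0f}% avoided the treated half")

# Permethrin-treated fabric held at a 45-degree angle: cumulative fraction of
# ticks that had detached by each time point (again, invented counts).
detached_by = {1: 18, 2: 21, 3: 23, 4: 24}          # minute -> cumulative ticks
for minute, cum in detached_by.items():
    print(f"within {minute} min: {100 * cum / n_ticks:.0f}% of ticks fell off")
```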

These results echo tests of the same repellents on native ticks in the U.S. such as the blacklegged tick (Ixodes scapularis), lone star tick (Amblyomma americanum), and American dog tick (Dermacentor variabilis), research that supports the CDC's guidelines for tick bite prevention.

"Our findings indicate that, fortunately, CDC-recommended personal protective measures such as the use of EPA-approved repellents and permethrin-treated clothing are equally effective against the invasive H. longicornis as our native human-biting ticks," says Eisen.

At the time of the CDC study, only H. longicornis nymphs were available for testing. Eisen says similar tests should be conducted on H. longicornis adults and larvae, and further research is needed to answer a host of other questions about this invasive tick species.

"There is much still to discover about H. longicornis--for example, do repellents work as well for adult ticks as for nymphs, and do ticks encountered in the field in the U.S. respond differently when they encounter a repellent compared to laboratory-reared ticks?" Eisen says. "Two other areas of interest are to test host-seeking H. longicornis ticks collected in the field to see if they are naturally infected with microorganisms known to cause human illness and to examine wild animals to better understand which hosts these ticks prefer. This can help us better understand which disease agents the ticks are most likely to acquire while feeding."

Credit: 
Entomological Society of America

New mechanism involved in senescence modulates inflammation, response to immunotherapy

PHILADELPHIA -- (Feb. 19, 2020) -- Scientists at The Wistar Institute discovered a novel pathway that enables detection of DNA in the cytoplasm and triggers inflammation and cellular senescence. This pathway may be modulated during senescence-inducing chemotherapy to affect cancer cell response to checkpoint inhibitors. Results were published online in Nature Communications.

Cellular senescence is a natural tumor suppression mechanism that stably halts proliferation of damaged or premalignant cells. Senescent cells also represent a trigger of inflammation and immune reaction as they produce an array of inflammatory molecules collectively known as senescence-associated secretory phenotype (SASP).

"Uncovering an important step that mediates the senescence response and enables the SASP, we identified a novel molecular pathway involved in immunotherapy response," said lead researcher Rugang Zhang, Ph.D., deputy director of The Wistar Institute Cancer Center, professor and co-program leader of the Gene Expression and Regulation Program. "We suggest that this pathway might be targeted to modulate senescence-inducing effects of cancer therapeutics and affect response of senescent cancer cells to immunotherapy."

Cells that have been exposed to various stressors and have suffered substantial DNA damage, for example during chemotherapy, transport pieces of DNA from the nucleus to the cytoplasm as a way to signal that something is wrong. Cyclic GMP-AMP synthase (cGAS) senses cytosolic DNA and activates senescence and immunity by triggering a cascade of cellular events that culminate with production of the SASP. How cGAS senses DNA was unknown.

To investigate this, the Zhang lab focused on the proteins attached to cytoplasmic DNA in senescent cells and identified topoisomerase 1 (TOP1) as the missing link between cGAS and DNA. TOP1 is an enzyme that unwinds the DNA helix to facilitate its replication and transcription to RNA. It has the ability to attach to DNA forming a strong DNA-TOP1 complex called TOP1cc. According to the new study, TOP1 also interacts with cytosolic DNA and cGAS, connecting the two and facilitating the DNA-sensing activity of cGAS.

Importantly, the researchers also found that HMGB2, a protein that regulates chromatin structure and orchestrates the SASP at the gene expression level, enhances the interaction of TOP1 with DNA by stabilizing TOP1cc, the DNA-bound form of TOP1, and is required for senescence and the SASP.

The authors went on to establish that the HMGB2-TOP1cc-cGAS pathway is essential for the antitumor effect of immune checkpoint blockade therapy in a mouse model, as knockdown of HMGB2 blunted the response to anti-PD-L1 treatment. Treating tumors with a TOP1 inhibitor that stabilizes the TOP1cc-DNA binding and mimics HMGB2 restored treatment response and increased survival.

"TOP1 inhibitors are clinically used for cancer therapy," said Bo Zhao, Ph.D., first author of the study and a postdoctoral researcher in the Zhang Lab. "We suggest they may have additional applications to sensitize tumors to immunotherapy, especially targeting cancer cells that become senescent in response to therapies such as chemotherapy or radiotherapy."

Credit: 
The Wistar Institute

Random gene pulsing generates patterns of life

A team of Cambridge scientists working on the intersection between biology and computation has found that random gene activity helps patterns form during development of a model multicellular system.

We all start life as a single cell, which multiplies and develops into specialised cells that carry out different functions. This complex process relies on precise controls along the way, but these new findings suggest random processes also contribute to patterning.

In research published today in Nature Communications, the scientists from James Locke's team at the Sainsbury Laboratory Cambridge University and collaborators at Microsoft Research describe their discovery of surprising order in randomness while studying bacterial biofilms.

A biofilm develops when free-living single-celled bacteria attach to a surface and aggregate together to start multiplying and spreading across the surface. These multiplying individual cells mature to form a three-dimensional structure that acts like a multicellular organism.

And while individual cells can survive on their own, these bacteria prefer to work together with biofilms being the dominant form found in nature. The biofilm consortium provides bacteria with various survival advantages like increased resistance to environmental stresses.

The researchers developed a new time-lapse microscopy technique to track how genetically identical single cells behave as the living biofilm developed.

Dr Eugene Nadezhdin, joint lead-author, said: "We looked at how cells decide to take on particular roles in the biofilm. We found that towards the surface of the biofilm there were two different cell types frequently present - cells that form dormant spores and those that keep growing and activate protective stress responses. These two cell types are mutually exclusive, but they both could exist in the same location."

They focussed on obtaining a detailed picture of how gene expression (whether genes are active or inactive) changes over time for the individual cell types, specifically the expression of a regulatory factor called sigmaB, which promotes stress responses and inhibits spore formation. They found that sigmaB randomly pulses on and off in cells at hourly intervals, generating a visible pattern of sporulating and stress-protected cells across the biofilm.

To understand the implications of the pulsing, the researchers generated a mathematical model of the sigmaB-controlled stress response and sporulation systems.

Dr Niall Murphy, joint lead-author, said: "The modelling revealed that the random pulsing means that at any one time only a fraction of cells will have high sigmaB activity and activation of the stress pathway, allowing the remainder of cells to choose to develop spores. While the pulsing is random, we were able to show through a simple mathematical model that increasing expression of the gene creates shifting patterns among the different regions of the biofilm."
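
A toy version of this idea can be written in a few lines. The sketch below assumes a simple random telegraph model in which each cell switches sigmaB on and off with fixed hourly probabilities and its fate follows its final state; the rates and the fate rule are illustrative assumptions, not the published model, but they show how the pulse frequency sets the fraction of cells in each state.

```python
# Hedged sketch: a toy two-state pulsing model in the spirit of the paper's
# description. Rates, time step and the fate rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_fates(n_cells=10_000, hours=24, p_on=0.10, p_off=0.50):
    # Each cell flips sigmaB on/off each hour with fixed probabilities
    # (a random telegraph process). A cell's fate is set by its state at the
    # end: sigmaB ON -> stress-protected, OFF -> free to sporulate.
    state = np.zeros(n_cells, dtype=bool)            # start with sigmaB off
    for _ in range(hours):
        flips_on = (~state) & (rng.random(n_cells) < p_on)
        flips_off = state & (rng.random(n_cells) < p_off)
        state = (state | flips_on) & ~flips_off
    return state.mean()                              # fraction stress-protected

for p_on in (0.05, 0.10, 0.20, 0.40):
    frac = simulate_fates(p_on=p_on)
    print(f"pulse-on probability {p_on:.2f}: "
          f"{frac:.0%} stress-protected, {1 - frac:.0%} sporulating")
```

Raising the pulse-on probability shifts the balance toward stress-protected cells, mirroring the shifting patterns the modelling revealed when expression of the gene was increased.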

The results demonstrate how random pulsing of gene expression can play a key role in establishing spatial structures during biofilm development.

Dr Locke said: "This randomness appears to control the distribution of cell states within a population - in this case a biofilm. The insights gained from this work could be used to help engineer synthetic gene circuits for generating patterns in multi-cellular systems. Rather than the circuits needing a mechanism to control the fate of every cell individually, noise could be used to randomly distribute alternative tasks between neighbouring cells."

Credit: 
University of Cambridge

The potentially deadly paradox of diabetes management

ROCHESTER, Minn. -- Diabetes affects nearly 1 in 10 adults in the U.S., and of these millions, more than 90% have Type 2 diabetes. Controlling blood sugar and glycosylated hemoglobin levels -- or HbA1c, which is sometimes referred to as A1C -- is key to diabetes management and necessary to prevent its immediate and long-term complications. However, new Mayo Clinic research shows that diabetes management may be dangerously misaligned.

The new study, which will be published Feb. 19 in BMJ Open Diabetes Research & Care, shows paradoxical trends in overtreatment and undertreatment of patients with Type 2 diabetes.

An A1C of less than 7% is the target for most people, according to the American Diabetes Association. Sometimes Type 2 diabetes can be managed by diet and exercise. Most often, people also need medications or insulin to keep their blood sugar at a healthy level.

However, there is a fine line between enough treatment to prevent the complications caused by high blood sugars and too much treatment that could cause blood sugar levels to fall dangerously low -- a condition known as hypoglycemia. Ideally, each patient would have individualized treatment goals and regimens, says Rozalina McCoy, M.D., a primary care physician and endocrinologist at Mayo Clinic, and the study's lead author.

"Patients who are older or who have serious health conditions are at high risk for experiencing hypoglycemia, which, for them, is likely to be much more dangerous than a slightly elevated blood sugar level," she says. "At the same time, the benefits of intensive treatment usually take many years, even decades, to realize. So many patients may be treated intensively and risk hypoglycemia for no real benefit to them."

The opposite is true for younger, healthier people with diabetes, she explains. These people are less likely to experience severe hypoglycemia and are most likely to achieve meaningful long-term improvements in health with intensive diabetes therapy.

"These patients should be treated more aggressively, meaning that we should not shy away from using insulin or multiple medications to lower the A1C," says Dr. McCoy. "We need to ensure that all our patients with diabetes receive high-quality care and are able to manage their disease to prevent complications both now and in the future."

In their study, Dr. McCoy and colleagues found that people with diabetes across the U.S. often receive treatment that is too aggressive or not aggressive enough.

"What makes it even worse is that patients who are treated intensively are those who are most likely to be harmed by it," says Dr. McCoy. "But at the same time, patients who would benefit from more intensive treatment are not receiving the basic care that they need. The paradox and misalignment of treatment intensity with patients' needs is really striking."

Numbers don't lie

The researchers used patient information from the OptumLabs Data Warehouse to conduct this study. The OptumLabs Data Warehouse is a longitudinal, real-world data asset with de-identified administrative claims and electronic health record data.

They examined the records of 194,157 patients with Type 2 diabetes, looking at A1C levels and the use of insulin and/or a sulfonylurea, a type of diabetes medication that stimulates insulin production, across multiple age groups and levels of clinical complexity. The team specifically focused on the 16 comorbidities specified by both the American Diabetes Association and the Department of Veterans Affairs guidelines as warranting relaxation of A1C targets and cautious use of insulin and sulfonylureas.

The research team found that the highest A1C levels -- with a mean of 7.7% -- were among people ages 18-44 and the lowest levels -- with a mean of 6.9% -- were among those 75 years and older. Also, patients who had no comorbidities had the highest A1C -- with a mean of 7.4% -- while those with advanced comorbidities, including dementia, cancer or end-stage kidney disease, maintained the lowest A1C levels -- with a mean of 7%.

This antithetical relationship of overtreatment among those least likely to benefit and undertreatment where stricter control would have been life-extending was apparent when the authors examined the proportion of patients who achieved very low or very high A1C levels while treated with insulin.

"Patients least likely to benefit from intensive glycemic control and most likely to experience hypoglycemia with insulin therapy were most likely to achieve low HbA1c levels and to be treated with insulin to achieve them," reports the paper.

According to the study, "These HbA1c levels reflect HbA1c levels achieved by the patient, not necessarily HbA1c levels pursued by the clinician."

There are many possible reasons for these findings, and Dr. McCoy hopes that future research will shed light on the causes for this risk-treatment paradox and ways to reverse it.

The lesson in all of this

"Most importantly, clinicians should continue to engage their patients in shared and informed decision-making, weighing the risks and benefits of glucose-lowering treatment regimens in the specific context of each patient, carefully considering the patient's comorbidity burden, age, and goals and preferences for care," concludes the paper.

Dr. McCoy stresses the importance of recognizing and addressing "therapeutic inertia" -- failure to recognize an appropriate time to modify treatment -- in diabetes management.

"We have a great opportunity to simplify and de-intensify the treatment regimens of our more elderly patients, which would reduce their risk of hypoglycemia and treatment burden without spilling over into hyperglycemia," Dr. McCoy says. "At the same time, we need to better engage younger, healthier patients, work with them to identify barriers to diabetes management, and support them to improve their glycemic control."

"As clinicians, we need to be current on the guidelines and the evidence, know our patients, and work closely with them to do what is right for them," says Dr. McCoy.

Credit: 
Mayo Clinic

sphingotec's biomarker penKid® shows best representation of true glomerular filtration rate and has utility in patients with severe burns, two studies show

Hennigsdorf/Berlin, Germany, February 19, 2020 - Diagnostics company SphingoTec GmbH ("sphingotec", Hennigsdorf, Germany) today announced the publication of two studies demonstrating that its kidney function biomarker Proenkephalin (penKid®) is the most accurate surrogate for assessing true glomerular filtration rate (true GFR) and reliably predicts acute kidney injury (AKI) in patients with severe burns. Although AKI is a major complication in critically ill patients, current standard diagnostic methods do not provide a timely and accurate diagnosis of impaired renal function. The two recently published studies add to the rapidly growing body of evidence suggesting that penKid® can address this highly unmet diagnostic need.

In the study published in Shock (1), a team headed by Prof. Peter Pickkers from Radboud UMC (Nijmegen, The Netherlands) confirmed, through an in-depth diagnostic method comparison in patients with impaired kidney function, that penKid® levels properly reflect kidney function. While today's standard of care uses estimations of the glomerular filtration rate (eGFR) to assess renal impairment, the current study shows that these methods overestimate true GFR by more than 30%. The published findings demonstrate that penKid® can add value by properly reflecting true GFR, which can otherwise only be measured using in vivo clearance of iohexol, an invasive method too laborious and time-consuming for routine clinical use.

Another study, published in the journal Burns (2), reports for the first time that high penKid® plasma levels at admission to the intensive care unit (ICU) were associated with the risk of developing AKI in patients with severe burns, in whom mortality rates range from 30% to 70%.

The new data suggest that the current standard markers of renal function and AKI should be complemented with penKid® values to accurately quantify kidney function in critically ill patients. Dr. Andreas Bergmann, CEO of sphingotec, commented: "penKid® is an early renal function biomarker that is not biased by co-morbidities while reflecting true GFR. penKid® has been tested for the first time to identify ICU patients with severe burns who need rapid and aggressive intervention to prevent mortality caused by AKI."

To support timely treatment decisions that are likely to improve patient management in critical care, sphingotec launched a fully automated CE-IVD-marked penKid® assay on its Nexus IB10 platform in January 2020. This new test complements a wide range of assays for acute care settings that are already available on this widely used point-of-care platform, which can be flexibly deployed in near-patient as well as laboratory settings.

Credit: 
sphingotec GmbH

A spookily good sensor

image: Schematic of the modes of interest in the single-magnon detector. The uniformly precessing mode of collective spin excitations in the ferromagnetic crystal, called the Kittel mode, is coherently coupled to a superconducting qubit through a microwave cavity mode.

Image: 
©Dany Lachance-Quirion

Tokyo, Japan - Scientists from the Research Center for Advanced Science and Technology (RCAST) at The University of Tokyo demonstrated a method for coupling a magnetic sphere with a sensor via the strange power of quantum entanglement. They showed that the existence of even a single magnetic excitation in the sphere could be detected with a one-shot measurement. This work represents a major advancement toward quantum systems that can interact with magnetic materials.

Imagine having a sensor powerful enough to tell you, in a single sweep, if a nearby haystack contained a needle or not. Such a device might seem like it could exist only in science fiction, but, using one of the most counterintuitive effects of quantum mechanics, this level of sensitivity can become reality. Entanglement, the strange process at the heart of quantum mechanics that allows linked particles to interact instantly over long distances, was once called "spooky action at a distance" by Albert Einstein.

Experiments have confirmed that quantum mechanics permits situations in which parts of a system can no longer be described separately, but rather become fundamentally entangled, such that measurement of one automatically determines the fate of the other. For example, two electrons can become entangled so that they are both pointing up or both pointing down - so measuring one instantly affects the state of the other. "Entanglement has been in quantum mechanics textbooks for decades," says first author Dr. Dany Lachance-Quirion, "but the applications for producing very sensitive detectors with it are only now starting to be realized."

In the experiments conducted at RCAST, a millimeter-sized sphere of yttrium iron garnet was placed in the same resonant cavity as a superconducting Josephson junction qubit, which acted as the sensor. Because of the coupling between the sphere and the resonant cavity and, in turn, between the cavity and the qubit, the qubit could only be excited by an electromagnetic pulse if no magnetic excitations were present in the sphere. Reading out the state of the qubit then reveals the state of the sphere.
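
As a rough illustration of that logic, the toy model below maps the magnon state onto a binary qubit readout outcome with assumed excitation probabilities and readout error; all of the numbers are invented for illustration and this is not the RCAST group's analysis.

```python
# Hedged toy model of the detection scheme described above: the drive pulse is
# effective only when the sphere holds zero magnons, so reading out the qubit
# maps the magnon state onto a binary outcome. All numbers are assumptions.
import random

random.seed(0)

P_EXCITE = {0: 0.95, 1: 0.05}     # qubit excitation prob. vs magnon number (assumed)
READOUT_ERROR = 0.02              # chance of misreading the qubit state (assumed)

def single_shot(n_magnons):
    excited = random.random() < P_EXCITE[n_magnons]
    if random.random() < READOUT_ERROR:
        excited = not excited     # occasional readout error
    return excited

# Declare "no magnon present" whenever the qubit ends up excited.
trials = 10_000
correct = sum(single_shot(n) == (n == 0)
              for n in (random.randint(0, 1) for _ in range(trials)))
print(f"single-shot assignment fidelity (toy model): {correct / trials:.1%}")
```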

"By using single-shot detection instead of averaging, we were able to make our device both highly sensitive and very fast," Professor Yasunobu Nakamura explains. "This research could open the way for sensors powerful enough to help with the search for theoretical dark-matter particles called axions."

Credit: 
Japan Science and Technology Agency

Green chemistry of fullerene: Scientists invented an environmentally friendly way to realize organic electronics

Scientists from the Skoltech Center for Energy Science and Technology (CEST) and the Institute for Problems of Chemical Physics of Russian Academy of Sciences have developed a novel approach for preparing thin semiconductor fullerene films. The method enables fabrication of organic electronics without using toxic organic solvents and costly vacuum technologies, thus reducing the environmental risks and making organic electronics more accessible. The results of their study were published in the Journal of Materials Chemistry C.

Organic electronics provides manufacturers with unique capabilities inconceivable in other technologies. The light weight, flexibility and low cost of organic semiconductors, along with properties that can be tailored through chemical modification, open up wide opportunities for designing inexpensive and efficient devices for the Internet of Things (IoT), real-time health monitoring, food quality control and many other applications.

However, there are several obstacles to wide-scale commercial use of organic semiconductors, in particular the environmental risks: mass production of organic electronics using coating and printing techniques discharges large volumes of environmentally hazardous toxic organic solvent vapors into the atmosphere. Vacuum methods are environmentally friendly but very energy-intensive, which results in much higher production costs and larger emissions of CO2 and other greenhouse gases during energy generation. Replacing toxic organic solvents, such as chloroform, toluene or 1,2-dichlorobenzene, with safe solvents like water or alcohols could be a major breakthrough.

A unique form of carbon, fullerene C60, is represented by molecules similar in shape to a soccer ball and possessing a wealth of remarkable properties, in particular being a good n-type semiconductor. However, like many other organic semiconductors, it is soluble mainly in toxic (and often chlorinated) organic solvents.

Earlier, a research team led by Skoltech Professor Pavel Troshin demonstrated that sulfur-containing fullerene derivatives decompose upon mild heating, yielding the parent fullerene. In their recent work, the researchers leveraged this property to obtain thin fullerene films from aqueous solutions.

"The goal of our study was to develop a method for coating thin films of fullerene from aqueous or alcohol solutions. Of particular interest in this context are sulfur-containing fullerene derivatives with ionogenic (amine or carboxylic) groups that are readily soluble in water. This means that one can use aqueous solutions of these precursor compounds as "electronic ink" and apply them on a substrate using the existing printing and coating techniques to obtain the films that only need to be annealed to get a high-quality fullerene semiconductor films," explains the first author of the paper and Skoltech PhD student, Artyom Novikov.

The fullerene semiconductor films obtained from a water-soluble precursor compound were used to make organic field-effect transistors with high charge carrier mobility and gas sensors that can detect an analyte (ammonia) in concentrations of less than 1 ppm.

The results obtained in this study demonstrate the great potential of water-soluble precursor compounds for the environmentally friendly production of organic electronics.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

An optimized structure of memristive device for neuromorphic computing systems implemented

image: High-resolution transmission electron microscopy image (a) and schematic (b) of the cross-section of the multi-layer memristive structure in the region of the conducting filament (CF), the dependence of resistive states on the number of switching cycles and a photograph of the memristive chip with memristive microdevices (c)

Image: 
Lobachevsky University

Lobachevsky University scientists have implemented a new variant of the metal-oxide memristive device, which holds promise for use in RRAM (Resistive Random Access Memory) and novel computing systems, including neuromorphic ones.

Variability (lack of reproducibility) of resistive switching parameters is the key challenge on the way to new applications of memristive devices. This variability of parameters in "metal-oxide-metal" device structures is determined by the stochastic nature of the migration of oxygen ions and/or oxygen vacancies responsible for oxidation and reduction of conductive channels (filaments) near the metal/oxide interface. It is also compounded by the degradation of device parameters in the case of uncontrolled oxygen exchange.

Traditional approaches to controlling the memristive effect include the use of special electrical field concentrators and the engineering of materials/interfaces in the memristive device structure, which typically require a more complex technological process for fabricating memristive devices.

According to Alexey Mikhaylov, head of the UNN PTRI laboratory, Nizhny Novgorod scientists for the first time used in their work an approach that combines the advantages of materials engineering and self-organization phenomena at the nanoscale. It involves a combination of the materials of electrodes with certain oxygen affinity and different dielectric layers, as well as the self-assembly of metal nanoclusters that serve as electric field concentrators.

"This approach does not require any additional operations in the process of fabrication of such devices and demonstrates a practically important result: the stabilization of resistive switching between nonlinear resistive states in a multilayer device structure based on yttrium-stabilized zirconium dioxide films with a given concentration of oxygen and additional layers of tantalum oxide," explains Alexey Mikhaylov.

Following a comprehensive study of the structure and composition of the materials by Lobachevsky University scientists, the result can be interpreted in terms of the formation of filaments with a central conductive part in the ZrO2(Y) film and reproducible structural transformations between a more conductive rutile-like TaOx phase and the dielectric Ta2O5 phase in the underlying tantalum oxide film under Joule heating of the local area near the filament.

The presence of grain boundaries in ZrO2(Y) as preferred nucleation sites for filaments, the presence of nanoclusters as field concentrators in the Ta2O5 film, and the exchange of oxygen between the oxide layers at the interface with TiN contribute to the stabilization of resistive states.

"It is important to note that the optimized structure has also been implemented as part of the memristive chip with cross-point and cross-bar devices (device size - 20 μm × 20 μm), which demonstrate robust switching and low variation of resistive states (less than 20%), which opens up the prospect of programming memristive weights in large passive arrays and their application in the hardware implementation of various functional circuits and systems based on memristors. It is expected that the next step towards commercialization of the proposed engineering solutions will consist in integrating the array of memristive devices with the CMOS layer containing peripheral and control circuits", concludes Alexey Mikhaylov.

For this purpose, a new integrated circuit topology is being developed at Lobachevsky University. This work is funded by a Nizhny Novgorod region grant in the field of science, technology and engineering (Agreement No. 316-06-16-20/19).

Credit: 
Lobachevsky University

New cholesterol-lowering guidelines would increase cost of treatment

image: Peter Ueda, intern physician and postdoctoral researcher at the Department of Medicine in Solna at Karolinska Institutet.

Image: 
Stefan Zimmerman

The financial burden on health systems would drastically increase if new European expert guidelines for cholesterol-lowering treatment were implemented, according to a new simulation study by researchers at Karolinska Institutet in Sweden, published in the European Heart Journal. The findings highlight an urgent need for cost-effectiveness analysis given the current cost of the proposed treatment for very high-risk patients, the researchers say.

In August 2019, the European Society of Cardiology (ESC) and the European Atherosclerosis Society (EAS) recommended that low-density lipoprotein cholesterol (LDL-C) levels, also described as the "bad" cholesterol, should be substantially lowered to prevent cardiovascular disease, especially in very high-risk patients. The guidelines carry significant weight for clinicians and authorities and are often used as a reference point for treatments in Europe and elsewhere.

For patients with a very high risk of cardiovascular disease, such as those with a recent heart attack, the new guidelines recommend both lowering the LDL-C level by at least 50 percent and reaching an LDL-C level of less than 1.4 millimoles per liter of blood (mmol/L). This is a sharp reduction compared with previous guidelines presented three years earlier. To reach these targets, the organizations recommend combining lifestyle modifications with the low-cost cholesterol-lowering drugs statins and ezetimibe. If the LDL-C goal isn't reached despite the use of these therapies, adding a new type of high-cost cholesterol-lowering drug known as a PCSK9 inhibitor is recommended.

In this study, the researchers predicted the implications of the new guidelines by calculating how many patients would be eligible for expanded therapy. Using Sweden's national registry for heart disease patients, SWEDEHEART, the researchers studied more than 25,000 people who suffered heart attacks between 2013 and 2017 and whose cholesterol levels were measured during follow-up visits six to 10 weeks later.

The researchers found that more than 50 percent of the patients would be eligible for PCSK9 inhibitors as they would not have reached the LDL-C targets with only high-intensity statins and ezetimibe. When use of two currently approved PCSK9 inhibitor drugs (alirocumab or evolocumab) was simulated in those patients, around 90 percent of all patients attained the LDL-C target. The annual cost of treating a patient in Sweden with PCSK9 inhibitors is more than 4,500 euros compared to only around 30 euros with statins or ezetimibe.

"PCSK9 inhibitors are clearly effective cholesterol-lowering drugs which may reduce the risk of cardiovascular events but they come at a substantial cost," says Ali Allahyari, resident physician in cardiology and doctoral student at the Department of Clinical Sciences, Danderyd Hospital, Karolinska Institutet, and first author of the study. "If half of the patients with heart attacks would be eligible for this drug, the financial burden on health systems throughout Europe and other countries using the ESC/EAS guidelines may be substantial unless the cost of treatment is reduced."

Drawing on previous analyses, the researchers examined the degree to which lowering one's bad cholesterol can help reduce the risk of having another severe cardiovascular episode. They estimated that using the PCSK9 inhibitor drug alirocumab to prevent one major adverse cardiovascular event, such as another heart attack, would cost around 846,000 euros in Sweden (8.9 million Swedish kronor).
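
A back-of-envelope calculation shows how a per-event cost of this magnitude follows from the annual drug price; in the sketch below, the absolute risk reduction and treatment horizon are illustrative assumptions rather than figures from the study.

```python
# Hedged back-of-envelope: how a per-event cost of this magnitude arises from
# the annual drug price. The risk reduction and horizon are assumptions.
annual_drug_cost_eur = 4_500         # approximate PCSK9 inhibitor cost (from the article)
treatment_years = 3                  # assumed treatment horizon
abs_risk_reduction = 0.016           # assumed absolute risk reduction over that horizon

nnt = 1 / abs_risk_reduction                          # patients treated per event avoided
cost_per_event = nnt * treatment_years * annual_drug_cost_eur
print(f"number needed to treat: {nnt:.0f}")
print(f"approximate cost per event prevented: {cost_per_event:,.0f} euros")
# With these assumptions the result lands in the high hundreds of thousands of
# euros, the same order of magnitude as the roughly 846,000 euros reported above.
```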

"Many new therapies are being tested and introduced in cardiovascular medicine today," says Peter Ueda, intern physician and postdoctoral researcher at the Department of Medicine in Solna who led the study. "Our analyses highlight yet another situation for which we need to consider what we deem reasonable in terms of the number of patients being treated, the expected health gains and cost."

Credit: 
Karolinska Institutet

Cancer screening among women prescribed opioids

U.S. women who take prescription opioids are no less likely to receive key cancer screenings when compared to women who are not prescribed opioids. Researchers at the University of California, Davis analyzed data from a nationally representative sample of 53,982 women in the United States. Findings revealed that women who are prescribed opioids were more likely to receive breast, cervical, and colorectal cancer screenings for the simple fact that they are frequent users of the health care system. They had a median number of doctor visits that was five times higher than their non-prescribed counterparts. When this factor was controlled for, analysis showed no association between prescription opioid use and cancer screening. This study is one of the first to examine access to key preventive health services for opioid versus non-opioid users. Authors suggest that "the key driver of whether women receive recommended cancer screening is simply how often they see the doctor."

Credit: 
American Academy of Family Physicians