Tech

Lots of lead in the water? Maybe manganese is to blame

image: New research from the McKelvey School of Engineering demonstrates the role manganese plays in the rate of transformation of lead carbonate to lead dioxide.

Image: 
Washington University in St. Louis

Manganese is not a particularly toxic mineral. In fact, people need a little in their diets to remain healthy.

Research at Washington University in St. Louis has shown, however, that in conjunction with certain other chemicals, naturally occurring manganese can lead to big changes in the water in lead pipes. Depending on what disinfectants are used in the water, those changes can have significant -- even dangerous -- consequences.

The results were recently published in Environmental Science and Technology.

The research focuses on a unique form of lead, PbO2 or lead dioxide (lead in the plus-4 oxidation state). Lead dioxide has a very low water solubility -- it does not easily dissolve in water alone. It is also uncommon in nature, unlike the more familiar PbCO3, the lead carbonate that makes up the scales that tend to form on pipes.

"You don't find PbO2 in the environment because there is no strong oxidizing agent," said Daniel Giammar, the Walter E. Browne Professor of Environmental Engineering at the McKelvey School of Engineering. "But good disinfectants are often good oxidizing agents."

Chlorine is a great disinfectant, so much so that it's commonly used in drinking water in America and across the world. It is also a good oxidizing agent and promotes the transformation of lead carbonate to lead dioxide.

It turns out, however, that the process isn't particularly speedy -- a fact that jibes with some real-world systems but, seemingly, not with others.

"If you look at a system that has lead pipes and free chlorine, then you do the calculations, you'd expect that every single one would have lead dioxide on the pipes," Giammar said. "But we don't see that. It makes us think: Something else is influencing whether or not a particular system ends up with lead dioxide on its inner surface.

"That's where manganese comes in."

In the presence of oxidants, manganese can easily change oxidation states; if the manganese comes into contact with chlorine, it's oxidized, turning into manganese oxide. Both in computer models and in experiments that mimicked water pipes -- complete with artificial tap water -- Giammar's lab found the manganese oxide acted as a catalyst, increasing the rate of conversion from lead carbonate to lead dioxide by two orders of magnitude.

"The chlorine is still the reactant that's driving the lead conversion, but the manganese oxide acts as a catalyst to make it faster," Giammar said.

This research may well help inform the way other chemical interactions affect rates of lead transformation. "What other things that aren't lead may be affecting these rates?" Giammar asked. "Do iron oxides do it? Aluminum is something we'll study, too."

Further research into understanding what reactions influence lead transformation rates and otherwise affect the availability of lead in water will lead to more than breakthroughs in the lab. It will have real implications for health.

Take Washington, D.C. in 2000, for example.

The District's Water and Sewer Authority switched from chlorine to a weaker disinfectant called chloramine because the chlorine was creating some unpleasant byproducts. But there was an unforeseen consequence.

"When the water authority switched the disinfectant, the lead dioxide in the pipe scale was no longer stable," he said. "It dissolved rapidly and generated high concentrations of lead in the tap water."

The events in D.C. prompted other systems using free chlorine to ask whether they should be concerned about lead dioxide if they were to change to chloramine. Interestingly, many systems observe lead dioxide in the scales on lead service lines, but other systems do not. Varying concentrations of manganese among public water systems could potentially explain these differences.

"How you're going to treat your water depends on the source and its composition, also your infrastructure," Giammar said. "There's no one size fits all."

This discovery was an accident.

The lab was running another experiment, treating artificial tap water in lead pipes with chlorine to see if it could create lead dioxide.

They included substances commonly found in tap water: calcium, magnesium, sodium and chloride. "There was a new student working on the project and, instead of adding magnesium, she added manganese," Giammar said.

Then things got weird. "The water had been clear, all of a sudden it was cloudy and black."

There was a lot of lead precipitation for a few weeks, but then it died down.

"We opened up the pipes and looked,"Giammar said. "Oh, we have the lead dioxide we were trying to make." The manganese just accelerated the process.

Credit: 
Washington University in St. Louis

New deactivation mechanism for switch proteins detected

image: Klaus Gerwert, Till Rudack and Carsten Kötting (from left) have investigated switch proteins for years, for instance the Ras protein depicted here.

Image: 
RUB, Kramer

A new mechanism for the deactivation of switch proteins has been identified by researchers from Ruhr-Universität Bochum, headed by Professor Klaus Gerwert and Dr. Till Rudack from the Department of Biophysics, and the University of Uppsala in Sweden. Switch proteins such as Ras regulate many processes in the body and affect diseases such as cancer. The research team published their report on the newly discovered mechanism in the current issue of the Journal of the American Chemical Society, JACS, on 10 July 2019.

Ultra-accelerated reactions

The GTP molecule bound to a switch protein is vital for the deactivation of many of these proteins. If one of GTP's three phosphate groups is detached, the protein switches to "off", thus affecting cellular processes. "The proteins are extremely efficient and accelerate reactions that would usually take billions of years so that they are executed within a fraction of a second," says Klaus Gerwert.

At least one water molecule is always involved in the deactivation process. Until now, researchers had assumed that this water molecule had to be activated - namely by a reaction partner transferring a proton to the water molecule. "The nature of the reaction partner has been argued over for decades - is it the GTP itself, or is it a protein component?" explains Carsten Kötting, one of the authors from the Bochum-based team. "In the current study, we have surprisingly identified an entirely new mechanism, where the activation takes place without any proton transfer whatsoever."

Theory versus experiment

Using computer-aided analysis, the team studied all deactivation options for seven different switch protein systems. The researchers thus identified various speeds for the deactivation process. They compared the calculated speeds with values gained in experiments through time-resolved infrared spectroscopy.

While the values for the two previously suspected mechanisms deviated strongly from each other, the experimental results for the newly identified mechanism corresponded with theoretical assumptions - for all seven tested systems, at that. "The matches show that our newly discovered deactivation mechanism is universal and, consequently, is relevant for numerous cellular processes," concludes Till Rudack.

Mechanism relevant for tumour formation

"Diseases are often caused by a defect in the deactivation mechanism of key proteins," says Till Rudack. "In order to understand the molecular processes underlying diseases and to develop therapies, we have to understand the deactivation mechanism first."

The newly identified deactivation mechanism is, for example, responsible for switching Ras off, a protein whose defects result in uncontrolled cellular growth in tumours. Researchers have been trying for decades to find a drug that affects the dysfunctional Ras protein in human tumours. "We expect that our results explain why the search has remained fruitless to date," says Klaus Gerwert. "The correct molecular deactivation mechanism can now become the starting point for the development of anti-cancer drugs."

Credit: 
Ruhr-University Bochum

NIST physicists create record-setting quantum motion

image: NIST physicist Katie McCormick adjusts a mirror to steer a laser beam used to cool a trapped beryllium ion (electrically charged atom). McCormick and colleagues got the ion to display record-setting levels of quantum motion, an advance that can improve quantum measurements and quantum computing.

Image: 
Burrus/NIST

Showcasing precise control at the quantum level, physicists at the National Institute of Standards and Technology (NIST) have developed a method for making an ion (electrically charged atom) display exact quantities of quantum-level motion--any specific amount up to 100 packets of energy or "quanta," more than five times the previous record high of 17.

Quantum mechanics, the fundamental theory of the atomic world, states that energy is released or absorbed in tiny parcels, or packets, called quanta. Atoms release light energy by radiating photons, or quanta of light. When an atom is caught in a trap by researchers, its motional energy is carried by phonons, or quanta of motion.

In addition to creating single numbers of quanta, the NIST team controlled the pendulumlike motion of their ion to simultaneously exhibit two different amounts of motional quanta: zero (minimum motion) plus any number up to 18. Such a "superposition" of two states is a hallmark of the curious quantum world.

Published online by Nature on July 22, the new methods could be used with any quantum mechanical oscillator, including systems that oscillate like a simple pendulum or vibrate like a spring. The techniques could lead to new types of quantum simulators and sensors using phonons as the carriers of information. In addition, the ability to tailor superposition states can improve quantum measurements and quantum information processing. Using the ion in a superposition as a frequency-measurement instrument more than doubled the precision compared with conventional measurements of the ion's vibration frequency.

"If we have quantum control of an object, we can 'bend' classical rules to have lower uncertainties in certain desired directions at the expense of greater uncertainties in other directions," first author Katie McCormick said. "We can then use the quantum state as a ruler to measure properties of a system. The more quantum control we have, the more tightly spaced the lines on the ruler are, allowing us to measure quantities more and more precisely."

The experiments were performed with a single beryllium ion held 40 micrometers above the gold electrodes of a chilled electromagnetic trap. The new results were possible because NIST researchers were able to minimize unwanted factors such as stray electric fields that exchange energy with and disrupt the ion, McCormick said.

To add phonons to the ion, NIST researchers alternated ultraviolet laser pulses just above and below the frequency difference between two of the ion's "spin" states, or internal energy configurations. Each pulse flipped the ion from "spin up" to "spin down" or vice versa, with each flip adding one quantum of ion rocking motion. To create superpositions, researchers applied those laser pulses to only half of the ion's wavefunction (the wavelike pattern of the probability of the particle's location and spin state). The other half of the wavefunction was in a third spin state that was unaffected by the laser pulses and remained motionless.

Superpositions of the ion's motionless (or ground) state and a higher phonon number gave NIST researchers "quantum-enhanced" measurement sensitivity, or precision. They used the ion as an interferometer, an instrument that splits and merges two partial waves to create an interference pattern that can be analyzed to characterize frequency. NIST researchers used the interferometer to measure the ion's oscillation frequency with an uncertainty smaller than is normally possible.

Specifically, measurement precision increased linearly with the number of quanta of motion, up until the best performance in the 0-and-12 superposition state, which offered more than twice the sensitivity of a classically behaving quantum state (technically composed of a set of number states). That 0-and-12 superposition state also was more than seven times more precise than the simplest interferometer superposition of 0 and 1.
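A minimal sketch of where that scaling comes from, using textbook quantum-oscillator arithmetic rather than the paper's own analysis: a number state $|n\rangle$ lies $n\hbar\omega$ above the ground state, so the relative phase of a 0-and-$n$ superposition winds $n$ times faster than that of a 0-and-1 superposition:

$$\frac{1}{\sqrt{2}}\left(|0\rangle + |n\rangle\right) \;\longrightarrow\; \frac{1}{\sqrt{2}}\left(|0\rangle + e^{-i n \omega t}\,|n\rangle\right) \quad \text{after time } t.$$

A small error $\delta\omega$ in the assumed oscillation frequency therefore appears as a phase shift $n\,\delta\omega\,t$, so the frequency uncertainty shrinks in proportion to $1/n$ -- consistent with the linear gain in precision the team reported.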

To understand why superposition states help measure the ion's oscillation frequency more precisely, McCormick suggests imagining a wheel with spokes.

"In a certain abstract space that describes the position and momentum of the ion, the oscillation is represented by a rotation," McCormick said. "We want to be able to measure this rotation very precisely. Superpositions of the ion's ground state of motion and higher number states are a great ruler for this measurement because, in this abstract representation, they can be visualized as a wheel with spokes. These spokes can be used to determine the amount by which the state has rotated. And the higher the number state, the more spokes there are and the more precisely we can measure this rotation."

The measurement sensitivity offered by superposition states should help characterize and reduce noise in the motion, an important source of error that researchers want to minimize in quantum information processing with trapped ions.

Credit: 
National Institute of Standards and Technology (NIST)

Research in Regenerative Medicine proposes a quality control framework for umbilical cord blood-sourced allografts

The recent study from Burst Biologics challenges existing standards and outlines future safety and potency benchmarks.

The prestigious peer-reviewed journal Regenerative Medicine has announced the publication of the original research article from Burst Biologics (ID, USA) entitled, "Characterization of an umbilical cord blood sourced product suitable for allogeneic applications." The article provides a comprehensive overview of the quality attributes, clinical suitability and efficacy of BioBurst Rejuv, an umbilical cord blood (UCB)-sourced allograft, defined as a tissue graft from a donor of the same species.

Previous research has demonstrated that UCB contains cytokines and growth factors, components that are integral to bone consolidation and tissue repair.

"This current study addresses the unmet need for a uniform quality control framework to determine clinical suitability and safety of cord blood allografts," said Christopher D. Jones, CEO of Burst Biologics and a corresponding author of the paper. "This is the first known published report enumerating the tests to address identity, purity, safety and potency for a UCB sourced allograft."

The study reports on exosome-based therapy, which may be the next quantum leap in regenerative medicine, a field focused on replacing or regenerating human cells, tissues or organs to restore or establish normal function. The published article identifies a unique interplay among the UCB-sourced allograft, host mesenchymal stem cells (MSCs) and their secreted exosomes that influences tissue regeneration in vivo.

"Based on these findings, clinical suitability of cord blood allografts must account for the non-cellular components while evaluating their promising role in tissue regeneration," stated Jones. "This study brings us one step closer to optimizing the bounds of regenerative medicine."

In the study, the identity of BioBurst Rejuv was established by use of flow cytometry, a mass spectrometry-based proteomic approach and protein multiplexing testing. Microbiological screenings, graft-versus-host disease testing, and endotoxin values were used to assess safety and purity.

The non-cellular components were found to be effective in promoting cell proliferation and migration and in neutralizing redox stress in vivo, and MSCs treated with a UCB-sourced allograft demonstrated boosted potential for tissue regeneration. The UCB-sourced allograft was also found to be functionally stable for up to two years when stored under optimal conditions.

"Burst Biologics has been pioneering research and product development since its inception," said Jones. "We are committed to exploring the unique attributes of UCB components and developing future applications that improve lives."

Credit: 
Future Science Group

Following a healthy plant-based diet may lower type 2 diabetes risk

People who follow predominantly plant-based diets with greater adherence may have a lower risk of developing type 2 diabetes than those who follow these diets with lower adherence, according to a new meta-analysis from Harvard T.H. Chan School of Public Health. The researchers also found that the association was stronger for people whose diets emphasized healthy plant-based foods.

The study will be published online July 22, 2019 in JAMA Internal Medicine.

"Plant-based dietary patterns are gaining popularity in recent years, so we thought it was crucial to quantify their overall association with diabetes risk, particularly since these diets can vary substantially in terms of their food composition," said first author Frank Qian, who conducted the research as a masters student in the Department of Nutrition.

While previous studies have suggested that plant-based dietary patterns may help lower type 2 diabetes risk, there has been a lack of research analyzing the overall body of epidemiological evidence. According to the researchers, the current study provides the most comprehensive evidence to date for the association between adherence to healthy plant-based diets and reduced type 2 diabetes risk.

The researchers identified nine studies that looked at this association and were published through February 2019. Their meta-analysis included health data from 307,099 participants with 23,544 cases of type 2 diabetes. They analyzed adherence to an "overall" predominantly plant-based diet, which could include a mix of healthy plant-based foods such as fruits, vegetables, whole grains, nuts, and legumes, but also less healthy plant-based foods such as potatoes, white flour, and sugar, and modest amounts of animal products. The researchers also looked at "healthful" plant-based diets, which were defined as those emphasizing healthy plant-based foods, with lower consumption of unhealthy plant-based foods.

The researchers found that people with the highest adherence to overall predominantly plant-based diets had a 23% lower risk of type 2 diabetes compared to those with weaker adherence to the diets. They also found that the association was strengthened for those who ate healthful plant-based diets.

One mechanism that may explain the association between predominantly plant-based diets and reduced type 2 diabetes risk, according to the researchers, is that healthy plant-based foods have been shown to individually and jointly improve insulin sensitivity and blood pressure, reduce weight gain, and alleviate systemic inflammation, all of which can contribute to diabetes risk.

"Overall, these data highlighted the importance of adhering to plant-based diets to achieve or maintain good health, and people should choose fresh fruits and vegetables, whole grains, tofu, and other healthy plant foods as the cornerstone of such diets," said senior author Qi Sun, associate professor in the Department of Nutrition.

Credit: 
Harvard T.H. Chan School of Public Health

Astronomers make first calculations of magnetic activity in 'hot Jupiter' exoplanets

image: This illustration shows a hot Jupiter orbiting so close to a red dwarf star that the magnetic fields of both interact, producing activity on the star. Astrophysicists have used that activity to calculate field strengths in four hot Jupiter star-and-planet systems.

Image: 
NASA, ESA and A. Schaller (for STScI)

Gas-giant planets orbiting close to other stars have powerful magnetic fields, many times stronger than our own Jupiter, according to a new study by a team of astrophysicists. It is the first time the strength of these fields has been calculated from observations.

The team, led by Wilson Cauley of the University of Colorado, also includes associate professor Evgenya Shkolnik of Arizona State University's School of Earth and Space Exploration. The other researchers are Joe Llama of Northern Arizona University and Antonino Lanza of the Astrophysical Observatory of Catania in Italy. Their report was published July 22 in Nature Astronomy.

"Our study is the first to use observed signals to derive exoplanet magnetic field strengths," says Shkolnik. "These signals appear to come from interactions between the magnetic fields of the star and the tightly orbiting planet."

Many worlds

More than 3,000 exoplanet systems containing over 4,000 planets have been discovered since 1988. Many of these star systems include what astronomers call "hot Jupiters." These are massive gaseous planets presumed to be like the Sun's Jupiter but orbiting their stars at close distances, typically about five times the star's diameter, or roughly 20 times the Moon's distance from Earth.

Such planets travel well inside their star's magnetic field, where interactions between the planetary field and the stellar one can be continual and strong.

Previous studies, the team says, have placed upper limits on exoplanet magnetic fields, for example from radio observations or derived purely from theory.

"We combined measurements of increased stellar emission from the magnetic star-planet interactions together with physics theory to calculate the magnetic field strengths for four hot Jupiters," says lead author Cauley.

The magnetic field strengths the team found range from 20 to 120 gauss. For comparison, Jupiter's magnetic field is 4.3 gauss and Earth's field strength is only half a gauss, although that is strong enough to orient compasses worldwide.

Triggering activity

The astrophysicists used telescopes in Hawaii and France to acquire high-resolution observations of emission from ionized calcium (Ca II) in the parent stars of the four hot Jupiters. The emission comes from a star's hot, magnetically heated chromosphere, a thin layer of gas above the cooler stellar surface. The observations let the team calculate how much energy was being released in the stars' calcium emission.

Says Shkolnik, "We used the power estimates to calculate magnetic field strengths for the planets using a theory for how the planets' magnetic fields interact with the stellar magnetic fields."

Cauley explains, "Magnetic fields like to be in a state of low energy. If you twist or stretch the field like a rubber band, this increases the energy stored in the magnetic field." Hot Jupiters orbit very close to their parent stars and so the planet's magnetic field can twist and stretch the star's magnetic field.

"When this happens," Cauley says,"energy can be released as the two fields reconnect, and this heats the star's atmosphere, increasing the calcium emission."

Probing deep

Astrophysicists have suspected that hot Jupiters would, like our own Jupiter, have magnetic fields produced deep inside them. The new observations provide the first probe of the internal dynamics of these massive planets.

"This is the first estimate of the magnetic field strengths for these planets based on observations, so it's a huge jump in our knowledge," Shkolnik notes. "It's giving us a better understanding of what is happening inside these planets."

She adds that it should also help researchers who model the internal dynamos of hot Jupiters. "We knew nothing about their magnetic fields -- or any other exoplanet magnetic fields -- and now we have estimates for four actual systems."

Surprisingly powerful

The field strengths, the team says, are larger than one would expect considering only the rotation and age of the planet. The standard dynamo theory of planetary magnetic fields predicts field strengths for the sampled planets that are much smaller than what the team found.

Instead, the observations support the idea that planetary magnetic fields depend on the amount of heat moving through the planet's interior. Because they are absorbing a lot of extra energy from their host stars, hot Jupiters should have larger magnetic fields than planets of similar mass and rotation rate.

"We are pleased to see how well the magnitude of the field values corresponded to those predicted by the internal heat flux theory," says Shkolnik. "This may also help us work toward a clearer understanding of magnetic fields around temperate rocky planets."

Credit: 
Arizona State University

Studies show the influence of environment on the evolution of weeds

image: Shown is the invasive weedy plant Lonicera japonica Thunb., commonly known as Japanese honeysuckle. This plant, a member of the Caprifoliaceae family, is native to east Asia and introduced into many countries around the world, including the US. It is a perennial vine that produces fragrant flowers and overtops other plants, reducing light available for native species. Increases in atmospheric CO2 may favor introduced weeds such as L. japonica. This picture was taken at the edge of a patch of woodland in a park in Newark, Delaware.

Image: 
Steven J. Franks

WESTMINSTER, Colorado - July 22, 2019 - Rapid increases in herbicide resistance show that weeds can undergo important genetic changes over very brief periods of time. But herbicide use isn't the only factor influencing the evolution of weeds.

An article featured in the journal Invasive Plant Science and Management shows climate and elevated levels of carbon dioxide (CO2) are also influencing how weeds evolve and may actually contribute to the development of herbicide resistance.

After a review of the available literature, the authors compiled several key findings about the role climate plays in weed evolution:

Evidence suggests elevated CO2 may contribute to weeds developing greater competitive abilities and resistance to herbicides.

Adaptive evolution is likely common among weeds due to the combination of two factors: the strong selective pressures exerted by changes in climate and the unique characteristics of weed populations, including short lifecycles, strong dispersal abilities and ample genetic variation.

Weed evolution is influenced by both the direct effects of climate change on the environment, as well as its many indirect effects, such as changing fire patterns, new crop introductions and altered herbicide effectiveness.

Weed traits - such as growth rates and lifecycle events - have been found to vary predictably with variations in climate. In addition, drought and elevated CO2 have been observed to cause genetic and phenotypic changes within individual weed populations.

The authors say further research is needed to close important knowledge gaps and to further explore the relationship between climate and herbicide resistance.

Credit: 
Cambridge University Press

New technique helps create more personalized therapies for people with advanced cancers

UCLA RESEARCH ALERT

FINDINGS

Being able to identify targets for adoptive cell therapies is one of the first steps in developing personalized treatments for people with hard-to-treat cancers. However, predicting whether a patient will have an immune response to a particular abnormal protein, caused by mutations, that serves as a new antigen (neoantigen) can be challenging. Using an ultra-sensitive, high-throughput isolation technology (termed imPACT Isolation Technology®) designed to isolate neoepitope-specific T cells, UCLA researchers were able to characterize and identify the neoantigens driving the antitumor responses in a patient treated with anti-PD-1 blockade and to isolate the T cell receptors responsible for this effect.

BACKGROUND

Using immune checkpoint inhibitors to treat people with metastatic melanoma has helped transform the way people with the deadliest skin cancer are treated. Despite its success, there are still many people who do not benefit from the treatment. Up until now, adoptive cell therapy, which involves extracting and harvesting T cells from a patient and engineering them in the laboratory, has targeted shared antigens. That restricts the number of people who can potentially be treated with the therapy, because not every cancer has the same antigen that needs to be targeted. Researchers are working to improve methods to identify new targets for these therapies in hopes of developing more effective and personalized therapies.

METHOD

Researchers analyzed T cell responses in two patients with advanced melanoma, one who responded to anti-PD-1 therapy and one who did not. Using samples collected before and during treatment, the team isolated the T cells specifically recognizing the mutations on the tumor by using the imPACT Isolation Technology® developed by PACT Pharma. The technology allows researchers to identify the T cells, and their T cell receptors, that have the ability to detect mutations. After identifying the T cell receptors, the researchers re-introduced them into T cells from peripheral blood using a non-viral genome engineering method to generate new neoantigen-specific T cells, which were used to kill melanoma cells from the same patient.

"In the setting of patients treated with anti-PD-1, we identified for the first time, in a high-throughput manner, which neoantigen mutations in the tumor are being targeted by T cells. More importantly, we were able to identify their T cell receptors and demonstrate that they can actually specifically kill the tumor cells," said lead author Cristina Puig-Saus, PhD, associate project scientist in hematology/oncology at the David Geffen School of Medicine at UCLA. "We hope that a better understanding of the T cell responses that occur after immune checkpoint blockade will guide the design of personalized adoptive T cell therapies."

IMPACT

Uncovering new ways to identify targets for immunotherapies significantly increases the number of patients who will benefit from immunotherapy. The imPACT Isolation Technology® allows researchers to identify the mutation-specific T cells and understand which mutations are inducing responses against tumors.

AUTHORS

Lead author is Cristina Puig-Saus, PhD, an associate project scientist in hematology/oncology at the David Geffen School of Medicine at UCLA. Senior author is Antoni Ribas, MD, PhD, director of the tumor immunology program at the UCLA Jonsson Comprehensive Cancer Center and professor of medicine in the David Geffen School of Medicine at UCLA. Thirty-three additional authors are listed in the abstract.

JOURNAL

The research was featured at the American Association for Cancer Research special conference on immune cell therapies for cancer.

Credit: 
University of California - Los Angeles Health Sciences

Plasticizer interaction with the heart

image: New preclinical research finds acute exposure to MEHP, a common plasticizer used in medical equipment, increases risk for alternans and arrhythmias, disruptions in heart rhythm. The images above show changes in heart rhythm, measured by slowed epicardial conduction velocity, enhanced action potential prolongation and impaired sinus node activity.

Image: 
Circulation: Arrhythmia and Electrophysiology, "Plasticizer Interaction with the Heart," Volume 12, Issue 7, DOI: 10.1161/CIRCEP.119.007294

Calling an ambulance during an emergency, emailing a breaking news or journal article before a 5 p.m. deadline, and maintaining conditions during the fifth week of a six-week lab study without altering the light or temperature all require electricity, and all translate into time, money and lives. During critical moments, we appreciate the tiny particles and ions in electric currents that power our phones, computers or laboratory equipment. We seldom think about the speed of these connections or potential disruptors when conditions are stable. The same applies to the electric currents, or electrophysiology, of our heart.

Arrhythmias, irregular heart rhythms, affect millions of Americans but can be controlled with routine screenings and preventive care. In critical settings, such as an ICU, doctors monitor arrhythmias, which range from a patient's heart beating too slow to too fast. Helping a patient maintain a steady heart rate, especially if they are at risk for cardiac complications, may support a faster recovery, shorter hospital stays, reduced health care costs and improved health outcomes, such as avoiding complications from heart failure or stroke.

A preclinical study, entitled "Plasticizer Interaction With the Heart," appears in the July issue of Circulation: Arrhythmia and Electrophysiology and examines the role plastic exposure, akin to exposure in a medical setting, has on heart rhythm disruptions and arrhythmias.

The research team, led by researchers at Children's National Health System, discovered increased risks for irregular heart rhythms after exposing intact, in vitro heart models to 30 minutes of mono-2-ethylhexyl phthalate (MEHP), a metabolite from Di-2-ethylhexyl phthalate (DEHP). DEHP is a chemical commonly used to make plastics pliable in FDA-approved medical devices. This phthalate accounts for 40% of the weight of blood storage bags and up to 80% of the weight of tubes used in an intensive care setting, such as for assisted feeding or breathing, and for catheters used in diagnostics or to conduct minimally invasive cardiac procedures.

The team chose to study the heart's reaction to 60 μM of MEHP, a level comparable to stored blood levels of MEHP observed in pediatric patients and in neonatal exchange transfusion procedures. They found that 30-minute exposure to MEHP slowed atrioventricular conduction and increased the atrioventricular node effective refractory period. MEHP prolonged action potential duration, enhanced action potential triangulation, increased the ventricular effective refractory period and slowed epicardial conduction velocity, possibly due to inhibition of the Nav1.5 sodium current.

"We chose to study the impact of MEHP exposure on cardiac electrophysiology at concentrations that are observed in an intensive care setting, since plastic medical products are known to leach these chemicals into a patient's bloodstream," says Nikki Gillum Posnack, Ph.D., a principal investigator with the Sheikh Zayed Institute for Pediatric Surgical Innovation at Children's National and an assistant professor of pediatrics at the George Washington University School of Medicine and Health Sciences. "In critical conditions, a patient may have a blood transfusion, require extracorporeal membrane oxygenation, undergo cardiopulmonary bypass or require dialysis or intravenous fluid administration. All of these scenarios can lead to plastic chemical exposure. Our research team wants to investigate how these plastic chemicals can impact cardiac health."

In this review, Dr. Posnack's team mentions one reason for the observed changes in the preclinical heart models may be due to the structure of phthalates, which resemble hormones and can interfere with a variety of biological processes. Due to their low molecular weight, these chemicals can interact directly with ion channels, nuclear receptors and other cellular targets.

Existing epidemiological research shows associations between exposure to phthalates and adverse health outcomes, including metabolic disturbances, reproductive disorders, inflammatory conditions, neurological disorders and cardiovascular disease. This is the first study to examine the link between cardiac electrophysiology in intact hearts and exposure to MEHP, comparable to levels observed in an ICU.

Dr. Posnack's team previously found DEHP reduced cellular electrical coupling in cardiomyocyte cell models, which slowed conduction velocity and produced an arrhythmogenic phenotype. A microarray analysis found heart cells treated with DEHP led to mRNA changes in genes responsible for contracting and calcium handling. Another preclinical study showed DEHP altered nervous system regulation of the cardiovascular system. Future studies to expand on this research may include the use of larger preclinical models or human assessments. For the latter, stem cell-derived cardiomyocytes can be used to compare the safety profile of plastic chemicals with potential alternatives.

An accompanying editorial, entitled "Shocking Aspects of Nonconductive Plastics," authored by cardiology researchers at the University of Wisconsin-Madison, puts this novel research into perspective. Like Dr. Posnack, the team notes that while the clinical impact plasticizers have on heart health still needs to be determined, the work contributes to compelling data among multiple researchers and shows DEHP and MEHP are not inert substances.

"Toxic plasticizers in children's toys and baby products hit public headlines 20 years ago, but exposure to these compounds is up to 25x higher in patients undergoing complex medical procedures," write the University of Wisconsin-Madison researchers. "We readily (and unknowingly) administer these compounds, and at times in high quantity, to some of our most vulnerable patients. This work highlights the need for further investigation into short and long-term plasticizer exposure on cardiac electrophysiology."

The Agency for Toxic Substances and Disease Registry (ATSDR), part of the Centers for Disease Control and Prevention (CDC), released a public health statement about DEHP in 2002, noting more research in humans is needed to issue formal warnings against this phthalate.

ATSDR states there is no conclusive evidence about the adverse health effects of children exposed to DEHP in a medical setting, such as procedures that require the use of flexible tubing to administer intravenous fluids or medication. However, the CDC statement includes limits of DEHP exposure, based on preclinical models, used to guide upper DEHP limits in consumer products, including food packaging, drinking water, and air quality in the workplace.

"It's important to note that this was a preliminary study performed on an ex vivo model that is largely resilient to arrhythmias", says Rafael Jaimes III, Ph.D., the first author of the study and a senior scientist at Children's National. "Due to the nature of the design, it was somewhat alarming that we found such significant effects. I predict that electrophysiological disturbances will be more pronounced in models that more closely resemble humans. These types of models should absolutely be studied."

"And, importantly, our results may incentivize the development and use of new products that are manufactured without phthalates," Dr. Posnack adds.

These questions are powering Dr. Posnack and her team through a decade-long, multi-institution research investigation to understand how plastic chemicals and medical device biomaterials can impact cardiac health.

Credit: 
Children's National Hospital

UTSA reduces seizures by removing newborn neurons

image: UTSA researchers removed new neurons that formed following a seizure in mice. The team then monitored seizure activity and observed that the treated mice experienced a 65% reduction in seizures compared to untreated mice.

Image: 
UTSA

Epileptic seizures happen in one of every 10 people who have experienced a traumatic brain injury (TBI). However, new research at The University of Texas at San Antonio (UTSA) has uncovered an innovative approach to possibly slow the progression of epilepsy. Researchers at UTSA have successfully removed new neurons that have developed after a brain injury to reduce seizures in mice. They believe that the technique could potentially reduce post-injury epilepsy.

"We already know that new neurons contribute to epilepsy, but we didn't know if we could target them post-injury, after seizures have already started," said Jenny Hsieh, Semmes Foundation Chair and Professor in Cell Biology and Director of the UTSA Brain Health Consortium.

"Having the ability to do this would be clinically relevant, because the early warning signs of epilepsy are the seizures themselves."

People who experience traumatic brain injury as a consequence of gun violence or automobile accidents are at higher risk of developing seizures. During a seizure there is a sudden abnormal electrical disturbance in the brain that results in various symptoms: strange movement of the head, body, arms, legs or eyes such as stiffening or shaking. Unresponsiveness and staring, chewing, lip smacking and even experiencing strange visual images are also indicative of a seizure.

According to the U.S. Centers for Disease Control and Prevention, TBI-related emergency department visits, hospitalizations and deaths have increased by 53%. Those who suffer a seizure a week after suffering head trauma are 80% more likely to suffer another post-trauma epileptic attack.

Seizures usually happen where there is a scar in the brain as a consequence of the injury. New neurons generated following a brain injury often do not migrate or develop normally. If left untreated, these cells may contribute to the development of epilepsy.

Hsieh and her colleagues at The University of Texas at San Antonio systematically removed new neurons that formed during the eight weeks following a seizure in mice. The UTSA team then monitored seizure activity in the mice and observed that the treated mice experienced a 65 percent reduction in seizures compared to untreated mice. This effect required more than four weeks of continuous treatment.

"Now we know we can remove new neurons after the initial seizures," said Hsieh. "While we cannot stop the first seizures, we can try to prevent the secondary seizures. This is very exciting and may lead to new therapeutic strategies."

Although these findings support a role for new neurons in the onset of epilepsy, they also suggest additional factors are involved.

"We found that once the treatment stopped, the seizure reduction was not permanent. This could be due to abnormal changes in the epileptic brain, such as chronic inflammation or reactive astrocytes, affecting the development of new neurons. We're looking into those possibilities now," said Hsieh.

Credit: 
University of Texas at San Antonio

Researchers succeed in increasing the sensitivity and reducing the noise of an accelerometer

image: The illustration shows a schematic image of the proposed single-axis MEMS capacitive accelerometer. Input acceleration can be sensed by monitoring the capacitance change between the proof mass and the fixed electrode. The device is realized by the multiple layers made of electroplated gold. We utilize the third (M3) and fourth (M4) layers for the spring structure, and the M4 and fifth (M5) layers for the proof mass structure.

Image: 
Sensors and Materials, Daisuke Yamane

A significant increase in demand for accelerometers is expected as the markets for consumer electronics, such as smartphones, and for social infrastructure monitoring applications expand. Such miniaturized, mass-producible accelerometers are commonly developed with silicon MEMS technology, for which the fabrication process is well established.

In the design of accelerometers, there is a trade-off between size reduction and noise reduction, because the mechanical noise, dominated by Brownian noise, is inversely proportional to the mass of the moving electrode, called the proof mass. Moreover, for capacitive accelerometers, the sensitivity is generally proportional to the accelerometer size, so there is also a trade-off between size reduction and sensitivity. Since high-resolution accelerometers require both low noise and high sensitivity, it has been difficult for conventional silicon-based MEMS accelerometers to detect input accelerations at the 1 μG level.
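For context, the standard lumped-element result (a textbook formula assumed here, not one quoted from the study) gives the Brownian-noise-equivalent acceleration of a proof mass $m$ with damping coefficient $b$ at temperature $T$ as

$$a_n = \frac{\sqrt{4 k_B T b}}{m} \quad \left[\mathrm{m\,s^{-2}/\sqrt{Hz}}\right],$$

so, for a given damping, doubling the proof mass halves the noise floor. Packing more mass into the same chip footprint, as described below, therefore attacks the size-noise trade-off directly.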

Low noise and high sensitivity MEMS accelerometer

The research group, consisting of Tokyo Tech and NTT Advanced Technology Corporation, previously proposed a method to downscale the proof mass of MEMS accelerometers to less than one-tenth the conventional size by using gold. In this work, extending that achievement, they applied multi-layer metal structures to the proof mass and spring components and developed a low-noise, high-sensitivity MEMS accelerometer.

As shown in Fig. 1, they reduced the Brownian noise, which is inversely proportional to the proof mass, by increasing the mass per unit area through the use of multiple layers of gold in the proof mass structure.

Furthermore, they utilized the whole area of the 4 mm square chip by reducing the warpage of the proof mass, which enabled them to increase the capacitance sensitivity of the accelerometer. Figure 2 shows a chip photograph and scanning electron microscope images of the developed MEMS accelerometer.

With these results, the developed accelerometer achieved one hundred times or more the sensitivity, and one-tenth or less the noise, of conventional accelerometers of the same size, as shown in Fig. 3. Accordingly, the team confirmed that the accelerometer has the potential to detect input accelerations as low as 1 μG. The fabrication process uses semiconductor microfabrication and electroplating, so it should be possible to implement the developed MEMS structures on an integrated circuit chip. The proposed technology would therefore be useful for increasing the resolution of miniaturized accelerometers for general-purpose use.

An era of placing a large number of sensors on all things

Realizing an ultra-compact, high-resolution accelerometer would be a breakthrough for a variety of motion-sensing applications. Such an accelerometer could be applied to medical and healthcare technology, infrastructure monitoring, high-precision control of ultra-lightweight robots, mobile vehicle control, navigation in places where GPS cannot be used, and space environment measurement requiring ultra-low acceleration sensing.

In the near future, an era of placing large numbers of sensors on all things is expected to arrive, and when it does, miniaturizing high-resolution accelerometers will be extremely important, because accelerometer technology is fundamental to motion sensing.

Credit: 
Tokyo Institute of Technology

Research shows high prices of healthy foods contribute to malnutrition worldwide

Poor diets are now the leading risk factor for the global burden of disease, accounting for one-fifth of all deaths worldwide. While the causes of poor diets are complex, new research finds the affordability of more nutritious foods is an important factor.

A new study by researchers at the International Food Policy Research Institute (IFPRI) is the first to document that the affordability of both healthy and unhealthy foods varies significantly and systematically around the world. The study also suggests that these relative price differences help explain international differences in dietary patterns, child stunting and overweight prevalence among adults.

Past research has only studied relative price differences in specific countries, mostly in the context of the relative cheapness of calorie-dense processed foods as a risk factor for obesity in upper- and middle-income countries. But until now, no studies have examined the structure of relative price differences globally, or how these price structures might contribute to undernutrition and obesity outcomes.

"Our research shows that most healthy foods are substantially more expensive in poorer countries," says IFPRI Senior Research Fellow and study co-author Derek Headey. "But while healthier foods become cheaper over the course of development, so too do unhealthy processed foods, like soft drinks."

The study, "The relative caloric prices of healthy and unhealthy foods differ systematically across income levels and continents," co-authored by IFPRI's Headey and Harold Alderman, was published in The Journal of Nutrition. Using national price data for 657 standardized food products in 176 countries collected under The International Comparison Program (ICP), the authors develop a novel measure of how costly it is to diversify diets away from traditional calorie-dense staple foods such as bread, corn or rice. The study shows that higher caloric prices of a food predict lower consumption of that food and explores how those price differences might explain international differences in child stunting and adult obesity.

The study finds marked variations in the affordability of both healthy and unhealthy foods across different regions of the world, and at differing levels of development. In the world's poorest countries, healthy foods were often extremely expensive, especially nutrient-dense animal sourced foods, which are widely known to be effective in reducing stunting. Eggs and fresh milk, for example, are often 10 times as expensive as starchy staples. Another ultra-healthy food for kids - specialized infant cereals fortified with a wide range of extra nutrients - is sometimes 30 times as expensive as the nutrient-sparse traditional cereals more commonly fed to infants.

"Prior to this study, we already knew that the poorest children in the world weren't consuming enough of the really nutrient-dense foods that promote healthy growth and brain development", said Headey. "But now we have a better idea why: poor people also live in poor food systems. That combination of low incomes and high prices means they're simply not going to buy enough and eat enough of these nutrient-dense foods."

While poor child feeding practices are often attributed to limited nutritional knowledge in low income settings, the authors found that the high prices of nutrient-dense foods offered an alternative explanation of their low consumption. Even more strikingly, they find that higher prices of milk, eggs and fortified infant cereals predict higher rates of stunting. "The link between milk prices and stunting is especially strong," said Alderman, "which is entirely consistent with a whole body of evidence on the strong linkages between dairy consumption and child growth."

Although the study found that economic development tends to make healthy foods more affordable, that process also tends to make unhealthy foods cheaper. Sugar-rich soft drinks are relatively expensive in many low-income countries but have become inexpensive and widely consumed in middle- and upper-income settings.

Indeed, Headey and Alderman find that lower prices of soft drinks and sugar-rich snacks predict significant increases in overweight prevalence among adult populations. "Public health agencies in upper income countries have been concerned with the high consumption of sugar-rich foods for some time," said Alderman, "but our study shows that these products often become very affordable in middle income countries, and sometimes even in relatively poor countries where obesity rates are really on the rise."

The researchers noted that policymakers have several tools available to help make nutrient-rich foods relatively more affordable, including nutrition-sensitive agricultural investments that could make healthy foods cheaper, and taxation and regulation efforts - such as food labelling - to curb consumption of unhealthy foods.

"These findings raise an important agenda for future research: understanding why food prices vary across countries, and sometimes within them, and how best to change food prices in a way that leads to better diets and nutrition outcomes in rich and poor countries alike," Headey said.

Credit: 
International Food Policy Research Institute

Study sheds light on the darker parts of our genetic heritage

More than half of our genome consists of transposons, DNA sequences that are reminiscent of ancient, extinct viruses. Transposons are normally silenced by a process known as DNA methylation, but their activation can lead to serious diseases. Very little is known about transposons, but researchers in an international collaboration project have now succeeded for the first time in studying what happens when DNA methylation is lost in human cells. These findings provide new insight into how changes in DNA methylation contribute to diseases.

Even when our DNA is intact, the expression and behaviour of our genes can change. This can happen in various ways, including through DNA methylation, a chemical process which shuts off genes and other parts of our genome, such as transposons.

Transposons - jumping genes - are sometimes referred to as the dark part of our genome and consist of transposable DNA sequences that can cause genetic change, for example if they are integrated into a gene. These transposons are often silenced during foetal development, specifically by DNA methylation.

"Sometimes, however, DNA methylation is disrupted and studies have shown that this is significant in certain cancer tumours and in some neuropsychiatric diseases. DNA methylation is used as a target for therapy in certain cancer types, such as leukaemia, but we still lack knowledge about why this is effective and why it only works for certain types of cancer", says Johan Jakobsson, professor at Lund University and leader of the study, which also included researchers from the Max Planck Institute for Molecular Genetics and Karolinska Institutet. The findings are now published in Nature Communications.

In fact, we know very little about the role of transposons in our DNA. One theory held by the researchers in Lund is that DNA methylation silences the parts of the genome that are not used, but only now has it been possible to study what happens when this process is removed from human cells.

The researchers used the CRISPR/Cas9 technique to successfully shut down DNA methylation in human neural stem cells in the laboratory.

"The results were very surprising. If you shut down DNA methylation in mouse cells, they don't survive. But when DNA methylation was shut down in the human nerve stem cells, they survived and a specific set of transposons were activated. These transposons in turn affected many genes that are important in the development of the nerve cells", says Johan Jakobsson.

Johan Jakobsson thinks that the results open up potential for a completely new understanding of how a loss of DNA methylation affects our genome in various diseases, but he also emphasises that the study was conducted on cultured cells in a laboratory. Now the researchers want to move forward and see what happens if they shut down methylation in cancer cells that are affected by DNA methylation, for example in glioblastoma.

Credit: 
Lund University

Flexible user interface distribution for ubiquitous multi-device interaction

image: KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and more dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate a shift from the current single-device paradigm to a multiple one, which enables users to utilize mobile apps in ways previously unthinkable.

Image: 
KAIST

KAIST researchers have developed mobile software platform technology that allows a mobile application (app) to be executed simultaneously and more dynamically on multiple smart devices. Its high flexibility and broad applicability can help accelerate a shift from the current single-device paradigm to a multiple one, which enables users to utilize mobile apps in ways previously unthinkable.

Recent trends in mobile and IoT technologies in this era of 5G high-speed wireless communication have been hallmarked by the emergence of new display hardware and smart devices such as dual screens, foldable screens, smart watches, smart TVs, and smart cars. However, the current mobile app ecosystem is still confined to the conventional single-device paradigm in which users can employ only one screen on one device at a time. Due to this limitation, the real potential of multi-device environments has not been fully explored.

A KAIST research team led by Professor Insik Shin from the School of Computing, in collaboration with Professor Steve Ko's group from the State University of New York at Buffalo, has developed mobile software platform technology named FLUID that can flexibly distribute the user interfaces (UIs) of an app to a number of other devices in real time without needing any modifications. The proposed technology provides single-device virtualization, and ensures that the interactions between the distributed UI elements across multiple devices remain intact.
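As a purely hypothetical illustration of the single-device-virtualization idea (the class and function names below are invented for this sketch; they are not FLUID's actual interfaces), a platform can keep a placement table mapping UI elements to devices: the app keeps issuing ordinary render calls, while the platform decides where each element actually appears and relays remote input back unchanged.

```kotlin
// Hypothetical sketch of FLUID-style UI distribution (invented API, not FLUID's).
// The app renders as if on one device; the platform decides placement.

data class UiElement(val id: String, val payload: String)

class Device(val name: String) {
    fun display(e: UiElement) = println("[$name] showing ${e.id}: ${e.payload}")
}

class Distributor(private val local: Device) {
    // Placement table: which device currently hosts each UI element.
    private val placement = mutableMapOf<String, Device>()

    // Migration is a platform-level remapping; the app itself is not modified.
    fun migrate(elementId: String, target: Device) {
        placement[elementId] = target
    }

    // The app's single render entry point, preserving the one-device illusion.
    fun render(e: UiElement) = (placement[e.id] ?: local).display(e)

    // Input arriving on a remote device is relayed to the app's handler unchanged.
    fun onInput(event: String, handler: (String) -> Unit) = handler(event)
}

fun main() {
    val phone = Device("phone")
    val tablet = Device("tablet")
    val fluid = Distributor(local = phone)

    val video = UiElement("video", "live stream view")
    val chat = UiElement("chat", "chat window")

    fluid.render(video)           // both elements start on the phone
    fluid.render(chat)

    fluid.migrate("chat", tablet) // move the chat UI to the tablet
    fluid.render(chat)            // same app call, now displayed remotely

    fluid.onInput("tap:send") { e -> println("app handles '$e' as if local") }
}
```

The design point mirrored here is that distribution happens entirely below the app: migrating an element only updates the platform's placement table, which is why unmodified legacy apps can gain multi-device behavior.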

This flexible multimodal interaction can be realized in diverse ubiquitous user experiences (UX), such as live video streaming and chatting apps, including YouTube, LiveMe, and AfreecaTV. FLUID can ensure that the video is not obscured by the chat window by distributing and displaying them separately on different devices, which lets users enjoy the chat function while watching the video at the same time.

In addition, the UI for the destination input on a navigation app can be migrated into the passenger's device with the help of FLUID, so that the destination can be easily and safely entered by the passenger while the driver is at the wheel.

FLUID can also support 5G multi-view apps - the latest service that allows sports or games to be viewed from various angles on a single device. With FLUID, the user can watch the event simultaneously from different viewpoints on multiple devices without switching between viewpoints on a single screen.

PhD candidate Sangeun Oh, who is the first author, and his team implemented the prototype of FLUID on the leading open-source mobile operating system, Android, and confirmed that it can successfully deliver the new UX to 20 existing legacy apps.

"This new technology can be applied to next-generation products from South Korean companies such as LG's dual screen phone and Samsung's foldable phone and is expected to embolden their competitiveness by giving them a head-start in the global market." said Professor Shin.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

NASA sees Tropical Storm Danas track through the East China Sea

image: On July 19, 2019, the MODIS instrument aboard NASA's Aqua satellite provided a visible image of Tropical Storm Danas in the East China Sea.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA's Aqua satellite provided a visible image of Tropical Storm Danas moving through the East China Sea on July 19, 2019.

On July 19, the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured a visible image of Danas showing a large storm in the East China Sea, extending northeast into the Yellow Sea, west of the Korean Peninsula. The MODIS image also showed that the strongest bands of thunderstorms were east of the storm's center of circulation.

At 11 a.m. EDT (1500 UTC), the center of Danas was located near latitude 32.3 degrees north and longitude 125.1 degrees east, about 266 nautical miles south-southwest of Kunsan Air Base, South Korea. Danas was moving to the north-northeast and had maximum sustained winds near 45 knots (52 mph/83 kph).

The Joint Typhoon Warning Center expects Danas to approach the southwestern coast of South Korea by 11 p.m. EDT (0300 UTC on July 20); it is forecast to weaken due to frictional effects as it moves over land, with significant weakening forecast after landfall.

Credit: 
NASA/Goddard Space Flight Center