
New immunotherapy for lung cancer shows promise

image: In a clinical trial, Mark Rubinstein, Ph.D., (left) and John Wrangle, M.D., used two drugs that have never been combined in humans before to slow the progression of lung cancer.

Image: 
Sarah Pack, Medical University of South Carolina

In a groundbreaking development, results from a recent clinical trial to treat lung cancer show that a novel immunotherapy combination is surprisingly effective at controlling the disease's progression. The study, published April 4 in the journal The Lancet Oncology, focused on non-small cell lung cancer, which is the most common form of lung cancer.

Immunologist John Wrangle, M.D., of the Hollings Cancer Center at the Medical University of South Carolina, said it's a promising therapy that can be delivered in an outpatient setting. "People don't talk about 'curing' patients with metastatic lung cancer. We now get to flirt with the idea for certain patients using immunotherapy. And at the very least we have a significant proportion of patients enjoying prolonged survival even if we can't call them 'cured'," he said.

He, along with his colleague Mark Rubinstein, Ph.D., also of the Hollings Cancer Center, designed a clinical trial that started in 2016.

Patients with metastatic non-small cell lung cancer will always progress after chemotherapy, so most patients go on to be treated with immunotherapy, a type of therapy that uses the body's immune system to fight cancer. One class of immunotherapeutic drugs is known as "checkpoint" inhibitors, as they target checkpoints in immune system regulation to allow the body's natural defenses, such as white blood cells, to more effectively target the cancer.

Rubinstein said checkpoint therapies work by cutting the brake cables on the white blood cells that are inherently able to kill tumor cells. "Tumor cells often produce suppressive factors which essentially turn the brakes on tumor-killing white blood cells. What's unique about the therapy that we're testing is that in addition to cutting the brake cables on white blood cells, we're providing fuel to them so that they can more effectively kill cancer cells."

Wrangle and Rubinstein's therapy is a combination of a checkpoint drug, nivolumab, with a new and powerful immune stimulation drug, ALT-803. "What's unique about our trial is that it's two completely different types of drugs that have never been combined in humans before, and the trial demonstrated that these drugs can be safely administered, and also, there's evidence that it may help patients where checkpoint therapy is not good enough alone," said Rubinstein.

Patients who have stopped responding to checkpoint therapy may be helped significantly by adding ALT-803. Pre-clinical studies have shown that ALT-803 activates the immune system to mobilize lymphocytes against tumor cells and could potentially serve as an important component in combination treatments. Of the 21 patients treated, nine previously either had stable disease or responded to single-agent immunotherapy before becoming resistant to this treatment. Of these nine patients, 100 percent either had stable disease or had a partial response to the treatment used in this study.

"We can reassert control, at least in terms of stable disease, in essentially everybody we've treated so far," Wrangle said.

This novel combination is a huge step forward in cancer treatment. "Whereas for decades the modalities of therapy were surgery, radiation, and chemotherapy, the last decade has brought targeted therapy, and more recently, immunotherapy. It fundamentally alters the balance of power between your body and your cancer," Wrangle said.

A lung cancer specialist, Wrangle said 75 percent of lung cancer patients unfortunately are diagnosed at an incurable stage. "If 10 years ago you were talking about defining a five-year survival rate for metastatic non-small cell lung cancer patients, someone would laugh in your face. It would be a joke. It's just a very different time now," he said of the progress being made in the treatment of lung cancer.

He credits Rubinstein's work, instrumental in the development of ALT-803, in helping to make this advance. Research into ALT-803 started years ago while Rubinstein was doing his postdoctoral training at the Scripps Research Institute. It was there that he co-discovered the powerful immune system stimulator used in this trial. The stimulator, known as IL-15 complexes, is actually a combination of an immune system growth factor and its soluble receptor. IL-15 is a growth factor for certain kinds of white blood cells including natural killer cells and T cells.

Wrangle explained that natural killer cells are the chief arm of the innate immune response. "They are an important part of anti-cancer response that haven't been really talked about for a long time."

Wrangle said his collaboration with Rubinstein is a powerful example of what team science can accomplish.

"His ownership of the intellectual foundation of this therapy is manifest," Wrangle said of Rubinstein's contribution. "He is brilliant and just works furiously to help understand how we can develop this therapy."

Successful trials for the treatment of cancer are incredibly rare, he said. "There are very few people in human history who get the privilege of developing a new therapy for any human disease, much less cancer. Mark and I are now in this weird micro-club of folks who have developed the promise of a new therapy for cancer. That's such an amazing privilege to be able to do that," he said.

In contrast to other immunotherapies that require admission to a hospital, this new therapeutic combination can be administered in an outpatient setting. "The plan was to do it all as an outpatient therapy because inpatient therapy is just infeasible. My patients feel like they have the flu, but they go about their day, and it's totally manageable. That's the kind of revolutionary part with regard to this class of agent," Wrangle said.

Wrangle and Rubinstein are surprised and elated at the success demonstrated in their latest study. Wrangle said the landscape of oncology is "eyeball-deep in failed trials," so he and Rubinstein are hopeful this will provide more treatment options for patients. "The number of trials that work is minuscule, so was I surprised? I was ecstatic that it was working," he said.

Rubinstein agreed, adding that the success of the trial is a testament to the commitment, hard work and incredible insight that Wrangle has for making a difference for his patients. "He has an amazing vision for how to bridge the gap between basic and clinical research."

Wrangle said there's still plenty of work to do before the new combination of drugs can be used outside of a clinical trial. "We have a lot to figure out about how to use this therapy, and we need to treat a few hundred patients in order to get a better sense of how to refine the synergy of these two classes of drugs. That's just going to take time," he said.

Both of the researchers, who are in their early forties, said they were motivated by the need to give lung cancer patients better options. Wrangle plans to frame the study's publication. "I think this manuscript will be the thing that we have on the wall that we look back at 20 years from now, when we're still working together and discovering new therapies."

Credit: 
Medical University of South Carolina

Women who believe their sex drive changes can better cope with low libido

Women who believe that their sex drive will change over time are better able to handle difficulties with sexual desire, according to a study from the University of Waterloo.

Siobhan Sutherland, a PhD candidate, and Uzma S. Rehman, a professor of psychology at Waterloo, conducted the research. They sought to determine how a woman's belief about sexual desire as either changing or unchanging over time affects her ability to cope with desire difficulties, such as problems getting in the mood or maintaining arousal.

Their findings suggest that women who see their sexual desire as variable and rate themselves as likely to have problems with it are less likely to behave negatively by ignoring or avoiding the sexual problem. Conversely, they also found that women who believe that desire is unchanging are less likely to try to overcome sexual-desire problems when they arise. The participants did not have a diagnosis of any clinical sexual dysfunction.

"Women who believe that sexual desire levels remain the same may feel that challenges with sexual desire, such as low sex drive, are impossible to overcome and therefore they try to avoid or ignore the problem," said Sutherland, the study's lead author and a recipient of the prestigious Vanier Canada Graduate Scholarship.

In two online studies, the researchers randomly assigned readings designed to result in different beliefs about sexual desire. The participants were then asked to indicate how true it is that they have experienced or are likely to experience a problem with sexual desire. They then completed a test to measure how they handle desire problems.

"Our findings suggest that holding a belief that sexual desire changes over time may protect women against responding helplessly to their sexual problems," said Sutherland. "Gaining a better understanding of how women's beliefs affect their coping with sexual desire challenges can help to refine psychological interventions for women's problems with sexual desire."

The researchers surveyed 780 women of mixed ages and ethnicities in the U.S. The findings appear in the Journal of Sex and Marital Therapy.

Credit: 
University of Waterloo

Web-based program may help address underage drinking

A new study supports the use of a brief, web-based program alone and in combination with a parent campaign for preventing alcohol consumption among adolescents transitioning from middle school to high school.

The Journal of Addictions & Offender Counseling study found that brief, web-based personalized feedback alone or in combination with a brief parent brochure is more effective than traditional educational lectures in delaying drinking initiation among female ninth-grade students. Prevalence rates for alcohol use were 18.8%, 29.4%, and 66.3% in the web-based, combined, and traditional education groups, respectively.

For male ninth-grade students, prevalence rates for alcohol use were 21.6%, 21.1%, and 33.3% in the respective groups. Although investigators did not find favorable effects for the web-based or combined program compared with traditional education for male students, examination of drinking rates suggests that all three types of programs may be effective for male teens.

Credit: 
Wiley

A study by University of Tartu scientists: Drained peatlands emit laughing gas

image: The studied areas and their average measured laughing-gas emissions, shown on a map of the world's organic soils.

Image: 
Pärn et al., 2018.

A global study led by geographers at the University of Tartu has revealed that drained nitrogen-rich peatlands produce laughing gas, which degrades the ozone layer and warms the climate. To avoid this, swamp forests, fens and bogs need to be conserved.

Laughing gas (chemical formula N2O) lifts the mood and alleviates pain. An increase in the laughing-gas content of the atmosphere, however, is not a laughing matter. N2O is the main driver of stratospheric ozone depletion and one of the most significant greenhouse gases causing climate change. While the other main greenhouse gases - carbon dioxide and methane - have been thoroughly studied, laughing gas is the product of many complex processes of the nitrogen cycle and reasons behind an increase in its amount are largely unknown. Scientists at the University of Tartu studied the processes related to laughing gas production and published their results in the reputable scientific journals Nature Communications and Scientific Reports.

N2O is mainly produced as a by-product of two processes:

1) denitrification, i.e. the anoxic reduction of nitrate (NO3-) into laughing gas, which then moves into an oxygen-rich environment so that the process is not completed (safe atmospheric nitrogen, N2, is not formed);

2) nitrification, i.e. the oxidation of ammonium (NH4+) into nitrate, a by-product of which is laughing gas. (Both pathways are sketched below.)

An increase in the atmospheric concentration of laughing gas has mainly been associated with agriculture (e.g. nitrate fertilisers), but significant laughing gas emissions have also been noted in natural areas. For decades, under the leadership of professor Ülo Mander and with the participation of some of the world's top scientists, the researchers and students of the department of geography at the University of Tartu have studied the nitrogen cycle and the formation of greenhouse gases in various landscapes.
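In simplified form, the two pathways look like this (a schematic of textbook nitrogen-cycle chemistry, with intermediates abbreviated; not a figure from the study):

    denitrification (anoxic):  NO3-  ->  NO2-  ->  NO  ->  N2O  ->  N2
        (if oxygen intrudes before the final step, the chain stops at N2O, which escapes)

    nitrification (oxic):      NH4+  ->  NH2OH  ->  NO2-  ->  NO3-
        (with laughing gas, N2O, leaking out as a by-product)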

Peatlands were studied globally

The purpose of this research was to explain which environmental conditions (soil chemistry, soil moisture, soil temperature, weather, land use, vegetation, etc.) determine the emission of laughing gas from peatland and other types of organic soil into air. To accomplish this, between the years 2011 and 2017, the research team measured fluxes of greenhouse gases in the world's boreal, temperate, subtropical and rainy tropical climates, and the corresponding environmental conditions. The studied sites ranged from natural open fens and bogs, and swamp and bog forests to deep-drained grasslands and arable land. This resulted in the creation of the first database of laughing gas measurements from the world's organic soils.

To explain the mechanisms behind N2O production, we analysed the DNA of microbial communities in French Guiana's natural and drained fens dominated by Eleocharis sp. -- the first study of its kind at this level of detail in the world.

Emission of laughing gas depends on nitrates, moisture, temperature and cultivation

The analysis of N2O fluxes in the world's organic soils demonstrated that laughing gas emission is mainly associated with soil nitrate content, which is also one of the main active substances in mineral fertilisers. Nitrogen forms contained in manure are often also converted into nitrate. The studied organic soils were unfertilised but, due to earlier grazing and floodwaters, many of the areas under study had become nitrogen-rich. The connection with laughing gas can be explained by nitrification as well as by interrupted denitrification.

The measurements revealed that laughing gas emission is also connected to soil moisture. Laughing gas was primarily formed in intermediately (50%) moist soil; wet and dry soil largely lacked a flux of laughing gas. This indicates the completion of the denitrification process in the anoxic soil of natural peatlands. Natural peatlands are generally wet and no significant laughing gas is produced. The moisture of peat can be decreased significantly and, thus, drought or drainage can create the conditions necessary for the production of laughing gas.

In the aforementioned soil nitrogen processes, an important role is played by soil moisture, which in turn determines the soil's oxygen content. Intermediate soil moisture can leave the soil sufficiently oxygen-rich for nitrification and for the interruption of denitrification.

After the nitrate and moisture content of the soil, the third most important driver of laughing gas emission is soil temperature: generally, warmer soil produces more laughing gas. This is the reason why tropical peatlands have an especially high N2O potential. Cultivation was the fourth factor that stood out, i.e. peatlands with the soil overturned and vegetation removed emitted significantly more laughing gas than grasslands or forests.

The microbiological analysis of French Guiana's bogs confirmed the previously established global difference in the N2O fluxes of natural (wet) and drained bogs. Namely, it was found that drainage decreased the number of denitrifying and nitrogen-fixing microbes and increased the abundance and biodiversity of ammonium-oxidising microbes (specifically archaea). It was discovered that the production of laughing gas and N2 in natural and drained peatlands is essentially mediated by a different group of denitrifying microbes; however, in the natural bog, the nitrate-reducing microbes were more abundant and diverse.

In conclusion, it might be said that this study is the first work globally explaining the reasons behind the production of laughing gas measured in organic soils using a standardised methodology. The analysis of French Guiana's fens also demonstrated that the emission of laughing gas from a drained peatland is significantly higher than from natural fens under similar plant communities.

From these results, we conclude that to avoid an increase in the atmosphere's laughing gas content:

Keep natural fens, bogs and swamp forests wet;

Re-wet drained peatlands or keep them under forest or grass;

Avoid adding nitrogen to bog soils directly (fertilisation) or indirectly.

Credit: 
Estonian Research Council

Why do children tattle?

When young children see a peer cause harm, they often tattle to a caregiver. But why do children tattle? A new Social Development study reveals that even when children cannot be blamed for a transgression, they tattle about it nonetheless, likely because tattling may be a way for children to enforce norms on others and thus help maintain cooperation.

The research sheds new light on why young children tattle and raises the question of whether tattling should necessarily be discouraged in early childhood.

"Children's tattling is often viewed as an undesirable behavior. But at least under some circumstances, tattling can also be seen as evidence that children recognize important social norms and that they care enough about those norms to try and make sure that others follow them as well. This kind of norm enforcement is generally seen as a positive force in social groups," said co-author Dr. Amrisha Vaish, of the University of Virginia.

Credit: 
Wiley

Trap, contain and convert

image: Basalt rocks like these can trap CO2 gas and convert it into an inert mineral. New research from scientists at Washington University in St. Louis shows the rate at which the process takes place.

Image: 
Joe Angeles/Washington University in St. Louis

When fossil fuels are burned, carbon dioxide (CO2) is emitted. As the gas rises and becomes trapped in the atmosphere, it retains heat as part of a process called the greenhouse effect. The increased temperatures associated with the greenhouse effect can cause melting ice caps, higher sea levels and a loss of natural habitat for plant and animal species.

Environmental scientists trying to mitigate the effects of CO2 have experimented with injecting it deep underground, where it becomes trapped. These trials have mainly taken place in sandstone aquifers; however, the injected CO2 primarily remains present as a bubble that can return to the surface if there are fractures in the capping formation. A different approach using basalt flows as injection sites -- chiefly at the CarbFix site in Iceland and in Washington state -- has yielded dramatic results. Metals in basalt have the ability to transform CO2 into a solid inert mineral in a matter of months. While the new method holds promise, the underground injections can be imprecise and difficult to track and measure.

Now, new research by scientists at Washington University in St. Louis sheds light on what happens underground when CO2 is injected into basalt, illustrating precisely how effective the volcanic rock could be as an abatement agent for CO2 emissions. The research, led by Daniel Giammar, the Walter E. Browne Professor of Environmental Engineering in the School of Engineering & Applied Science, was conducted in collaboration with researchers at Pacific Northwest National Laboratory and Philip Skemer, associate professor of earth and planetary sciences in Arts & Sciences at Washington University.

"In a field site, you inject the carbon dioxide in, and it's a very open system," Giammar said. "You can't get a good constraint in terms of a capacity estimate. You know you made some carbonate from the CO2, but you don't really know how much. In the lab, we have well-defined boundaries."

To obtain a clearer, quantifiable look at carbon trapping rates in basalt, Giammar collected samples of the rock from Washington state, where researchers previously injected a thousand tons of CO2 gas deep underground into a basalt flow. He placed the rocks in small reactors that resemble slow cookers to simulate underground conditions, and then injected CO2 to test the variables involved in the carbonization process.

"We reacted it at similar pressure and temperature conditions to what they had in the field, except we do all of ours in a small sealed vessel," Giammar said. "So we know how much carbon dioxide went in and we know exactly where all of it went. We can look at the entire rock afterwards and see how much carbonate was formed in that rock. "

The lab kept the basalt in the pressurized reactors and followed up with 3-D imaging, analyzing the rocks' pore spaces at six weeks, 20 weeks and 40 weeks. The researchers could track, moment to moment, how the CO2 precipitated into mineral, the exact voids within the basalt that it filled, and the precise spots in the rock where the carbonization process began.

Once all of the data were collected and analyzed, Giammar and his team predicted that 47 kilograms of CO2 can be converted into mineral inside one cubic meter of basalt. This estimate can now be used as a baseline to scale up, quantifying how much CO2 can effectively be converted in entire areas of basalt flow.
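To get a feel for the scale involved, here is a rough back-of-envelope calculation in Python (a sketch: the 47 kg/m^3 capacity is the study's estimate, while the figure of roughly 5 billion metric tons of CO2 emitted by the U.S. per year is an outside assumption used only for illustration):

    # Scale up the lab-derived basalt storage capacity (illustrative only;
    # the US emissions total below is an assumed round number, not from the study).
    CAPACITY_KG_PER_M3 = 47            # kg of CO2 mineralized per cubic meter of basalt
    US_EMISSIONS_KG_PER_YEAR = 5.0e12  # assumed ~5 billion metric tons of CO2 per year

    basalt_volume_m3 = US_EMISSIONS_KG_PER_YEAR / CAPACITY_KG_PER_M3
    print(f"Basalt volume for one year of US emissions: {basalt_volume_m3:.2e} m^3")
    print(f"...roughly {basalt_volume_m3 / 1e9:.0f} cubic kilometers")

On those assumed numbers, a single year of U.S. emissions would require on the order of a hundred cubic kilometers of receptive basalt, which is why surveys of available flows matter.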

"People have done surveys of available basalt flows," Giammar said. "This data will help us determine which ones could actually be receptive to having CO2 injected into them, and then also help us to determine capacity. It's big. It's years and years worth of U.S. CO2 emissions."

Giammar's lab is currently sharing its results with colleagues at the University of Michigan, who will assist in developing a computational model to further help researchers to look for a solid fix for CO2 abatement. The Washington University researchers have also been invited to take part in the second phase of the U.S. Department of Energy's Carbon Storage Assurance Facility Enterprise, or CarbonSAFE, which investigates new technologies for CO2 abatement.

Credit: 
Washington University in St. Louis

Childhood exposure to flame retardant chemicals declines following phase-out

Exposure to flame retardants once widely used in consumer products has been falling, according to a new study by researchers at the Columbia Center for Children's Environmental Health at Columbia's Mailman School of Public Health. The researchers are the first to show that levels of polybrominated diphenyl ethers (PBDEs) measured in children significantly decreased over a 15-year period between 1998 and 2013, although the chemicals were present in all children tested. The Center previously linked exposure to PBDEs with attention problems and lower scores on tests of mental and physical development in children.

Results appear in the Journal of Exposure Science and Environmental Epidemiology.

Manufacturers used PBDEs as the primary flame retardant chemical in furniture between 1975 and 2004 to comply with fire safety standards, with the highest use of these chemicals occurring in North America. Due to their persistence in the environment and evidence of human health effects, pentaBDE, a specific technical mixture of PBDEs, was phased out of use in couches, mattresses, carpet padding, and other upholstered products beginning in 2004. Since PBDE chemicals are stable, they tend to build up indoors and are found in house dust; humans are mainly exposed through ingestion of dust and have some exposure through dietary sources.

Researchers followed 334 mother-child pairs, a subset of CCCEH's ongoing urban birth cohort study in New York City, from prenatal life through adolescence. Researchers collected umbilical cord blood at birth and blood from the children at ages 2, 3, 5, 7 and 9. Over time, levels of BDE-47, the most frequently detected component of the pentaBDE mixture in humans, decreased by about 5 percent per year from 1998 to 2013. When examining only blood samples collected postnatally, researchers observed a 13 percent decrease per year between 2000 and 2013.
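Compounding those annual rates over the study window shows how large the cumulative drop is; a minimal sketch in Python (the per-year percentages come from the study, while the constant-rate compounding model is an assumption for illustration):

    # Cumulative effect of a constant annual percentage decline (illustrative).
    def cumulative_remaining(rate_per_year: float, years: int) -> float:
        """Fraction of the starting level left after compounding decline."""
        return (1 - rate_per_year) ** years

    # ~5% per year across 1998-2013 (15 years), all samples
    print(f"{cumulative_remaining(0.05, 15):.0%} remaining")   # ~46%
    # ~13% per year across 2000-2013 (13 years), postnatal samples only
    print(f"{cumulative_remaining(0.13, 13):.0%} remaining")   # ~16%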

Children who were toddlers (ages 2-3) before the phase-out took effect in 2004-2005 had significantly higher levels of BDE-47 in their blood than children who turned 2-3 following the phase-out. Phase-out aside, 2-3 year olds had higher concentrations of BDE-47 in their blood than at any other age, possibly because they spend more time on the floor and have more contact with dust at this age.

Though levels of these flame retardants are decreasing over time, investigators found PBDEs in every child blood sample. "These findings suggest that while pentaBDE levels have been decreasing since the phase-out, they continue to be detected in the blood of young children nearly 10 years following their removal from U.S. commerce," says first author Whitney Cowell, PhD, pediatric environmental health research fellow at Mt. Sinai and former doctoral student from the Columbia Center for Children's Environmental Health.

"These findings reinforce the decision to phase-out PBDEs from consumer products," says senior author Julie Herbstman, PhD, associate professor of Environmental Health Sciences. "However, it's important to remain vigilant. Since the phase-out of PBDEs, we have begun to detect other flame-retardant chemicals in children, which are likely being used as replacements."

Credit: 
Columbia University's Mailman School of Public Health

Terns face challenges when they fly south for winter

image: Researchers fitted common terns with geolocators (on the bird's leg) to track their movements.

Image: 
C. Henderson

The Common Tern is the most widespread tern species in North America, but its breeding colonies in interior North America have been on the decline for decades despite conservation efforts. The problem, at least in part, must lie elsewhere--and a new study from The Auk: Ornithological Advances presents some of the best information to date on where these birds go when they leave their nesting lakes each fall.

The University of Minnesota's Annie Bracey and her colleagues attached geolocators--small, harmless devices that record a bird's location over time based on day length--to 106 terns from breeding colonies in Manitoba, Ontario, Minnesota, Wisconsin, and New York. When the birds returned to their breeding grounds in the following years, the researchers were able to recapture and retrieve data from 46 birds. The results show important migratory staging areas in the inland U.S. and along the Gulf of Mexico--a surprise, since it was previously thought that most Common Terns head for the Atlantic coast before continuing south. Birds from different colonies intermingled freely in the winter, but most ended up on the coast of Peru, suggesting that the population could be especially vulnerable to environmental change in that region.
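The principle behind these trackers is simple enough to sketch in Python (an illustration of the underlying astronomy, not the devices' firmware or the authors' processing pipeline; the example reading is hypothetical): day length pins down latitude through the sunrise equation, while the clock time of local solar noon pins down longitude.

    import math

    def solar_declination_deg(day_of_year: int) -> float:
        """Approximate solar declination (degrees) for a day of the year."""
        return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

    def latitude_from_day_length(day_length_hours: float, day_of_year: int) -> float:
        """Invert the sunrise equation cos(w0) = -tan(lat) * tan(decl).

        Breaks down near the equinoxes (declination ~ 0), a well-known
        limitation of light-level geolocation.
        """
        w0 = math.radians(day_length_hours * 15.0 / 2.0)  # half of the day arc
        decl = math.radians(solar_declination_deg(day_of_year))
        return math.degrees(math.atan(-math.cos(w0) / math.tan(decl)))

    def longitude_from_solar_noon(solar_noon_utc_hours: float) -> float:
        """Each hour that solar noon lags 12:00 UTC is 15 degrees west."""
        return (12.0 - solar_noon_utc_hours) * 15.0

    # Hypothetical reading: a ~12.6-hour day in late November with solar
    # noon near 17:10 UTC is consistent with the coast of Peru.
    print(latitude_from_day_length(12.6, 330))   # ~ -11 degrees (11 S)
    print(longitude_from_solar_noon(17.17))      # ~ -78 degrees (78 W)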

For long-lived birds such as Common Terns, adult survival likely drives population trends more than breeding productivity, so identifying causes of mortality is crucial for effective conservation. Coastal Peru is vulnerable to multiple effects of climate change, including increasingly frequent and severe storms, changes in the availability of terns' preferred foods, and rising sea levels. "Because survival is lowest during the non-breeding season, identifying coastal Peru as a potentially important wintering location was significant, as it will help us target studies aimed at identifying potential causes of adult mortality in this region," says Bracey.

"This paper is both important and interesting, because it takes a species we consider 'common' and examines the reasons for its decline," adds Rutgers University Distinguished Professor of Biology Joanna Burger, a tern conservation expert who was not involved in the research. "In short, this is one of the first studies that examines the entire complex of terns breeding in inland US locations, along with migratory routes, stopover areas, and wintering sites. It vastly increases our knowledge of the causes of declines and the locations and times at which terns are at risk, and more importantly, provides a model for future studies of declining populations."

Credit: 
American Ornithological Society Publications Office

Mutation of worm gene, swip-10, triggers age-dependent death of dopamine neurons

image: Wildtype worms with healthy dopamine neurons (on the top) and swip-10 mutant worms with sick dopamine neurons (on the bottom).

Image: 
Florida Atlantic University

Dopamine, a signaling chemical in the brain, has the lofty job of controlling emotions, moods and movements, as well as sensations of pleasure and pain. Dysfunction of this critical neurotransmitter is the cause of a number of diseases, most notably Parkinson's disease, which is caused by the death of dopamine-producing cells. Most theories of disease risk involve the selective vulnerability of aging dopamine neurons to genetic mutations, environmental toxins, or both.

By visualizing the dopamine neurons in the "brain" of a tiny worm called C. elegans, neuroscientists in the lab of Randy Blakely, Ph.D., professor of biomedical science in Florida Atlantic University's Charles E. Schmidt College of Medicine and executive director of the FAU Brain Institute, have identified a novel pathway that sustains the health of these cells.

In a study published in PLOS Genetics, Blakely's team provides evidence that the normal actions of swip-10 to protect dopamine neurons are indirect, derived from the gene's action in support cells called glia that lie adjacent to the dopamine neurons. Although glial cells have been recognized for years in worms and humans to play a critical role in shaping neuronal development, structure, and function, the new studies offer a clear demonstration that glial cells also keep dopamine cells alive.

In 2015, while at Vanderbilt University, graduate student Andrew Hardaway and Blakely reported the discovery of the gene swip-10 in a screen for genes that modify dopamine signaling. They found that when swip-10 is mutated, dopamine neurons become more excitable, releasing excessive amounts of dopamine. Additionally, Blakely's team found that the changes in dopamine release were a result of overstimulation of dopamine neurons by the amino acid glutamate. When not serving as an energy source, or as a building block of proteins, glutamate acts to communicate excitation between nerve cells at nerve cell synapses.

"When we found that the dopamine neurons were being overly excited by glutamate we figured that they were healthy, just overstimulated, like a person with one too many cups of coffee," said Blakely. "However, work has shown that if the excitatory actions of glutamate are not tightly controlled, the neurotransmitter can turn toxic, stressing neurons to the point of death."

Indeed, when Blakely and current graduate student Chelsea Gibson, lead author on the new study, examined swip-10 mutant worms whose dopamine neurons had been made to fluoresce so that their full shape could be visualized, they saw that many of these cells had what appeared to be swollen, misshapen or fragmented processes, as well as shrunken cell bodies - clear signs of unhealthy neurons. Follow-up studies by David Hall, Ph.D., at the Albert Einstein College of Medicine, using an electron microscope, confirmed Blakely's suspicions - the dopamine neurons in the swip-10 mutants were definitely not normal.

In Blakely's prior studies showing that the dopamine neurons were overstimulated by glutamate, they were able to restore normal dopamine neuron activity by placing a working copy of the swip-10 gene back into glial cells.

"Putting the gene back into the dopamine neurons really did nothing to impede their degeneration," said Gibson. "But restoration of swip-10 gene in glial cells did the trick. Moreover, when we took away specific genes needed for response to glutamate, the dopamine neurons also got healthier."

The researchers believe that improper control of glutamate by glia, while not the only factor, is an important reason why the dopamine neurons die.

More than 15 years ago, Blakely's team reported that an exogenous toxin known to kill dopamine neurons in rats and mice also could selectively kill the worm's complement of dopamine neurons. Although this was an exciting finding, and encouraged other groups to consider use of the worm to study mechanisms of Parkinson's disease, Blakely moved away from research on neural degeneration because he wanted to identify genes more relevant to dopamine-linked disorders where dopamine neurons can malfunction, but do not die, such as ADHD, addiction and schizophrenia.

"It looks like we have come full circle," said Blakely.

Ongoing research in the Blakely lab is oriented toward understanding what the protein made by the swip-10 gene does in normal cells and how the loss of swip-10 in the glial cell neighbors leads to dopamine neuron degeneration.

"Swip-10 has the molecular structure of an enzyme, but as of yet, we don't know what molecule this enzyme is targeting. Once we can determine this we should have important clues as to how glial cells keep dopamine neurons healthy, findings that may provide us with a path to Parkinson's disease medications," said Blakely.

Blakely's group has identified mouse and human forms of swip-10, and recently eliminated this gene from mice. Now they are working to determine whether dopamine neurons malfunction or die in these animals. Strikingly, a former Blakely graduate student, Cassie Retzlaff, recently reported that a neuroprotective drug, ceftriaxone, binds to the protein made by the human form of the swip-10 gene.

"This finding really got our attention, particularly as work to date in rodents with ceftriaxone suggests that the drug's neuroprotection involves an action on glia that results in a suppression of glutamate signaling between neurons," said Blakely. "The idea that worms lacking swip-10 may provide important insights into Parkinson's disease and its treatment now seems much less far-fetched."

According to the Parkinson's Foundation, more than 10 million people worldwide are living with the disease.

Credit: 
Florida Atlantic University

Tungsten oxide nanoparticles fight against infection and cancer

image: This is a scheme of the mechanisms of interaction between radiation and nanoparticles. A -- semiconductor state; B -- plasmon state.

Image: 
ITEB RAS

Tungsten oxide attracts a great deal of attention from scientists and industrialists due to its vast scope of applications in photo- and electrochromic devices, photosensitive materials and biomedicine. For example, it may be used as an X-ray contrast agent for computed tomography - an essential diagnostic tool for internal organ visualisation. Moreover, tungsten oxide has a strong antibacterial effect based on its high photocatalytic activity, which may be further increased by UV irradiation and the use of smaller tungsten particles. Tungsten oxide is therefore currently used as a visible-spectrum photocatalyst for wastewater purification.

Chemists from the Institute of Theoretical and Experimental Biophysics (ITEB RAS) and the Institute of General and Inorganic Chemistry in Russia, with their colleagues from the Ukrainian Institute of Microbiology and Virology, obtained a nanodisperse tungsten oxide colloid solution. The scientists conducted a complex analysis of particle features and demonstrated their photocatalytic activity through the photodegradation of an indigo carmine pigment. In the presence of tungsten oxide particles, the pigment quickly disintegrated even in daylight. If additional UV irradiation was applied, this process sped up dramatically, even despite low particle concentration.

The team studied how the new particles influence various biological objects, including both Gram-positive and Gram-negative bacteria, Candida fungi and mouse cells. Anton Popov, a member of the Cell and Tissue Growth Laboratory of ITEB RAS, comments on the study: "We analysed the cytotoxicity of tungsten oxide particles to prokaryotic microorganisms and eukaryotic cells. They show different sensitivity to particle influence, which is apparently related to morphological features of cell membranes and metabolic differences."

During the experiments, researchers treated cells with a range of particle concentrations, irradiated the cells with UV light and evaluated their condition. In particular, they examined the level of reactive oxygen species and the speed of cell division. It turned out that the toxic effect of tungsten oxide depended on particle dosage and UV irradiation time. Interestingly, low tungsten oxide concentrations were harmless to mouse cells while being fatal to bacteria, presumably because of the difference in structure between prokaryotic and eukaryotic cell membranes. However, high tungsten oxide concentrations showed significant toxicity regardless of the cell type.

Another remarkable feature of the new particles is their selective toxicity to cancer cells, which opens the way for a promising new field of research. Since the particles are selectively toxic to cancer cells and may serve as an X-ray contrast agent for computed tomography, they could be used for theranostics - a new approach to designing drugs that act as both diagnostic and treatment agents.

Experimental data suggest even more applications for the new particles. For instance, coatings based on such particles may be useful for providing biosecurity in public places such as hospitals, supermarkets or public transport. Nonetheless, the study also shows that tungsten oxide should be used under strict control so as to avoid toxic effects on humans. The study of the synthesis and properties of tungsten oxide nanoparticles was conducted with the financial support of the Russian Science Foundation.

Credit: 
AKSON Russian Science Communication Association

Global South experts urge developing countries to lead on solar geoengineering research

Writing in Nature today, a group of 12 scholars from across the developing world made an unprecedented call for developing countries to lead on the research and evaluation of solar radiation management (SRM) geoengineering.

Solar radiation management is a controversial idea for reducing some of the impacts of climate change. The leading proposal would involve spraying tiny reflective particles into the upper atmosphere, filtering the sun's energy to mimic the cooling effect of volcanoes.

The consequences of solar geoengineering are still uncertain and developing countries could be most affected by its use. SRM would lower global temperatures and so could reduce some of the harmful effects of climate change that affect poor countries, such as higher temperatures, changes to rainfall patterns and stronger tropical cyclones. But it could have unexpected and damaging side effects, could cause international tensions and could distract policymakers from cutting carbon emissions. Without leadership from the Global South, Northern voices will set the policy agenda and developing countries will be left behind.

Most research to date has taken place in Europe and North America. The Comment in Nature argues that developing countries have the most to gain or lose from SRM and should be central to international efforts to understand the technology.

The Comment's co-signatories are a diverse group of distinguished scientists and NGO leaders, all of whom ran pioneering workshops to expand discussion of SRM in their countries or regions: Bangladesh, Brazil, China, Ethiopia, India, Jamaica, Kenya, Pakistan, the Pacific, the Philippines and Thailand.

Dr Atiq Rahman, Director of the Bangladesh Centre for Advanced Studies and the Comment's lead author, said: "Clearly SRM could be dangerous but we need to know whether, for countries like Bangladesh, it would be more or less risky than passing the 1.5C warming goal agreed by the UNFCCC. This matters greatly to people from developing countries and our voices need to be heard".

Prof Paulo Artaxo, a Brazilian physicist and IPCC lead author, who helped organise the first major workshop on SRM in Brazil, agreed: "I support aggressive mitigation and am dubious that SRM will ever be safe enough to use, but developing countries have to lead on research to better understand what it might mean for them".

The Comment is linked to the launch of a new SRM modelling fund for scientists in the South. The DECIMALS fund (Developing Country Impact Modelling Analysis for SRM) will provide grants to scientists who want to understand how SRM might affect their regions. It is being administered by The World Academy of Sciences (TWAS) and the SRM Governance Initiative, with funding support from the Open Philanthropy Project. The call for proposals is open until 29 May 2018 and scientists from across the Global South are encouraged to apply if they would like to better understand the impacts of SRM while stimulating a wider conversation about its risks and benefits.

Credit: 
TWAS

Study explores safety of rear-facing car seats in rear impact car crashes

video: Experts know that rear-facing car seats protect infants and toddlers in front and side impact crashes, but they are rarely discussed when it comes to rear-impact collisions. Because rear-impact crashes account for more than 25 percent of all accidents, researchers at The Ohio State University Wexner Medical Center conducted a new study to explore the effectiveness of rear-facing car seats in this scenario.

Image: 
The Ohio State Wexner Medical Center

COLUMBUS, Ohio – Rear-facing car seats have been shown to significantly reduce infant and toddler fatalities and injuries in frontal and side-impact crashes, but they’re rarely discussed in terms of rear-impact collisions. Because rear-impact crashes account for more than 25 percent of all accidents, researchers at The Ohio State University Wexner Medical Center conducted a new study to explore the effectiveness of rear-facing car seats in this scenario.

“It’s a question that parents ask me a lot, because they are concerned about the child facing the impact of the crash,” said Julie Mansfield, lead author of the study and research engineer at Ohio State College of Medicine’s Injury Biomechanics Research Center. “It shows parents are really thinking about where these impacts are coming from.”

Mansfield and her team conducted crash tests with multiple rear-facing car seats, investigating the effects of various features like the carry handle position and anti-rebound bars. The study, published by SAE International, shows that when used correctly, all of the tested seats were effective because they absorbed crash forces while controlling the motion of the child, making rear-facing car seats a good choice in this scenario.

“Even though the child is facing the direction of the impact, it doesn’t mean that a rear-facing car seat isn’t going to do its job,” said Mansfield. “It still has lots of different features and mechanisms to absorb that crash energy and protect the child.”

Mansfield says what they found aligns well with what is known from crash data in the real world, and it’s important for parents to follow the recommended guidelines on the correct type of car seat for their child’s height, weight and age.

“The rear-facing seat is able to support the child’s head, neck and spine and keep those really vulnerable body regions well protected. These regions are especially vulnerable in the newborns and younger children whose spine and vertebrae haven’t fused and fully developed yet,” said Mansfield.

Credit: 
MediaSource

Uncovering a mechanism causing chronic graft-vs-host disease after bone marrow transplant

Allogeneic bone marrow transplant (BMT) is an essential treatment to cure patients with blood cancers such as leukemia. In patients who have undergone chemotherapy and radiation, a small number of cancer cells can remain in the bloodstream and allow the malignancy to recur.

Replacing host bone marrow is the best strategy for preventing relapse but recipients cannot always find an ideal, biologically matched donor. The less well-matched the donor, the higher the risk for developing graft-versus-host disease (GVHD). In GVHD, donor cells trigger an immune response that attacks normal tissues, leading to a chain reaction of cellular and molecular responses that increase morbidity and mortality in these patients. A long-standing question has been how to improve the success of BMT by reducing GVHD incidence while, at the same time, preserving the anti-tumor response of donor cells.

New research by a team of investigators at the Medical University of South Carolina (MUSC) directed by Xue-Zhong Yu, M.D., professor of Microbiology and Immunology, in collaboration with researchers at the University of Minnesota, demonstrates that one particular family of microRNAs (miRs), called miR-17-92, is responsible for the T-cell and B-cell pathogenicity that causes GVHD. The findings were reported in an article prepublished online on March 12, 2018 by Blood.

GVHD can be divided into acute (aGVDH) or chronic forms (cGVHD). "They are very different diseases," explains Yongxia Wu, Ph.D., a postdoctoral fellow and lead author on the article. "Our ability to prevent or treat aGVHD has considerably improved, but the incidence of cGVHD continues to increase. Chronic GVHD has a different pathophysiology and different target organs than aGVHD. It's been a big challenge to try to find a target for cGVHD therapies, because of the more complex immune reaction in cGVHD and the fact that its cellular and molecular mechanisms are not as well understood."

Chronic GVHD is characterized by autoimmune-like, fibrotic changes in multiple organs such as the skin (causing scleroderma) and the lungs (causing bronchiolitis obliterans), and fibrosis of the salivary glands, liver, and gut. With 30 to 70 percent of patients who receive allogeneic BMT developing cGVHD, the lack of effective therapies is a major unmet clinical need.

The MUSC team previously found that, in aGVHD, miR-17-92 played a critical role in regulating CD4 T-cell proliferation and Th1 and Treg differentiation. Based on this work, they decided to investigate whether miR-17-92 regulates T- and B-cell differentiation and function in the development of cGVHD.

"We decided to extend our aGVHD study to cGVHD. But there's no single, well-defined murine model that can reflect all of the clinical manifestations seen in cGVHD patients," explains Wu. "Different patients experience different symptoms because cGVHD can be manifested in many organs -- some patients have skin symptoms, some have lung symptoms -- it varies. So, we decided to study four different cGVHD models to best understand how miR-17-92 contributes overall, across many clinical presentations."

The team undertook a series of experiments to define the role of miR-17-92 in regulating T- and B-cell pathogenicity using murine models of allogeneic BMT, including models of scleroderma that had transitioned from aGVHD to cGVHD, classic cGVHD scleroderma, lung inflammation and a lupus-like condition. The team also conducted two clinical translation studies to test whether pharmacologically blocking miR-17-92 might have clinical relevance in the lupus-like condition and the scleroderma cGVHD model.

Their results demonstrated shared mechanisms by which miR-17-92 mediates cGVHD progression -- namely by regulating T helper-cell differentiation, B-cell activation, germinal center responses, and autoantibody production. The clinical translation studies also found that miR-17 blockade alleviated proteinuria (in the lupus-like condition) and scleroderma symptoms.

"The mechanism for how miR-17-92 regulates T- and B-cells was very consistent. In other words, we did not find any big differences among the models," says Wu. "So, we not only found a new mechanism for cGVHD development by demonstrating that this miR-17-92 is heavily involved in the T- and B-cell responses that lead to cGVHD, but we also found that blocking miR-17 substantially reduced cGVHD symptoms in mice. That's exciting because it provides strong evidence that this miR may be a good target for controlling cGVHD after allogeneic BMT."

Although miR-17-92 has been well studied, its role in cGVHD development has never before been defined. Because cGVHD has a similar pathophysiology to some autoimmune diseases, it is likely that these findings will be useful for developing new treatments and preventive therapies in other conditions.

"We are very excited to publish this work because we are hoping that a clinical research group will be inspired to take our study findings further in patients," says Wu.

In the meantime, the MUSC team, led by Yu, will continue their work and try to extend the current findings by investigating how other miRs may be involved in regulating T- and B-cell function during allogeneic BMT.

Credit: 
Medical University of South Carolina

Cracking eggshell nanostructure

video: Not much else in the biological world 'mineralizes' as fast as a bird egg.

Image: 
Professor Marc McKee, McGill University

How is it that fertilized chicken eggs manage to resist fracture from the outside, while at the same time, are weak enough to break from the inside during chick hatching? It's all in the eggshell's nanostructure, according to a new study led by McGill University scientists.

The findings, reported today in Science Advances, could have important implications for food safety in the agro-industry.

Birds have benefited from millions of years of evolution to make the perfect eggshell, a thin, protective biomineralized chamber for embryonic growth that contains all the nutrients required for the growth of a baby chick. The shell, not too strong but also not too weak ("just right," Goldilocks might say), is resistant to fracture until it's time for hatching.

But what exactly gives bird eggshells these unique features?

To find out, Marc McKee's research team in McGill's Faculty of Dentistry, together with Richard Chromik's group in Engineering and other colleagues, used new sample-preparation techniques to expose the interior of the eggshells to study their molecular nanostructure and mechanical properties.

"Eggshells are notoriously difficult to study by traditional means, because they easily break when we try to make a thin slice for imaging by electron microscopy," says McKee, who is also a professor in McGill's Department of Anatomy and Cell Biology.

"Thanks to a new focused-ion beam sectioning system recently obtained by McGill's Facility for Electron Microscopy Research, we were able to accurately and thinly cut the sample and image the interior of the shell."

Eggshells are made of both inorganic and organic matter: calcium-containing mineral and abundant proteins. Graduate student Dimitra Athanasiadou, the study's first author, found that a factor determining shell strength is the presence of nanostructured mineral associated with osteopontin, an eggshell protein also found in composite biological materials such as bone.

A glimpse into egg biology

The results also provide insight into the biology and development of chicken embryos in fertilized and incubated eggs. Eggs are sufficiently hard when laid and during brooding to protect them from breaking. As the chick grows inside the eggshell, it needs calcium to form its bones. During egg incubation, the inner portion of the shell dissolves to provide this mineral ion supply, while at the same time weakening the shell enough to be broken by the hatching chick. Using atomic force microscopy, and electron and X-ray imaging methods, Professor McKee's team of collaborators found that this dual-function relationship is possible thanks to minute changes in the shell's nanostructure that occurs during egg incubation.

In parallel experiments, the researchers were also able to re-create a nanostructure similar to that which they discovered in the shell by adding osteopontin to mineral crystals grown in the lab. Professor McKee believes that a better understanding of the role of proteins in the calcification events that drive eggshell hardening and strength through biomineralization could have important implications for food safety.

"About 10-20% of chicken eggs break or crack, which increases the risk of Salmonella poisoning," says McKee. "Understanding how mineral nanostructure contributes to shell strength will allow for selection of genetic traits in laying hens to produce consistently stronger eggs for enhanced food safety."

Credit: 
McGill University

Engineers turn plastic insulator into heat conductor

image: Researchers at MIT have designed a new way to engineer a polymer structure at the molecular level, via chemical vapor deposition. This allows for rigid, ordered chains, versus the messy, 'spaghetti-like strands' that normally make up a polymer. This chain-like structure enables heat transport both along and across chains.

Image: 
MIT News Office / Chelsea Turner

CAMBRIDGE, MASS.--Plastics are excellent insulators, meaning they can efficiently trap heat - a quality that can be an advantage in something like a coffee cup sleeve. But this insulating property is less desirable in products such as plastic casings for laptops and mobile phones, which can overheat, in part because the coverings trap the heat that the devices produce.

Now a team of engineers at MIT has developed a polymer thermal conductor -- a plastic material that, however counterintuitively, works as a heat conductor, dissipating heat rather than insulating it. The new polymers, which are lightweight and flexible, can conduct 10 times as much heat as most commercially used polymers.

"Traditional polymers are both electrically and thermally insulating. The discovery and development of electrically conductive polymers has led to novel electronic applications such as flexible displays and wearable biosensors," says Yanfei Xu, a postdoc in MIT's Department of Mechanical Engineering. "Our polymer can thermally conduct and remove heat much more efficiently. We believe polymers could be made into next-generation heat conductors for advanced thermal management applications, such as a self-cooling alternative to existing electronics casings."

Xu and a team of postdocs, graduate students, and faculty have published their results today in Science Advances. The team includes Xiaoxue Wang, who contributed equally to the research with Xu, along with Jiawei Zhou, Bai Song, Elizabeth Lee, and Samuel Huberman; Zhang Jiang, physicist at Argonne National Laboratory; Karen Gleason, associate provost of MIT and the Alexander and I. Michael Kasser Professor of Chemical Engineering; and Gang Chen, head of MIT's Department of Mechanical Engineering and the Carl Richard Soderberg Professor of Power Engineering.

Stretching spaghetti

If you were to zoom in on the microstructure of an average polymer, it wouldn't be difficult to see why the material traps heat so easily. At the microscopic level, polymers are made from long chains of monomers, or molecular units, linked end to end. These chains are often tangled in a spaghetti-like ball. Heat carriers have a hard time moving through this disorderly mess and tend to get trapped within the polymeric snarls and knots.

And yet, researchers have attempted to turn these natural thermal insulators into conductors. For electronics, polymers would offer a unique combination of properties, as they are lightweight, flexible, and chemically inert. Polymers are also electrically insulating, meaning they do not conduct electricity, and can therefore be used to prevent devices such as laptops and mobile phones from short-circuiting in their users' hands.

Several groups have engineered polymer conductors in recent years, including Chen's group, which in 2010 invented a method to create "ultradrawn nanofibers" from a standard sample of polyethylene. The technique stretched the messy, disordered polymers into ultrathin, ordered chains -- much like untangling a string of holiday lights. Chen found that the resulting chains enabled heat to skip easily along and through the material, and that the polymer conducted 300 times as much heat as ordinary plastics.

But the insulator-turned-conductor could only dissipate heat in one direction, along the length of each polymer chain. Heat couldn't travel between polymer chains, which are coupled only by weak van der Waals forces -- the attractive interactions that draw two or more molecules close to each other. Xu wondered whether a polymer material could be made to scatter heat away in all directions.

Xu conceived of the current study as an attempt to engineer polymers with high thermal conductivity, by simultaneously engineering intramolecular and intermolecular forces -- a method that she hoped would enable efficient heat transport along and between polymer chains.

The team ultimately produced a heat-conducting polymer known as polythiophene, a type of conjugated polymer that is commonly used in many electronic devices.

Hints of heat in all directions

Xu, Chen, and members of Chen's lab teamed up with Gleason and her lab members to develop a new way to engineer a polymer conductor using oxidative chemical vapor deposition (oCVD), whereby two vapors are directed into a chamber and onto a substrate, where they interact and form a film. "Our reaction was able to create rigid chains of polymers, rather than the twisted, spaghetti-like strands in normal polymers," Xu says.

In this case, Wang flowed the oxidant into a chamber, along with a vapor of monomers - individual molecular units that, when oxidized, form into the chains known as polymers.

"We grew the polymers on silicon/glass substrates, onto which the oxidant and monomers are adsorbed and reacted, leveraging the unique self-templated growth mechanism of CVD technology," Wang says.

Wang produced relatively large-scale samples, each measuring 2 square centimeters - about the size of a thumbprint.

"Because this sample is used so ubiquitously, as in solar cells, organic field-effect transistors, and organic light-emitting diodes, if this material can be made to be thermally conductive, it can dissipate heat in all organic electronics," Xu says.

The team measured each sample's thermal conductivity using time-domain thermal reflectance -- a technique in which they shoot a laser onto the material to heat up its surface and then monitor the drop in its surface temperature by measuring the material's reflectance as the heat spreads into the material.

"The temporal profile of the decay of surface temperature is related to the speed of heat spreading, from which we were able to compute the thermal conductivity," Zhou says.

On average, the polymer samples were able to conduct heat at about 2 watts per meter per kelvin - about 10 times faster than what conventional polymers can achieve. At Argonne National Laboratory, Jiang and Xu found that polymer samples appeared nearly isotropic, or uniform. This suggests that the material's properties, such as its thermal conductivity, should also be nearly uniform. Following this reasoning, the team predicted that the material should conduct heat equally well in all directions, increasing its heat-dissipating potential.

Going forward, the team will continue exploring the fundamental physics behind polymer conductivity, as well as ways to enable the material to be used in electronics and other products, such as casings for batteries, and films for printed circuit boards.

"We can directly and conformally coat this material onto silicon wafers and different electronic devices" Xu says. "If we can understand how thermal transport [works] in these disordered structures, maybe we can also push for higher thermal conductivity. Then we can help to resolve this widespread overheating problem, and provide better thermal management."

Credit: 
Massachusetts Institute of Technology