
Vaping vs. smoking: Impact on cells compared

image: Diagram showing the six reporter cell lines of the ToxTracker test, plus a 'wild type (wt)' cell line used to visualise microscopic signs of DNA damage.

Image: 
Toxys

Imperial Brands scientists have utilised Toxys' ToxTracker suite of stem-cell based in-vitro assays, which provide mechanistic insight into the potential DNA damaging properties of chemicals, comparing vape e-liquid samples and their aerosols to combustible cigarette smoke.

Imperial scientists are the first to publish results using the ToxTracker system for the assessment of vape e-liquids and aerosols, and it forms part of the company's continuing research into the tobacco harm reduction potential of Next Generation Products (NGPs) such as vapes.

The assays help assess how product samples may impact cellular functioning across six reporter cell lines, picking up the tell-tale molecular signs of potential harm in the form of oxidative stress, DNA and protein damage, as well as activation of the p53 gene that has a role in cell cycle regulation and tumour suppression.

The results, peer reviewed and published in the journal Mutagenesis, showed that under the conditions of test, undiluted vape e-liquids and their aerosol extracts exhibited entirely absent or vastly reduced indications of DNA damaging potential in cells, compared to smoke from combustible cigarettes.

"Overall, the data from our latest study adds to the weight of scientific evidence demonstrating vape products offer significant harm reduction potential compared to combustible cigarettes," says Lukasz Czekala, Senior Pre-Clinical Toxicologist at Imperial Brands and lead author of the paper.

Suite results

After the system was calibrated and validated to ensure that the principal e-liquid components propylene glycol (PG) and vegetable glycerine (VG) were compatible with the ToxTracker suite, a selection of neat myblu vape e-liquids and their aerosol extracts was compared to smoke samples from the reference 1R6F combustible cigarette. Results showed that:

Vape aerosols trapped in a buffer solution did not induce responses in any of the six cell lines.

Undiluted flavoured e-liquids (tested up to 1%) did induce both oxidative stress reporters, but this was considered an effect of osmolarity (a measure of solution concentration) caused by PG/VG in an in-vitro testing environment.

Nicotine content did not affect responses: tobacco flavour e-liquid at either 1.6% freebase nicotine, 1.6% nicotine salt or nicotine-free produced the same results.

In addition, nicotine tested on its own produced an oxidative stress response only at concentrations more than 40,000 times higher than those found in the blood of a typical smoker.

Dr Fiona Chapman, Pre-Clinical Toxicologist and corresponding author adds that "ToxTracker is quick, sensitive and can provide a greater mechanistic resolution than existing Next Generation Product (NGP) stewardship DNA damage tests like the micronucleus and Ames assays".

She continued: "Our adoption of this cutting-edge in-vitro suite reinforces our commitment to using advanced cellular assays which adhere to Toxicity Testing in the 21st Century (TT21C) principles. This also contributes to reducing industry reliance on in-vivo (animal) experiments." (Imperial does not test any products on animals.)

"This paper adds to the established body of scientific data that shows vaping products, when manufactured to high quality and safety standards, have significant tobacco harm reduction potential relative to continued cigarette smoking," says Dr Grant O'Connell, Head of Tobacco Harm Reduction Science at Imperial Brands. "We appreciate society's ongoing concerns about the health risks of smoking, and are committed to undertake high quality research on potentially less harmful nicotine product alternatives to combustible tobacco for adult smokers."

Credit: 
Imperial Brands

Optically active defects improve carbon nanotubes

image: The optical properties of carbon nanotubes, which consist of a rolled-up hexagonal lattice of sp2 carbon atoms, can be improved through defects. A new reaction pathway enables the selective creation of optically active sp3 defects. These can emit single photons in the near-infrared even at room temperature.

Image: 
Simon Settele (Heidelberg)

The properties of carbon-based nanomaterials can be altered and engineered through the deliberate introduction of certain structural "imperfections" or defects. The challenge, however, is to control the number and type of these defects. In the case of carbon nanotubes - microscopically small tubular compounds that emit light in the near-infrared - chemists and materials scientists at Heidelberg University led by Prof. Dr Jana Zaumseil have now demonstrated a new reaction pathway to enable such defect control. It results in specific optically active defects - so-called sp3 defects - which are more luminescent and can emit single photons, that is, particles of light. The efficient emission of near-infrared light is important for applications in telecommunication and biological imaging.

Usually defects are considered something "bad" that negatively affects the properties of a material, making it less perfect. However, in certain nanomaterials such as carbon nanotubes these "imperfections" can result in something "good" and enable new functionalities. Here, the precise type of defects is crucial. Carbon nanotubes consist of rolled-up sheets of a hexagonal lattice of sp2 carbon atoms, as they also occur in benzene. These hollow tubes are about one nanometer in diameter and up to several micrometers long.

Through certain chemical reactions, a few sp2 carbon atoms of the lattice can be turned into sp3 carbon, which is also found in methane or diamond. This changes the local electronic structure of the carbon nanotube and results in an optically active defect. These sp3 defects emit light even further in the near-infrared and are overall more luminescent than nanotubes that have not been functionalised. Due to the geometry of carbon nanotubes, the precise position of the introduced sp3 carbon atoms determines the optical properties of the defects. "Unfortunately, so far there has been very little control over what defects are formed," says Jana Zaumseil, who is a professor at the Institute for Physical Chemistry and a member of the Centre for Advanced Materials at Heidelberg University.

The Heidelberg scientist and her team recently demonstrated a new chemical reaction pathway that enables defect control and the selective creation of only one specific type of sp3 defect. These optically active defects are "better" than any of the previously introduced "imperfections". Not only are they more luminescent, they also show single-photon emission at room temperature, Prof. Zaumseil explains. In this process, only one photon is emitted at a time, which is a prerequisite for quantum cryptography and highly secure telecommunication.

According to Simon Settele, a doctoral student in Prof. Zaumseil's research group and the first author on the paper reporting these results, this new functionalisation method - a nucleophilic addition - is very simple and does not require any special equipment. "We are only just starting to explore the potential applications. Many chemical and photophysical aspects are still unknown. However, the goal is to create even better defects."

This research is part of the project "Trions and sp3-Defects in Single-walled Carbon Nanotubes for Optoelectronics" (TRIFECTs), led by Prof. Zaumseil and funded by an ERC Consolidator Grant of the European Research Council (ERC). Its goal is to understand and engineer the electronic and optical properties of defects in carbon nanotubes.

"The chemical differences between these defects are subtle and the desired binding configuration is usually only formed in a minority of nanotubes. Being able to produce large numbers of nanotubes with a specific defect and with controlled defect densities paves the way for optoelectronic devices as well as electrically pumped single-photon sources, which are needed for future applications in quantum cryptography," Prof. Zaumseil says.

Credit: 
Heidelberg University

CO2 mitigation on Earth and magnesium civilization on Mars

image: Bubble the air in water with a pinch of magnesium and we will get fuel.

Image: 
Vivek Polshettiwar

Excessive CO2 emissions are a major cause of climate change, and hence reducing the CO2 levels in the Earth's atmosphere is key to limiting adverse environmental effects. Rather than just capture and store CO2, it would be desirable to use it as a carbon feedstock for fuel production to achieve the target of "net-zero-emissions energy systems". The capture and conversion of CO2 (from flue gas or directly from the air) to methane and methanol simply using water as a hydrogen source under ambient conditions would provide an optimal solution to reduce excessive CO2 levels and would be highly sustainable.

Prof. Vivek Polshettiwar's group at Tata Institute of Fundamental Research (TIFR), Mumbai, demonstrated the use of Magnesium (nanoparticles and bulk) to directly react CO2 with water at room temperature and atmospheric pressure, forming methane, methanol, and formic acid without requiring external energy sources. Magnesium is the eighth most abundant element in the Earth's crust and fourth most common element in the Earth (after iron, oxygen and silicon).

The conversion of CO2 (pure, as well as directly from the air) took place within a few minutes at 300 K and 1 bar. A unique cooperative action of Mg, basic magnesium carbonate, CO2, and water enabled this CO2 transformation. If any of the four components were missing, no CO2 conversion took place. The reaction intermediates and the reaction pathway were identified by 13CO2 isotopic labeling, powder X-ray diffraction (PXRD), nuclear magnetic resonance (NMR) and in-situ attenuated total reflectance-Fourier transform Infrared spectroscopy (ATR-FTIR), and rationalized by density-functional theory (DFT) calculations. During CO2 conversion, Mg was converted to magnesium hydroxide and carbonate, which may be regenerated.

Mg is one of the metals with the lowest energy demand for production and generates the lowest amount of CO2 during production. Using this protocol, 1 kg of magnesium via simple reaction with water and CO2 produces 2.43 liters of methane, 940 liters of hydrogen and 3.85 kg of basic magnesium carbonate (used in green cement, pharma industry etc.), and also small amounts of methanol, and formic acid.

In the absence of CO2, Mg does not react efficiently with water: the hydrogen yield was extremely low, 100 μmol per gram, compared to 42,000 μmol per gram in the presence of CO2. This is because the magnesium hydroxide formed when Mg reacts with water is poorly soluble and coats the metal, preventing the interior Mg surface from reacting further with water. In the presence of CO2, however, the magnesium hydroxide is converted to carbonates and basic carbonates, which are more soluble in water and peel off the Mg, exposing fresh Mg surface to react with water. This protocol can therefore also be used for hydrogen production (940 liters per kg of Mg), nearly 420 times more than the hydrogen produced by the reaction of Mg with water alone (2.24 liters per kg of Mg).
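
As a rough consistency check (a minimal sketch, not taken from the paper), the quoted per-gram yields can be converted into the reported per-kilogram gas volumes, assuming an ideal-gas molar volume of about 22.4 liters per mole:

```python
# Rough consistency check of the reported hydrogen yields.
# Assumes an ideal-gas molar volume of ~22.4 L/mol; the press-release volumes
# evidently use this standard value, although the reaction is run at 300 K and 1 bar.
MOLAR_VOLUME_L = 22.4          # liters per mole of gas (approximate)
MG_MASS_G = 1000.0             # 1 kg of magnesium

yield_with_co2 = 42000e-6      # mol H2 per g Mg (42,000 umol/g, CO2 present)
yield_without_co2 = 100e-6     # mol H2 per g Mg (100 umol/g, water alone)

h2_with_co2_L = yield_with_co2 * MG_MASS_G * MOLAR_VOLUME_L        # ~941 L
h2_without_co2_L = yield_without_co2 * MG_MASS_G * MOLAR_VOLUME_L  # ~2.24 L

print(f"H2 with CO2:    {h2_with_co2_L:.0f} L per kg Mg")
print(f"H2 without CO2: {h2_without_co2_L:.2f} L per kg Mg")
print(f"Enhancement:    ~{h2_with_co2_L / h2_without_co2_L:.0f}x")  # ~420x
```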

Notably, this entire production happens in just 15 minutes, at room temperature and atmospheric pressure, in an exceptionally simple and safe protocol. Unlike other metal powders, Mg powder is extremely stable (due to a thin MgO passivation layer on its surface) and can be handled in air without any loss of activity. The use of fossil fuels needs to be restricted (if not avoided) to combat climate change. This Mg protocol could then serve as a sustainable, CO2-neutral route to various chemicals and fuels (methane, methanol, formic acid and hydrogen).

The atmosphere of Mars is 95.32% CO2, while the planet's surface holds water in the form of ice. The presence of abundant magnesium on Mars has also recently been reported. To explore whether this Mg-assisted CO2 conversion could work on Mars, the researchers carried out the reaction at a lower temperature. Notably, methane, methanol, formic acid and hydrogen were still produced in reasonable amounts. These results indicate the potential of the Mg process in the Martian environment, a step towards magnesium utilization on Mars, although more detailed studies are needed.

Credit: 
Tata Institute of Fundamental Research

Treating sleep apnea may reduce dementia risk

A new study finds older adults who received positive airway pressure therapy prescribed for obstructive sleep apnea (OSA) may be less likely to develop Alzheimer's disease and other kinds of dementia.

Researchers from Michigan Medicine's Sleep Disorders Centers analyzed the Medicare claims of more than 50,000 beneficiaries ages 65 and older who had been diagnosed with OSA. In this nationally representative study, they examined whether people who used positive airway pressure therapy were less likely to receive a new diagnosis of dementia or mild cognitive impairment over the next three years, compared to people who did not use positive airway pressure.

"We found a significant association between positive airway pressure use and lower risk of Alzheimer's and other types of dementia over three years, suggesting that positive airway pressure may be protective against dementia risk in people with OSA," says lead author Galit Levi Dunietz, Ph.D., M.P.H., an assistant professor of neurology and a sleep epidemiologist.

The findings stress the impact of sleep on cognitive function. "If a causal pathway exists between OSA treatment and dementia risk, as our findings suggest, diagnosis and effective treatment of OSA could play a key role in the cognitive health of older adults," says study principal investigator Tiffany J. Braley, M.D., M.S., an associate professor of neurology.

Obstructive sleep apnea is a condition in which the upper airway collapses repeatedly throughout the night, preventing normal breathing during sleep. OSA is associated with a variety of other neurological and cardiovascular conditions, and many older adults are at high risk for OSA.

And dementia is also prevalent, with approximately 5.8 million Americans currently living with it, Braley says.

Credit: 
Michigan Medicine - University of Michigan

Excellent outcomes reported for first targeted therapy for pediatric Hodgkin lymphoma

image: (From left) First and corresponding author Monika Metzger, M.D., Departments of Oncology and Global Pediatric Medicine; co-senior author Matthew Krasin, M.D., Department of Radiation Oncology; and co-senior author Melissa Hudson, M.D., Cancer Survivorship Division director.

Image: 
St. Jude Children's Research Hospital

Scientists are reporting results of the first frontline clinical trial to use targeted therapy to treat high-risk pediatric Hodgkin lymphoma. The study showed that the addition of brentuximab vedotin achieved excellent outcomes, reduced side effects, and allowed for reduced radiation exposures.

The study was the result of work by a multi-site consortium dedicated to pediatric Hodgkin lymphoma. Collaborating institutions include St. Jude Children's Research Hospital, Stanford University School of Medicine, Dana-Farber Cancer Institute, Massachusetts General Hospital, Maine Children's Cancer Program and OSF Children's Hospital of Illinois.

A paper detailing the findings was published today in the Journal of Clinical Oncology.

A new kind of therapy

Brentuximab vedotin is an anti-CD30 antibody drug conjugate. The drug, which is already approved to treat adults with Hodgkin lymphoma, is targeted specifically to Hodgkin Reed Sternberg cells (cancer cells in Hodgkin lymphoma). Brentuximab vedotin delivers the medicine directly where it is needed.

"I think of brentuximab vedotin as a smart drug," said first and corresponding author Monika Metzger, M.D., St. Jude Departments of Oncology and Global Pediatric Medicine. "Unlike conventional chemotherapy, which can have wide-ranging effects on all cells of the body, this drug knows to go directly to the Hodgkin lymphoma cells - maximizing its effect while minimizing side effects."

This phase 2 clinical trial replaced the chemotherapy drug vincristine with brentuximab vedotin in the frontline (first therapy given) treatment regimen. The regimen included other chemotherapy agents and radiation when needed. Vincristine is associated with neuropathy. Removing it from the regimen resulted in patients reporting a reduction in this symptom.

Overall three-year survival for the trial was 99%. Of the 77 patients enrolled in the study, 35% were spared radiation. When radiation was needed, it was precisely tailored, and doses were reduced when possible.

"We have already reduced the use of radiation for low-risk Hodgkin lymphoma patients. In this study we've shown that it is also possible to either omit or reduce the extent of radiation for high-risk patients, using highly focal methods such as proton beam radiation or intensity modulated radiation," said co-senior author Matthew Krasin, M.D., St. Jude Department of Radiation Oncology.

The researchers concluded that brentuximab vedotin in the frontline treatment of pediatric high-risk Hodgkin lymphoma is tolerable, reduced radiation exposure and produced excellent outcomes. Brentuximab vedotin is currently being incorporated into other national trials for the care of pediatric patients with Hodgkin lymphoma.

"Being able to offer Hodgkin lymphoma patients a targeted therapy in the frontline setting is an exciting development," said co-senior author Melissa Hudson, M.D., St. Jude Cancer Survivorship Division director. "We are constantly learning from research and applying new findings to the next iteration of clinical trials."

Credit: 
St. Jude Children's Research Hospital

Modern human brain originated in Africa around 1.7 million years ago

image: Skulls of early Homo from Georgia with an ape-like brain (left) and from Indonesia with a human-like brain (right).

Image: 
M. Ponce de León and Ch. Zollikofer, UZH

Modern humans are fundamentally different from our closest living relatives, the great apes: We live on the ground, walk on two legs and have much larger brains. The first populations of the genus Homo emerged in Africa about 2.5 million years ago. They already walked upright, but their brains were only about half the size of today's humans. These earliest Homo populations in Africa had primitive ape-like brains - just like their extinct ancestors, the australopithecines. So when and where did the typical human brain evolve?

CT comparisons of skulls reveal modern brain structures

An international team led by Christoph Zollikofer and Marcia Ponce de León from the Department of Anthropology at the University of Zurich (UZH) has now succeeded in answering these questions. "Our analyses suggest that modern human brain structures emerged only 1.5 to 1.7 million years ago in African Homo populations," Zollikofer says. The researchers used computed tomography to examine the skulls of Homo fossils that lived in Africa and Asia 1 to 2 million years ago. They then compared the fossil data with reference data from great apes and humans.

Apart from the size, the human brain differs from that of the great apes particularly in the location and organization of individual brain regions. "The features typical to humans are primarily those regions in the frontal lobe that are responsible for planning and executing complex patterns of thought and action, and ultimately also for language," notes first author Marcia Ponce de León. Since these areas are significantly larger in the human brain, the adjacent brain regions shifted further back.

Typical human brain spread rapidly from Africa to Asia

The first Homo populations outside Africa - in Dmanisi in what is now Georgia - had brains that were just as primitive as their African relatives. It follows, therefore, that the brains of early humans did not become particularly large or particularly modern until around 1.7 million years ago. However, these early humans were quite capable of making numerous tools, adapting to the new environmental conditions of Eurasia, developing animal food sources, and caring for group members in need of help.

During this period, the cultures in Africa became more complex and diverse, as evidenced by the discovery of various types of stone tools. The researchers think that biological and cultural evolution are probably interdependent. "It is likely that the earliest forms of human language also developed during this period," says anthropologist Ponce de León. Fossils found on Java provide evidence that the new populations were extremely successful: Shortly after their first appearance in Africa, they had already spread to Southeast Asia.

Brain imprints in fossil skulls reveal evolution of humans

Previous theories had little to support them because of the lack of reliable data. "The problem is that the brains of our ancestors were not preserved as fossils. Their brain structures can only be deduced from impressions left by the folds and furrows on the inner surfaces of fossil skulls," says study leader Zollikofer. Because these imprints vary considerably from individual to individual, until now it was not possible to clearly determine whether a particular Homo fossil had a more ape-like or a more human-like brain. Using computed tomography analyses of a range of fossil skulls, the researchers have now been able to close this gap for the first time.

Credit: 
University of Zurich

Transforming crop and timber production could reduce species extinction risk by 40%

Ensuring sustainability of crop and timber production would mitigate the greatest drivers of terrestrial wildlife decline, responsible for 40% of the overall extinction risk of amphibians, birds and mammals, according to a paper published today in Nature Ecology & Evolution. These results were generated using a new metric which, for the first time, allows business, governments and civil society to assess their potential contributions to stemming global species loss, and can be used to calculate national, regional, sector-based, or institution-specific targets. The work was led by the IUCN Species Survival Commission's Post-2020 Taskforce, hosted by Newcastle University (UK), in collaboration with scientists from 54 institutions in 21 countries around the world.

"For years, a major impediment to engaging companies, governments and others in biodiversity conservation has been the inability to measure the impact of their efforts," said IUCN Director General Dr Bruno Oberle. "By quantifying their contributions, the new STAR metric can bring all these actors together around the common objective of preserving the diversity of life on Earth. We need concerted global action to safeguard the world's biodiversity, and with it our own safety and wellbeing."

The authors applied the new STAR (Species Threat Abatement and Restoration) metric to all species of amphibians, birds, and mammals - groups of terrestrial vertebrate species that are comprehensively assessed on the IUCN Red List of Threatened Species. They found that removing threats to wildlife from crop production would reduce global extinction risk across these groups by 24%. Ending threats caused by unsustainable logging globally would reduce this by a further 16%, while removing threats associated with invasive alien species would bring a further 10% reduction, according to the paper. STAR can also be used to calculate the benefits of restoration: global extinction risk could potentially be reduced by 56% through comprehensive restoration of threatened species' habitats, according to the paper.
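
To make the bookkeeping behind these figures explicit, the 40% in the headline is simply the crop (24%) and timber (16%) contributions combined. A minimal illustrative sketch, treating the reported percentage-point contributions as additive:

```python
# STAR threat-abatement contributions to global extinction risk, as quoted above
# (percentage points of the global STAR total; additivity assumed for illustration only).
contributions = {
    "crop production": 24,
    "unsustainable logging": 16,
    "invasive alien species": 10,
}

crop_and_timber = contributions["crop production"] + contributions["unsustainable logging"]
print(f"Crop + timber threat abatement: {crop_and_timber}%")  # 40%, the headline figure
```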

Actions that benefit more species, and in particular the most threatened species, yield higher STAR scores. The results reveal that safeguarding "key biodiversity areas", covering just 9% of land surface, could reduce global extinction risk by almost half (47%). While every country contributes to the global STAR score, conservation in five megadiverse countries could reduce global extinction risk by almost a third (31%), with Indonesia alone potentially contributing 7%.

"We are in the midst of a biodiversity crisis and resources are limited, but our study shows that extinction risk is concentrated in relatively small areas with greater numbers of highly threatened species. The STAR methodology allows us to consistently measure where and how conservation and restoration could have the biggest impact," said Louise Mair of Newcastle University, lead author of the study. "At the same time, our analysis shows that threats to species are omnipresent, and that action to stem the loss of life on Earth must happen in all countries without exception."

To show how the metric can be used by individual institutions, the authors applied STAR to an 88,000-hectare commercial rubber initiative in central Sumatra, Indonesia, where the major threats to biodiversity are crop production, logging and hunting. By abating these threats within its concession area, the company could report reducing overall extinction risk by 0.2% across Sumatra, 0.04% across Indonesia and 0.003% globally. These scores would be due in part to safeguarding the area's populations of tigers (Panthera tigris; Endangered) and Asian elephants (Elephas maximus; Endangered), as well as leaf-nosed bats (Hipposideros orbiculus; Vulnerable, and only found in the region). Measuring contributions to biodiversity targets and assessing biodiversity-related risk - both facilitated by STAR - can feed into companies' Environmental, Social and Governance reporting.

The STAR metric will be available in time to inform major international negotiations for nature in 2021. These include the IUCN World Conservation Congress in Marseille, France, in September, followed by the Fifteenth Conference of the Parties to the Convention on Biological Diversity in Kunming, China.

"The post-2020 Global Biodiversity Framework seeks to identify specific actions that will improve the overall state of biodiversity," said Elizabeth Maruma Mrema, Executive Secretary of the Convention on Biological Diversity. "STAR provides a way to measure how reducing threats in a particular place can decrease overall extinction risk, linking proposed actions to achieving the Convention's vision of living in harmony with nature."

Credit: 
International Union for Conservation of Nature

Study calls for urgent climate change action to secure global food supply

New Curtin University-led research has found climate change will have a substantial impact on global food production and health if no action is taken by consumers, food industries, government, and international bodies.

In the study, published in the Annual Review of Public Health, one of the highest-ranking public health journals, the researchers completed a comprehensive 12-month review of the published literature on climate change, healthy diet and the actions needed to improve nutrition and health around the world.

Lead researcher John Curtin Distinguished Emeritus Professor Colin Binns, from the Curtin School of Population Health at Curtin University, said climate change has had a detrimental impact on health and food production for the past 50 years and far more needs to be done to overcome its adverse effects.

"The combination of climate change and the quality of nutrition is the major public health challenge of this decade and, indeed, this century. Despite positive advances in world nutrition rates, we are still facing the ongoing threat of climate change to our global food supply, with Sub-Saharan Africa and part of Asia most at risk" Professor Binns said.

"For the time being, it will be possible to produce enough food to maintain adequate intakes, using improved farming practices and technology and more equity in distribution, but we estimate that by 2050 world food production will need to increase by 50 per cent to overcome present shortages and meet the needs of the growing population.

"Our review recommends that by following necessary dietary guidelines and choosing foods that have low environmental impacts, such as fish, whole grain cereals, fruits, vegetables, legumes, nuts, berries, and olive oil, would improve health, help reduce greenhouse gases and meet the United Nations Sustainable Development Goals, which in turn would improve food production levels in the future."

Professor Binns said that while climate change will have a significant effect on food supply, political commitment and substantial investment could go some way to reduce the effects and help provide the foods needed to achieve the Sustainable Development Goals.

"Some changes will need to be made to food production, nutrient content will need monitoring, and more equitable distribution will be required to meet the proposed dietary guidelines. It was also be important to increase breastfeeding rates to improve infant and adult health, while helping to reduce greenhouse gases and benefit the environment," Professor Binns said.

"Ongoing research will be vital to assessing the long-term impacts of climate change on food supply and health in order to adequately prepare for the future."

Credit: 
Curtin University

The fastest one wins

Indole, and structures derived from it, are components of many natural substances, such as the amino acid tryptophan. A new catalytic reaction produces cyclopenta[b]indoles--frameworks made of three rings that are joined at the edges--very selectively and with the desired spatial structure. As a research team reports in the journal Angewandte Chemie, the rates of the different steps of the reaction play a critical role.

Indole derivates are widely distributed in nature; they are part of serotonin and melatonin, as well as many alkaloids--some of which are used as drugs, for example, as treatments for Parkinson's disease. Indole is an aromatic six-membered ring fused to a five-membered ring along one edge. The five-membered ring has a double bond and a nitrogen atom. The basic indole framework can be equipped with a variety of side groups or bound to additional rings. Indole and many indole derivatives can be made by an indole synthesis reaction developed by and named after Emil Fischer (acid-assisted condensation of ketones with phenyl hydrazines).

The most important class of indole derivatives is the cyclopenta[b]indoles--molecules with a framework made of one indole unit and an additional five-membered ring. This five-membered ring can contain a chiral carbon center--a ring carbon that bears two additional side groups--which can be arranged in two ways that are mirror images of each other. Only one of the two enantiomers, or mirror images, is found in nature. However, the classic Fischer indole synthesis produces a mix of both enantiomers.

A team led by Santanu Mukherjee and Garima Jindal at the Indian Institute of Science, Bangalore (India) has now developed a catalytic version of the Fischer indole synthesis that primarily produces one of the enantiomers (i.e., the reaction is enantioselective). The starting materials are a class of diketones (2,2-disubstituted cyclopentane-1,3-diones) and phenylhydrazine derivatives equipped with special protecting groups. The secret of their success is a special catalyst: a chiral, cyclic phosphoric acid. The reaction is carried out in the presence of zinc chloride as a co-catalyst and an acidic cation-exchange resin, which captures the ammonia that forms as a byproduct.

The heart of the reaction mechanism is called a dynamic kinetic resolution. During the reaction, a chiral hydrazone is first formed as an intermediate in both enantiomeric forms. This step is reversible, so that both of the enantiomeric hydrazones can interconvert during the course of the reaction. The reaction of the hydrazones to make the indole derivatives is the actual catalytic reaction. This reaction is much faster for one of the hydrazone enantiomers compared to the other because one form has a more favorable geometry when binding to the chiral catalyst. The other hydrazone enantiomer reacts very slowly and leads to only a small amount of the indole product. Instead, the slow-to-react hydrazone enantiomer converts to the fast-reacting hydrazone enantiomer, causing the equilibrium to eventually shift to the product cyclopenta[b]indole.
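
As a rough illustration of why "the fastest one wins" (a hedged toy model with made-up rate constants, not the authors' measured kinetics), consider two interconverting hydrazone enantiomers of which only one is converted quickly to product. When racemization is fast relative to the slow pathway, nearly all of the material funnels through the fast-reacting enantiomer and the product forms with high enantiomeric excess:

```python
# Toy model of a dynamic kinetic resolution (illustrative only; rate constants are
# hypothetical and not fitted to the Angewandte Chemie study).
k_fast = 1.0    # fast-reacting hydrazone enantiomer -> indole product (per hour)
k_slow = 0.02   # slow-reacting enantiomer -> product (per hour)
k_rac  = 5.0    # interconversion (racemization) between the two hydrazones (per hour)

R, S = 0.5, 0.5            # equal amounts of both hydrazone enantiomers at t = 0
P_fast, P_slow = 0.0, 0.0  # product formed via the fast and slow pathways
dt, steps = 0.001, 20000   # simple Euler integration over 20 hours

for _ in range(steps):
    dR = (-k_fast * R - k_rac * R + k_rac * S) * dt
    dS = (-k_slow * S - k_rac * S + k_rac * R) * dt
    P_fast += k_fast * R * dt
    P_slow += k_slow * S * dt
    R += dR
    S += dS

ee = (P_fast - P_slow) / (P_fast + P_slow)
print(f"conversion: {P_fast + P_slow:.2f}, product ee: {ee:.1%}")
# Because racemization keeps replenishing the fast-reacting enantiomer,
# nearly all product comes from that pathway and the ee stays high (~96% here).
```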

This method made it possible for the team to produce many different indole derivatives in moderate yields, but with good to excellent enantiomeric selectivity.

Credit: 
Wiley

'Bug brain soup' expands menu for scientists studying animal brains

image: The common eastern bumble bee (Bombus impatiens) with superimposed stained brain image to show the relative size and location of its brain.

Image: 
Wulfila Gronenberg

Using a surprisingly simple technique, researchers in the University of Arizona Department of Neuroscience have succeeded in approximating how many brain cells make up the brains of several species of bees, ants and wasps. The work revealed that certain species of bees have a higher density of brain cells than even some species of birds, whereas ants turned out to have fewer brain cells than originally expected.

Published in the scientific journal Proceedings of the Royal Society B, the study marks the first time the new cell counting method has been applied to invertebrate animals and provides a robust and reproducible protocol for other research groups studying the brains of invertebrate animals.

For more than a century, scientists have attempted to measure and compare the brains and brain components of vertebrates across species in efforts to draw conclusions about how brains support the animals' behavioral and cognitive abilities and ecological requirements. Theories of cognitive capacities of animal brains, including those of fossilized remains of the evolutionary ancestors of humans, are based on such measures.

To that end, scientists need to know how many neurons make up a given brain. Until recently, it was extremely tedious and time-consuming to count or estimate the number of neurons in a brain, even with computer and software-based systems.

For this reason, there were very few reliable neuron numbers available for any animals, including the human brain. Instead, brain researchers relied on estimates and extrapolations based on measurements of brain size or mass. But that approach can be fraught with uncertainties and biases, according to the authors of this study. For example, while larger animals, as a general rule, tend to have larger brains than smaller animals, the volume and mass of a given brain alone don't say much about its cognitive capabilities.

"How big or how heavy a brain is does not give you the best measure of an animal's cognitive capabilities," says the study's lead author, R. Keating Godfrey, a postdoctoral researcher in the Department of Molecular and Cellular Biology.

Why 'bird brain' is actually a compliment

One major reason is that the size of a brain is less relevant for its processing capacities compared with the number of neurons, or nerve cells, it contains. This is analogous to the processing power of a computer, which has little to do with the physical size of its central processor. Neurons are highly specialized types of cells found in virtually any species across the animal kingdom.

Contrast, for example, the sea hare - a giant sea slug found off the coast of California that can weigh more than 12 pounds - with the fruit fly Drosophila. The sea slug's brain alone dwarfs the entire fly by a lot, yet it has just 18,000 neurons, far fewer than the fly's approximately 100,000.

"Just because the brain of one species may be 10 times larger than that of another does not mean it has 10 times as many neurons," says the paper's senior author, Wulfila Gronenberg, a professor of neuroscience who heads a Department of Neuroscience research group dedicated to unraveling the mysteries of insect brains.

Whereas "bird brain" is widely used as a derogatory term for a lack of intelligence, it actually is a misnomer, Gronenberg says.

"Bird brains have many more neurons than a typical mammal of comparable size," he says. "Birds have to navigate a three-dimensional space by flight, and in order to get all that processing power into a small, lightweight package, their neurons are smaller and more densely packed."

Social brains

Gronenberg's research group is interested in the neuronal underpinnings of insects that live in social communities, like honeybees or many wasps.

"We wanted to know: Is there something special about the brains of social insects?" Godfrey says.

Specifically, she and her colleagues set out to study whether the "social brain" hypothesis, which was developed for vertebrate animals and postulates that the size of a brain or particular brain region is correlated with social group size and group behaviors, also holds true for social insects.

With the help of undergraduate students, Godfrey worked on adapting to insect brains a technique developed in 2005 by Brazilian neuroscientist Suzana Herculano-Houzel, which revolutionized the field of vertebrate neuroscience. Instead of slicing brains into hundreds or thousands of thin sections and counting neurons in each section, the method requires only that the brain tissue be homogenized. That's science speak for "blended," which results in a brain soup.

"We release the nuclei from the cells so we can count them," Godfrey says. "Vertebrates have dedicated brain regions and structures that you can sample from, but in insects, we can only really squish the whole thing. So we get a neuron density count for the entire brain."

Godfrey and her co-authors compared the brain cell counts with the body sizes of a large range of hymenoptera - bees, wasps and ants - and found that the neuron number and brain size relationships are very similar to those found in vertebrates.

Putting a number on an ant brain

Certain bees, the team reports, have particularly high numbers of neurons, which should stimulate renewed research into their behavioral capacities, and ants, in general, have fewer neurons than their wasp and bee relatives, probably because they do not fly and thus need less brain power for visual processing and flight control.

Some bees, it turned out, have even higher brain cell densities than some of the most compact bird and mammal brains. For example, the metallic green sweat bee, which is commonly seen in the Southwest and belongs to the genus Augochlorella, has a particularly high number of neurons for its brain size: about 2 million per milligram, more than the highest neuron densities found in the smallest vertebrate species - smoky shrews in mammals and goldcrests in birds.

Ants, on the other hand, tended to come in on the lower end of the spectrum. Compared with bees and wasps, ants had small brains and relatively few brain cells. A desert harvester ant species common in Arizona came in at just 400,000 cells per milligram of brain mass. Considering that this ant's brain weighs in at less than 1 milligram, this animal makes do with a total of 90,000 or so brain cells, Gronenberg estimates.

"We think this has to do with the ability to fly, which would make it less about intelligence but more about processing of information," he says. "Ants rely on scent information, whereas bees rely more on visual information."

How low can you go?

These findings raise the question of how many brain cells nature needs to make a functioning brain. Invertebrate brains tend to have highly specialized neurons, each performing a certain task, according to the authors of the study, which allows them to accomplish tasks with a small brain and a small number of neurons.

Gronenberg points to the tiny fairy wasp as a strong contender for the "tiniest brain in the insect world" award. Three strands of human hair, laid side by side, would cover the body length of the tiny creature, whose brain consists of fewer than 10,000 neurons.

"Yet, this parasitic wasp can do all the things it needs to do to survive," Gronenberg says.

"It can find a host, it can mate, it can lay eggs, it can walk and it can fly," he says. "While a small insect may just have one or a few neurons to perform a particular function, humans and other vertebrates tend to have many thousands, or even tens of thousands, of these specialized neurons dedicated to one task, which allows us to do things more precisely and in a more sophisticated way."

Credit: 
University of Arizona

Research gives new insight into formation of the human embryo

Pioneering research led by experts from the University of Exeter's Living Systems Institute has provided new insight into formation of the human embryo.

The team of researchers discovered a unique regenerative property of cells in the early human embryo.

The first tissue to form in the embryo of mammals is the trophectoderm, which goes on to connect with the uterus and make the placenta. Previous research in mice found that trophectoderm is only made once.

In the new study, however, the research team found that human early embryos are able to regenerate trophectoderm. They also showed that human embryonic stem cells grown in the laboratory can similarly continue to produce trophectoderm and placental cell types.

These findings show unexpected flexibility in human embryo development and may directly benefit assisted conception (IVF) treatments. In addition, being able to produce early human placental tissue opens a door to finding causes of infertility and miscarriage.

The study is published in the leading international peer-reviewed journal Cell Stem Cell on Wednesday, April 7, 2021.

Dr Ge Guo, lead author of the study from the Living Systems Institute said: "We are very excited to discover that human embryonic stem cells can make every type of cell required to produce a new embryo."

Professor Austin Smith, Director of the Living Systems Institute and co-author of the study, added: "Before Dr Guo showed me her results, I did not imagine this should be possible. Her discovery changes our understanding of how the human embryo is made and what we may be able to do with human embryonic stem cells."

Credit: 
University of Exeter

Health impacts of holocaust linger long after survival

The damaging effects of life under Nazi rule have long been known, with many victims having experienced periods of protracted emotional and physical torture, malnutrition and mass exposure to disease. But recent research from the Hebrew University of Jerusalem shows that even for those who survived, their health and mortality continued to be directly impacted long after the end of the Holocaust.

The study, led by Drs. Iaroslav Youssim and Hagit Hochner from the School of Public Health at the Faculty of Medicine and published in the American Journal of Epidemiology, investigated mortality rates from specific diseases over the course of many years among Israel-based Holocaust survivors.

The researchers analyzed death records of approximately 22,000 people who were followed-up from 1964 to 2016 and compared the rates of mortality from cancer and heart disease among survivors to the rates in individuals who did not live under Nazi occupation. Among women survivors, the study found a 15% higher rate of overall mortality and a 17% higher chance of dying from cancer. Among men, while overall death rates of the survivors were not different from those of the unexposed, mortality from cancer during the studied period was 14% higher among the survivor population and remarkably the rate of mortality from heart disease was 39% higher.

"Our research showed that people who experienced life under Nazi rule early in life, even if they were able to successfully migrate to Israel and build families, continued to face higher mortality rates throughout their lives," Youssim explains. "This study supports prior theories that survivors are characterized by general health resilience combined with vulnerabilities to specific diseases." Hochner added, "These findings reflect the importance of long-term monitoring of people who have experienced severe traumas and elucidates mortality patterns that might emerge from those experiences."

Credit: 
The Hebrew University of Jerusalem

Supportive partners protect relationship quality in people with depression or stress

image: Social psychologist Paula Pietromonaco is a professor emerita at UMass Amherst.

Image: 
UMass Amherst

Having a responsive, supportive partner minimizes the negative impacts of an individual's depression or external stress on their romantic relationship, according to research by a University of Massachusetts Amherst social psychologist.

Paula Pietromonaco, professor emerita of psychological and brain sciences, drew on data from her Growth in Early Marriage project (GEM) to investigate what she had discovered was an under-studied question. Findings are published in the journal Social Psychological and Personality Science.

"I was really surprised that although there's a ton of work out there on depression, there was very little in the literature looking at the kinds of behavior that partners could do that would buffer the detrimental effects of depression," says Pietromonaco, whose co-authors are Nickola Overall, professor of psychology at the University of Auckland in New Zealand, and Sally Powers, professor emerita of psychological and brain sciences at UMass Amherst.

In the 3 ½-year GEM study involving more than 200 newlywed couples and funded by the National Cancer Institute, Pietromonaco and colleagues examined how couples change over time and how their relationships affect health. During each annual visit to the lab, couples were videotaped while they discussed a major conflict in their relationship.

"The unique thing about our study is that we looked at responsiveness in terms of people's actual behavior, as opposed to their perceptions," Pietromonaco explains. "We used a very complex, intensive coding scheme that captures a whole range of behaviors that we can call responsive behavior."

The study found that being a responsive partner - one who focuses effort and energy to listen to their partner without reacting, tries to understand what's being expressed and be supportive in a helpful way, and knows what their particular partner needs - is in general associated with better relationship quality, "which is what you would think," Pietromonaco says.

"But when people have a vulnerability like being depressed or having a lot of external stress," she adds, "having a responsive partner seems to protect them against a sharp drop in relationship quality from one time point to the next."

The researchers predicted that a person with signs of mild to moderate depression would experience a drop in marital quality from one year to the next during the study. "And that's what we saw," Pietromonaco says. "It was a big drop - five points."

No such drop in relationship quality was seen among people with low depression scores, even when their partners were low in responsiveness. "But if you were depressed and your partner was responsive, in the next wave your marital quality did not look any different from people who were not depressed," she says.

Similarly, a person's external stress resulted in a drop in marital quality over time - unless their partner was found to be highly responsive, supportive and accepting. "If your partner is high in responsiveness, you don't show any more of a decline than people who have low external stress. But if your partner is low in responsiveness, you drop an average of over seven points, and that is a large effect," Pietromonaco says.

The new research advances Pietromonaco's previous work probing the couple-level dynamics of romantic relationships. "Each person's behavior and responsiveness and feelings affect the other person's, and they do so reciprocally," she explains.

The paper concludes that "these findings underscore the importance of adopting a dyadic perspective to understand how partners' responsive behavior can overcome the harmful effects of personal and situational vulnerabilities on relationship outcomes."

Credit: 
University of Massachusetts Amherst

RIT researcher finds that sign-language exposure impacts infants as young as 5 months old

While it isn't surprising that infants and children love to look at people's movements and faces, recent research from Rochester Institute of Technology's National Technical Institute for the Deaf examines exactly where they look when they see someone using sign language. The research uses eye-tracking technology, which offers a non-invasive and powerful tool for studying cognition and language learning in pre-verbal infants.

NTID researcher and Assistant Professor Rain Bosworth and alumnus Adam Stone studied early-language knowledge in young infants and children by recording their gaze patterns as they watched a signer. The goal was to learn, just from gaze patterns alone, whether the child was from a family that used spoken language or signed language at home.

They tested two groups of hearing infants and children that differ in their home language. One "control" group had hearing parents who spoke English and never used sign language or baby signs. The other group had deaf parents who only used American Sign Language at home. Both sets of children had normal hearing in this study. The control group saw sign language for the first time in the lab, while the native signing group was familiar with sign language.

The study, published in Developmental Science, showed that the non-signing infants and children looked at areas on the signer called "signing space," in front of the torso. The hands predominantly fall in this area about 80 percent of the time when signing. However, the signing infants and children looked primarily at the face, barely looking at the hands.

According to the findings, the expert sign-watching behavior is already present by about 5 months of age.

"This is the earliest evidence, that we know of, for effects of sign-language exposure," said Bosworth. "At first, it does seem counter-intuitive that the non-signers are looking at the hands and signers are not. We think signers keep their gaze on the face because they are relying on highly developed and efficient peripheral vision. Infants who are not familiar with sign language look at the hands in signing space perhaps because that is what is perceptually salient to them."

Another possible reason why signing babies keep their gaze on the face could be because they already understand that the face is very important for social interactions, added Bosworth.

"We think the reason perceptual gaze control matures so rapidly is because it supports later language learning, which is more gradual," Bosworth said. "In other words, you have to be able to know where to look before you learn the language signal."

Bosworth says more research is needed to understand the gaze behaviors of deaf babies who are or are not exposed to sign language.

Credit: 
Rochester Institute of Technology

Scientists discover 'jumping' genes that can protect against blood cancers

image: Zhimin Gu, Ph.D., postdoctoral fellow, Children's Medical Center Research Institute at UT Southwestern (CRI), and Jian Xu, Ph.D., associate professor, CRI

Image: 
UT Southwestern Medical Center

DALLAS - April 8, 2021 - New research has uncovered a surprising role for so-called "jumping" genes that are a source of genetic mutations responsible for a number of human diseases. In the new study from Children's Medical Center Research Institute at UT Southwestern (CRI), scientists made the unexpected discovery that these DNA sequences, also known as transposons, can protect against certain blood cancers.

These findings, published in Nature Genetics, led scientists to identify a new biomarker that could help predict how patients will respond to cancer therapies and find new therapeutic targets for acute myeloid leukemia (AML), the deadliest type of blood cancer in adults and children.

Transposons are DNA sequences that can move, or jump, from one location in the genome to another when activated. Though many different classes of transposons exist, scientists in the Xu laboratory focused on a type known as long interspersed element-1 (L1) retrotransposons. L1 sequences work by copying and then pasting themselves into different locations in the genome, which often leads to mutations that can cause diseases such as cancer. Nearly half of all cancers contain mutations caused by L1 insertion into other genes, particularly lung, colorectal, and head-and-neck cancers. The incidence of L1 mutations in blood cancers such as AML is extremely low, but the reasons why are poorly understood.

When researchers screened human AML cells to identify genes essential for cancer cell survival, they found MPP8, a known regulator of L1, to be selectively required by AML cells. Curious to understand the underlying basis of this connection, scientists in the Xu lab studied how L1 sequences were regulated in human and mouse leukemia cells. They made two key discoveries. The first was that MPP8 blocked the copying of L1 sequences in the cells that initiate AML. The second was that when the activity of L1 was turned on, it could impair the growth or survival of AML cells.

"Our initial finding was a surprise because it's been long thought that activated transposons promote cancer development by generating genetic mutations. We found it was the opposite for blood cancers, and that decreased L1 activity was associated with worse clinical outcomes and therapy resistance in patients," says Jian Xu, Ph.D., associate professor in CRI and senior author of the study.

MPP8 thus suppressed L1 in order to safeguard the cancer cell genome and allow AML-initiating cells to survive and proliferate. Cancer cells, just like healthy cells, need to maintain a stable genome to replicate. Too many mutations, like those created by L1 activity, can impair the replication of cancer cells. Researchers found L1 activation led to genome instability, which in turn activated a DNA damage response that triggered cell death or eliminated the cell's ability to replicate itself. Xu believes this discovery may provide a mechanistic explanation for the unusual sensitivity of myeloid leukemia cells to DNA damage-inducing therapies that are currently used to treat patients.

"Our discovery that L1 activation can suppress the survival of certain blood cancers opens up the possibility of using it as a prognostic biomarker, and possibly leveraging its activity to target cancer cells without affecting normal cells," says Xu.

Credit: 
UT Southwestern Medical Center