
Enzyme prisons

image: The slow-moving ships on the open sea serve to illustrate the limited cAMP dynamics. The whirlpools represent cAMP nanodomains around PDEs.

Image: 
Charlotte Konrad, MDC

There are up to a hundred different receptors on the surface of each cell in the human body. The cell uses these receptors to receive extracellular signals, which it then transmits to its interior. Such signals arrive at the cell in various forms, including as sensory perceptions, neurotransmitters like dopamine, or hormones like insulin.

One of the most important signaling molecules the cell uses to transmit such stimuli to its interior, where they trigger the corresponding signaling pathways, is a small molecule called cAMP. This so-called second messenger was discovered in the 1950s. Until now, it had been assumed on the basis of experimental observations that cAMP diffuses freely - i.e., that its concentration is essentially the same throughout the cell - and that a single signal should therefore encompass the entire cell.

"But since the early 1980s we have known, for example, that two different heart cell receptors release exactly the same amount of cAMP when they receive an external signal, yet completely different effects are produced inside the cell," reports Dr. Andreas Bock. Together with Dr. Paolo Annibale, Bock is temporarily heading the Receptor Signaling Lab at the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) in Berlin.

Like holes in a Swiss cheese

Bock and Annibale, who are the study's two lead authors, have now solved this apparent contradiction - which has preoccupied scientists for almost forty years. The team now reports in Cell that, contrary to previous assumptions, the majority of cAMP molecules cannot move around freely in the cell, but are actually bound to certain proteins - particularly protein kinases. In addition to the three scientists and Professor Martin Falcke from the MDC, the research project involved other Berlin researchers as well as scientists from Würzburg and Minneapolis.

"Due to this protein binding, the concentration of free cAMP in the cell is actually very low," says Professor Martin Lohse, who is last author of the study and former head of the group. "This gives the rather slow cAMP-degrading enzymes, the phosphodiesterases (PDEs), enough time to form nanometer-sized compartments around themselves that are almost free of cAMP." The signaling molecule is then regulated separately in each of these tiny compartments. "This enables cells to process different receptor signals simultaneously in many such compartments," explains Lohse. The researchers were able to demonstrate this using the example of the cAMP-dependent protein kinase A (PKA), the activation of which in different compartments required different amounts of cAMP.

"You can imagine these cleared-out compartments rather like the holes in a Swiss cheese - or like tiny prisons in which the actually rather slow-working PDE keeps watch over the much faster cAMP to make sure it does not break out and trigger unintended effects in the cell," explains Annibale. "Once the perpetrator is locked up, the police no longer have to chase after it."

Nanometer-scale measurements

The team identified the movements of the signaling molecule in the cell using fluorescent cAMP molecules and special methods of fluorescence spectroscopy - including fluctuation spectroscopy and anisotropy - which Annibale developed even further for the study. So-called nanorulers helped the group to measure the size of the holes in which cAMP switches on specific signaling pathways. "These are elongated proteins that we were able to use like a tiny ruler," explains Bock, who invented this particular nanoruler.

The team's measurements showed that most compartments are actually smaller than 10 nanometers - i.e., 10 millionths of a millimeter. This way, the cell is able to create thousands of distinct cellular domains in which it can regulate cAMP separately and thus protect itself from the signaling molecule's unintended effects. "We were able to show that a specific signaling pathway was initially interrupted in a hole that was virtually cAMP-free," said Annibale. "But when we inhibited the PDEs that create these holes, the pathway continued on unobstructed."

A chip rather than a switch

"This means the cell does not act like a single on/off switch, but rather like an entire chip containing thousands of such switches," explains Lohse, summarizing the findings of the research. "The mistake made in past experiments was to use cAMP concentrations that were far too high, thus enabling a large amount of the signaling molecule to diffuse freely in the cell because all binding sites were occupied."

As a next step, the researchers want to further investigate the architecture of the cAMP "prisons" and find out which PDEs protect which signaling proteins. In the future, medical research could also benefit from their findings. "Many drugs work by altering signaling pathways within the cell," explains Lohse. "Thanks to the discovery of this cell compartmentalization, we now know there are a great many more potential targets that can be searched for."

"A study from San Diego, which was published at the same time as our article in Cell, shows that cells begin to proliferate when their individual signaling pathways are no longer regulated by spatial separation," says Bock. In addition, he adds, it is already known that the distribution of cAMP concentration levels in heart cells changes in heart failure, for example. Their work could therefore open up new avenues for both cancer and cardiovascular research.

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Quantum simulation for 3D chiral topological phase

Recently, Professor Liu Xiongjun's group at the International Center for Quantum Materials (ICQM) of Peking University, together with Professor Du Jiangfeng and Professor Wang Ya at the University of Science and Technology of China, published a paper in Phys. Rev. Lett. reporting progress on the quantum simulation of a 3D chiral topological phase [Phys. Rev. Lett. 125, 020504 (2020)]. This is the latest progress on the characterization of equilibrium topological phases by non-equilibrium quantum dynamics, an approach proposed by Liu's group in recent years.

Quantum simulation, as a state-of-the-art technique, provides a powerful way to explore topological quantum phases beyond natural limits. Nevertheless, it is usually hard to simulate both the bulk and surface topological physics at the same time and so reveal their correspondence. In the paper published in PRL, Professor Liu at PKU and Professors Du and Wang at USTC built a quantum simulator based on a nitrogen-vacancy center to investigate a three-dimensional (3D) chiral topological insulator that has not been realized in solid-state systems, and demonstrated a complete study of both the bulk and surface topological physics by quantum quenches. First, a dynamical bulk-surface correspondence in momentum space is observed, showing that the bulk topology of the 3D phase uniquely corresponds to the nontrivial quench dynamics emerging on 2D momentum hypersurfaces called band inversion surfaces (BISs), a concept proposed by Liu et al. in a series of previous publications (see, e.g., Science Bull. 63, 1385). This is the momentum-space counterpart of the bulk-boundary correspondence in real space. Further, the symmetry protection of the 3D chiral phase is uncovered by measuring dynamical spin textures on the BISs, which exhibit perfect (broken) topology when the chiral symmetry is preserved (broken). Finally, the researchers measure the topological charges to characterize the bulk topology directly, and identify an emergent dynamical topological transition as the quenches are varied from the deep to the shallow regime. This work demonstrates how a full study of topological phases can be achieved in quantum simulators.
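
To get an intuitive feel for the band inversion surface (BIS) idea, the minimal Python sketch below uses a simple two-band model in 2D rather than the 3D four-band chiral phase realized in the experiment; the model, its parameters and the numerical tolerance are illustrative assumptions, not the system studied in the paper. After a deep quench from a fully polarized state, the time-averaged spin polarization vanishes exactly at the momenta where the band inversion occurs.

```python
# Minimal numerical illustration of the band-inversion-surface (BIS) idea using
# a simple 2D two-band model as a stand-in for the 3D four-band chiral phase
# studied in the paper. After a deep quench from a fully z-polarized state, the
# time-averaged spin polarization vanishes exactly where h_z(k) = 0 (the BIS).
# The model and parameters are illustrative, not those of the experiment.
import numpy as np

m0 = 1.0  # post-quench mass term (|m0| < 2 gives a topological regime here)

k = np.linspace(-np.pi, np.pi, 201)
kx, ky = np.meshgrid(k, k, indexing="ij")

hx = np.sin(kx)
hy = np.sin(ky)
hz = m0 - np.cos(kx) - np.cos(ky)
h2 = hx**2 + hy**2 + hz**2

# For an initial spin along -z precessing about h(k), the long-time average of
# sigma_z is -h_z^2 / |h|^2, so it vanishes on the BIS where h_z(k) = 0.
avg_sz = -(hz**2) / h2

bis = np.abs(hz) < 0.05   # grid points (approximately) on the band inversion surface
print("grid points on the BIS:", int(bis.sum()))
print("largest |<sigma_z>| on the BIS:", float(np.abs(avg_sz[bis]).max()))
```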

Graduate students Ji Wentao (USTC) and Zhang Ling (PKU) are co-first authors of the paper. This work was supported by NSFC, MOST, and CAS.

Credit: 
Peking University

Scientists create protein models to explore toxic methylmercury formation

image: A structural model of HgcA, shown in cyan, and HgcB, shown in purple, were created using metagenomic techniques to better understand the transformation of mercury into its toxic form, methylmercury.

Image: 
Connor Cooper, Oak Ridge National Laboratory/U.S. Dept. of Energy.

A team led by the Department of Energy's Oak Ridge National Laboratory created a computational model of the proteins responsible for the transformation of mercury to toxic methylmercury, marking a step forward in understanding how the reaction occurs and how mercury cycles through the environment.

Methylmercury is a potent neurotoxin that is produced in natural environments when inorganic mercury is converted by microorganisms into the more toxic, organic form. In 2013, ORNL scientists announced a landmark discovery: They identified a pair of genes, hgcA and hgcB, that are responsible for mercury methylation.

Those genes encode the proteins HgcA and HgcB, whose structure and function ORNL scientists have been working to better understand.

"Determining protein structures can be challenging," said Jerry Parks, the head investigator and leader of the Molecular Biophysics group at ORNL.

These two proteins are difficult to characterize experimentally for several reasons: they are produced by anaerobic microorganisms and are therefore highly sensitive to oxygen; they are expressed at such low levels in cells that they are barely detectable by conventional techniques; HgcA is embedded in the membrane of a cell, making it more challenging to study than a soluble protein; and both proteins have complex cofactors--substances that bind to proteins and are essential for their function.

"We don't have an experimental structure yet for these proteins, so the next best thing is to use computational techniques to predict their structure," Parks said.

The computational model was generated using a large dataset of HgcA and HgcB protein sequences from many different microorganisms, ORNL's high-performance computing resources and bioinformatics, and structural modeling techniques as detailed in a recent article in Communications Biology.

The result is a 3D structural model of the HgcAB protein complex and its cofactors that scientists can use to develop new hypotheses designed to understand the biochemical mechanism of mercury methylation and then test them experimentally.

Scientists have been predicting protein structures from their amino acid sequences for many years. In 2017, a team led by the University of Washington reached a milestone, modeling the structures of hundreds of previously unsolved protein families by mining large metagenomic datasets for diverse protein sequences. This approach predicts which pairs of amino acids in each protein are in close contact with each other, and then uses that information to fold the proteins computationally.

Parks was eager to apply the same techniques to the mercury work, turning to data available from DOE's Joint Genome Institute, a DOE Office of Science user facility.

The scientists searched the JGI database for HgcA and HgcB amino acid sequences. They then performed a coevolution analysis to identify coordinated changes that occur among pairs of amino acids. Coevolution makes it likely that those coordinated pairs are close to each other in the three-dimensional folded structure of the protein. This information can be used to guide computational protein folding and predict how the folded protein domains interact with each other.
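
As a rough illustration of the kind of coevolution analysis described above, the short Python sketch below scores pairs of columns in a toy multiple sequence alignment by mutual information and ranks the top-scoring pairs as candidate contacts. The alignment and the scoring scheme are simplified assumptions; the actual study used far larger metagenomic datasets and more sophisticated statistical models.

```python
# Minimal sketch of a coevolution (mutual information) analysis over a toy
# multiple sequence alignment (MSA). Sequences here are hypothetical.
from collections import Counter
from itertools import combinations
import math

msa = [
    "MKVCLAD",
    "MRVCLSD",
    "MKICLAD",
    "MRLCVSD",
    "MKVCLAE",
]

def column(i):
    return [seq[i] for seq in msa]

def entropy(symbols):
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

def mutual_information(i, j):
    ci, cj = column(i), column(j)
    joint = entropy(list(zip(ci, cj)))
    return entropy(ci) + entropy(cj) - joint  # MI = H(i) + H(j) - H(i,j)

L = len(msa[0])
scores = sorted(
    ((mutual_information(i, j), i, j) for i, j in combinations(range(L), 2)),
    reverse=True,
)

# Highly coevolving column pairs are candidate spatial contacts that can be
# used to constrain computational folding of the protein.
for mi, i, j in scores[:3]:
    print(f"columns {i}-{j}: MI = {mi:.2f}")
```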

One surprising finding by the team is that the two domains of HgcA don't interact with each other, but they both interact with the HgcB protein. The model also suggests that conserved cysteine amino acids in HgcB are likely involved in shuttling some forms of mercury, methylmercury, or both, to HgcA during the reaction. Some features of these proteins are similar to other more well-studied proteins, but others are unique and have not been observed before in any other protein.

Future research will involve experimental testing. Stephen Ragsdale's group at the University of Michigan is working out a way to produce the HgcA and HgcB proteins in E. coli bacteria in sufficient quantities to enable the proteins to be studied in the laboratory using spectroscopic techniques and X-ray crystallography. "We are excited that this important experimental work is being done," said Parks. "It will be interesting to see how well we did with our structure predictions."

Mercury is a naturally occurring element found worldwide, and scientists at ORNL have come to realize that the microorganisms that convert inorganic mercury to methylmercury are also widespread. "We don't know as much as we'd like about all the different reactions and processes that mercury can undergo," Parks added. "This work helps us understand more about one of the most important biotransformations of mercury in nature."

In addition to gaining insight into mercury methylation, the project creates a new capability at ORNL that can be used to explore the structure and function of other microbial proteins. In particular, Parks and colleagues are interested in characterizing proteins from microorganisms referred to as microbial dark matter because they are unable to be cultured in the lab and are otherwise difficult to study.

"There is so much we still don't know about all the unusual proteins that are produced by microorganisms," Parks said. "This technique allows us to begin characterizing these complex, mysterious biological systems."

Credit: 
DOE/Oak Ridge National Laboratory

Researchers introduce new theory to calculate emissions liability

image: The Michigan Tech campus.

Image: 
Michigan Tech

A comparison of the results for conventional point source pollution and bottleneck carbon emissions sources shows that oil and natural gas pipelines are far more important than simple point-source emissions calculations would indicate. It also shifts the emissions liability towards the East Coast from the Midwest. Most surprisingly, the study found that seven out of eight oil pipelines in the U.S. responsible for facilitating the largest amount of carbon emissions are not American.

Fossil fuels (coal, oil and natural gas) emit carbon dioxide when burned, which scientists say is the greenhouse gas primarily responsible for global warming and climate change. Climate change causes numerous problems that economists call "externalities," because they are external to the market. In a new study published in Energies, Alexis Pascaris, graduate student in environmental and energy policy, and Joshua Pearce, the Witte Professor of Engineering, both of Michigan Technological University, explain how current U.S. law does not account for these costs and explore how litigation could be used to address this flaw in the market. The study also investigates which companies would be at most risk.

Pearce explained that their past work found that "as climate science moves closer to being able to identify which emitters are responsible for climate costs and disasters, emissions liability is becoming a profound business risk for some companies."

Most work on carbon emissions liability focuses on who caused the harm and what the costs are. Pascaris and Pearce's "bottleneck" theory instead places the focus on who enables emissions.

Focusing Efforts

The U.S. Environmental Protection Agency defines point source pollution as "any single identifiable source of pollution from which pollutants are discharged." For example, pipelines themselves create very little point source pollution, yet an enormous amount of effort has been focused on stopping the Keystone XL Pipeline because of the presumed emissions it enables.

The Michigan Tech study asked: Would the magnitude of the emissions enabled by a pipeline warrant the effort, or should lawsuits be focused elsewhere if minimizing climate change is the goal?

In order to answer this question quantitatively, the study presented an open and transparent methodology for prioritizing climate lawsuits based on an individual facility's ability to act as a bottleneck for carbon emissions.

"Just like a bottleneck that limits the flow of water, what our emissions bottleneck theory does is identify what carbon emissions would be cut off if a facility was eliminated rather than only provide what emissions come directly from it as a point source," Pearce said. "This study found that point source pollution in the context of carbon emissions can be quite misleading."

The results showed that the prominent carbon emission bottlenecks in the U.S. are for transportation of oil and natural gas. While the extraction of oil is geographically concentrated in both North Dakota and Texas, the pipeline network is extensive and transcends both interstate and national boundaries, further complicating legal issues.

Overall, seven of the eight oil pipelines in the U.S. are foreign owned and account for 74% of the entire oil industry's carbon emissions. They are likely priorities for climate-related lawsuits and thus warrant higher climate liability insurance premiums.

As a whole, the fossil fuel-related companies identified in the study face increased risks from legal liability, from future regulations meant to curb climate destabilization, and as targets for eco-terrorism.

"All of these business risks would tend to increase insurance costs, but significant future work is needed to quantify what climate liability insurance costs should be for companies that enable major carbon emissions," concluded Pearce.

Credit: 
Michigan Technological University

Using light's properties to indirectly see inside a cell membrane

image: Fluorescent probes emit light as they briefly attach to, then detach from, a cell membrane.

Image: 
Washington University in St. Louis

For those not involved in chemistry or biology, picturing a cell likely brings to mind several discrete, blob-shaped objects; maybe the nucleus, mitochondria, ribosomes and the like.

There's one part that's often overlooked, save perhaps a squiggly line indicating the cell's border: the membrane. But its role as gatekeeper is an essential one, and a new imaging technique developed at the McKelvey School of Engineering at Washington University in St. Louis is providing a way to see into, as opposed to through, this transparent, fatty, protective casing.

The new technique, developed in the lab of Matthew Lew, assistant professor in the Preston M. Green Department of Electrical and Systems Engineering, allows researchers to distinguish collections of lipid molecules of the same phase -- the collections are called nanodomains -- and to determine the chemical composition within those domains.

The details of this technique -- single-molecule orientation localization microscopy, or SMOLM -- were published online Aug. 21 in Angewandte Chemie, the journal of the German Chemical Society.

Editors at the journal -- a leading one in general chemistry -- selected Lew's paper as a "Hot Paper" in nanoscale science. Hot Papers are distinguished by their importance in a rapidly evolving field of high interest.

Using traditional imaging technologies, it's difficult to tell what's "inside" versus "outside" a squishy, transparent object like a cell membrane, Lew said, particularly without destroying it.

"We wanted a way to see into the membrane without traditional methods" -- such as inserting a fluorescent tracer and watching it move through the membrane or using mass spectrometry -- "which would destroy it," Lew said.

To probe the membrane without destroying it, Jin Lu, a postdoctoral researcher in Lew's lab, also employed a fluorescent probe. Instead of having to trace a path through the membrane, however, this new technique uses the light emitted by a fluorescent probe to directly "see" where the probe is and where it is "pointed" in the membrane. The probe's orientation reveals information about both the phase of the membrane and its chemical composition.

"In cell membranes, there are many different lipid molecules," Lu said. "Some form liquid, some form a more solid or gel phase."

Molecules in a solid phase are rigid and their movement constrained. They are, in other words, ordered. When they are in a liquid phase, however, they have more freedom to rotate; they are in a disordered phase.

Using a model lipid bilayer to mimic a cell membrane, Lu added a solution of fluorescent probes, such as Nile red, and used a microscope to watch the probes briefly attach to the membrane.

A probe's movement while attached to the membrane is determined by its environment. If surrounding molecules are in a disordered phase, the probe has room to wiggle. If the surrounding molecules are in an ordered phase, the probe, like the nearby molecules, is fixed.

When light is shined on the system, the probe releases photons. An imaging method previously developed in the Lew lab then analyzes that light to determine the orientation of the molecule and whether it's fixed or rotating.

"Our imaging system captures the emitted light from single fluorescent molecules and bends the light to produce special patterns on the camera," Lu said.

"Based on the image, we know the probe's orientation and we know whether it's rotating or fixed," and therefore, whether it's embedded in an ordered nanodomain or not.

Repeating this process hundreds of thousands of times provides enough information to build a detailed map, showing the ordered nanodomains surrounded by the ocean of the disordered liquid regions of the membrane.
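
The map-building step can be sketched as follows: each detected probe contributes a position and an estimate of how freely it rotates, and pixels dominated by fixed probes are flagged as candidate ordered nanodomains. The Python sketch below uses synthetic data, an arbitrary mobility threshold and a simple grid; the actual SMOLM analysis estimates orientation and wobble from the measured emission patterns.

```python
# Minimal sketch of the map-building step: bin single-molecule measurements
# (x, y position plus an estimated rotational mobility) into a 2D grid and
# mark pixels dominated by "fixed" probes as candidate ordered nanodomains.
# Data and the mobility threshold below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0, 1000, n)            # nm
y = rng.uniform(0, 1000, n)            # nm
wobble = rng.uniform(0, 1, n)          # 0 = fixed probe, 1 = freely rotating

FIXED_THRESHOLD = 0.3                  # below this, treat the probe as fixed
PIXEL = 20.0                           # nm per map pixel

nbins = int(1000 / PIXEL)
ix = np.clip((x / PIXEL).astype(int), 0, nbins - 1)
iy = np.clip((y / PIXEL).astype(int), 0, nbins - 1)

counts = np.zeros((nbins, nbins))
fixed_counts = np.zeros((nbins, nbins))
np.add.at(counts, (ix, iy), 1)
np.add.at(fixed_counts, (ix, iy), wobble < FIXED_THRESHOLD)

# Fraction of fixed probes per pixel; high values suggest an ordered nanodomain.
ordered_fraction = np.divide(fixed_counts, counts, out=np.zeros_like(counts),
                             where=counts > 0)
print("pixels classified as ordered:", int((ordered_fraction > 0.5).sum()))
```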

The fluorescent probe Lu used, Nile red, is also able to distinguish between lipid derivatives within the same nanodomains. In this context, the probe can tell whether or not the lipid molecules are hydrolyzed when a certain enzyme is present.

"This lipid, named sphingomyelin, is one of the critical components involved in nanodomain formation in cell membrane. An enzyme can convert a sphingomyelin molecule to ceramide," Lu said. "We believe this conversion alters the way the probe molecule rotates in the membrane. Our imaging method can discriminate between the two, even if they stay in the same nanodomain."

This resolution, down to a single molecule in a model lipid bilayer, cannot be achieved with conventional imaging techniques.

This new SMOLM technique can resolve interactions between various lipid molecules, enzymes and fluorescent probes with detail that has never been achieved previously. This is important particularly in the realm of soft matter chemistry.

"At this scale, where molecules are constantly moving, everything is self-organized," Lew said. It's not like solid-state electronics where each component is connected in a specific and importantly static way.

"Every molecule feels forces from those surrounding it; that's what determines how a particular molecule will move and perform its functions."

Individual molecules can organize into these nanodomains that, collectively, can inhibit or encourage certain things -- like allowing something to enter a cell or keeping it outside.

"These are processes that are notoriously difficult to observe directly," Lew said. "Now, all you need is a fluorescent molecule. Because it's embedded, its own movements tell us something about what's around it."

Credit: 
Washington University in St. Louis

Cyberintimacy: Technology-mediated romance in the digital age

image: Explores the psychological and social issues surrounding the Internet and interactive technologies

Image: 
Mary Ann Liebert, Inc., publishers

New Rochelle, NY, August 25, 2020--Digital technology has had a transformative effect on our romantic lives. This scoping review reports on measurable outcomes for the three stages of the romantic relationship lifecycle - initiation, maintenance, and dissolution - as described in the peer-reviewed journal Cyberpsychology, Behavior, and Social Networking.

"As our knowledge of human-computer interactions mature, future research could explore the potential of behavior-change interventions specifically designed to enhance cyberintimacy," state coauthors Ian Kwok and Annie Wescott, Feinberg School of Medicine, Northwestern University.

"Providing new opportunities for microinteractions, such as texting, and new challenges, such as privacy issues, digital media has altered real world romantic relationships. This review helps to identify emergent themes in this important area of research," says Editor-in-Chief Brenda K. Wiederhold, PhD, MBA, BCB, BCN, Interactive Media Institute, San Diego, California and Virtual Reality Medical Institute, Brussels, Belgium.

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

NUS researchers develop new system for accurate telomere profiling in less than 3 hours

image: A magnified image captured by the device used to perform STAR assay. Different fluorescent intensities reflect the length variations in individual telomere molecules.

Image: 
National University of Singapore

The plastic tips attached to the ends of shoelaces keep them from fraying. Telomeres are repetitive DNA (deoxyribonucleic acid) sequences that serve a similar function at the ends of chromosomes, protecting the accompanying genetic material against genome instability, preventing cancers and regulating the aging process.

Each time a cell in our body divides, its telomeres shorten; because this shortening progresses with age, telomeres function like a molecular "clock" of the cell. An accurate measure of the quantity and length of these telomeres, or "clocks", can provide vital information on whether a cell is aging normally or abnormally, as in the case of cancer.

To come up with an innovative way to diagnose telomere abnormalities, a research team led by Assistant Professor Cheow Lih Feng from the NUS Institute for Health Innovation & Technology (iHealthtech) has developed a novel method to measure the absolute length of individual telomeres in less than three hours. This unique telomere profiling method can process up to 48 samples from low amounts of input DNA.

Their work was published in the journal Science Advances on 21 August 2020.

"Our innovation could greatly enhance the speed of diagnosis and simultaneously provide critical telomere information for age-related diseases and cancers. Such a clinically reliable tool that is able to provide accurate telomere profiling will allow for precision therapy and targeted treatments for patients," explained Asst Prof Cheow, who is also from the NUS Department of Biomedical Engineering.

Overcoming the limitations of conventional telomere tests

Conventional methods for telomere measurement in clinical settings are often time-consuming and require skilled operators. These methods also lack the precise information on individual telomere lengths and quantities that is required to accurately diagnose or characterize telomere abnormalities.

To overcome the major technical impediments in performing telomere profiling, Asst Prof Cheow and his team have developed a unique system called Single Telomere Absolute-length Rapid (STAR) assay.

Using this method, individual telomere molecules are first distributed into thousands of nanolitre chambers in a microfluidic chip. Real-time polymerase chain reaction (PCR) of single telomere molecules is then performed across all the chambers in a massively parallel manner. The PCR amplification kinetics in each nanolitre chamber reflects the telomere repeat number, which directly correlate to the length of a single telomere molecule.
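
In outline, the per-chamber readout could be converted into a length distribution as in the Python sketch below: each positive chamber's quantification cycle is mapped to a repeat number through a calibration curve, and repeat numbers are converted to lengths. The calibration constant and the cycle values are hypothetical, not the published calibration.

```python
# Sketch of the per-chamber readout idea: convert each nanolitre chamber's
# PCR quantification cycle (Cq) into an estimated telomere repeat number via
# a calibration curve, then summarise the length distribution. The calibration
# constant and Cq values below are hypothetical, not the published ones.
import statistics

A = 4.0e8            # hypothetical calibration constant fitted from standards
BP_PER_REPEAT = 6    # one telomeric TTAGGG repeat is 6 bp

def repeats_from_cq(cq):
    # Standard real-time PCR relation (assuming ~100% efficiency): the number
    # of starting copies halves for every extra quantification cycle.
    return A * 2.0 ** (-cq)

chamber_cq = [18.2, 19.5, 17.8, 21.0, 18.9, 20.3]   # one value per positive chamber

lengths_kb = [repeats_from_cq(cq) * BP_PER_REPEAT / 1000 for cq in chamber_cq]
print(f"mean telomere length: {statistics.mean(lengths_kb):.1f} kb")
print(f"shortest telomere:    {min(lengths_kb):.1f} kb")
```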

Using the STAR assay, the researchers can accurately determine the telomere maintenance mechanism in cancer cells and obtain highly detailed measurements. Patients who have cancers activated by the Alternative Lengthening of Telomeres (ALT) pathway - such as certain sarcomas (cancers of connective tissue) and gliomas (cancers of the brain) - often have a poor prognosis that is linked to longer-than-average telomere length and a high percentage of critically short telomeres. In addition, their cancer cells were also found to possess extra copies of telomere molecules.

To test the invention, the NUS iHealthtech team collaborated with Dr Amos Loh from the Department of Paediatric Surgery at KK Women's and Children's Hospital (KKH). The team chose to work with KKH to validate the assay as telomere alteration mechanisms like ALT are best studied in tumours like sarcomas, and neuronal tumours like gliomas and neuroblastoma that have a higher incidence among children. The validation has proven the STAR assay to be effective in diagnosing the ALT status in paediatric neuroblastoma, which can serve as a useful prognosis indicator for this cancer.

Cancer cells exploit modified mechanisms to abnormally maintain the length of their telomeres, which allows them to grow continually. Mechanisms like ALT are exploited particularly by cancers like neuroblastoma. Neuroblastoma is a cancer that arises from nerves in various parts of the body and is the most common solid malignant tumour in children. It is also responsible for a disproportionate number of childhood deaths from cancer.

"Previously less recognised in patients, telomere abnormalities like ALT have been recently identified to be a new risk marker in neuroblastoma. Since neuroblastoma with telomere abnormalities have poorer outcomes, this new method of measuring telomeres can now facilitate simpler and more rapid identification of ALT in patients to more accurately define their disease prognosis," said Dr Amos Loh, Senior Consultant, who is from KKH's Department of Paediatric Surgery.

The development of the STAR assay is supported by the National Medical Research Council.

Enabling effective treatment strategies for patients

Asst Prof Cheow shared, "The combination of rapid workflow, scalability and single-molecule resolution makes our system unique in enabling the use of telomere length distribution as a biomarker in disease and population-wide studies. It will be particularly useful for diagnosing telomere maintenance mechanisms within clinical time scales, to determine personalised, therapeutic or preventive strategies for patients".

The NUS iHealthtech team is looking to extend their research and apply the STAR assay platform for use in hospital settings, to facilitate the diagnosis of aging-related diseases.

Credit: 
National University of Singapore

High human population density negative for pollinators

Population density, and not the proportion of green spaces, has the biggest impact on the species richness of pollinators in residential areas. This is the result of a study from Lund University in Sweden of gardens and residential courtyards in and around Malmö. The result surprised the researchers, who had expected that the vegetation cover would be more significant.

"We have found that, in cities, the higher the population density, the fewer species of wild bees and hoverflies we find in gardens and residential courtyards. We also see that areas with enclosed courtyards and tall buildings have fewer species of wild bees than areas with semi-detached and detached houses, even when there are large green spaces between the buildings", says Anna Persson, one of the researchers behind the study.

It is believed that the result is due to two things: first, that tall buildings and enclosed courtyards constitute physical barriers for insects; and second, that green environments in densely populated areas are often insufficient for pollinators, as they may consist of no more than a lawn and a few ornamental shrubs.

"Urban green spaces often look very different and the quality can vary a lot. A space can be green and still be a poor habitat for pollinators. In multi-family areas these spaces are usually simplified and maintained by an external contractor, compared to detached houses where there is often personal engagement and a greater variation of both plants and management practices", says Anna Persson.

Another interesting discovery the researchers made was that urban gardens contain different species of wild bees than those found in agricultural landscapes.

"Therefore, the city complements the countryside", says Anna Persson, who contends that this is important knowledge, particularly in regions with intense farming, as this means that the city constitutes an important environment for the regional diversity of bees. It also means that measures for the conservation of bees are needed both in urban and rural areas, to reach different species.

For hoverflies, however, the result was different - the species found in urban areas were just a fraction of those in rural areas, probably because hoverfly larval habitats, such as aquatic environments and plant debris, are scarcer in the city.

Urbanisation is one of the main causes of biodiversity decline. This is due both to urban land expansion and to densification through infill development. The researchers wanted to study which factor affected the species richness of pollinators to the greatest extent - population density or vegetation cover. In addition, they wanted to find out whether the built urban form had any effect on species richness and what residential areas with a high diversity of pollinators look like. The study was carried out by comparing species richness in areas with varying degrees of population density and vegetation cover - in total, forty gardens and courtyards across nearly all of Malmö were studied. The researchers also made comparisons between gardens in the urban areas and the intensively farmed agricultural landscape surrounding Malmö.

"Pollinators are interesting and important to study in cities as they are crucial to the functioning of the ecosystem and, in addition, they are necessary for us to be able to achieve good harvests in our vegetable gardens and community allotments", says Anna Persson.

She hopes the study will contribute new knowledge about how to plan and build cities in a way that reduces their negative impact on species richness.

"We show that the urban form is significant. By reducing the physical barriers between residential courtyards and by combining different kinds of built environments it is possible to benefit pollinators. In addition, we demonstrate that there is scope for improvements to the existing green spaces, particularly in areas with multi-family buildings. Green spaces in these areas are often of low quality, both for biodiversity and for human recreation. One way to upgrade them is to let them grow a little more 'wild', with less intensive maintenance and more native plants", she concludes.

Credit: 
Lund University

Mineral dust ingested with food leaves characteristic wear on herbivore teeth

image: Microscopic images of surfaces of guinea pig teeth show the typical abrasions caused by different foods.

Image: 
ill./©: Daniela E. Winkler

Mineral dust ingested with food causes distinct signs of wear on the teeth of plant-eating vertebrates, which can differ considerably depending on the type of dust. This is what paleontologists at Johannes Gutenberg University Mainz (JGU) have discovered in a controlled feeding study of guinea pigs. As they report in the current issue of Proceedings of the National Academy of Sciences of the United States of America (PNAS), their findings could lead to a more accurate reconstruction of the eating habits of extinct animals as well as a reconstruction of their habitats. "Analyzing fossil teeth is a common method of drawing conclusions about the diet and habitat of certain animals, because it has long been understood that eating different plants, such as grass or leaves, can cause different wear patterns," said Dr. Daniela Winkler of the Institute of Geosciences at JGU, the first author of the study. "However, there has been hardly any research into the extent to which the consumption of mineral dust contributes to this abrasion."

Over several weeks, the researchers fed 12 groups of guinea pigs with essentially the same plant-based pellets, which contained different types and amounts (zero to eight percent) of natural mineral dust. The researchers then used a high-resolution microscope to examine the surface of the tooth enamel of each animal's molars. "We were able to identify some significant differences," added Winkler. For example, larger quartz particles (sand grains) caused severe abrasion of the enamel surface. The same applied to volcanic ash, which, due to its sharp edges, also produced a more irregular wear pattern. Small quartz particles generated a smooth, almost polished surface. Other particles, by contrast, left no distinctive wear features.

"Our results should improve the accuracy of diet reconstruction on the basis of fossil teeth," concluded Winkler. To date, it has been assumed that smooth tooth surfaces indicate that the animal in question fed on leaves, which, unlike grass, leave hardly any traces of wear on the tooth surface; such an animal would therefore have lived in a forest environment. However, it now seems possible that smooth tooth enamel wear patterns could also have developed because the animal ate grass to which tiny quartz grains were attached. These particles would have eliminated any irregularities on the teeth, leaving an even, polished surface. "It is normal for animals to ingest mineral dust along with their food," said Winkler. This is all the more likely the drier the habitat is and the closer to the ground the food is ingested.

The study was undertaken as part of the Vertebrate Herbivory research project led by Professor Thomas Tütken of the Institute of Geosciences at JGU, which is funded by a Consolidator Grant from the European Research Council (ERC). The study also involved researchers of the Clinic for Zoo Animals, Exotic Pets, and Wildlife at the University of Zurich, of Leipzig University, of the Max Planck Institute for Evolutionary Anthropology in Leipzig, and of the Center of Natural History at Universität Hamburg.

Credit: 
Johannes Gutenberg Universitaet Mainz

Ecologists put biodiversity experiments to the test

image: Aerial view of the BioDIV Experiment in Minnesota, USA.

Image: 
Forest Isbell

Much of our knowledge of how biodiversity benefits ecosystems comes from experimental sites. These sites contain combinations of species that are not found in the real world, which has led some ecologists to question the findings from biodiversity experiments. But the positive effects of biodiversity for the functioning of ecosystems are more than an artefact of experimental design. This is the result of a new study led by an international team of researchers from the German Centre for Integrative Biodiversity Research (iDiv), Leipzig University (UL), the University of Bern and the Senckenberg Biodiversity and Climate Research Centre. For their study, they removed 'unrealistic' communities from the analysis of data from two large-scale experiments. The results that have now been published in Nature Ecology & Evolution show that previous findings are, indeed, reliable.

To most it might not matter much if a handbag is a costly original or an affordable counterfeit, but when it comes to nature, imitations could be a whole other matter. Much of what we know about the consequences of biodiversity loss for the ecosystem functions that support life has been gathered from biodiversity experiment sites in which vegetation types of differing plant species richness are created to imitate biodiversity loss. However, the insights gained here have been repeatedly questioned because the design of such experiments includes vegetation types that are rare or non-existent in the real world. "Previously, there was little information on how much the plant communities in biodiversity experiments quantitatively differ from those in the real world. We simply did not know what impact such differences might have on the conclusions drawn from the experiments," said lead author Dr Malte Jochum, researcher at iDiv and UL and previously at the University of Bern.

A collaborative and international study has now put nature's counterfeits to the test. The researchers compared the vegetation of two of the largest and longest-running grassland biodiversity experiments globally with equivalent 'real-world' sites. One of the sites investigated is the Jena Experiment in Germany. It was compared to semi-natural grasslands nearby and a large set of scientifically monitored agricultural sites across Germany, known as the Biodiversity Exploratories.

"We first looked at the sites to see how much they differ in terms of how many species they had, how related they were and what types of functional properties were seen. To our surprise the experimental sites turned out to be much more varied than the real world and to have certain types of vegetation which you would find not in the wild. At the Jena Experiment only 28 per cent of the experimental plots could be considered similar enough to the natural vegetation that we could class them as realistic," said Dr Peter Manning, co-author of the study and researcher at the Senckenberg Biodiversity and Climate Research Centre.

Next, they compared the results of the entire biodiversity experiments to a subset of the experimental data that contained only the realistic plots. "Remarkably, the results hardly changed. For ten out of twelve relationships between species richness and ecosystem functioning, the results do not differ significantly between all experiment sites and the subset of only the realistic ones. This suggests that the relationship between biodiversity and ecosystem function seen in these experiments is likely also operating in the more complex real world", explained Jochum.
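
The subsetting comparison can be illustrated with a short Python sketch: fit the same richness-function relationship once on all plots and once on only the "realistic" plots, then compare the fitted slopes. The data below are synthetic placeholders, not the Jena Experiment measurements.

```python
# Sketch of the subsetting comparison: fit the same richness-function
# relationship on all experimental plots and on the "realistic" subset only,
# then compare the fitted slopes. The data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
richness = rng.integers(1, 17, size=80)                     # sown species richness per plot
biomass = 2.0 * np.log(richness) + rng.normal(0, 0.5, 80)   # a toy ecosystem function
realistic = rng.random(80) < 0.28                           # ~28% of plots flagged realistic

def slope(x, y):
    # Slope of ecosystem function against log(richness), a common functional form.
    return np.polyfit(np.log(x), y, 1)[0]

print(f"slope, all plots:       {slope(richness, biomass):.2f}")
print(f"slope, realistic plots: {slope(richness[realistic], biomass[realistic]):.2f}")
```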

The researchers conclude that their results show the validity of insights about the effects of biodiversity loss gained by investigating biodiversity experiment sites. "In recent years the public have become increasingly aware that biodiversity underpins the Earth's life support systems and that its loss threatens humanity. What they might be less aware of is the debate among the scientific community about just how important a role biodiversity plays. By resolving a long-running debate, these results give us even greater confidence that biodiversity really is a major player, and that conserving it is essential if we are to live well in the future," Manning said.

Credit: 
German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig

Dementia kills nearly three times more people than previously thought: BU study

Dementia may be an underlying cause of nearly three times more deaths in the U.S. than official records show, according to a new Boston University School of Public Health (BUSPH) study.

Published in the journal JAMA Neurology, the study estimates that 13.6% of deaths are attributable to dementia, 2.7 times more than the 5.0% of death certificates that indicate dementia as an underlying cause of death.

"Understanding what people die of is essential for priority setting and resource allocation," says study lead author Dr. Andrew Stokes, assistant professor of global health at BUSPH.

"In the case of dementia, there are numerous challenges to obtaining accurate death counts, including stigma and lack of routine testing for dementia in primary care," he says. "Our results indicate that the mortality burden of dementia may be greater than recognized, highlighting the importance of expanding dementia prevention and care."

The researchers found that the underestimation varies greatly by race, with 7.1 times more Black older adults, 4.1 times more Hispanic older adults, and 2.3 times more white older adults dying from dementia than government records indicate. Dementia-related deaths were also underreported more for men than for women, and more for individuals without a high school education. Previous research has shown that dementia is disproportionately common among older adults who are Black, male, and/or have less education.

"In addition to underestimating dementia deaths, official tallies also appear to underestimate racial and ethnic disparities associated with dementia mortality. Our estimates indicate an urgent need to realign resources to address the disproportionate burden of dementia in Black and Hispanic communities," Stokes says.

The researchers used data from a nationally-representative cohort of 7,342 older adults in the Health and Retirement Study (HRS), which continues to gather data from participants even after they move into nursing homes. For the current study, the researchers used data from older adults who entered the cohort in 2000 and followed them through 2009, analyzing the association between dementia and death while adjusting for other variables including age, sex, race/ethnicity, education level, region of the U.S., and medical diagnoses.

"These findings indicate that dementia represents a much more important factor in U.S. mortality than previously indicated by routine death records," says study senior author Dr. Eileen Crimmins, professor and AARP Chair in Gerontology at the University of Southern California Leonard Davis School of Gerontology and a co-investigator on the Health and Retirement Study.

Credit: 
Boston University School of Medicine

New syringe technology could enable injection of highly concentrated biologic drugs

image: A new device could help administer powerful drug formulations that are too viscous to be injected with conventional syringes and needles.

Image: 
Images courtesy of the researchers and edited by Jose-Luis Olivares, MIT.

MIT researchers have developed a simple, low-cost technology to administer powerful drug formulations that are too viscous to be injected using conventional medical syringes.

The technology, which is described in a paper published today in the journal Advanced Healthcare Materials, makes it possible to inject high-concentration drugs and other therapies subcutaneously. It was developed as a solution for highly effective, and extremely concentrated, biopharmaceuticals, or biologics, which typically are diluted and injected intravenously.

"Where drug delivery and biologics are going, injectability is becoming a big bottleneck, preventing formulations that could treat diseases more easily," says Kripa Varanasi, MIT professor of mechanical engineering. "Drug makers need to focus on what they do best, and formulate drugs, not be stuck by this problem of injectability."

Leaders at the Bill and Melinda Gates Foundation brought the injectability problem to Varanasi after reading about his previous work on dispensing liquids, which has attracted the attention of industries ranging from aviation to makers of toothpaste. A main concern of the foundation, Varanasi says, was with providing high-concentration vaccines and biologic therapies to people in developing countries who could not travel from remote areas to a medical setting.

In the current pandemic, Varanasi adds, being able to stay home and subcutaneously self-administer medication to treat diseases such as cancer or auto-immune disorders is also important in developed countries such as the United States.

"Self-administration of drugs or vaccines can help democratize access to health care," he says.

Varanasi and Vishnu Jayaprakash, a graduate student in MIT's mechanical engineering department who is the first author on the paper, designed a system that makes subcutaneous injection of high-concentration drug formulations possible by reducing the required injection force, which would otherwise exceed what is achievable with manual subcutaneous injection using a conventional syringe.

In their system, the viscous fluid to be injected is surrounded with a lubricating fluid, easing the fluid's flow through the needle. With the lubricant, just one-seventh of the injection force was needed for the highest viscosity tested, effectively allowing subcutaneous injection of any of the more than 100 drugs otherwise considered too viscous to be administered in that way.
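
A back-of-envelope estimate shows why viscosity is the bottleneck for manual injection. The Python sketch below applies the Hagen-Poiseuille relation for laminar pipe flow to a hypothetical needle, syringe and drug viscosity (none of these numbers come from the paper); the resulting plunger force is far beyond what a thumb can supply, which is the kind of force the lubricating sheath is designed to reduce.

```python
# Back-of-envelope estimate (Hagen-Poiseuille flow) of the plunger force needed
# to push a viscous drug through a needle, illustrating why high-concentration
# biologics are hard to inject by hand. All numbers are hypothetical, not
# parameters from the paper.
import math

viscosity = 1.0          # Pa*s (roughly 1000x water; some biologics reach this range)
needle_radius = 0.08e-3  # m (fine-gauge needle)
needle_length = 12e-3    # m
flow_rate = 1e-6 / 10    # m^3/s: 1 mL delivered over 10 s
barrel_radius = 3e-3     # m (syringe barrel)

# Pressure drop across the needle for laminar pipe flow:
# dP = 8 * mu * L * Q / (pi * r^4)
dP = 8 * viscosity * needle_length * flow_rate / (math.pi * needle_radius ** 4)
force = dP * math.pi * barrel_radius ** 2   # pressure times plunger area

print(f"required plunger force without lubrication: {force:.0f} N")
```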

"We can enable injectability of these biologics," Jayaprakash says. "Regardless of how viscous your drug is, you can inject it, and this is what made this approach very attractive to us."

Biologic drugs include protein-based formulations and are harvested from living cells. They are used to treat a wide range of diseases and disorders, and can bind with specific tissues or immune cells as desired, provoking fewer unwanted reactions and bringing about particular immune responses that don't occur with other drugs.

"You can tailor very specific proteins or molecules that bind to very specific receptors in the body," says Jayaprakash. "They enable a degree of personalization, specificity, and immune response that just isn't available with small-molecule drugs. That's why, globally, people are pushing toward biologic drugs."

Because of their high viscosities, administering these drugs subcutaneously has so far involved methods that are impractical or expensive. Generally, the drugs are diluted and given intravenously, which requires a visit to a hospital or doctor's office. Jet injectors, which shoot the drugs through the skin without a needle, are expensive and prone to contamination from backsplash. Encapsulated drugs often clog the needle and add complexity to drug manufacturing and release profiles. EpiPen-style syringes are also too expensive to be used widely.

To develop their technology, the MIT researchers began by defining theoretical parameters and testing them before designing their device. The device consists of a syringe with two barrels, one inside of the other, with the inner tube delivering the viscous drug fluid and the surrounding tube delivering a thin coating of lubricant to the drug as it enters the needle.

Because the lubricated fluid passes more easily through the needle, the viscous payload undergoes minimal shear stress. For this reason, Jayaprakash says, the system could also be useful for 3D bioprinting of tissues made of natural components and administering cell therapies, both cases where tissues and cells can be destroyed by shear damage.

Therapeutic gels -- used in bone and joint therapies, as well as for timed-release drug delivery, among other uses -- could also be more easily administered using the syringe developed by the researchers.

"The technique works as a platform for all of these other applications," Jayaprakash says.

Whether the technology will make a difference as researchers hunt for Covid-19 vaccines and treatments is unclear. The researchers say, however, that it widens the options as different drug formulations are considered.

"Once you have the story about the technology out there, the industry might say they could consider things that had previously been impossible," Varanasi says.

With his previous work having spurred the creation of four companies, Varanasi says he and his team are hopeful this technology will also be commercialized.

"There should be no reason why this approach, given its simplicity, can't help solve what we've heard from industry is an emerging problem," he says. "The foundational work is done. Now it's just applying it to different formulations."

Credit: 
Massachusetts Institute of Technology

Roadmap for linking neurological and locomotor deficits

image: Scientists capture highly-detailed "locomotor signatures" of mouse models of neurological disease.

Image: 
Megan Carey

Locomotion deficits, such as lack of coordination, a shuffling gait, or loss of balance, can result from neurological conditions, specifically those that affect motor areas of the nervous system. To develop treatments, scientists often turn to animal models of disease. This strategy is crucial not only for designing potential therapies, but also for gaining insight into fundamental questions about the organisation and function of the nervous system.

Until recently, scientists did not have tools to systematically characterise specific walking deficits in different mouse models of neurological conditions. To solve this problem, Megan Carey's lab, at the Champalimaud Centre for the Unknown in Portugal, developed LocoMouse: an automated movement-tracking system that captures the fine details of locomotion in mice.

"It's like Tolstoy said", remarks Carey, "all normal locomotion is alike, but every type of abnormal locomotion has its own neural basis." In this free-style paraphrasing of Tolstoy's opening sentence of the novel "Anna Karenina", she captures the essence of her group's most recent research project.

In a new study published in the scientific journal eLife, Carey's team uses LocoMouse to identify highly-detailed "locomotor signatures" for two different mouse models. These signatures effectively capture the full pattern of walking deficits of each mouse. Analysing the relationship between the locomotor signatures and the patterns of affected neural circuitry then provides a roadmap for linking neurological and locomotor deficits.

Linking neurological and locomotor deficits

Two mice walk across a linear path. They walk slowly with heavy and uncertain steps. It's clear that both mice suffer from motor deficits, but in a somewhat different way which is difficult to pinpoint by eye. What can their unique walking patterns say about the underlying cause?

"We published our initial results with LocoMouse a few years ago, focusing on one mouse model called pcd, which stands for Purkinje Cell Degeneration. These mice lack one of the main cell-types in the cerebellum (Purkinje cells). While the initial results of that study were interesting, we didn't know how general, or how specific, the pattern of deficits would turn out to be," Carey explains.

"Both pcd and reeler have inherited genetic mutations that affect a brain area called the cerebellum", says Ana Machado, a lead author of the study. "The cerebellum is important for coordinated movement and for normal walking across species."

Whereas the neural damage in pcd mice is restricted to the cerebellum, reeler mice have altered development and circuitry throughout the brain. "The locomotor behaviour of reeler and pcd mice is clearly different, reeler being more severely affected. Still, the locomotion deficits of both models appear broadly 'cerebellar' to a trained eye. We asked ourselves: 'can we quantitatively capture shared and specific features of locomotion in these mice?'," Machado recounts.

The answer was "yes." The researchers found remarkably similar impairments in how movement was coordinated across the limbs and body. Their tail movements were also affected. Typically, mice control their tail to ensure overall stability, but both pcd and reeler are unable to do that. As a result, their tails passively oscillate as a consequence of their limb movement. "We think this shared pattern of deficits reflects core features of cerebellar damage," says Carey.

In addition to the shared features, the team also identified specific walking deficits that were unique to reeler mice. "Movement variability was overall much higher in these mice. Also, they support their body weight with their front, rather than hind, limbs. As a consequence, they can't use them for steering, which results in an unstable trajectory", Machado explains. The researchers attribute these additional impairments to differences in brain circuitry both in the cerebellum and across the brain.

A roadmap for studying locomotor deficits

"When we began studying neural circuits for locomotion, there was always a tradeoff between specificity and interpretability of behavioral measurements," Carey recalls. "With LocoMouse, we have tried to provide both a comprehensive, quantitative description of locomotor behaviour as well as a conceptual framework within which to interpret that high-dimensional data."

"Now, we have a new roadmap that will allow us to move beyond these two mouse models and study many more," says Carey. "We have a quantitative way to map huge high-dimensional sets of behavioural data onto the intricacies of the underlying neural circuits. This approach can be extended to many more mouse models, with different manipulations of various brain areas and cell types," she concludes.

Video: https://youtu.be/jPy13PA8G-Q

Credit: 
Champalimaud Centre for the Unknown

Living at higher altitudes associated with higher levels of child stunting

August 24, 2020, Addis Ababa, Ethiopia: Residing at higher altitude is associated with greater rates of stunting, even for children living in "ideal-home environments", according to a new study from researchers at the International Food Policy Research Institute (IFPRI) and Addis Ababa University. The study provides new insight into the relationship between altitude and undernutrition and into the additional efforts needed to ensure policy interventions are appropriately tailored to high-altitude contexts.

"More than 800 million people live at 1,500 meters above sea level or higher, with two-thirds of them in Sub-Saharan Africa, and Asia. These two regions host most of the world's stunted children so it is important to understand the role that altitude plays in growth" said IFPRI Senior Research Fellow and co-author of the study, Kalle Hirvonen.

"If children living at altitude are, on average, more stunted than their peers at sea level, then a more significant effort to address high altitude stunting is needed."

The study, "Evaluation of Linear Growth at Higher Altitudes," co-authored by Hirvonen and Addis Ababa University Associate Professor Kaleab Baye, was published in the Journal of the American Medical Association (JAMA), Pediatrics. The study analyzed height-for-age data of more than 950,000 children from 59 countries. The data were compiled through the Advancing Research on Nutrition and Agriculture (AReNA) project funded by the Bill & Melinda Gates Foundation.

Children were classified as living in an ideal-home environment if they were born to highly educated mothers and had good health-service coverage and good living conditions. Global tracking of growth rates relies on the assumption that children living in such environments have the same growth potential, irrespective of genetic makeup or geographic location.

"The data clearly indicated that those residing in ideal-home environments grew at the same rate as the median child in the growth standard developed by the World Health Organization (WHO), but only until about 500 meters above sea level (masl). After 500 masl, average child height-for-age significantly deviated from the growth curve of the median child in the reference population", said Hirvonen. The research further shows that these estimated growth deficits are unlikely to be due to common risk factors such as poor diet and disease.

The study suggests that the effects of altitude were most pronounced during the perinatal period, i.e., the time leading up to, and immediately after, birth. "Pregnancies at high altitudes are characterized by chronic hypoxia, an inadequate supply of oxygen, which is consistently associated with a higher risk of fetal growth restriction. Restricted growth in the womb is in turn a leading risk factor for linear growth faltering," said Hirvonen.

There is some evidence to suggest that residing at high altitude over multiple generations may lead to some genetic adaptation, but these findings did not hold for women with only a few generations of high-altitude ancestry. "Women of high-altitude ancestry were able to partially cope with the hypoxic conditions through increased uterine artery blood flow during pregnancy, but it may take more than a century before such adaptations develop," said Baye.

Hirvonen and Baye conclude that the WHO growth standards for children should not be adjusted because growth faltering at high altitudes is unlikely to be the result of physiological adaptations. Instead, they call for greater attention and health-care guidance for managing pregnancies in high-altitude settings.

"A first step is to unravel the complex relationship linking altitude, hypoxia and fetal growth to identify effective interventions. Failing to address altitude-mediated growth deficits urgently can fail a significant proportion of the world population from meeting the Sustainable Development Goals and World Health Assembly nutrition targets" said Baye.

Credit: 
International Food Policy Research Institute

Large molecules need more help to travel through a nuclear pore into the cell nucleus

image: Model of a large molecule (blue, PDB ID:2MS2), bound to multiple transporter proteins (orange dots) that interact with the nuclear pore complex barrier (gray, EMD-8087), a process essential for import into the cell nucleus

Image: 
Ill./©: Giulia Paci (CC BY 4.0)

A new study in the field of biophysics has revealed how large molecules are able to enter the nucleus of a cell. A team led by Professor Edward Lemke of Johannes Gutenberg University Mainz (JGU) has thus provided important insights into how some viruses, for example, can penetrate the nucleus of a cell, where they replicate and go on to infect other cells. The researchers also demonstrated that the efficiency of transport into the nucleus decreases as the size of the molecules increases, and how corresponding signals on the molecules' surface can compensate for this. "We have been able to gain new understanding of the transport of large biostructures, which helped us develop a simple model that describes how this works," said Lemke, a specialist in the field of biophysical chemistry. He is Professor of Synthetic Biophysics at JGU and Adjunct Director of the Institute of Molecular Biology (IMB) in Mainz.

Nuclear localization signals facilitate rapid entry

A typical mammalian cell has about 2,000 nuclear pores, which act as passageways from the cell cytoplasm into the cell nucleus and vice versa. These pores in the nuclear envelope act as gatekeepers that control access, denying free passage to molecules of around five nanometers in diameter or larger. Molecules that carry certain nuclear localization sequences on their surface can bind to structures within nuclear pores, allowing them to enter the nucleus rapidly. "Nuclear pores are remarkable in the diversity of cargoes they can transport. They import proteins and viruses into the nucleus and export ribonucleic acids and proteins into the cell cytoplasm," explained Lemke, describing the function of these pores. "Despite the fundamental biological relevance of the process, it has always been an enigma how large cargoes greater than 15 nanometers are efficiently transported, particularly in view of the dimensions and structures of the nuclear pores themselves."

With this in mind, the researchers designed a set of large model transport cargoes as part of their project. These were based on capsids, i.e., the protein "shells" of viruses that enclose the viral genome. The cargo models, ranging from 17 to 36 nanometers in diameter, were then fluorescently labeled, allowing them to be observed on their way through cells. Capsid models without nuclear localization signals on their surface remained in the cell cytoplasm and did not enter the cell nucleus. As the number of nuclear localization signals increased, accumulation of the model capsids in the nucleus became more efficient. Even more interestingly, the researchers found that the larger the capsid, the greater the number of nuclear localization signals needed to enable efficient transport into the nucleus.
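
One common way to quantify this kind of observation is a nucleus-to-cytoplasm (N/C) fluorescence ratio per cell, grouped by the number of nuclear localization signals on the cargo. The short sketch below illustrates that bookkeeping with invented single-cell numbers; it is not the analysis pipeline used in the study.

```python
# Illustrative sketch only: N/C intensity ratios for a fluorescently labeled
# cargo, grouped by NLS count. All numbers below are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

records = []
for n_nls in [0, 4, 8, 16, 32]:
    for _ in range(50):  # hypothetical single-cell measurements
        cyto = rng.normal(100, 10)                        # mean cytoplasmic intensity
        nuc = rng.normal(100 + 15 * np.log1p(n_nls), 10)  # toy trend: more NLS, more import
        records.append({"n_nls": n_nls, "nc_ratio": nuc / cyto})

cells = pd.DataFrame(records)

# Median N/C ratio per NLS count: ratios well above 1 would indicate efficient
# nuclear accumulation of the labeled capsid model.
print(cells.groupby("n_nls")["nc_ratio"].median())
```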

The research team looked at capsids of a range of viruses, including the hepatitis B capsid, the largest cargo used in the study. Even increasing the number of nuclear localization signals to 240 did not result in transport of this capsid into the nucleus. This is consistent with earlier studies of the hepatitis B virus indicating that only the mature infectious virus is capable of passage through a nuclear pore into the nucleus.

Cooperation enabled the development of a mathematical model

In cooperation with Professor Anton Zilman of the University of Toronto in Canada, a mathematical model was developed to shed light on the transport mechanism and to establish the main factors determining the efficiency of transport. "Our simple two-parameter biophysical model recreated the requirements for nuclear transport and revealed key molecular determinants of the transport of large biological cargoes in cells," concluded first author Giulia Paci, who carried out the study as part of her PhD thesis at the European Molecular Biology Laboratory (EMBL) in Heidelberg.
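
The press release does not spell out the form of the two-parameter model. Purely as an illustration of how two parameters can capture the observed trade-off between cargo size and signal number, the sketch below assumes that import efficiency is set by a size-dependent penalty offset by a per-signal gain, passed through a logistic function; the actual model in the paper may look quite different.

```python
# Illustrative sketch only: one *possible* two-parameter form for nuclear
# import efficiency. The functional form, parameters and numbers are assumptions.
import numpy as np

def import_efficiency(diameter_nm, n_nls, a, b):
    """Toy model: a size-dependent penalty (a * diameter) must be offset by a
    per-NLS gain (b * n_nls); a logistic maps the balance to the range [0, 1]."""
    x = b * n_nls - a * diameter_nm
    return 1.0 / (1.0 + np.exp(-x))

# Example: with the same (made-up) parameters, a larger cargo needs more
# nuclear localization signals to reach a comparable import efficiency.
a, b = 0.3, 0.2
for d, n in [(17, 30), (36, 30), (36, 90)]:
    print(f"diameter={d} nm, NLS={n}: efficiency={import_efficiency(d, n, a, b):.2f}")
```

With these made-up parameters, the 17-nanometer cargo imports efficiently with 30 signals, whereas the 36-nanometer cargo needs roughly three times as many to reach a comparable efficiency.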

Credit: 
Johannes Gutenberg Universitaet Mainz