Russia's regions and their preferences for strong alcohol

HSE University economists Ludmila Zasimova and Marina Kolosnitsyna analyzed two data sets for Russian regions in 2010-2016: official statistics from the Russian Statistics Agency on alcohol sales, and estimates of unregistered alcohol consumption modeled by the study's authors using the Ministry of Health's own methodology. It turned out that, despite a steady decline in alcohol consumption in the country, consumption varies greatly from region to region (from 1.1 up to 17-20+ liters of pure alcohol per adult). Moreover, unlicensed alcohol remains a huge problem, specifically in those regions where people prefer stronger alcohol. On average, adult Russians drink nine liters of registered alcohol per year (in terms of pure alcohol) and six liters of unregistered beverages. This research was published in the International Journal of Drug Policy.

Despite all the restrictive measures taken by the authorities, Russia remains one of the heaviest alcohol-consuming countries in the world. Moreover, excessive alcohol consumption is among the most common causes of low life expectancy among men. Nevertheless, a positive trend emerged in the mid-2000s. According to WHO figures, the sharpest decrease in alcohol consumption in the country was recorded in 2005-2016: from 18.7 down to 11.7 liters of ethyl alcohol per adult.

The preferences of Russians changed over time, too. According to the Russian Statistics Agency, vodka constituted 81.4% of total alcohol sales back in 1995, but only 36.4% in 2018. The share of beer grew over the same period from 12.8% up to 43.3%, as did the share of other beverages (primarily wine), from 5.8% up to 20.3%.

A key factor behind reduced alcohol consumption and its changed pattern is the set of measures gradually introduced by the government. For example, starting in 2009, excise taxes on alcohol were gradually increased, with prices following shortly thereafter. In 2011, mandatory restrictions were introduced on alcohol sales in the evening and at night. Prior to that, Russian regions could impose restrictions at their own discretion or not introduce them at all. Studies have shown that regions with more stringent measures demonstrated more significant reductions in alcohol consumption.

Russia's anti-alcohol policy is now rather versatile: minimum retail prices for strong alcohol and restrictions on alcohol sales during the evening, night, and morning. According to the authors of the study, in 2016, 28 out of 83 regions, home to 45% of the country's population, applied the uniform "milder" federal rule prohibiting sales from 11pm to 8am. All other regions introduced stricter schedules for alcohol sales.

State anti-alcohol policy has therefore been only partially successful: consumption of registered alcohol has fallen, but a major part of the regional alcohol markets still remains in the shadows, with people continuing to consume unregistered alcohol.

Apart from that, a distinctive feature of Russia is the diverse ethnic make-up of its regions, along with their geographic, climatic, and economic conditions, all of which have a major impact on both the variability and the quantitative characteristics of the population's drinking preferences and the share of strong alcohol in the consumption pattern. It is therefore important to understand how officially registered volumes and real alcohol consumption (estimated from indirect data) vary by region.

How was it all researched?

The researchers analyzed data on alcohol consumption per capita in 78 Russian regions in 2010-2016 among citizens aged 15 years and older. Two region-based data sets were used: official statistics by the Russian Statistics Agency on alcohol sales per region, and data from the Federal Medical Research Center for Psychiatry and Addictology named after V. Serbskiy on alcoholic psychosis, together with the Statistics Agency's data on mortality from external causes. The first data set captured the officially registered level of alcohol consumption, while the second served as the basis for modeling its actual level. From the modeled estimates, the volumes of unregistered alcohol consumption were then evaluated.

Using the resulting data, the researchers built an econometric model to assess how various factors affect the overall level of alcohol consumption in different regions. These factors included preference for strong drinks, vodka prices, climate, the daily duration of permitted legal alcohol sales, citizens' average monthly income, and the social, economic, and demographic characteristics of the region (e.g., urban population predominance, unemployment rate).
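
To make this concrete, here is a minimal sketch of the general shape of such a regional panel regression, written in Python with statsmodels. The data, variable names and specification are illustrative assumptions, not the authors' dataset or exact model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: one row per region-year, 2010-2016, with made-up covariates.
rng = np.random.default_rng(1)
rows = []
for region in [f"r{i}" for i in range(6)]:
    for year in range(2010, 2017):
        rows.append({
            "region": region,
            "year": year,
            "log_vodka_price": rng.normal(5.6, 0.1),
            "strong_share": rng.uniform(0.3, 0.6),   # share of strong drinks
            "sale_hours": int(rng.integers(9, 14)),  # legal daily sale hours
            "log_income": rng.normal(10.0, 0.2),
            "urban_share": rng.uniform(0.5, 0.8),
            "unemployment": rng.uniform(0.03, 0.10),
            "jan_temp": rng.normal(-10.0, 5.0),      # average January temperature
        })
df = pd.DataFrame(rows)
# Synthetic outcome so the example runs end to end.
df["log_consumption"] = 2.0 - 0.5 * df["log_vodka_price"] + rng.normal(0, 0.1, len(df))

# OLS with region fixed effects; with both sides in logs, coefficients
# read as elasticities.
model = smf.ols(
    "log_consumption ~ log_vodka_price + strong_share + sale_hours"
    " + log_income + urban_share + unemployment + jan_temp + C(region)",
    data=df,
).fit()
print(model.params.round(2))

In a specification of this shape, a statistically insignificant coefficient on the January temperature variable would correspond to the authors' finding, reported below, that climate has no real effect.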

Alcohol consumption appears to vary greatly from region to region. The volume of registered alcohol ranged from 1.1 up to 17.8 liters, and that of unregistered alcohol from almost zero up to 21 liters of pure alcohol per adult. Russians drink on average about nine liters of registered and six liters of unregistered alcohol in terms of pure alcohol per year, 15 liters in total.

The situation with unregistered alcohol also varies a lot from region to region. For example, back in 2010, Moscow, the Moscow Region, and St. Petersburg had the lowest (approaching zero) rates of unregistered alcohol consumption. The highest rate was found in the Chukotka Autonomous District, namely 21.2 liters.

Although the regional averages steadily declined, they remained uneven. The economists therefore applied the WHO scale to divide the regions into six groups according to alcohol consumption level. The number of heavy-drinking regions (10 liters or more of ethyl alcohol per adult per year) decreased from 47 in 2010 down to 10 in 2016, counting registered alcohol only. Meanwhile, the number of regions with high total consumption (registered plus unregistered alcohol) decreased over the same period from 73 down to 66.

The lowest alcohol consumption rates are found in the European part of Russia, including the regions with a Muslim majority (Karachay-Cherkessia, Kabardino-Balkaria, Adygea, and Dagestan). The heaviest-drinking regions (10 liters or more of registered alcohol) are located primarily in the north of the European part of Russia and in the Far East. However, the authors note that high levels of 'legal' alcohol consumption were also recorded in Moscow and the Moscow Region in 2010-2016, in St. Petersburg in 2010-2013, and in the Leningrad Region in 2010-2015.

The authors were able to reach the following conclusions:

The higher the price of vodka in a region, the lower the consumption of registered alcohol and the higher the consumption of unregistered products;

Extending legal alcohol sale hours by 1% resulted in an almost equal decrease in unregistered alcohol consumption, but had no major impact on the popularity of legal alcohol;

The share of the working-age population among adults is the most critical factor associated with alcohol addiction: a 1% increase results in a 3.3% decrease in registered alcohol consumption and a 23% increase in unregistered alcohol. This supports previous research findings that young people drink less and prefer beer or wine to stronger beverages;

An increase in the share of the urban population by 1% resulted in an increase in registered alcohol consumption by 2%;

The coefficients of the independent variable for average January temperature turned out to be statistically insignificant, i.e., climate does not significantly affect alcohol consumption;

The unemployment rate has no impact on alcohol consumption;

In regions where the population prefers stronger beverages, consumption of both registered and unregistered alcohol is usually higher.

According to the authors, their findings show that anti-alcohol policy in Russia needs to be implemented consistently. There are two key points here. Since the share of strong alcohol determines a region's overall per capita alcohol consumption in terms of pure alcohol, strong alcohol should be made less and less accessible. The most essential tool is further raising prices. 'To replace vodka, ethyl alcohol in strong drinks must be taxed higher than beer and wine,' the authors comment.

Moreover, the researchers believe that regions where people prefer strong drinks should extend evening and nighttime restrictions, as well as reduce the total number of stores selling alcohol. However, they note that such a clampdown can affect the consumption of registered and unregistered alcohol in different ways: as registered alcohol becomes less affordable, consumption of unregistered alcohol may skyrocket in reaction to the imposed restrictions. Therefore, together with economic and administrative restrictions on alcohol sales, special tools and systematic policies are essential for dealing with the shadow economy.

In addition, Russia's anti-alcohol policy should take into account economic cycles. A new study by economists at HSE University demonstrates that, during the years of economic recession and declining incomes, unregistered alcohol becomes especially popular.

Credit: 
National Research University Higher School of Economics

Researchers make tiny, yet complex fiber optic force sensor

video: The researchers also used the miniature force sensor to measure the force during deflection of a thin rod.

Image: 
Denis Donlagic, University of Maribor

WASHINGTON -- Researchers have developed a tiny fiber optic force sensor that can measure extremely slight forces exerted by small objects. The new light-based sensor overcomes the limitations of force sensors based on micro-electro-mechanical systems (MEMS) and could be useful for applications from medical systems to manufacturing.

"Applications for force sensing are numerous, but there is a lack of thoroughly miniature and versatile force sensors that can perform force measurements on small objects," said research team leader Denis Donlagic from the University of Maribor in Slovenia. "Our sensor helps meet this need as one of the smallest and most versatile optical-fiber force sensors designed thus far."

In The Optical Society (OSA) journal Optics Letters, Donlagic and Simon Pevec describe their new sensor, which is made of silica glass formed into a cylinder just 800 microns long and 100 microns in diameter -- roughly the same diameter as a human hair. They demonstrate the new sensor's ability to measure force with a resolution better than a micronewton by using it to measure the stiffness of a dandelion seed or the surface tension of a liquid.

"The high resolution force sensing and broad measuring range could be used for sensitive manipulation and machining of small objects, surface tension measurements on very small volumes of liquid, and manipulating or examining the mechanical properties of biological samples on the cellular level," said Donlagic.

Creating an all-glass sensor

Although MEMS-based sensors can provide miniature force sensing capabilities, their applications are limited because they require application-specific protective packaging and multiple electrical connections. Without proper packaging, MEMS devices also aren't biocompatible and can't be immersed in water.

To develop a more versatile miniature force sensor, the researchers created an all-optical fiber optic sensor completely made of glass. The complex undertaking was made possible by a special etching process the researchers had previously developed to create complicated all-fiber microstructures. They used this micromachining process to create a sensor based on a Fabry-Perot interferometer -- an optical cavity made from two parallel reflecting surfaces.

The end of the sensor's lead-in fiber together with a thin flexible silica diaphragm were used to create the tiny interferometer. When external force is exerted onto a silica post with either a round or cylindrical force sensing probe on the end, it changes the length of the interferometer in a way that can be measured with subnanometer resolution.
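
As a rough illustration of this principle, the sketch below converts an interferometric phase shift into a cavity-length change and then into force via Hooke's law. The wavelength and diaphragm stiffness values are assumptions chosen for illustration; the paper reports no code, and the actual calibration is more involved.

import numpy as np

WAVELENGTH_M = 1550e-9               # assumed interrogation wavelength
DIAPHRAGM_STIFFNESS_N_PER_M = 1e3    # hypothetical diaphragm spring constant

def cavity_length_change(phase_shift_rad, n_medium=1.0):
    """Round-trip Fabry-Perot phase: delta_phi = 4*pi*n*delta_L/lambda,
    solved here for the cavity length change delta_L."""
    return phase_shift_rad * WAVELENGTH_M / (4 * np.pi * n_medium)

def force_from_phase(phase_shift_rad):
    """Hooke's law for the diaphragm: F = k * delta_L (sign gives push or pull)."""
    return DIAPHRAGM_STIFFNESS_N_PER_M * cavity_length_change(phase_shift_rad)

# A ~1 mrad phase shift maps to ~0.12 nm of cavity change, i.e. ~0.12 uN
# with the assumed stiffness.
print(f"{force_from_phase(1e-3) * 1e6:.2f} uN")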

The way the sensor's structures were fabricated created an air-sealed cavity that is protected from contamination and amenable for use in biochemical environments. Not only can it be immersed in a variety of liquids, but it can also measure positive and negative forces and doesn't need any additional packaging for most applications.

Measuring tiny forces

After evaluating and calibrating the sensor, the researchers used it to measure Young's modulus (a measure of stiffness) of a human hair and a common dandelion seed. They also measured the surface tension of a liquid by measuring the retraction force when a miniature cylinder was removed from the liquid. The researchers were able to measure force with a resolution of about 0.6 micronewtons over a force range of about 0.6 millinewtons.
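
For a sense of how a force-deflection measurement yields a stiffness value, the sketch below treats the sample as a cantilever beam, a standard simplification; all dimensions are assumed, plausible-for-a-hair numbers, not the study's measurements.

import math

force_N = 5e-6         # measured force at the free end
deflection_m = 5e-6    # measured tip deflection
length_m = 2e-3        # free length of the fiber
radius_m = 40e-6       # fiber radius

# Second moment of area of a circular cross-section.
I = math.pi * radius_m**4 / 4

# Cantilever tip deflection d = F*L^3 / (3*E*I), rearranged for E.
E = force_N * length_m**3 / (3 * I * deflection_m)
print(f"Young's modulus ~ {E / 1e9:.1f} GPa")   # ~1.3 GPa with these numbers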

"The force sensing tip can be made substantially smaller -- down to about 10 microns in diameter -- and can be adapted to perform various force sensing tasks," said Donlagic. "The miniature force sensor can also be used to create more complex sensors such as sensors that measure magnetic and electric fields or determine the surface tension or flow of a liquid."

The researchers say that the current version of the sensor is ready for use. However, improving the overload robustness, producing probe tips with other shapes or adding miniaturized packaging could further expand potential applications. The researchers are also working to automate the processes used to fabricate the sensor to make it more practical.

Credit: 
Optica

Is APM the best way to evaluate NBA players?

Syracuse, N.Y. - A recent study by sport analytics professors shows the Adjusted Plus-Minus (APM) statistic used to evaluate the performance of NBA players is sometimes misleading because it does not accurately account for the quality of a player's teammates.

The study "Measuring individual worker output in a complementary team setting: Does regularized adjusted plus minus isolate individual NBA player contributions?" was published recently in the PLOS ONE scientific journal.

Lead author Justin Ehrlich, an assistant professor of Sport Management at Syracuse University, said most NBA players remain on the same team and in the same types of lineups during the season. The study closely examines the impact of the quality of a team's lineup on a player's APM score.

"We find evidence of complementarity effects: The better are your teammates, the better you will look according to adjusted plus-minus," Ehrlich said. "This is interesting because adjusted plus-minus purportedly controls out teammate and opponent effects. However, it does not have the benefit of out-of-sample information about how a player would play when put in other lineups and on other teams."

APM is considered by many to be the best single statistic for rating players. The idea is that to get an accurate feel for a player's value, you need to account for the presence of other players, both on offense and defense.

For example, a player with a +1 means an average lineup would score 1 more point per 100 possessions with this player added. Milwaukee's Giannis Antetokounmpo led the NBA at 10.3 during the pandemic-shortened 2019-20 regular season.

The PLOS ONE study says that in all likelihood, some of Antetokounmpo's teammates have higher APM ratings than they would without his presence.

According to the study, "APM measures use seasonal play-by-play data to estimate individual player contributions. If a team's overall score margin success is figuratively represented by a pie, APM measures are well-designed to slice the pie and attribute individual contributions accordingly. However, they do not account for the possibility that better players can increase the overall size of the pie and thus increase the size of the slice (overall APM value) for teammates."
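
For readers unfamiliar with how APM-type measures are computed, here is a toy regularized adjusted plus-minus (RAPM) regression. The stint data are invented and far too small to be meaningful; a real model uses a full season of play-by-play stints.

import numpy as np
from sklearn.linear_model import Ridge

# Each row is one stint (a stretch of play with no substitutions).
# Columns are players: +1 on court for the home side, -1 for the away
# side, 0 off the floor.
X = np.array([
    [ 1,  1, -1, -1,  0],
    [ 1,  0, -1,  0, -1],
    [ 0,  1,  0, -1, -1],
])
# Outcome: home point margin per 100 possessions during each stint.
y = np.array([6.0, -2.0, 3.0])

# Ridge regularization shrinks the coefficients of players who share most
# of their minutes, exactly where teammate effects are hardest to separate.
model = Ridge(alpha=1000.0, fit_intercept=True).fit(X, y)

# Each coefficient is a player's APM: the expected change in margin per
# 100 possessions from adding that player to an average lineup.
print(dict(enumerate(np.round(model.coef_, 3))))

The study's point is that coefficients estimated this way are only as good as the lineups observed in-sample: if a player never appears without strong teammates, the regression cannot fully separate his contribution from theirs.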

The study's research team included Syracuse University Assistant Professor of Sport Management Justin Ehrlich; Syracuse University Professor of Sport Management Shane Sanders; and Western Illinois University Assistant Professor of Economics Shankar Ghimire.

The researchers say that based on this study, more adjustments to the APM rating methodology should be explored to determine an NBA player's true value. In the meantime, team officials should understand the limits of the current methodology when making decisions about trades and free agents.

"If a player is traded or picked up as a free agent and changes from a good team to a poor team, his APM is expected to take a hit," Sanders said. "If a player changes from a poor team to a good team, his APM is expected to rise.

"This also has player rotation implications," Sanders added. "If a player changes from a bench to starting role, his APM is expected to rise. If a player changes from a starting to a bench role, his APM is expected to take a hit."

Conclusions from Study

The results provide strong evidence that regularized adjusted plus minus player productivity measures are not, in fact, "teammate-independent." Rather, we find evidence that lineup-teammate productivity positively influences a given player's real plus minus value. As this result is conditional upon a given player's baseline productivity via player fixed effects and age, we interpret this as a significant and fairly strong complementarity effect that is uncontrolled in adjusted plus minus measures such as real plus minus.

While real plus minus may control for in-sample teammate effects well, it appears that the measure does not control for out-of-sample lineup-teammate quality effects. We find this within a model that accounts for teammate quality changes from season to season. We note that basketball leagues are not natural experiments in which players are randomly paired and resampled. Rather, players are organized into often stable team environments and resampling occurs infrequently such that players have often aged by the time they receive a new set of teammates with whom to play. In such an environment, counterfactuals concerning player value will go largely unobserved. From this estimation, further (out-of-sample) adjustments to the APM estimation methodology can be explored in future work.

Credit: 
Syracuse University

BEAT-HIV Delaney Collaboratory issues recommendations for measuring persistent HIV reservoirs

image: L-R: Wistar Drs. Mohamed Abdel-Mohsen and Luis Montaner

Image: 
The Wistar Institute

PHILADELPHIA -- (Sept. 7, 2020) -- Spearheaded by scientists at The Wistar Institute, top worldwide HIV researchers from the BEAT-HIV Martin Delaney Collaboratory to Cure HIV-1 Infection by Combination Immunotherapy (BEAT-HIV Collaboratory) compiled the first comprehensive set of recommendations on how to best measure the size of persistent HIV reservoirs during cure-directed clinical studies. This perspective article was published today in Nature Medicine.

Cure-directed studies seek to control or eradicate HIV beyond current antiretroviral therapy (ART), which can only suppress but not eliminate HIV. Long-term viral persistence on ART continues to cause immune activation, chronic inflammation and progressive damage to multiple organs. Multiple cure-directed studies are underway worldwide, but no consensus statement was available to prioritize and interpret the many strategies available today to measure persistent HIV on ART.

"Bringing together many of the original investigators who developed current assays used to measure HIV, the BEAT-HIV Collaboratory has now issued recommendations for priority in HIV measures as a guide for cure-directed studies," said Luis J. Montaner, D.V.M., D.Phil., the Herbert Kean, M.D., Family Professor and director of the HIV-1 Immunopathogenesis Laboratory at Wistar's Vaccine & Immunotherapy Center, co-leader of the Delaney Collaboratory and corresponding author on the article. "A major obstacle to eradication is the virus hiding in some compartments of the immune system where it's difficult to target and measure. The BEAT-HIV guidelines now provide specific information on the strengths and limitations of each assay available today."

The ability to accurately measure the size of these HIV reservoirs is critical when evaluating potential therapeutic strategies to cure HIV. It is also necessary for monitoring viral levels and guiding ART interruption.

"We systematically reviewed the state of the science in the field and provided a collective and comprehensive view on which viral measurements to prioritize in clinical trials," said Mohamed Abdel-Mohsen, Ph.D., assistant professor in Wistar's Vaccine & Immunotherapy Center and one of the authors of the paper. "I think this is a crucial step to take the best advantage of the most valuable resource available to researchers in their quest to find a cure for HIV, the blood and tissue samples from people living with HIV who generously participate in the HIV cure-focused clinical trials all over the world."

In current HIV cure-directed studies in ART-suppressed people living with HIV, viral levels are monitored in peripheral blood cells obtained either by phlebotomy or leukapheresis (a laboratory procedure to separate white blood cells from whole blood) and biopsies from gut-associated lymphoid tissue or lymph nodes, though most trials only use peripheral blood because it is easier to collect.

Credit: 
The Wistar Institute

Comparing the controllability of young hand-raised wolves and dogs

image: Hungarian researchers at the Family Dog Project, Eötvös Loránd University assessed the development of tractability in hand-raised wolves and similarly raised, 3-24-week-old dogs during fetching, calling, obeying sit signal, hair brushing and walking in a muzzle.

Image: 
Járdány / Eötvös Loránd University

During domestication, dogs have most probably been selected for increased tractability (understood as controllability and handleability). If so, considerable differences should be found in this trait between domestic dogs and their closest wild relatives, wolves. To reveal whether such a difference exists, researchers at the Family Dog Project, Eötvös Loránd University assessed the development of tractability in hand-raised wolves and similarly raised, 3-24-week-old dogs during fetching, calling, obeying the sit signal, hair brushing and walking in a muzzle. They found that despite intensive socialization, wolves remained less tractable than dogs, especially in contexts involving access to a resource. Dogs also appeared more prepared than wolves to follow human initiation of action. Based on these results, they suggest that tractability is indeed a major factor in the making of "man's best friend".

Dogs live in 45% of households, integrated into various human groups across societies. Dogs' increased tractability, i.e. how easily the animals' behavior can be managed through control and handling, might account for a crucial difference from wolves and explain why dogs are so widespread. Researchers in Hungary were interested in whether the role of tractability in dogs' domestication is reflected in species differences today. "We hand-raised 16 wolves and 11 dogs and regularly tested their behaviour from the age of 3 to 24 weeks. We combined a variety of behavioural tests to measure how the animals control their impulses in everyday situations, such as during hair brushing," said Dorottya Ujfalussy, first author of the study and postdoctoral researcher at the Family Dog Project, Department of Ethology, ELTE.

Hand-raised puppies, both wolf and dog, were individually assigned to caregivers before their eyes opened. The pups received intensive socialisation, spending 22-24 hours a day in close contact with their assigned caretaker. Socialisation is the process by which puppies learn to relate appropriately to people and other animals, and to become used to a wide range of events, environments and situations.

They were carried in pouches and accompanied their caregivers throughout their everyday activities. Pups also had the opportunity to meet and socialize with each other 2-3 times a week. The researchers avoided competitive, dominating situations and aggressive interactions with the animals, much as wolf mothers and adult pack members do under natural circumstances. The behaviour of the subjects was experimentally tested regularly, and the present study is part of a larger research series.

In the present study, published in Scientific Reports, the animals participated in five tests: fetching, calling, obeying the sit signal, hair brushing and walking in a muzzle. The researchers found that at 9 weeks of age hand-raised dogs retrieved an object (a paper ball) to the experimenter more often than wolves. If wolf pups grabbed the ball, they tended to carry it away. Furthermore, unlike any of the dogs, 4 out of 16 wolves showed aggressive behaviour when the experimenter tried to take away the ball. In contrast to fetching, dogs and wolves behaved largely similarly when being called, walked in muzzles or requested to sit down for a piece of food. Only at older ages (16 and 24 weeks) and in a social context (in the presence of mates) were wolves more difficult to call back than dogs. When being brushed, wolves made more biting attempts than dogs at the age of 12 weeks, however, this difference diminished by the age of 16 weeks when dogs also attempted to bite more often than at a younger age.

"The intensively socialized wolves reacted to calls, sat down upon request and walked in muzzles nicely, but they remained less manageable and controllable than dogs, especially in contexts involving access to a resource (e.g. toy or food reward). Dogs appeared to be more prepared to follow human guidance. When we tested mother-reared dogs in fetching and calling, we found no evidence that different rearing conditions (i.e., intensive socialization with humans vs. mother-rearing) would affect controllability in dogs. This confirmed our hypothesis that during domestication dogs have been selected for increased tractability." said Enik? Kubinyi, lead author of the study and a senior researcher at ELTE, Department of Ethology.

The strength of the current study is that pups were compared at young ages, so the differences and similarities detected here had not yet been modified by developmental processes as strongly as in adult individuals. However, comparing very young subjects carries the risk of detecting developmental differences if dogs and wolves develop at a different pace. For example, the fact that wolves attempted more bites at a younger age in the brushing task may be due to differences in the pace of maturation, since older dog puppies behaved similarly. It is important to note that, although wolves can be controlled and trained to perform tasks (for example, to sit down, as the results of the current study showed), they are not suitable as pets. For captive wolves in zoos and other licensed facilities, however, socialization and training are valuable means of enrichment, as well as useful for husbandry-related welfare.

Credit: 
Eötvös Loránd University

Brain astrocytes show metabolic alterations in Parkinson's disease

image: PD patients' astrocytes manifest several hallmarks of the disease, these include: (1) increased production of alpha-synuclein, (2) increased reactivity upon inflammatory stimulation, (3) increased astrocytic Ca2+ levels, (4) mtDNA maintenance defects and (5) metabolomic changes.

Image: 
Tuuli-Maria Sonninen

A new study using induced pluripotent stem cell (iPSC) technology links astrocyte dysfunction to Parkinson's disease (PD) pathology. The study carried out at the University of Eastern Finland and published in Scientific Reports highlights the role of brain astrocyte cells in PD pathology and the potential of iPSC-derived cells in disease modelling and drug discovery.

PD affects more than 6 million people worldwide, making it the second most common neurodegenerative disease. PD's exact cause is still unknown, but several molecular mechanisms have been implicated in its pathology. These include neuroinflammation, mitochondrial dysfunction, dysfunctional protein degradation and alpha-synuclein (α-synuclein) pathology. The disease's major hallmarks comprise the loss of dopaminergic neurons and the presence of Lewy bodies and Lewy neurites. The loss of dopaminergic neurons and the subsequent decrease in dopamine levels are considered responsible for PD's typical movement symptoms. There is no cure for PD; current treatments aim to alleviate the motor symptoms through dopamine replacement therapy and surgery.

The greatest risk factor for PD is advanced age, but some environmental factors, such as toxins and pesticides, have been shown to increase PD risk. Though most PD cases are late-onset and sporadic with no evidence of inheritance, approximately 3-5% are monogenic. The most common cause of monogenic PD is mutations in the leucine-rich repeat kinase 2 (LRRK2) gene. LRRK2-associated PD is clinically closest to sporadic forms of the disease regarding the age of onset, disease progression and motor symptoms. Additionally, mutations in the GBA (glucosylceramidase beta) gene are the most significant genetic risk factor for PD identified to date. The molecular mechanisms by which GBA mutations result in this increased risk are currently the focus of substantial research efforts.

Astrocytes from patients expressed several hallmarks of Parkinson's disease

While studies focusing on dopaminergic neurons have brought new insight into PD pathology, the contribution of astrocytes to PD has been investigated only sparsely. Astrocytes are glial cells and the most abundant cell type in the human brain. It was long thought that astrocytes served solely as supporting cells for neurons, but their role is now known to be far more extensive. Until now, only a few studies have used iPSC-derived astrocytes obtained from PD patients. To further characterize the PD astrocyte phenotype, the current study used iPSC-derived astrocytes from two PD patients carrying a mutation in the LRRK2 gene, one of them with an additional mutation in GBA.

The researchers found that astrocytes from PD patients produced significantly higher levels of α-synuclein, a protein that accumulates in PD patients' brains. One of the key pathological features caused by α-synuclein aggregation is the disruption of calcium homeostasis, and the study showed increased calcium levels in PD astrocytes. As inflammation is considered an important contributor to PD pathology, the astrocytes' response to inflammatory stimuli was also studied: the PD patient astrocytes were highly responsive to inflammatory stimuli and more sensitive to inflammatory reactivation than control astrocytes. Additionally, PD astrocytes showed altered mitochondrial function and a lower mitochondrial DNA copy number. Furthermore, PD astrocytes showed increased levels of polyamines and polyamine precursors, while lysophosphatidylethanolamine levels were decreased; both of these have been reported to be altered in PD brains.

"The results provide evidence that LRRK2 and GBA mutant astrocytes are likely to contribute to PD progression and offer new perspectives for understanding the roles of astrocytes in the pathogenesis of PD," says Early Stage Researcher Tuuli-Maria Sonninen, the lead author of the study.

Credit: 
University of Eastern Finland

Massively parallel sequencing unlocks cryptic diversity of eye parasites in fish

image: Big eyed perch

Image: 
Estonian University of Life Sciences Chair of Aquaculture

Researchers at the Estonian University of Life Sciences and the Swedish University of Agricultural Sciences, in collaboration with Finnish scientists, developed a methodology that uses next-generation sequencing technology for fast and efficient screening of the genetic diversity of fish eye parasites.

According to Anti Vasemägi, the scientist who led the study, this is an important milestone, as the methodology helps to better understand the inter- and intra-species diversity, life cycle and ecology of eye parasites belonging to the Diplostomidae family of trematodes. Diplostomid parasites are extremely difficult to distinguish morphologically, so molecular genetic methods allow us to understand the hitherto hidden diversity of parasites and their impact on the host, Vasemägi explained. "The developed methodology makes it possible to analyze more material within one study than all previous molecular genetic research combined," said Vasemägi.

Kristina Noreikiene, a postdoctoral researcher at the Estonian University of Life Sciences and the first author of the published work, explained that diplostomid parasites have a complex life cycle. After hatching from the egg, parasite larvae infect snails that live in water bodies. The next intermediate hosts for diplostomids are fish, which are infected by the free-swimming parasite larvae after they leave the snail. In fish, diplostomids have around 24 hours to evade the fish immune system and to reach ocular tissues, which are presumed to be immune-privileged and as such are safe havens for eye parasites. Depending on the species, eye parasites prefer specific niches within the eye. Some parasites infect the lens, and the fish develops a cataract that renders it blind. In other cases, parasites actively swim in the vitreous humor, and fish behaviour is affected as a result. Either way, infection with diplostomid parasites makes fish more susceptible to predation by fish-eating birds or mammals, thus completing the parasite's life cycle.

Noreikiene added that the results showed a surprisingly strong relationship between diplostomid infection prevalence in perch and the concentration of humic substances in water bodies. Humic substances make water darker and more acidic, and low pH is not suitable for most aquatic snail species. Thus, the dark and acidic lakes common in Northern Europe lack the parasite's first intermediate hosts, so fishes there do not get infected by diplostomids. In contrast to humic lakes, clear-water lakes provide suitable habitats for aquatic snails, and the infection prevalence of fish eye parasites is extremely high. She added that this study is one of the first of its kind to describe the fish immune response against the parasite within the eye tissue using whole-transcriptome analysis. According to Noreikiene, the gene expression results published in the study hint that there may be an immune response in the eye against the parasite infection.

Credit: 
Estonian Research Council

Children use both brain hemispheres to understand language, unlike adults

image: Examples of individual activation maps in each of the age groups. Strong activation in right-hemisphere homologs of the left-hemisphere language areas is evident in the youngest children, declines over age, and is entirely absent in most adults.

Image: 
Elissa Newport

WASHINGTON -- Infants and young children have brains with a superpower, of sorts, say Georgetown University Medical Center neuroscientists. Whereas adults process most discrete neural tasks in specific areas in one or the other of their brain's two hemispheres, youngsters use both the right and left hemispheres to do the same task. The finding suggests a possible reason why children appear to recover from neural injury much more easily than adults.

The study published Sept. 7, 2020 in PNAS focuses on one task, language, and finds that to understand language (more specifically, to process spoken sentences), children use both hemispheres. The finding fits with previous and ongoing research led by Georgetown neurology professor Elissa L. Newport, PhD, former postdoctoral fellow Olumide Olulade, MD, PhD, and neurology assistant professor Anna Greenwald, PhD.

"This is very good news for young children who experience a neural injury," says Newport, director of the Center for Brain Plasticity and Recovery, a joint enterprise of Georgetown University and MedStar National Rehabilitation Network. "Use of both hemispheres provides a mechanism to compensate after a neural injury. For example, if the left hemisphere is damaged from a perinatal stroke - one that occurs right after birth - a child will learn language using the right hemisphere. A child born with cerebral palsy that damages only one hemisphere can develop needed cognitive abilities in the other hemisphere. Our study demonstrates how that is possible."

Their study solves a mystery that has puzzled clinicians and neuroscientists for a long time, says Newport.

In almost all adults, sentence processing is possible only in the left hemisphere, according to both brain scanning research and clinical findings of language loss in patients who suffered a left hemisphere stroke.

But in very young children, damage to either hemisphere is unlikely to result in language deficits; language can be recovered in many patients even if the left hemisphere is severely damaged. These facts suggest that language is distributed to both hemispheres early in life, Newport says. However, traditional scanning had not revealed the details of these phenomena until now. "It was unclear whether strong left dominance for language is present at birth or appears gradually during development," explains Newport.

Now, using functional magnetic resonance imaging (fMRI) analyzed in a more complex way, the researchers have shown that the adult lateralization pattern is not established in young children and that both hemispheres participate in language during early development.

Brain networks that localize specific tasks to one or the other hemisphere start during childhood but are not complete until a child is about 10 or 11, she says. "We now have a better platform upon which to understand brain injury and recovery."

The study, originally run by collaborators William D. Gaillard, MD, and Madison M. Berl, PhD, of Children's National Medical Center, enrolled 39 healthy children, ages 4-13; Newport's lab added 14 adults, ages 18-29, and conducted a series of new analyses of both groups. The participants were given a well-studied sentence comprehension task. The analyses examined fMRI activation patterns in each hemisphere of the individual participants, rather than looking at overall lateralization in group averages. Investigators then compared the language activation maps for four age groups: 4-6, 7-9, 10-13, and 18-29. Penetrance maps revealed the percentage of subjects in each age group with significant language activation in each voxel of each hemisphere. (A voxel is a tiny point in the brain image, like a pixel on a television monitor.) Investigators also performed a whole-brain analysis across all participants to identify brain areas in which language activation was correlated with age.
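
A penetrance map of this kind is conceptually simple to compute. The sketch below uses random stand-in data rather than the study's fMRI maps, and assumes each subject's significance map has already been thresholded to a boolean array.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, grid = 10, (4, 4, 4)   # toy voxel grid, not real brain data

# One boolean "significant language activation" mask per subject.
masks = rng.random((n_subjects, *grid)) > 0.7

# Penetrance: per-voxel percentage of subjects showing significant activation.
penetrance = 100.0 * masks.mean(axis=0)
print(penetrance.min(), penetrance.max())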

Researchers found that, at the group level, even young children show left-lateralized language activation. However, a large proportion of the youngest children also show significant activation in the corresponding right-hemisphere areas. (In adults, the corresponding area in the right hemisphere is activated in quite different tasks, for example, processing emotions expressed with the voice. In young children, areas in both hemispheres are each engaged in comprehending the meaning of sentences as well as recognizing the emotional affect.)

Newport believes that the "higher levels of right hemisphere activation in a sentence processing task and the slow decline in this activation over development are reflections of changes in the neural distribution of language functions and not merely developmental changes in sentence comprehension strategies."

She also says that, if the team were able to do the same analysis in even younger children, "it is likely we would see even greater functional involvement of the right hemisphere in language processing than we see in our youngest participants (ages 4-6 years old).

"Our findings suggest that the normal involvement of the right hemisphere in language processing during very early childhood may permit the maintenance and enhancement of right hemisphere development if the left hemisphere is injured," Newport says.

The investigators are now examining language activation in teenagers and young adults who have had a major left hemisphere stroke at birth.

Credit: 
Georgetown University Medical Center

Genetic study of proteins is a breakthrough in drug development for complex diseases

image: Comparing the genetically inferred causal relationships of proteins on human diseases with historic drug development programs, this study showed for the first time that protein-disease pairs with genetically predicted causal evidence are more likely to correspond to approved drugs for the same indications.

To support open science, the working group built a graphical database, the EpiGraphDB Proteome PheWAS browser (www.epigraphdb.org/pqtl/), which makes over 220,000 pairs of protein-disease associations openly accessible to the public. The team also shared the analysis protocol with the public via GitHub (https://github.com/MRCIEU/epigraphdb-pqtl).

Image: 
Dr Jie Zheng

An innovative genetic study of blood protein levels, led by researchers in the MRC Integrative Epidemiology Unit (MRC-IEU) at the University of Bristol, has demonstrated how genetic data can be used to support drug target prioritisation by identifying the causal effects of proteins on diseases.

Working in collaboration with pharmaceutical companies, Bristol researchers have developed a comprehensive analysis pipeline using genetic prediction of protein levels to prioritise drug targets, and have quantified the potential of this approach for reducing the failure rate of drug development.

Genetic studies of proteins are in their infancy. The aim of this research, published in Nature Genetics, was to establish if genetic prediction of protein target effects could predict drug trial success. Dr Jie Zheng, Professor Tom Gaunt and colleagues from the University of Bristol, worked with pharmaceutical companies to set up a multi-disciplinary collaboration to address this scientific question.

Using a set of genetic epidemiology approaches, including Mendelian randomization and genetic colocalization, the researchers built a causal network of 1002 plasma proteins on 225 human diseases. In doing so, they identified 111 putatively causal effects of 65 proteins on 52 diseases, covering a wide range of disease areas. The results of this study are accessible via EpiGraphDB: http://www.epigraphdb.org/pqtl/
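
As a flavor of the approach, the sketch below computes a single-variant Wald-ratio Mendelian randomization estimate, the simplest member of the family of methods the team used. All effect sizes are invented; a real analysis draws on pQTL and disease GWAS summary statistics and adds colocalization and sensitivity checks.

# SNP -> protein association (from a protein quantitative trait locus study)
beta_snp_protein = 0.30   # per-allele effect on protein level (SD units)

# Same SNP -> disease association (from a disease GWAS)
beta_snp_disease = 0.06   # per-allele log odds ratio
se_snp_disease = 0.015

# Wald ratio: implied causal effect of the protein on the disease.
beta_protein_disease = beta_snp_disease / beta_snp_protein
# First-order standard error, ignoring uncertainty in the denominator.
se_protein_disease = se_snp_disease / abs(beta_snp_protein)

print(f"log-OR per SD of protein: {beta_protein_disease:.2f} "
      f"(SE {se_protein_disease:.2f})")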

Lead author, Dr Zheng, said their estimated effects of proteins on human diseases could be used to predict the effects of drugs targeting these proteins.

"This analysis pipeline could be used to validate both efficacy and potential adverse effects of novel drug targets, as well as provide evidence to repurpose existing drugs to other indications.

"This study lays a solid methodological foundation for future genetic studies of omics. The next step is for the analytical protocol to be used in early drug target validation pipeline by the study's pharmaceutical collaborators. We hope that these findings will support further drug development?to increase the success rate of drug trials, reduce drug cost and benefit patients," said Dr Zheng.

Tom Gaunt, Professor of Health and Biomedical Informatics, University of Bristol, and a member of the NIHR Bristol Biomedical Research Centre, added: "Our study used publicly available data published by many researchers around the world (collated by the MRC-IEU OpenGWAS database), and really demonstrates the potential of open data sharing in enabling novel discoveries in health research. We have demonstrated that this re-use of existing data offers an efficient approach to reducing drug development costs with anticipated benefits for health and society."

Credit: 
University of Bristol

Multinationals' supply chains account for a fifth of global emissions

A fifth of carbon dioxide emissions come from multinational companies' global supply chains, according to a new study led by UCL and Tianjin University that shows the scope of multinationals' influence on climate change.

The study, published in Nature Climate Change, maps the emissions generated by multinationals' assets and suppliers abroad, finding that the flow of investment is typically from developed countries to developing ones - meaning that emissions are in effect outsourced to poorer parts of the world.

The research shows the impact that multinationals can have by encouraging greater energy efficiency among suppliers or by choosing suppliers that are more carbon efficient.

The authors proposed that emissions be assigned to countries where the investment comes from, rather than countries where the emissions are generated.

Professor Dabo Guan (UCL Bartlett School of Construction & Project Management) said: "Multinational companies have enormous influence stretching far beyond national borders. If the world's leading companies exercised leadership on climate change - for instance, by requiring energy efficiency in their supply chains - they could have a transformative effect on global efforts to reduce emissions.

"However, companies' climate change policies often have little effect when it comes to big investment decisions such as where to build supply chains.

"Assigning emissions to the investor country means multinationals are more accountable for the emissions they generate as a result of these decisions."

The study found that carbon emissions from multinationals' foreign investment fell from a peak of 22% of all emissions in 2011 to 18.7% in 2016. Researchers said this was a result of a trend of "de-globalisation", with the volume of foreign direct investment shrinking, as well as new technologies and processes making industries more carbon efficient.

Mapping the global flow of investment, researchers found steady increases in investment from developed to developing countries. For instance, between 2011 and 2016 emissions generated through investment from the US to India increased by nearly half (from 48.3 million tons to 70.7 million tons), while in the same years emissions generated through investment from China to south-east Asia increased tenfold (from 0.7 million tons to 8.2 million tons).

Lead author Dr Zengkai Zhang, of Tianjin University, said: "Multinationals are increasingly transferring investment from developed to developing countries. This has the effect of reducing developed countries' emissions while placing a greater emissions burden on poorer countries. At the same time it is likely to create higher emissions overall, as investment is moved to more 'carbon intense' regions."

The study also examined the emissions that the world's largest companies generated through foreign investment. For instance, Total S.A.'s foreign affiliates generated more than a tenth of the total emissions of France.

BP generated more emissions through its foreign affiliates than the foreign-owned oil industry in any country except the United States; Walmart, meanwhile, generated more emissions abroad than the whole of Germany's foreign-owned retail sector, while Coca-Cola's emissions around the world were equivalent to the whole of the foreign-owned food and drink industry hosted by China.

Credit: 
University College London

Gene test can predict risk of medications causing liver injury

image: Color-enhanced confocal microscope image of a liver organoid used in a study in Nature Medicine that reports success at developing a polygenic risk score for predicting whether a medication will cause liver injury.

Image: 
Cincinnati Children's

CINCINNATI--Scientists who were working on a way to determine the viability of batches of tiny liver organoids have discovered a testing method that may have far broader implications.

Their study published Sept. 7, 2020, in Nature Medicine, reports identifying a "polygenic risk score" that shows when a drug, be it an approved medication or an experimental one, poses a risk of drug-induced liver injury (DILI).

The work was conducted by a consortium of scientists from Cincinnati Children's, Tokyo Medical and Dental University, Takeda Pharmaceutical Co. in Japan, and several other research centers in Japan, Europe and the US. The findings take a large step toward solving a problem that has frustrated drug developers for years.

"So far we have had no reliable way of determining in advance whether a medication that usually works well in most people might cause liver injury among a few," says Jorge Bezerra, MD, Director, Division of Gastroenterology, Hepatology and Nutrition at Cincinnati Children's.

"That has caused a number of promising medications to fail during clinical trials, and in rare cases, also can cause serious injury from approved medications. If we could predict which individuals would be most at-risk, we could prescribe more medications with more confidence," says Bezerra, who was not involved with the study.

Now that reliable test might be just around the corner.

"Our genetic score will potentially benefit people directly as a consumer diagnostic-like application, such as 23andMe and others. People could take the genetic test and know their risk of developing DILI," says corresponding author Takanori Takebe, MD, an organoid expert at Cincinnati Children's who has been studying ways to grow liver "buds" for large-scale use in research.

The team developed the risk score by re-analyzing hundreds of genome-wide association studies (GWAS) that had identified a long list of gene variants that might indicate a likelihood of a poor reaction in the liver to various compounds. By combining the data and applying several mathematical weighting methods, the team found a formula that appears to work.

The risk score takes more than 20,000 gene variants into account.
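
In its simplest form, a polygenic risk score is just a weighted sum of a person's risk-allele counts. The sketch below shows that arithmetic with four placeholder variants; the published DILI score aggregates more than 20,000, and the weighting methods the team compared are more elaborate.

import numpy as np

# Per-variant effect weights (e.g., from GWAS summary statistics).
weights = np.array([0.12, -0.05, 0.30, 0.08])

# One person's allele dosages for the same variants (0, 1 or 2 copies).
dosages = np.array([1, 2, 0, 1])

# The score is the dot product of weights and dosages.
prs = float(weights @ dosages)
print(f"PRS = {prs:.2f}")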

The team confirmed the score's prediction power in cell culture, in organoid tissue and by using patient genomic data already on file.

The score was valid in tests involving more than a dozen medications: cyclosporine, bosentan, troglitazone, diclofenac, flutamide, ketoconazole, carbamazepine, amoxicillin-clavulanate, methapyrilene, tacrine, acetaminophen and tolcapone.

The test works for different types of drugs because the score focuses on a set of common mechanisms involved in how the liver metabolizes a drug, including oxidant stress pathways in liver cells and endoplasmic reticulum (ER) stress--a disruption of cell function that happens when proteins cannot fold properly.

How can a risk score help?

For clinicians, this would allow them to run a quick genetic test to identify patients at higher risk of liver injury before prescribing medications. The results might prompt a doctor to change the dosage, order more frequent follow-up tests to catch early signs of liver damage, or switch medications entirely.

For drug research, the test could help exclude people at high risk of liver injury from a clinical trial so that the benefits of a medication can be more accurately assessed.

Liver toxicity has caused a number of drug failures over the years. Takebe says both patients and the drug maker were disappointed when a potential diabetes treatment called fasiglifam was withdrawn in 2014 during phase 3 clinical trials. Some of the participants (at a rate equivalent to about 1 in 10,000) experienced elevated enzyme levels that suggested potential liver injury.

While such risks may appear low, at the time there was no way to predict which people would develop DILI, making the drug unacceptably dangerous. But the new polygenic risk score would make it possible to produce liver organoids that exhibit key risk variants to determine if a drug is harmful before people ever take it.

What's next?

Takebe and colleagues demonstrated how to produce liver buds on a mass scale in 2017 in a study published in Cell Reports. The team has improved upon the process since, reporting success in 2019 in Cell Metabolism at engineering liver organoids that model disease.

However, more research involving a more-diverse population is needed to confirm the initial findings and to scale up a DILI screening test for potentially widespread use, Takebe says.

Credit: 
Cincinnati Children's Hospital Medical Center

'Wrong-way' migrations stop shellfish from escaping ocean warming

Ocean warming is paradoxically driving bottom-dwelling invertebrates - including sea scallops, blue mussels, surfclams and quahogs that are valuable to the shellfish industry - into warmer waters and threatening their survival, a Rutgers-led study shows.

In a new study published in the journal Nature Climate Change, researchers identify a cause for the "wrong-way" species migrations: warming-induced changes to their spawning times, resulting in the earlier release of larvae that are pushed into warmer waters by ocean currents.

The researchers studied six decades of data on 50 species of bottom-dwelling invertebrates, and found that about 80 percent have disappeared from the Georges Bank and the outer shelf between the Delmarva Peninsula and Cape Cod, including off the coast of New Jersey.

Many species of fish respond to the warming ocean by migrating to cooler waters. But the "wrong-way" migrators - which include shellfish, snails, starfish, worms and others - share a few crucial traits. As larvae, they are weak swimmers and rely on ocean currents for transportation. As adults, they tend to remain in place, sedentary or fixed to the seafloor.

The researchers found that the warming ocean has caused these creatures to spawn earlier in the spring or summer, exposing their larvae to patterns of wind and water currents they wouldn't experience during the normal spawning season. As a result, the larvae are pushed toward the southwest and inland, where waters are warmer and they are less likely to survive. The adults stay in those areas and are trapped in a feedback loop in which even warmer waters lead to even earlier spawning times and a further shrinking of their occupied areas.

The researchers compared this phenomenon to "elevator-to-extinction" events in which increasing temperatures drive birds and butterflies upslope until they are eliminated from areas they once inhabited. The effect on bottom-dwelling invertebrates is more insidious, however, because these creatures could potentially thrive in cooler regions, but earlier-spring currents prevent weak-swimming larvae from reaching that refuge.

The researchers noted that these effects are influenced by localized wind and current patterns. Further research is needed to determine whether the effects are similar on the U.S. Pacific coast or other ocean areas.

Credit: 
Rutgers University

NASA satellite finds Haishen now a super typhoon

image: NASA-NOAA's Suomi NPP satellite captured a visible image of Super Typhoon Haishen moving through the Philippine Sea on Sept. 4.

Image: 
NASA Worldview, Earth Observing System Data and Information System (EOSDIS)

NASA-NOAA's Suomi NPP satellite passed over the Philippine Sea on Sept. 4 and provided a visible image of Haishen that had strengthened into a super typhoon.

The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard Suomi NPP provided a visible image of Haishen that revealed a large, circular, organized structure of strong thunderstorms circling the open eye. The eyewall, the ring of strong thunderstorms that circles the open eye, is estimated to be 81 nautical miles wide. Satellite data indicate the eye is about 31 nautical miles wide. The storm is at least 450 miles in diameter, as tropical-storm-force winds extend up to 225 miles from the center.

The Joint Typhoon Warning Center (JTWC) in Honolulu, Hawaii noted at 5 a.m. EDT (0900 UTC) on Sept. 4, that Super Typhoon Haishen had maximum sustained winds near 135 knots (155 mph/250 kph). It is currently a Category 4 hurricane/typhoon. It was centered near latitude 22.2 degrees north and longitude 134.3 degrees east, about 439 nautical miles southeast of Kadena Air Base, Okinawa Island, Japan. The storm was moving to the northwest.

JTWC forecasters expect Haishen to turn north-northwestward while intensifying further. It is expected to peak later on Sept. 4 with sustained winds near 140 knots (161 mph/259 kph), which would make it equivalent to a Category 5 hurricane on the Saffir-Simpson Hurricane Wind Scale. JTWC forecasts Haishen to pass west of Kyushu, Japan and make landfall in South Korea after three days.

Credit: 
NASA/Goddard Space Flight Center

Unraveling the secrets of Tennessee whiskey

image: UT Department of Food Science graduate students collecting whiskey distillate samples for chemical analysis at the Sugarlands Distilling Company in Gatlinburg, Tennessee. Pictured are co-author Trenton Kerley, Melissa Dein, and Jordan Lopez.

Image: 
Photo by J. Munafo, courtesy UTIA.

KNOXVILLE, Tenn. -- More than a century has passed since the last scientific analysis of the famed "Lincoln County [Tennessee] process" was published, but the secrets of the distinctive Tennessee whiskey flavor are starting to unravel at the University of Tennessee Institute of Agriculture. The latest research promises advancements in the field of flavor science as well as marketing.

Conducted by John P. Munafo, Jr., assistant professor of flavor science and natural products, and his graduate student, Trenton Kerley, the study "Changes in Tennessee Whiskey Odorants by the Lincoln County Process" was recently published in the Journal of Agricultural and Food Chemistry (JAFC).

The study incorporated a combination of advanced flavor chemistry techniques to probe the changes in flavor chemistry occurring during charcoal filtration. This type of filtration is a common step in the production of distilled beverages, including vodka and rum, but it's a required step for a product to be labeled "Tennessee whiskey." The step is called the Lincoln County Process (LCP), after the locale of the original Jack Daniel's distillery. It is also referred to as "charcoal mellowing."

The LCP step is performed by passing the fresh whiskey distillate through a bed of charcoal, usually derived from burnt sugar maple, prior to barrel-aging the product. Although no scientific studies have proved the claim, the LCP is believed to impart a "smoother" flavor to Tennessee whiskey. In addition, to carry "Tennessee whiskey" on the label, the liquor must by law be produced in the state of Tennessee from at least 51% corn and aged in Tennessee for at least two years in unused charred oak barrels.
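
For quick reference, the label requirements named above can be condensed into a simple checklist; the sketch below encodes them in Python. It is a reading aid rather than a legal reference, and all field names are our own.

```python
# Checklist of the "Tennessee whiskey" label requirements as summarized
# in this article (illustrative only, not a legal reference).

from dataclasses import dataclass

@dataclass
class Batch:
    produced_in_tennessee: bool
    corn_fraction: float            # corn share of the mash bill
    lcp_filtered: bool              # passed over charcoal (Lincoln County Process)
    years_aged_in_tennessee: float
    new_charred_oak_barrels: bool

def qualifies_as_tennessee_whiskey(b: Batch) -> bool:
    return (b.produced_in_tennessee
            and b.corn_fraction >= 0.51
            and b.lcp_filtered
            and b.years_aged_in_tennessee >= 2.0
            and b.new_charred_oak_barrels)

print(qualifies_as_tennessee_whiskey(
    Batch(True, 0.80, True, 2.0, True)))  # True
```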

The actual LCP differs from distiller to distiller, and, as the details are generally held as a trade secret, the process has been historically shrouded in mystery. There are no regulations as to how the process is performed, only that the step is required. In other words, all a manufacturer needs to do is pass the distillate over charcoal (an undefined amount--possibly even just one piece). Thus, depending on how it's conducted, the LCP step may not impact the whiskey flavor at all. On the other hand, even small adjustments to the LCP can modify the flavor profile of the whiskey positively or negatively, potentially causing any number of surprises.

Munafo and Kerley describe how distillers adjust parameters empirically throughout the whiskey production process, then rely on professional tasters to sample products, blending subtly unique batches to achieve their target flavor. Munafo says, "By gaining a fundamental understanding of the changes in flavor chemistry occurring during whiskey production, our team could advise distillers about exactly what changes are needed to make their process produce their desired flavor goals. We want to give distillers levers to pull, so they are not randomly or blindly attempting to get the precise flavor they want."

Samples used in the study were provided by the Sugarlands Distilling Company (SDC), in Gatlinburg, Tennessee, producers of the Roaming Man Whiskey. SDC invited the UTIA researchers to visit their distillery and collect in-process samples. Munafo says SDC prioritizes transparency around their craft and takes pride in sharing the research, discovery and distillation process of how their whiskey is made and what makes Tennessee whiskey unique.

Olfactory evaluations--the good ole smell test--revealed that the LCP treatment generally decreased malty, rancid, fatty and roasty aromas in the whiskey distillates. As for the odorants (i.e., molecules responsible for odor), 49 were identified in the distillate samples using an analytical technique called gas chromatography-olfactometry (GC-O). Nine of these odorants have never been reported in the scientific whiskey literature.

One of the newly found whiskey odorants, called DMPF, was originally discovered in cocoa. It is described as having a unique anise or citrus-like smell. Another of the newly discovered whiskey odorants (called MND) is described as having a pleasant dried hay-like aroma. Both odorants have remarkably low odor thresholds in the parts-per-trillion range, meaning that the smells can be detected at very low levels by people but are difficult to detect with scientific instrumentation.
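
A common way flavor chemists judge whether such trace odorants matter is the odor activity value (OAV): the measured concentration divided by the odor detection threshold, with values above 1 suggesting the compound is perceptible. The sketch below shows the arithmetic with hypothetical concentrations, not figures from the study.

```python
# Odor activity value (OAV) = concentration / odor detection threshold.
# An OAV above 1 suggests the odorant sits above its perception threshold.
# All concentrations and thresholds here are hypothetical illustrations.

odorants = {
    # name: (concentration, threshold), both in ng/L (parts per trillion)
    "DMPF (anise/citrus-like)": (4.0, 0.5),
    "MND (dried hay-like)":     (0.2, 0.4),
}

for name, (conc, threshold) in odorants.items():
    oav = conc / threshold
    verdict = "likely perceptible" if oav > 1 else "likely below threshold"
    print(f"{name}: OAV = {oav:.1f} ({verdict})")
```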

The only previous investigation into how charcoal treatment affects whiskey was published in 1908 by William Dudley in the Journal of the American Chemical Society. The new study revealed fresh knowledge for optimizing Tennessee whiskey production. Thirty-one whiskey odorants were measured via a technique called stable isotope dilution assay (SIDA); all of them decreased in concentration as a result of LCP treatment, albeit to different degrees. In other words, while the LCP appears to be selective in removing certain odorants, the process didn't increase or add any odorants to the distillate. Distillers can exploit this selectivity to remove undesirable aromas while maintaining higher levels of desirable ones, thus "tailoring" the flavor profile of the finished whiskey.
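
That selectivity is straightforward to quantify: for each odorant, the percent removed by the LCP follows from the before and after concentrations measured by SIDA. A minimal sketch, with hypothetical numbers standing in for the published measurements:

```python
# Percent of each odorant removed across the LCP step, computed from
# before/after concentrations. Values are hypothetical stand-ins; the
# study measured 31 odorants by stable isotope dilution assay (SIDA).

measurements = {
    # odorant: (before LCP, after LCP), arbitrary concentration units
    "malty odorant":  (12.0, 3.0),
    "fatty odorant":  (8.0, 5.6),
    "roasty odorant": (2.5, 2.1),
}

for name, (before, after) in measurements.items():
    removed = 100.0 * (before - after) / before
    print(f"{name}: {removed:.0f}% removed by the LCP")
```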

"We want to provide the analytical tools needed to help enable distillers to have more control of their processes and make more consistent and flavorful whiskey, says Dr. Munafo. "We want to help them to take out some of the guesswork involved in whiskey production."

Additional studies are now underway at the UT Department of Food Science to characterize both the flavor chemistry of different types of whiskey and their production processes. The ultimate aim of the whiskey flavor chemistry program is to aid whiskey manufacturers in producing a consistent product with the exact flavor profile they desire. Even with the aid of science, Munafo says, "Whiskey making will 'still' remain an impressive art form." Pun intended.

Credit: 
University of Tennessee Institute of Agriculture

Autophagy: the beginning of the end

image: An Atg9 vesicle serves as a platform for the recruitment of the autophagic machinery. It thereby forms a seed for the formation of an autophagosome around the cargo by accepting lipids that are transferred by the Atg2 protein from the neighbouring endoplasmic reticulum (ER).

Image: 
© Verena Baumann

Autophagosomes first form as cup-shaped membranes in the cell, which then grow to engulf the cellular material designated for destruction. The formation of these membranes is catalyzed by a complex machinery of proteins. "We have a very good knowledge of the factors involved in autophagosome formation", explains group leader Sascha Martens, "but how they come together to initiate the formation of these membranes has so far been enigmatic".

One of the factors is Atg9, a protein whose importance in the process was known, but whose role was not clear. Atg9 is found in small intracellular vesicles. Researchers Justyna Sawa-Makarska, Verena Baumann and Nicolas Coudevylle from the Martens lab now show that these vesicles form a platform on which the autophagy machinery can assemble to build the autophagosome. "Atg9 vesicles are abundant in the cell, which means they can be rapidly recruited when autophagosomes are needed", explains Martens.

Cells encapsulate cargo in vesicles, so that they can be correctly transported and degraded in a chemical environment that is different to the one normally found in cells. Autophagosomes therefore consist of a double membrane made of phospholipids. This greasy envelope creates a waterproof package that separates material from the aqueous surroundings of the cell and marks it for degradation. However, Atg9 vesicles do not supply the bulk of the lipids to the growing autophagosome.

To understand a complex machinery like the cell, it often helps to take it apart and rebuild it. The biogenesis of autophagosomes involves numerous proteins. By isolating and characterizing 21 of these components, the scientists have been able to rebuild parts of the autophagy machinery in the 'test tube' - an arduous process that took Sascha Martens and his team almost ten years. "With this approach we could reconstitute the early steps of autophagosome biogenesis in a controlled manner", he says. With the elaborate toolkit the Martens lab has developed, the scientists now aim to unravel the next steps in the biogenesis of the autophagosome. The research project was a collaboration of the Martens lab with Gerhard Hummer and Soeren von Bülow from the Max Planck Institute for Biophysics in Frankfurt and Martin Graef from the Max Planck Institute for Biology of Ageing in Cologne.

Credit: 
University of Vienna