Culture

Non-invasive sensor shows correlation between blood pressure and intracranial pressure

image: Researchers have demonstrated the mechanism linking high blood pressure to elevated intracranial pressure. The discovery could lead to novel treatments for intracranial hypertension and its complications, such as stroke.

Image: 
Casa da Árvore

Brazilian researchers have simultaneously demonstrated the mechanism linking high blood pressure to elevated intracranial pressure, validated a non-invasive intracranial pressure monitoring method, and proposed a treatment for high blood pressure that does not affect intracranial hypertension.

The study was supported by FAPESP and involved collaboration between researchers at São Paulo State University (UNESP) and Brain4care, a startup based in São Carlos. It could result in novel treatments for intracranial hypertension and its complications, including stroke. The main findings are reported in the journal Hypertension.

The researchers monitored blood pressure and intracranial pressure in rats for six weeks. “We set out to investigate what happened to intracranial pressure during the period in which the animals were becoming hypertensive. We were the first to succeed in monitoring this process non-invasively, tracking changes in the shape of the intracranial pressure curve. Our study suggests that intracranial hypertension can be prevented if diagnosed early and treated with losartan, a drug widely used by patients with high blood pressure. It blocks the action of angiotensin II [a naturally occurring peptide that can cause vasoconstriction and an increase in blood pressure], which we also show to be important to control intracranial pressure,” said Eduardo Colombari, principal investigator for the study. Colombari is a professor at UNESP’s Dental School in Araraquara (FOAr).

Intracranial pressure typically increases because of a tumor, encephalitis, meningitis, aneurysm or similar problems, but the researchers showed that chronic high blood pressure can also impair cerebral compliance, leading to a rise in intracranial pressure.

In the study, the researchers used vascular clips to simulate renal artery obstruction in rats, restricting the flow of blood to one kidney. The reduced blood supply triggered the pressure-controlling renin-angiotensin system, leading the kidney to release peptides, enzymes and receptors that constrict the blood vessels and raise blood pressure throughout the body. In the third week of monitoring, when the rats were considered hypertensive, blood pressure rose even more, causing fluid retention and, above all, boosting cerebral blood flow.

“If the hypertension isn’t treated, the disorder can worsen,” Colombari said. “The rise in intracranial pressure caused by systemic hypertension impairs the brain’s ability to stabilize the pressure [cerebral autoregulation]. This can also lead to blood-brain barrier rupture. Our study showed that the rats’ blood-brain barrier was compromised in the third week. When the barrier is breached, substances and products from the renin-angiotensin system as well as pro-inflammatory substances present in the blood vessels can enter the interstitial space, where the neurons reside, especially regions important to integrative neurohumoral adjustment, such as the cardiovascular, respiratory, and renal systems, among others.”

Treating intracranial hypertension

Blood-brain barrier disruption endangers areas of the nervous system that are important to control cardiovascular pressure as a whole. “How is intracranial hypertension treated now? By inducing a coma or administering a diuretic to resolve fluid retention in the skull. These methods are relatively unspecific and highly systemic. Deeper understanding of the link between high blood pressure and intracranial hypertension points to the possibility of a new field of study in pharmacology,” said Gustavo Frigieri, Brain4care’s Scientific Director.

Part of the study involved a comparison between intracranial pressure measured by the non-invasive sensor and by the invasive method. The wearable sensor developed by Brain4care has been used to measure intracranial pressure in patients with systemic impairments and has been licensed by the National Health Surveillance Agency (ANVISA) in Brazil and the Food and Drug Administration (FDA) in the United States.

Frigieri also sees plenty of opportunities for applications in basic research. “By comparing the non-invasive and invasive methods, we validated our technology for use in scientific research with small animals,” he said. “It can close gaps left open owing to the aggressiveness of the conventional method, which entails a significant risk of infection because a hole is drilled in the skull to insert a sensor.”

Blood flow and hormones

At the end of the study, the researchers treated the animals with losartan, reducing blood pressure and intracranial pressure. “It’s not a cause-and-effect relationship because intracranial pressure wasn’t affected when we lowered blood pressure with a vasodilator [hydralazine]. We observed a major impairment of the brain, and the angiotensin inhibitor [losartan] improved both blood pressure and cerebral blood flow,” Colombari said.

In the sixth week of the experiment, before administration of any drugs, blood pressure was high (190/100 mmHg) and intracranial pressure had risen significantly. The researchers discovered alterations in the intracranial pressure pulse waveforms. Each heartbeat (systole) pumps blood to the brain, producing the first peak (P1). A second wave (P2) correlates directly with intracranial arterial volume and cerebral compliance, important factors observed immediately before ventricular diastole.

According to the researchers, the second wave is associated with brain tissue compliance and arterial elasticity in the skull, which absorb the energy of the first wave. However, blood-brain barrier disruption and loss of cerebral compliance hinder control of P2, and the second wave becomes stronger than the first.

“At this point we found P2 to be higher than P1, which is the opposite of the normal situation. This is due to loss of protection by the blood-brain barrier so that the brain expands and fluid leaks into the interstitium,” Colombari said.
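
For readers who want to see how such a waveform comparison can be quantified, the short Python sketch below estimates the P2/P1 ratio of a single digitized pressure pulse. It is a conceptual illustration only, assuming an idealized two-peak pulse; it is not the Brain4care algorithm, and the peak-detection approach and variable names are illustrative assumptions.

    # Minimal sketch (assumed approach, not the published method): estimate the
    # P2/P1 ratio of a single intracranial pressure pulse. P2 > P1 would suggest
    # impaired cerebral compliance, as described in the article.
    import numpy as np
    from scipy.signal import find_peaks

    def p2_p1_ratio(pulse: np.ndarray) -> float:
        """pulse: pressure samples covering one cardiac cycle (arbitrary units)."""
        peaks, _ = find_peaks(pulse)               # indices of local maxima
        if len(peaks) < 2:
            raise ValueError("need at least two detectable peaks (P1 and P2)")
        p1, p2 = pulse[peaks[0]], pulse[peaks[1]]  # first two peaks taken as P1, P2
        return p2 / p1

    # Example with a synthetic two-peak pulse:
    t = np.linspace(0, 1, 500)
    pulse = np.exp(-((t - 0.25) / 0.06) ** 2) + 0.8 * np.exp(-((t - 0.55) / 0.09) ** 2)
    print(f"P2/P1 = {p2_p1_ratio(pulse):.2f}")  # < 1 here, i.e. a normal-looking shape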

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

In youth, COVID-19 causes more complications than flu; fatality is rare

NEW YORK, NY--A new global study of 30-day outcomes in children and adolescents with COVID-19 found that while death was uncommon, the illness produced more symptoms and complications than seasonal influenza.

The study, "30-day outcomes of Children and Adolescents with COVID-19: An International Experience," published online in the journal Pediatrics, also found significant variation in treatment of children and adolescents hospitalized with COVID-19.

Early in the pandemic, opinions around the impact of COVID-19 on children and adolescents ranged from it being no more than the common flu to fear of its potential impact on lesser-developed immune systems. This OHDSI global network study compared the real-world observational data of more than 242,000 children and adolescents diagnosed with COVID-19, including nearly 10,000 hospitalized youths, to more than 2,000,000 diagnosed with influenza across five countries (France, Germany, South Korea, Spain, and the United States) to provide a clearer picture of its impact.

Asthma and obesity -- common findings among the general pediatric population -- were the most common baseline comorbidities. There was also a higher prevalence of rare conditions, including congenital malformations, neurodevelopmental disorders, and heart disease, among those hospitalized with COVID-19. Pediatric patients with COVID-19 also showed higher rates of symptoms such as labored breathing, loss of smell and gastrointestinal complaints than those with influenza, which could help improve early diagnosis of COVID-19 among this population.

Adjunctive therapies were the most common treatment options in children and adolescents, though there was global heterogeneity on which particular therapies were used (systemic corticosteroids and famotidine were most common).

The most common 30-day complications for hospitalized youths with COVID-19 were hypoxemia and pneumonia, both of which occurred at higher rates than among pediatric patients hospitalized with influenza.

There was limited knowledge of the impact of COVID-19 on children and adolescents around the world during the first half of 2020, when the OHDSI community collaborated on the CHARYBDIS Project. Findings at the time ranged from one study reporting a 5.7% hospitalization rate to another reporting a 63% hospitalization rate. There was a need for reliable evidence on the demographics, comorbidities, symptoms, in-hospital treatments, and health outcomes among children and adolescents to inform clinical decision-making.

"This study addressed critical questions that were weighing down on both the healthcare community and the general population -- how was COVID-19 impacting our youngest population," said study lead Talita Duarte-Salles, PhD, an epidemiologist at IDIAP Jordi Gol in Barcelona, Spain, and first author of the study. "While some last year claimed that COVID-19 was no different than the flu, the real-world evidence we generated through open science showed something quite different. It was a relief to see that fatality was rare, but clearly both complications and symptoms showed the COVID-19 was no flu in children and adolescents."

The study was developed and executed by the OHDSI (Observational Health Data Sciences and Informatics) community, a multi-stakeholder, interdisciplinary network that collaborates globally to bring out the value of health data through open science and large-scale analytics. This study resulted from the CHARYBDIS Project, which has produced several published studies, including ones on general COVID-19 phenotyping, patients with autoimmune disease, and use of repurposed and adjunctive drug therapies. Several others are undergoing peer review and have been posted to a preprint server; OHDSI's work on COVID-19 is collected on the OHDSI website.

Columbia University serves as the Central Coordinating Center for the OHDSI community.

"Generating reliable evidence that can inform clinical decision-making for children and adolescents was so important, and it doesn't happen without collaboration and the foundation of open-source tools and practices developed for years in this network," Duarte-Salles said. "It was truly inspiring the way our OHDSI community rallied together globally in the face of this unprecedented pandemic and collaborated together."

Credit: 
Columbia University Irving Medical Center

Pandemic prevention measures linked to lower rates of Kawasaki disease in children

Research Highlights:

Rates of Kawasaki disease - a condition that creates inflammation in blood vessels in the heart and is more common in children of Asian/Pacific Island descent - have substantially decreased in South Korea during the COVID-19 pandemic.

The decrease could be due to mask-wearing, hand-washing, school closures and physical distancing, suggesting Kawasaki disease may be prompted by infectious agents.

The cause of Kawasaki disease is unknown, though it may be an immune response to acute infectious illness.

DALLAS, June 7, 2021 -- The rate of Kawasaki disease in South Korea has substantially decreased during the COVID-19 pandemic, possibly due to pandemic prevention efforts, such as mask-wearing, hand-washing and physical distancing, according to new research published today in the American Heart Association's flagship journal Circulation.

Kawasaki disease is the most common cause of heart disease that develops after birth in children, creating inflammation in blood vessels, particularly heart arteries. Kawasaki disease usually occurs before age 5 and is more common among children of Asian descent, although it affects children of all races and ethnicities. South Korea has the second-highest incidence of Kawasaki disease in the world, after Japan.

According to the American Heart Association's 2021 heart disease and stroke statistics, the incidence of Kawasaki disease in 2006 was 20.8 per 100,000 U.S. children under age 5; this is the most recent national estimate available and is limited by its reliance on weighted hospitalization data from 38 states. Although Kawasaki disease can occur into adolescence (and rarely beyond), 76.8% of U.S. children with the condition are age 5 or younger. Boys have a 1.5-fold higher incidence of Kawasaki disease than girls. The rate of Kawasaki disease appears to be rising worldwide, possibly due to improved awareness and recognition of the disease, more frequent diagnosis of incomplete Kawasaki disease, and a true increase in incidence.

Symptoms of Kawasaki disease include fever, rash, red lips and strawberry tongue (bumpy and red with enlarged taste buds). Prompt treatment is critical to prevent significant heart problems, and most children recover fully with treatment. Although the cause of Kawasaki disease is unknown, it may be an immune response to an acute infectious illness based in part on genetic susceptibilities.

South Korean researchers noted that efforts to prevent COVID-19 provided a unique opportunity to analyze the possible effects of mask-wearing and social distancing on Kawasaki disease. Since February 2020, South Korea has required strict mask-wearing, periodic school closures, physical distancing, and frequent testing and isolation for people with COVID-19 symptoms.

Researchers reviewed health records from January 2010 to September 2020 in a South Korean national health insurance database to identify Kawasaki disease cases among children from birth to 19 years old. They identified 53,424 cases of Kawasaki disease during the 10 years studied, and 83% of cases occurred in children younger than 5 years of age.

Researchers compared the rate of Kawasaki disease from February 2020 to September 2020, a time of significant COVID-19 prevention efforts, to pre-COVID-19 rates of Kawasaki disease. Their analysis found the number of Kawasaki disease cases dropped substantially -- by about 40% -- after COVID-19 prevention efforts were implemented in Feb. 2020. Before 2020, the average number of cases of Kawasaki disease between February and September was 31.5 per 100,000 people, compared to 18.8 per 100,000 people for the same months in 2020 during the COVID-19 pandemic. The greatest decrease in cases occurred among children up to age 9, while no decrease occurred among 10- to 19-year-olds.
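
The roughly 40% decline quoted above follows directly from the reported incidence figures; a minimal calculation (assuming a simple relative-change comparison of the two rates) confirms it:

    # Relative decrease in Kawasaki disease incidence reported in the article
    # (February-September average before 2020 vs. the same months in 2020).
    before = 31.5   # cases per 100,000
    during = 18.8   # cases per 100,000
    decrease = (before - during) / before
    print(f"{decrease:.1%}")  # ~40.3%, consistent with the ~40% drop reported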

"Our findings emphasize the possible impact of environmental triggers on the occurrence of Kawasaki disease," said study lead author Jong Gyun Ahn, M.D., Ph.D., associate professor of pediatrics at Severance Children's Hospital at Yonsei University College of Medicine in Seoul, South Korea. "The decrease in the incidence of Kawasaki disease after the implementation of non-pharmaceutical interventions is very clear, and it is unlikely that other independent interventions were accidentally involved.

"The broad and intensive COVID-19 prevention interventions had the additional effect of lowering the incidence of respiratory infections, which have previously been suggested as triggering agents for Kawasaki disease," Ahn noted. "Additionally, the seasonality of the Kawasaki disease epidemic disappeared in South Korea. It is usually most prevalent in the winter, with a second peak in late spring-summer."

American Heart Association volunteer expert Jane W. Newburger, M.D., M.P.H., FAHA, notes the research findings are consistent with the hypothesis that Kawasaki disease is an immunologic reaction elicited in genetically susceptible people when exposed to viruses or other infectious agents in the environment. Newburger is a member of the American Heart Association's Young Hearts Council, associate cardiologist-in-chief, academic affairs; medical director of the neurodevelopmental program; and director of the Kawasaki Program at Boston Children's Hospital; and Commonwealth Professor of Pediatrics at Harvard Medical School.

"During the COVID pandemic, children were exposed to fewer viruses and other infectious agents. So the 'natural experiment' that occurred from isolation and masking of children

supports the likelihood that Kawasaki disease is triggered by viruses or other infectious agents in the environment," Newburger said. "However, these dramatic changes in lifestyle would be difficult to sustain if the sole purpose was to prevent Kawasaki disease. Kawasaki disease is a very rare illness in children and does not represent a public health emergency like COVID."

Newburger also noted that many Kawasaki disease experts in the U.S. also have noticed fewer cases in their centers during the pandemic.

This study has some limitations: cases of Kawasaki disease among patients who did not submit insurance claims are not included in the national insurance database; and the study was observational and could not control for other factors such as whether or not patients sought and received health care including screening and treatment for Kawasaki disease.

It is also important to note that Kawasaki disease is not the same as multi-system inflammatory syndrome in children (MIS-C), the new condition identified this past year during the COVID-19 pandemic. While the two conditions have some overlapping symptoms, they also have some distinct differences including more profound inflammation and more gastrointestinal symptoms with MIS-C, and MIS-C is associated with COVID-19 infection.

Credit: 
American Heart Association

Tiny particles power chemical reactions

CAMBRIDGE, MA -- MIT engineers have discovered a new way of generating electricity using tiny carbon particles that can create a current simply by interacting with liquid surrounding them.

The liquid, an organic solvent, draws electrons out of the particles, generating a current that could be used to drive chemical reactions or to power micro- or nanoscale robots, the researchers say.

"This mechanism is new, and this way of generating energy is completely new," says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. "This technology is intriguing because all you have to do is flow a solvent through a bed of these particles. This allows you to do electrochemistry, but with no wires."

In a new study describing this phenomenon, the researchers showed that they could use this electric current to drive a reaction known as alcohol oxidation -- an organic chemical reaction that is important in the chemical industry.

Strano is the senior author of the paper, which appears today in Nature Communications. The lead authors of the study are MIT graduate student Albert Tianxiang Liu and former MIT researcher Yuichiro Kunai. Other authors include former graduate student Anton Cottrill, postdocs Amir Kaplan and Hyunah Kim, graduate student Ge Zhang, and recent MIT graduates Rafid Mollah and Yannick Eatmon.

Unique properties

The new discovery grew out of Strano's research on carbon nanotubes -- hollow tubes made of a lattice of carbon atoms, which have unique electrical properties. In 2010, Strano demonstrated, for the first time, that carbon nanotubes can generate "thermopower waves." When a carbon nanotube is coated with a layer of fuel, moving pulses of heat, or thermopower waves, travel along the tube, creating an electrical current.

That work led Strano and his students to uncover a related feature of carbon nanotubes. They found that when part of a nanotube is coated with a Teflon-like polymer, it creates an asymmetry that makes it possible for electrons to flow from the coated to the uncoated part of the tube, generating an electrical current. Those electrons can be drawn out by submerging the particles in a solvent that is hungry for electrons.

To harness this special capability, the researchers created electricity-generating particles by grinding up carbon nanotubes and forming them into a sheet of paper-like material. One side of each sheet was coated with a Teflon-like polymer, and the researchers then cut out small particles, which can be any shape or size. For this study, they made particles that were 250 microns by 250 microns.

When these particles are submerged in an organic solvent such as acetonitrile, the solvent adheres to the uncoated surface of the particles and begins pulling electrons out of them.

"The solvent takes electrons away, and the system tries to equilibrate by moving electrons," Strano says. "There's no sophisticated battery chemistry inside. It's just a particle and you put it into solvent and it starts generating an electric field."

Particle power

The current version of the particles can generate about 0.7 volts of electricity per particle. In this study, the researchers also showed that they can form arrays of hundreds of particles in a small test tube. This "packed bed" reactor generates enough energy to power a chemical reaction called an alcohol oxidation, in which an alcohol is converted to an aldehyde or a ketone. Usually, this reaction is not performed using electrochemistry because it would require too much external current.

"Because the packed bed reactor is compact, it has more flexibility in terms of applications than a large electrochemical reactor," Zhang says. "The particles can be made very small, and they don't require any external wires in order to drive the electrochemical reaction."

In future work, Strano hopes to use this kind of energy generation to build polymers using only carbon dioxide as a starting material. In a related project, he has already created polymers that can regenerate themselves using carbon dioxide as a building material, in a process powered by solar energy. This work is inspired by carbon fixation, the set of chemical reactions that plants use to build sugars from carbon dioxide, using energy from the sun.

In the longer term, this approach could also be used to power micro- or nanoscale robots. Strano's lab has already begun building robots at that scale, which could one day be used as diagnostic or environmental sensors. The idea of being able to scavenge energy from the environment to power these kinds of robots is appealing, he says.

"It means you don't have to put the energy storage on board," he says. "What we like about this mechanism is that you can take the energy, at least in part, from the environment."

Credit: 
Massachusetts Institute of Technology

Controlling insulin production with a smartwatch

image: This is how the green light-regulated gene network works

Image: 
ETH Zurich

Many modern fitness trackers and smartwatches feature integrated LEDs. The green light emitted, whether continuous or pulsed, penetrates the skin and can be used to measure the wearer's heart rate during physical activity or while at rest.

These watches have become extremely popular. A team of ETH researchers now wants to capitalise on that popularity by using the LEDs to control genes and change the behaviour of cells through the skin. The team is led by Martin Fussenegger from the Department of Biosystems Science and Engineering in Basel. He explains the challenge to this undertaking: "No naturally occurring molecular system in human cells responds to green light, so we had to build something new."

Green light from the smartwatch activates the gene

The ETH professor and his colleagues ultimately developed a molecular switch that, once implanted, can be activated by the green light of a smartwatch.

The switch is linked to a gene network that the researchers introduced into human cells. As is customary, they used HEK 293 cells for the prototype. Depending on the configuration of this network - in other words, the genes it contains - it can produce insulin or other substances as soon as the cells are exposed to green light. Turning the light off inactivates the switch and halts the process.

Standard software

As they used the standard smartwatch software, there was no need for the researchers to develop dedicated programs. During their tests, they turned the green light on by starting the running app. "Off-the-shelf watches offer a universal solution to flip the molecular switch," Fussenegger says. New models emit light pulses, which are even better suited to keeping the gene network running.

The molecular switch is more complicated, however. A molecular complex was integrated into the membrane of the cells and linked to a connecting piece, similar to the coupling of a railway carriage. As soon as green light is emitted, the component that projects into the cell becomes detached and is transported to the cell nucleus, where it triggers an insulin-producing gene. When the green light is extinguished, the detached piece reconnects with its counterpart embedded in the membrane.

Controlling implants with wearables

The researchers tested their system on both pork rind and live mice by implanting the appropriate cells into them and strapping a smartwatch on like a rucksack. Opening the watch's running program, the researchers turned on the green light to activate the cascade.

"It's the first time that an implant of this kind has been operated using commercially available, smart electronic devices - known as wearables because they are worn directly on the skin," the ETH professor says. Most watches emit green light, a practical basis for a potential application as there is no need for users to purchase a special device.

According to Fussenegger, however, it seems unlikely that this technology will enter clinical practice for at least another ten years. The cells used in this prototype would have to be replaced by the user's own cells. Moreover, the system has to go through the clinical phases before it can be approved, meaning major regulatory hurdles. "To date, only very few cell therapies have been approved," Fussenegger says.

Credit: 
ETH Zurich

Simple blood test can accurately reveal underlying neurodegeneration

Levels of a protein called neurofilament light chain (NfL) in the blood can identify those who might have neurodegenerative diseases such as Down's syndrome dementia, motor neuron disease (ALS) and frontotemporal dementia, when clinical symptoms are not definitive.

Published in Nature Communications and part-funded by the NIHR Maudsley Biomedical Research Centre, the research determined a set of age-related cut-off levels of NfL which could inform its potential use in primary care settings through a simple blood test.

Joint Senior Author on the study, Dr Abdul Hye from the NIHR Maudsley Biomedical Research Centre at King's College London and South London and Maudsley NHS Foundation Trust said: 'For the first time we have shown across a number of disorders that a single biomarker can indicate the presence of underlying neurodegeneration with excellent accuracy. Though it is not specific for any one disorder, it could help in services such as memory clinics as a rapid screening tool to identify whether memory, thinking or psychiatric problems are a result of neurodegeneration.'

Neurodegenerative diseases are debilitating conditions that result in ongoing degeneration or death of nerve cells, leading to problems in thought, attention and memory. There are currently around 850,000 people with dementia in the UK which is projected to rise to 1.6 million by 2040. In order to help identify the onset of these debilitating diseases and put in place preventative measures as early as possible there has been a drive to develop reliable and accessible biomarkers that can recognise or rule out whether the processes in the brain that are responsible for neurodegeneration are occurring.

Current biomarkers used to identify neurodegenerative disorders are taken from the fluid that surrounds the brain and spinal column (cerebrospinal fluid, or CSF), which has to be extracted using an invasive procedure called lumbar puncture. Advances have been made towards using biomarkers from the blood, which would provide a more accessible and comfortable assessment. A central and irreversible feature of many neurodegenerative disorders is damage to the nerve fibre, which results in the release of neurofilament light chain (NfL). Using ultrasensitive tests, NfL can be detected in blood at low levels and is increased in a number of disorders, unlike phosphorylated tau, which is specific to Alzheimer's disease. This means NfL can be of use in the diagnostic process of many neurodegenerative diseases, most notably, in this study, Down's syndrome dementia, ALS and frontotemporal dementia.

Co-author Professor Ammar Al-Chalabi from King's College London, co-lead of the Psychosis and Neuropsychiatry research theme at the NIHR Maudsley BRC, said:
'For neurodegenerative diseases like Alzheimer's, Parkinson's or motor neuron disease, a blood test to allow early diagnosis and help us monitor disease progression and response to treatment would be very helpful. Neurofilament light chain is a promising biomarker that could speed diagnosis of neurodegenerative diseases and shorten clinical trials.'

The study examined 3138 samples from King's College London, Lund University and Alzheimer's Disease Neuroimaging Initiative, including people with no cognitive impairment, people with neurodegenerative disorders, people with Down syndrome and people with depression. The study showed that concentrations of NfL in the blood were higher across all neurodegenerative disorders compared to those with no cognitive problems, the highest being in people with Down's syndrome dementia, motor neuron disease and frontotemporal dementia.

The study also showed that although blood based NfL could not differentiate between all the disorders, it could provide insight into different groups within certain disorders. For example, in those with Parkinson's a high concentration of NfL indicated atypical Parkinson's disorder and in patients with Down syndrome, NfL levels differentiated between those with and without dementia.

Co-author Andre Strydom, Professor in Intellectual Disabilities at King's College London said: 'This study shows that neurofilament light chain levels were particularly increased in adults with Down syndrome who have a genetic predisposition for Alzheimer's disease. Furthermore, we showed that those individuals with a dementia diagnosis following onset of Alzheimer's disease had higher levels than those who did not. This suggests that the new marker could potentially be used to improve the diagnosis of Alzheimer's in people with Down syndrome, as well as to be used as biomarker to show whether treatments are effective or not. It is exciting that all that could be needed is a simple blood test, which is better tolerated in Down syndrome individuals than brain scans.'

The study assessed age-related thresholds, or cut-offs, of NfL concentrations that could represent the point at which an individual would receive a diagnosis. These age-related cut-off points were 90% accurate in highlighting neurodegeneration in those over 65 years of age, and 100% accurate in detecting motor neuron disease and Down syndrome dementia in the King's College London samples, with very similar results in the Lund samples. Importantly, NfL was able to distinguish individuals with depression from individuals with neurodegenerative disorders that commonly present with primary psychiatric symptoms at the onset of disease, such as frontotemporal dementia.
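
To illustrate how age-related cut-offs might be applied in practice, the sketch below checks a measured plasma NfL value against a per-age-band threshold. The threshold values and function names are hypothetical placeholders, not the cut-offs derived in the study:

    # Illustrative only: apply age-banded NfL cut-offs to flag possible
    # neurodegeneration. The numeric thresholds here are hypothetical
    # placeholders; the study's actual cut-offs are reported in the paper.
    HYPOTHETICAL_CUTOFFS_PG_ML = [
        (40, 12.0),   # up to 40 years  -> 12 pg/mL
        (65, 19.0),   # 41-65 years     -> 19 pg/mL
        (200, 30.0),  # over 65 years   -> 30 pg/mL
    ]

    def exceeds_cutoff(age_years: int, nfl_pg_ml: float) -> bool:
        """Return True if the NfL level is above the cut-off for the age band."""
        for upper_age, cutoff in HYPOTHETICAL_CUTOFFS_PG_ML:
            if age_years <= upper_age:
                return nfl_pg_ml > cutoff
        raise ValueError("age out of range")

    print(exceeds_cutoff(70, 42.5))  # True: above the placeholder cut-off for >65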

Joint-Senior author Professor Oskar Hansson from Lund University said 'Blood tests have great potential to improve the diagnosis of dementia both in specialised memory clinics and in primary care. Plasma NfL can be extremely useful in a number of clinical scenarios which can greatly inform doctors, as shown in this large study'.

Dr Hye said 'Blood-based NfL offers a scalable and widely accessible alternative to invasive and expensive tests for dementia. It is already used as a routine assessment in some European countries, such as Sweden or the Netherlands, and our age-related cut-offs can provide a benchmark and a quick, accessible test for clinicians to indicate neurodegeneration in people who are exhibiting problems in thinking and memory.'

Lead author Dr Nicholas Ashton from King's College London concludes 'We are entering an exciting period where blood tests like plasma NfL, in combination with other emerging blood biomarkers like phosphorylated tau (p-tau), are starting to give us a meaningful and non-invasive insight into brain disorders'.

Credit: 
King's College London

Experiment evaluates the effect of human decisions on climate reconstructions

The first double-blind experiment analysing the role of human decision-making in climate reconstructions has found that it can lead to substantially different results.

The experiment, designed and run by researchers from the University of Cambridge, had multiple research groups from around the world use the same raw tree-ring data to reconstruct temperature changes over the past 2,000 years.

While each of the reconstructions clearly showed that recent warming due to anthropogenic climate change is unprecedented in the past two thousand years, there were notable differences in variance, amplitude and sensitivity, which can be attributed to decisions made by the researchers who built the individual reconstructions.

Professor Ulf Büntgen from the University of Cambridge, who led the research, said that the results are "important for transparency and truth - we believe in our data, and we're being open about the decisions that any climate scientist has to make when building a reconstruction or model."

To improve the reliability of climate reconstructions, the researchers suggest that teams make multiple reconstructions at once so that they can be seen as an ensemble. The results are reported in the journal Nature Communications.

Information from tree rings is the main way that researchers reconstruct past climate conditions at annual resolutions: as distinctive as a fingerprint, the rings formed in trees outside the tropics are annually precise growth layers. Each ring can tell us something about what conditions were like in a particular growing season, and by combining data from many trees of different ages, scientists are able to reconstruct past climate conditions going back hundreds and even thousands of years.

Reconstructions of past climate conditions are useful as they can place current climate conditions or future projections in the context of past natural variability. The challenge with a climate reconstruction is that - absent a time machine - there is no way to confirm it is correct.

"While the information contained in tree rings remains constant, humans are the variables: they may use different techniques or choose a different subset of data to build their reconstruction," said Büntgen, who is based at Cambridge's Department of Geography, and is also affiliated with the CzechGlobe Centre in Brno, Czech Republic. "With any reconstruction, there's a question of uncertainty ranges: how certain you are about a certain result. A lot of work has gone into trying to quantify uncertainties in a statistical way, but what hasn't been studied is the role of decision-making.

"It's not the case that there is one single truth - every decision we make is subjective to a greater or lesser extent. Scientists aren't robots, and we don't want them to be, but it's important to learn where the decisions are made and how they affect the outcome."

Büntgen and his colleagues devised an experiment to test how decision-making affects climate reconstructions. They sent raw tree-ring data to 15 research groups around the world and asked them to use it to develop the best possible large-scale climate reconstruction for summer temperatures in the Northern Hemisphere over the past 2,000 years.

"Everything else was up to them - it may sound trivial, but this sort of experiment had never been done before," said Büntgen.

Each of the groups came up with a different reconstruction, based on the decisions they made along the way: the data they chose or the techniques they used. For example, one group may have used instrumental target data from June, July and August, while another may have used only the mean of July and August.

The main differences between the reconstructions were in amplitude: exactly how warm the Medieval Warm Period was, or how much cooler a particular summer was after a large volcanic eruption.

Büntgen stresses that each of the reconstructions showed the same overall trends: there were periods of warming in the 3rd century, as well as between the 10th and 12th centuries; they all showed abrupt summer cooling following clusters of large volcanic eruptions in the 6th, 15th and 19th centuries; and they all showed that the recent warming of the 20th and 21st centuries is unprecedented in the past 2,000 years.

"You think if you have the start with the same data, you will end up with the same result, but climate reconstruction doesn't work like that," said Büntgen. "All the reconstructions point in the same direction, and none of the results oppose one another, but there are differences, which must be attributed to decision-making."

So, how will we know whether to trust a particular climate reconstruction in future? In a time where experts are routinely challenged, or dismissed entirely, how can we be sure of what is true? One answer may be to note each point where a decision is made, consider the various options, and produce multiple reconstructions. This would of course mean more work for climate scientists, but it could be a valuable check to acknowledge how decisions affect outcomes.

Another way to make climate reconstructions more robust is for groups to collaborate and view all their reconstructions together, as an ensemble. "In almost any scientific field, you can point to a single study or result that tells you what you want to hear," he said. "But when you look at the body of scientific evidence, with all its nuances and uncertainties, you get a clearer overall picture."
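
The ensemble approach the authors recommend can be pictured with a minimal sketch: several independent reconstructions of the same period are stacked and summarized by their mean and spread. The data and variable names below are synthetic illustrations, not the study's reconstructions:

    # Generic ensemble view of several independent temperature reconstructions.
    # Each row is one group's reconstruction of summer temperature anomalies
    # (synthetic numbers for illustration only).
    import numpy as np

    years = np.arange(1, 2001)                    # 2,000 years, annual resolution
    rng = np.random.default_rng(0)
    # 15 hypothetical reconstructions: a shared signal plus group-specific
    # "decisions", represented here as noise.
    shared_signal = 0.3 * np.sin(years / 300.0)
    reconstructions = shared_signal + 0.2 * rng.standard_normal((15, years.size))

    ensemble_mean = reconstructions.mean(axis=0)   # central estimate
    ensemble_spread = reconstructions.std(axis=0)  # disagreement driven by decisions

    print(ensemble_mean[:3], ensemble_spread[:3])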

Credit: 
University of Cambridge

Researchers identify a molecule critical to functional brain rejuvenation

image: In young adult mice (left), TET1 is active in oligodendroglial cells especially after injury and this leads to new myelin formation and healthy brain function. In old mice (right), the age-related decline of TET1 levels impairs the ability of oligodendroglial cells to form functional new myelin. The authors are currently investigating whether increasing TET1 levels in older mice could rejuvenate the oligodendroglial cells and restore their regenerative functions.

Image: 
Sarah Moyon

NEW YORK, June 7, 2021--Recent studies suggest that new brain cells are being formed every day in response to injury, physical exercise, and mental stimulation. Glial cells, and in particular the ones called oligodendrocyte progenitors, are highly responsive to external signals and injuries. They can detect changes in the nervous system and form new myelin, which wraps around nerves and provides metabolic support and accurate transmission of electrical signals. As we age, however, less myelin is formed in response to external signals, and this progressive decline has been linked to the age-related cognitive and motor deficits detected in older people in the general population. Impaired myelin formation also has been reported in older individuals with neurodegenerative diseases such as Multiple Sclerosis or Alzheimer's and identified as one of the causes of their progressive clinical deterioration.

A new study from the Neuroscience Initiative team at the Advanced Science Research Center at The Graduate Center, CUNY (CUNY ASRC) has identified a molecule called ten-eleven-translocation 1 (TET1) as a necessary component of myelin repair. The research, published today in Nature Communications, shows that TET1 modifies the DNA in specific glial cells in adult brains so they can form new myelin in response to injury.

"We designed experiments to identify molecules that could affect brain rejuvenation," said Sarah Moyon, Ph.D., a research assistant professor with the CUNY ASRC Neuroscience Initiative and the study's lead author. "We found that TET1 levels progressively decline in older mice, and with that, DNA can no longer be properly modified to guarantee the formation of functional myelin."

Combining whole-genome sequencing and bioinformatics, the authors showed that the DNA modifications induced by TET1 in young adult mice were essential to promote a healthy dialogue among cells in the central nervous system and to guarantee proper function. The authors also demonstrated that young adult mice with a genetic modification of TET1 in the myelin-forming glial cells were not capable of producing functional myelin, and therefore behaved like older mice.

"This newly identified age-related decline in TET1 may account for the inability of older individuals to form new myelin," said Patrizia Casaccia, founding director of the CUNY ASRC Neuroscience Initiative, a professor of Biology and Biochemistry at The Graduate Center, CUNY, and the study's primary investigator. "I believe that studying the effect of aging in glial cells in normal conditions and in individuals with neurodegenerative diseases will ultimately help us design better therapeutic strategies to slow the progression of devastating diseases like multiple sclerosis and Alzheimer's."

The discovery also could have important implications for molecular rejuvenation of aging brains in healthy individuals, said the researchers. Future studies aimed at increasing TET1 levels in older mice are underway to define whether the molecule could rescue new myelin formation and favor proper neuro-glial communication. The research team's long-term goal is to promote recovery of cognitive and motor functions in older people and in patients with neurodegenerative diseases.

Credit: 
Advanced Science Research Center, GC/CUNY

A few common bacteria account for majority of carbon use in soil

image: Bacterial "miners" shown in relief working to process soil nutrients, some more efficiently than others. Bradyrhizobium, one of the three top nutrient processors identified in the study, is shown here consolidating its control of carbon from a glucose addition, processing the nutrients with industrial efficiency (in the form of a bucket wheel excavator).

Image: 
Victor O. Leshyk, Center for Ecosystem Science and Society, Northern Arizona University

Just a few bacterial taxa found in ecosystems across the planet are responsible for more than half of carbon cycling in soils. These new findings, made by researchers at Northern Arizona University and published in Nature Communications this week, suggest that despite the diversity of microbial taxa found in wild soils gathered from four different ecosystems, only three to six groups of bacteria common among these ecosystems were responsible for most of the carbon use that occurred.

Soil contains twice as much carbon as all vegetation on earth, and so predicting how carbon is stored in soil and released as CO2 is a critical calculation in understanding future climate dynamics. The research team, which included scientists from Pacific Northwest National Laboratory, Lawrence Livermore National Laboratory, University of Massachusetts-Amherst, and West Virginia University, is asking how such key bacterial processes should be accounted for in earth system and climate models.

"We found that carbon cycling is really controlled by a few groups of common bacteria," said Bram Stone, a postdoctoral researcher at the Center for Ecosystem Science and Society at Northern Arizona University who led the study. "The sequencing era has delivered incredible insight into how diverse the microbial world is," said Stone, who is now at Pacific Northwest National Laboratory. "But our data suggest that when it comes to important functions like soil respiration, there might be a lot of redundancy built into the soil community. It's a few common, abundant actors who are making the most difference."

Those bacteria—Bradyrhizobium, the Acidobacteria RB41, and Streptomyces—were better than their rarer counterparts at using both existing soil carbon and nutrients added to the soil. When carbon and nitrogen were added, these already dominant lineages of bacteria consolidated their control of nutrients, gobbling up more and growing faster relative to other taxa present. Though the researchers identified thousands of unique organisms, and hundreds of distinct genera, or collections of species (for example, the genus Canis includes wolves, coyotes, and dogs), only six were needed to account for more than 50 percent of carbon use, and only three were responsible for more than half the carbon use in the nutrient-boosted soil.

Using water labeled with special isotopes of oxygen, Stone and his team sequenced DNA found in soil samples, following the oxygen isotopes to see which taxa incorporated it into their DNA, a signal that indicates growth. This technique, called quantitative stable isotope probing (qSIP), allows scientists to track which bacteria are growing in wild soil at the level of individual taxa. Then the team accounted for the abundance of each taxon and modeled how efficiently bacteria consume soil carbon. The model that included taxonomic specificity, genome size, and growth predicted the measured CO2 release much more accurately than models that looked only at how abundant each bacterial group was. It also showed that just a few taxa produced most of the CO2 that the researchers observed.
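
The modelling logic can be pictured as a weighted sum over taxa, with each taxon's respired CO2 scaling with its abundance, its qSIP-derived growth, and how efficiently it uses carbon. The sketch below is a simplified stand-in with invented numbers and a generic carbon-use-efficiency term; it is not the published model, which also incorporated genome size:

    # Simplified stand-in for the idea of taxon-resolved carbon flux: each
    # taxon's CO2 contribution scales with its abundance, its growth rate, and
    # a carbon-use-efficiency (CUE) term. All numbers are illustrative.
    taxa = {
        # name: (relative_abundance, growth_rate, carbon_use_efficiency)
        "Bradyrhizobium": (0.18, 0.9, 0.45),
        "Acidobacteria RB41": (0.12, 0.8, 0.40),
        "Streptomyces": (0.10, 0.7, 0.50),
        "rare taxa (pooled)": (0.60, 0.1, 0.30),
    }

    def co2_share(params):
        abundance, growth, cue = params
        # Carbon respired ~ carbon taken up * (1 - CUE); uptake ~ abundance * growth
        return abundance * growth * (1.0 - cue)

    total = sum(co2_share(p) for p in taxa.values())
    for name, params in taxa.items():
        print(f"{name:>20}: {co2_share(params) / total:.0%} of modelled CO2")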

"Better understanding how individual organisms contribute to carbon cycling has important implications for managing soil fertility and reducing uncertainty in climate change projections," said Kirsten Hofmockel, Microbiome Science Team Lead at Pacific Northwest National Laboratory and a co-author of the study. "This research teases apart taxonomic and functional diversity of soil microorganisms and asks us to consider biodiversity in a new way."

"The microbial demographic data that this technique reveals lets us ask more nuanced questions," said Stone. "Where we used to characterize a microbial community by its dominant function, the way a whole state is often reported to have voted 'for' or 'against' a ballot proposition, now, with qSIP, we can see who is driving that larger pattern—the 'election results,' if you will—at the level of individual microbial neighborhoods, city blocks.

"In this way, we can start to identify which soil organisms are performing important functions, like carbon sequestration, and study those more closely."

Credit: 
Northern Arizona University

High caffeine consumption may be associated with increased risk of blinding eye disease

Consuming large amounts of daily caffeine may increase the risk of glaucoma more than three-fold for those with a genetic predisposition to higher eye pressure, according to an international, multi-center study. The research, led by the Icahn School of Medicine at Mount Sinai, is the first to demonstrate a dietary-genetic interaction in glaucoma. The study results, published in the June print issue of Ophthalmology, may suggest that patients with a strong family history of glaucoma should cut down on caffeine intake.

The study is important because glaucoma is the leading cause of blindness in the United States. It looks at the impact of caffeine intake on glaucoma and on intraocular pressure (IOP), the pressure inside the eye. Elevated IOP is an integral risk factor for glaucoma, although other factors also contribute to this condition. With glaucoma, patients typically experience few or no symptoms until the disease progresses and they have vision loss.

"We previously published work suggesting that high caffeine intake increased the risk of the high-tension open angle glaucoma among people with a family history of disease. In this study we show that an adverse relation between high caffeine intake and glaucoma was evident only among those with the highest genetic risk score for elevated eye pressure," says lead/corresponding author Louis R. Pasquale, MD, FARVO, Deputy Chair for Ophthalmology Research for the Mount Sinai Health System.

A team of researchers used the UK Biobank, a large-scale population-based biomedical database supported by various health and governmental agencies. They analyzed records of more than 120,000 participants between 2006 and 2010. Participants were between 39 and 73 years old and provided their health records along with DNA samples collected to generate data. They answered repeated dietary questionnaires focusing on how many caffeinated beverages they drank daily, how much caffeine-containing food they ate, the specific types, and the portion sizes. They also answered questions about their vision, including whether they had glaucoma or a family history of glaucoma. Three years into the study, they had their IOP checked and eye measurements taken.

Researchers first looked at the relationship between caffeine intake, IOP and self-reported glaucoma by running multivariable analyses. Then they assessed whether accounting for genetic data modified these relationships. They assigned each subject an IOP genetic risk score and performed interaction analyses.

The investigators found that high caffeine intake was not associated with increased risk of higher IOP or glaucoma overall; however, among participants with the strongest genetic predisposition to elevated IOP - the top 25 percent - greater caffeine consumption was associated with higher IOP and higher glaucoma prevalence. More specifically, those who consumed the highest amount of daily caffeine - more than 480 milligrams, roughly four cups of coffee - had a 0.35 mmHg higher IOP. Additionally, those in the highest genetic risk score category who consumed more than 321 milligrams of daily caffeine - roughly three cups of coffee - had a 3.9-fold higher glaucoma prevalence compared to those who drank no or minimal caffeine and were in the lowest genetic risk score group.
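
Analytically, a dietary-genetic interaction of this kind is usually tested with an interaction term in a regression model. The sketch below shows the general form on simulated data with hypothetical variable names; it is not the UK Biobank analysis itself:

    # Generic interaction test (simulated data, hypothetical variable names):
    # does the association between caffeine intake and glaucoma depend on an
    # IOP genetic risk score? Modeled here as a logistic regression with a
    # caffeine-by-risk-score interaction term.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({
        "caffeine_mg": rng.gamma(shape=2.0, scale=120.0, size=n),  # daily intake
        "iop_grs": rng.standard_normal(n),                         # genetic risk score
    })
    # Simulate an outcome in which caffeine matters mainly at high genetic risk
    logit = -3.0 + 0.3 * df.iop_grs + 0.002 * df.caffeine_mg * (df.iop_grs > 1)
    df["glaucoma"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = smf.logit("glaucoma ~ caffeine_mg * iop_grs", data=df).fit(disp=False)
    print(model.summary().tables[1])  # inspect the caffeine_mg:iop_grs term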

"Glaucoma patients often ask if they can help to protect their sight through lifestyle changes, however this has been a relatively understudied area until now. This study suggested that those with the highest genetic risk for glaucoma may benefit from moderating their caffeine intake. It should be noted that the link between caffeine and glaucoma risk was only seen with a large amount of caffeine and in those with the highest genetic risk," says co-author Anthony Khawaja, MD, PhD, Associate Professor of Ophthalmology University College London (UCL) Institute of Ophthalmology and ophthalmic surgeon at Moorfields Eye Hospital. "The UK Biobank study is helping us to learn more than ever before about how our genes affect our glaucoma risk and the role that our behaviors and environment could play. We look forward to continuing to expand our knowledge in this area."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Bioinspired acid-catalyzed C2 prenylation of indole derivatives

image: Biomimetic catalysis is an emerging concept that emulates key features of enzymatic processes. Prenylation is a ubiquitous process found in almost all living organisms. Inspired by the enzymatic mechanism, researchers developed a selective C2 prenylation of indoles via chemical catalysis, which can be applied to the late-stage diversification of tryptophan-based peptides and the concise synthesis of tryprostatin B.

Image: 
Chinese Journal of Catalysis

Terpenoids are omnipresent in almost all living organisms. Prenylated indoles are prominent representatives that usually display potent medicinal properties (e.g. tryprostatin B). Therefore, significant efforts have been devoted to indole prenylation over the past decades. The known protocols often require a multi-step procedure and rely on the use of stoichiometric promoters. From the viewpoint of step- and atom-economy, developing a direct catalytic C2 prenylation of indoles is highly desirable yet challenging, because the nucleophilicity of the C2 site is weaker than that of the other two positions (N, C3).

In biosynthesis, enzymatic indole prenylation proceeds through a Friedel-Crafts SN1-type alkylation with a prenyl cation-pyrophosphate ion (PPi) derived from dimethylallyl pyrophosphate (DMAPP). Inspired by this mechanism, recently, a team led by Prof. Qing-An CHEN from Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences (CAS) developed a regioselective C2 prenylation of indoles enabled by Lewis acid catalysis. By employing cheap 2-methyl-3-buten-2-ol (tert-prenol) as precursor and Lewis acid AlCl3 as catalyst, various tryptophol and tryptamine derivatives can undergo C2 prenylation with high selectivity. Notably, this practical strategy can be applied to the late-stage diversification of tryptophan-based peptides. These results were published in Chinese Journal of Catalysis.

Prof. CHEN stated: "Our work represents an old reaction for new use. For anyone engaged in chemistry, it is difficult to imagine that tryptophol and tryptophan-based peptides can undergo Friedel-Crafts reaction with high selectivity, because of the presence of diverse free NH and OH. More importantly, this strategy can greatly shorten the synthesis of indole alkaloid tryprostatin B."

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Popularity runs in families

image: Rice University bioscientists Eric Wice (left) and Julia Saltz found a genetic basis for social popularity by studying the positions fruit flies occupy within social networks. They used cameras to record the social interactions of 98 genetically identical groups of fruit flies living under different conditions and found the same clones occupied the same positions of social popularity in each case, even when they inhabited enclosures with different living conditions.

Image: 
Photo by Jeff Fitlow/Rice University

HOUSTON - (June 7, 2021) - If identical versions of 20 people lived out their lives in dozens of different worlds, would the same people be popular in each world?

If you substitute "fruit flies" for "people" in that question, you have a fair description of a Rice University study showing that the evolution of social structures and the positions of individuals within those structures are based partly on genetics.

Cloned fruit flies played a starring role in the study that researchers jokingly likened to "The Truman Show," with video cameras observing how the flies behaved in a controlled environment.

In the study published online this week in Nature Communications, Rice bioscientists Eric Wice and Julia Saltz measured the social interactions between individual flies in 98 genetically identical groups. Each group contained 20 clones. The 20 differed from one another genetically, but the same clones were included in each of the 98 groups, which lived in separate enclosures under different environmental conditions.

Wice and Saltz found the same clones occupied the same social positions in each enclosed "world," regardless of variation in living conditions.

"Social structure varies tremendously across the animal world, and the big question we're interested in is 'How did this variation evolve?'" said Saltz, associate professor of biosciences at Rice. "For evolution to occur, social structure must be heritable, and we showed that it is."

Wice, a Ph.D. student in Saltz's lab, said even though genetic variation explained just part of the variation in flies' social network positions, the heritability estimates that he and Saltz discovered are enough to fuel evolutionary change.

"For us to know whether or not the structure of social groups and the structure of networks will evolve over time, we have to know the genetic basis of how individuals are nested within their social networks and we also have to know how natural selection acts on social group structure," Wice said. "We studied both of those things simultaneously in this experiment, which hadn't been done before."

Wice said previous research had shown that the structure of social networks can evolve by natural selection, but few genetic components of social network structure had been described.

"We kind of integrated those things simultaneously to see how the structure of social groups will evolve and how it could potentially respond to selection," Wice said.

Wice said the study found the amount of variation in social position that was explained by genetics "was on the order of like 2.4% to 16.6%." And given that virtually all living creatures exhibit some form of social organization, the researchers' findings could apply to species as varied as humans and bacteria, Wice said.

Some studies have explored whether human popularity might also be partly explained by genetics, Saltz said. But she added it is essentially impossible to design an empirical test.

"The best way to do that would be to get identical twins, and 'Truman Show' them," she said, referring to the 1998 film in which the title character unwittingly lives in a controlled environment as part of a reality TV show. "You'd put one twin in one Truman Show bubble and the other twin in the other Truman Show bubble, and then see if they end up having the same identical-twin friends."

In essence, that describes the experiment she and Wice conducted with the flies.

"Any two genotypes in our sample are as closely related genetically as two randomly chosen people," Wice said. "But we can essentially make photocopies of each one of the flies and test, basically, the identical twins over and over."

And because Wice gathered data on social interactions by videotaping flies in each enclosure, running the experiment was a lot like producing 98 simultaneous Truman Shows, but "with 20 Trumans in each show," Wice said.

Positions within social networks were measured in five ways based on thousands of interactions between flies that were tallied and cataloged by computer software that "watched" dozens of hours of video of the 98 groups. Though the word "popularity" doesn't appear in the paper, Saltz said it's apt because some of the key parameters measured were equivalent to "how many friends you have and whether your friends are friends with each other or not."
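The release does not name the five measures used in the paper. Purely as an illustration, the sketch below computes several standard network-position metrics from a hypothetical list of tallied pairwise interactions using the networkx library, including the "how many friends you have" and "are your friends friends with each other" quantities mentioned above; these are common stand-ins, not the authors' code.

```python
# Illustrative only: standard network-position metrics from tallied interactions.
import networkx as nx

# Hypothetical tallied interactions: (fly_a, fly_b, number_of_interactions).
# In the study, such counts came from software that processed the video footage.
interactions = [
    ("fly1", "fly2", 14), ("fly1", "fly3", 3),
    ("fly2", "fly3", 7),  ("fly3", "fly4", 1),
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

metrics = {
    "degree": dict(G.degree()),                    # how many "friends" a fly has
    "strength": dict(G.degree(weight="weight")),   # total number of interactions
    "clustering": nx.clustering(G),                # are a fly's friends friends with each other?
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}
for name, values in metrics.items():
    print(name, values)
```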

Remarkably, Wice and Saltz found that social position within networks remained the same, even when they varied the environment by changing the quality of food in the enclosures. In some, food contained more protein and in others more carbohydrates. In other instances, flies had fewer available calories. In all, there were five types of food, and roughly 20 groups of flies living on each type.

"Our findings show that we expect social structure to evolve differently in different nutritional environments," Wice said. "That's significant, but further research is needed to determine what kind of changes arise from nutritional differences."

Ultimately, Wice and Saltz would like to know more about the ways that nutrition, aggressive behavior and other factors influence the evolution of social structure.

"What creates social structure?" Saltz said. "Group structures are inherently emergent properties of many different individuals, and there must be some underlying principles that shape the evolution of those structures."

The fact that an individual's position within a social network depends on the behavior of other individuals complicates the study of how social structure evolves, Wice said, noting that some of the tools used in their analysis did not exist when he began his Ph.D. studies five years ago.

"It's not independent data, and that violates a lot of statistical tests and assumptions," Wice said. "The tools are improving all the time, and it wouldn't surprise me if new tools came out in the next five years that will allow us and learn even more from the data we're collecting today."

This research was supported by the National Science Foundation (1856577), a Rosemary Grant from the Society for the Study of Evolution and the Houston Livestock Show and Rodeo.

Credit: 
Rice University

COVID-19: Long-term consequences for the kidneys can be expected

The kidneys are a target organ of COVID-19 and are affected very early in the course of the disease. However, this is precisely where there is strong prognostic potential: as early as last spring, COVID-19-associated nephritis was identified as an early warning signal for severe courses of the infectious disease, and studies to that effect were published [1]. The research group led by Professor Oliver Gross, Department of Nephrology and Rheumatology at Göttingen University Medical Center (UMG), screened 223 patients and included 145 of them as a predictive cohort; study endpoints were ICU admission or mortality. Early urinary changes, easily detectable with test strips, indicated a more severe COVID-19 course, and when urine and serum markers were combined into a predictive system, outcomes could be predicted. "This means that kidney values are a seismograph for the course of COVID-19 disease," explained Prof. Gross, the head of the study, at the Opening Press Conference of the 2021 ERA-EDTA Congress.
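As a conceptual illustration of what combining urine and serum markers into a single predictive system can look like in practice, the sketch below fits a simple logistic regression on simulated marker data. The feature names, data and model choice are assumptions made for illustration only and do not represent the published model.

```python
# Conceptual sketch only: combining urine and serum markers into one risk score.
# Feature names and data are hypothetical; the published model may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 145  # size of the predictive cohort mentioned above

# Hypothetical, standardized marker matrix: dipstick albuminuria, hematuria,
# leukocyturia (urine) and e.g. antithrombin III (serum). Purely simulated.
X = rng.normal(size=(n, 4))
# Hypothetical outcome: 1 = ICU admission or death, 0 = milder course.
y = (X @ np.array([0.9, 0.6, 0.4, -0.7]) + rng.normal(0.0, 1.0, n) > 0).astype(int)

# Combine the markers into a single risk score with logistic regression.
model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("example combined risk scores:", np.round(risk[:5], 2))
```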

The S3 guideline [2] provides general recommendations for the inpatient therapy of patients with COVID-19 and states, inter alia, that "in the case of proven 'SARS-CoV-2 infection' and hospitalization, urinalysis (repeated where necessary) including determination of albuminuria, hematuria, and leukocyturia should be performed."

Kidney involvement is not just a predictive marker for the course of the disease, however; it is also a very important risk factor for mortality. Several studies [3, 4] have shown that in patients with COVID-19, kidney involvement, i.e., albuminuria (and/or hematuria), often occurs early in the course of the disease. A Chinese study [5] concluded that kidney involvement in COVID-19 patients dramatically worsened the outcome of the novel viral disease and increased mortality roughly tenfold (1.25% of patients without kidney involvement died vs. 11.2% of patients with kidney involvement). Until now, the occurrence of acute kidney injury (AKI) was the only known independent predictor of mortality [2], but it seems that early signs of kidney involvement, such as proteinuria, hypoproteinemia and antithrombin III deficiency, also have predictive importance [1]. This raises the question of whether specific long-term impacts on the kidneys can be expected after COVID-19, and if so, which ones.

The data on acute kidney injury (AKI) are relatively clear-cut: in AKI, kidney function recovers within seven days, as distinguished from AKD ('acute kidney disease'), in which recovery of kidney function takes longer, namely up to 90 days. However, there are also many patients in whom kidney function does not recover at all but gradually deteriorates over the further course of the disease, i.e., who develop chronic kidney disease. Kellum et al. [6] showed that kidney function did not recover in a total of 41.2% of patients with stage 2 or stage 3 AKI. Relapses and long-term restriction of kidney function occurred in as many as 14.7% of those who initially recovered. The same working group also showed that these patients had a significantly worse outcome (mortality, need for dialysis) one year after the AKI episode. Similar warning signals about chronic kidney failure after COVID-19 are now reaching us from China [7]: "So we can say that just over half of the patients who acquire an AKI will subsequently develop chronic kidney disease. This rate can also be expected after a COVID-19-associated AKI. It's important to bring those affected into nephrological aftercare so that the loss of kidney function is slowed down or, if possible, stopped by adequate therapy," explains Prof. Gross.

But what about patients who have not experienced acute kidney failure, but 'only' some initial renal dysfunction? Here too, the expert advises caution and aftercare: "There are ongoing studies with results still pending, but molecular SARS-CoV-2-associated tissue changes have already been detected in various organs in which viral replication has been detected." In that respect, long-term damage to the affected organs and post-COVID entities can be expected.

The most important conclusion drawn by the expert is that, "The kidney must be at the center of COVID-19 aftercare, in addition to the lungs, the heart and the nervous system. This is all the more important because early treatment can halt the loss of kidney function, and in recent years, especially, some new, effective therapies such as SGLT-2 inhibitors have been launched on the market to meet that need. Nowadays, the need for dialysis can often be delayed for years, even decades, if treatment is rigorously provided from the outset. Given that kidney disease does not produce symptoms until very late, we would like to make people who have had COVID-19 disease aware of the possibility of long-term consequences on the kidneys. It's important that general practitioners check their patients' kidney values (GFR, albuminuria) on a regular basis - similarly to other groups at risk of kidney disease, such as patients with diabetes mellitus and high blood pressure."

Credit: 
ERA – European Renal Association

COVID-19 as systemic disease: What does that mean for kidneys?

It was clear at a relatively early stage of the pandemic that SARS-CoV-2 causes a wide range of symptoms; in addition to typical respiratory symptoms, patients also had neurological symptoms (starting with anosmia), gastrointestinal symptoms, elevated liver values, and renal, urinary or hematological changes, for example. The fact that such findings occurred not only in severely ill patients with general organ dysfunction suggested that the virus may cause disorders in various organs directly, i.e. that it causes a multi-system disease.

In spring 2020, at the very beginning of the pandemic, the authorities in Hamburg ordered autopsies be performed on all patients who had died with COVID-19. This resulted in one of the world's largest autopsy databases in which data on all organ systems were gathered. The autopsies carried out by forensic pathologists in Hamburg have provided the basis for many organ-related COVID-19 research projects.

One of last year's most frequently cited nephrologic studies, led by Prof. Tobias Huber, UKE Hamburg, showed [1] that the viral load (measured as the number of copies per cell) in the deceased was highest in the respiratory tract and second highest in the kidneys, followed by lower levels in heart and liver, brain and blood. Old age and the number of comorbidities, above all, were associated with multi-organ and renal tropism.

All in all, the Hamburg studies indicate that the diversity of symptoms in SARS-CoV-2 infections might also be related to the wide range of organs infected by the virus.

Association of SARS-CoV-2 organotropism with acute kidney damage

Compared to other intensive care patients with severe infections or sepsis, COVID-19 patients have a significantly higher rate of acute kidney injury (AKI) (56.9% versus 37.2%) [10]. Renal replacement therapy is required by 4.9%, compared to 1.6%. A study [3] of 5,216 US veterans with COVID-19 showed an average AKI incidence of 32%, which means that one in three developed acute kidney injury. 12% needed kidney replacement therapy (dialysis), and nearly half (47%) of the patients had not recovered renal function by discharge. "These are dramatic figures", explained Professor Tobias Huber, UKE Hamburg, at the Opening Press Conference of the 2021 ERA-EDTA Congress. "We know from major AKI studies that some patients whose kidney function does not recover after AKI subsequently transition to chronic kidney disease."

AKI was associated with the risk of mechanical ventilation (OR 6.46), the risk of mortality (OR 6.71) and prolonged in-patient treatment (OR 5.56). Predictors of acute kidney injury were age, obesity, diabetes mellitus, hypertension, reduced kidney function (lower filtration rate/eGFR), male gender and African American descent.

Another major autopsy study conducted by Prof. Huber's team in Hamburg [4] explored the hypothesis that renal tropism may be independently correlated with outcome. In 63 autopsies, with a comorbidity burden similar to that in other studies, the virus was directly detected in the kidneys of 60% of the cases. Renal detection also correlated with age, number of coexisting diseases and male gender. The time interval between COVID-19 diagnosis and date of death was significantly shorter when the virus was detectable in the kidneys (approx. 14 versus 21 days). Renal tropism was also associated with the incidence of acute kidney injury: of a total of 40 patients, seven had no AKI (three of the seven having renal viral infection), compared to 33 patients with AKI (23 of the 33 having virus detected in the kidney). The virus was isolated from tubule cells of autopsied kidney tissue, and subsequent cell infection experiments showed that the viruses were active and capable of replication, increasing a thousand-fold in 48 hours.

Together, these studies identify SARS-CoV-2 as a multi-tropic virus with an affinity for the kidney, which may explain why kidney injury occurs so frequently in COVID-19 patients.

Credit: 
ERA – European Renal Association

Targeted COVID-19 therapy: What can we learn from autoimmune kidney diseases?

Various viruses and bacteria have long been known to trigger autoimmune diseases in predisposed individuals. This phenomenon also seems to play a major role in SARS-CoV-2 infection, especially in severe courses. The body's own immune cells are activated and form autoantibodies that attack the body's own healthy cell structures (proteins, autoantigens); deposits of immune complexes can then trigger severe inflammatory processes and cell destruction in the body.

Some nephrological diseases are likewise of autoimmune etiology, one example being systemic lupus erythematosus (SLE), a chronic, mostly relapsing-remitting inflammatory disease with life-threatening courses in some cases. Manifestations occur on the skin and in organs such as the lungs, heart, CNS, muscles/joints - and the kidneys. Lupus nephritis (kidney inflammation) occurs in almost three out of four cases and determines the outcome of SLE. Many SLE patients are therefore treated or co-managed by nephrologists, with the aim of avoiding chronic kidney disease and the need for chronic dialysis treatment. The causes of SLE are multifactorial (e.g. genetic predisposition, hormonal and environmental triggers). Antiphospholipid antibodies (aPLs, i.e., autoantibodies against phospholipid-binding proteins) are often found in SLE, but also in other autoimmune diseases of the vascular system that present variable clinical pictures. aPLs can interfere with the clotting system, so there is usually a tendency to thrombosis, and severe complications in pregnancy are also possible in affected women.

More and more similarities between severe COVID-19 and SLE or other autoimmune diseases have since been described. An increase in autoantibody-forming lymphocytes (B cells) and their activation is also observed in critically ill COVID-19 patients, as in acute SLE relapses [2]. aPLs have also been detected in COVID-19 patients, and aPL concentrations correlated with the severity of the disease [2]. There are also some interesting clinical parallels: a pioneering study from Germany [3] shows that early kidney involvement (proteinuria, hematuria) can determine outcomes in COVID-19 patients - as is the case in SLE.

A new study on the subject has now been published by the working group led by Prof. Wolfram Ruf, Mainz, in the renowned journal Science [1]. The study showed for the first time that antiphospholipid antibodies bind to the EPCR-LBPA complex. This molecular complex sits at the biochemical interface between the innate immune (pathogen defense) system and the clotting system. It is a lipid-protein receptor complex consisting of LBPA (lysobisphosphatidic acid, derived from endosomes) and the endothelial protein C receptor (EPCR), which is located on the interior surface (endothelium) of the blood vessels. In this complex, EPCR presents LBPA as a pathogenic cell surface antigen. aPL binding to the EPCR-LBPA complex then activates both the endosomal inflammatory pathway and the coagulation cascade. This leads to interferon production in immune cells and to a marked expansion of B cells, which then produce further autoantibodies in a self-reinforcing autoimmune signaling loop. With regard to therapy, the study also showed that, in a lupus mouse model, specific pharmacological blockade of this EPCR-LBPA signaling inhibited severe aPL-related damage.

"Even if the pathogenic mechanism and significance of autoantibody formation in COVID-19 are not yet fully understood, it is possible that the autoimmune response, once triggered, could be the real cause of many severe COVID-19 courses", commented Prof. Dr. Julia Weinmann-Menke, Mainz, the DGfN Press Officer, at the Opening Press Conference of the ERA-EDTA Congress. She and her colleagues at the universities of Mainz, Greifswald (Prof. Dr. Jens Fielitz) and Berlin are therefore planning a cooperative clinical research project to further investigate this autoimmune disease and to find new approaches for immunological COVID-19 therapies. "Our project is based on the hypothesis that an infection-associated autoimmune response by autoantibodies is implicated in many cases of organ damage in patients with severe COVID-19," explains Prof. Weinmann-Menke. The study aims to establish a high-throughput test procedure (multiplex assay) that can be used to identify specific immune responses (immunoproteomics) to autoantigens (especially against cerebral, cardiac and renal proteins) that occur in COVID-19. Autoantibody-forming memory B cells, and the specificity of autoantibodies (tissue specificity and cross-reactivity with other organs) are to be analyzed by conducting in vitro tests. The glycosylation of autoantibodies, which is known to enhance their effect in many cases, is also to be investigated.

"Immunomodulatory therapies used or being tested in the treatment of nephrological autoimmune diseases such as SLE may also be successful in severe COVID-19 courses", concludes Prof. Weinmann-Menke. "We hope that new diagnostic options for patients will provide us with better risk assessment and more targeted therapeutic approaches, also for non-COVID-associated immune phenomena."

Credit: 
ERA – European Renal Association