Culture

More than just genetic code

image: Confocal microscope image of two different cyanobacterial strains: autofluorescence of the pigments of the thylakoid membrane (red), the signals of mRNAs (green) and the colocalization of both signals (yellow).

Image: 
Conrad Mullineaux

In photosynthesis, solar energy is converted into chemical energy, which is then used in nature to produce organic molecules from carbon dioxide. In plants, algae and cyanobacteria, the key photosynthesis reactions take place in two complex structures known as photosystems. These are located in a special membrane system, the thylakoids. However, many details of their molecular structure and the way the proteins are incorporated into the membranes have yet to be explored. A team led by Professor Conrad Mullineaux from the Institute of Biology and Chemistry at Queen Mary University of London, UK, Professor Annegret Wilde and Professor Wolfgang Hess from the Institute of Biology III at the University of Freiburg, and Professor Satoru Watanabe from the Institute of Biosciences at the Tokyo University of Agriculture, Japan, has published a study in the current issue of Nature Plants showing that the mRNAs are transported to the thylakoid membranes and that the respective proteins are produced there on the spot.

The researchers used molecular genetic, bioinformatic and high-resolution microscopic approaches at the single-cell level for their investigations. The results confirm that mRNA molecules encode much more than just the sequence of the protein. They also carry signals that appear to control the position and coordination of the photosystem structure. The team was able to identify two proteins that are likely involved in this process through their interaction with these mRNAs. The researchers say this opens the way to a detailed understanding of the molecular mechanisms involved and provides new approaches for making these processes useful for photobiotechnology.

Credit: 
University of Freiburg

Quantum light squeezes the noise out of microscopy signals

image: ORNL researchers developed a quantum, or squeezed, light approach for atomic force microscopy that enables measurement of signals otherwise buried by noise.

Image: 
Raphael Pooser, ORNL, U.S. Dept. of Energy

Researchers at the Department of Energy's Oak Ridge National Laboratory used quantum optics to advance state-of-the-art microscopy and illuminate a path to detecting material properties with greater sensitivity than is possible with traditional tools.

"We showed how to use squeezed light - a workhorse of quantum information science - as a practical resource for microscopy," said Ben Lawrie of ORNL's Materials Science and Technology Division, who led the research with Raphael Pooser of ORNL's Computational Sciences and Engineering Division. "We measured the displacement of an atomic force microscope microcantilever with sensitivity better than the standard quantum limit."

Unlike today's classical microscopes, Pooser and Lawrie's quantum microscope requires quantum theory to describe its sensitivity. The nonlinear amplifiers in ORNL's microscope generate a special quantum light source known as squeezed light.

"Imagine a blurry picture," Pooser said. "It's noisy and some fine details are hidden. Classical, noisy light prevents you from seeing those details. A 'squeezed' version is less blurry and reveals fine details that we couldn't see before because of the noise." He added, "We can use a squeezed light source instead of a laser to reduce the noise in our sensor readout."

The microcantilever of an atomic force microscope is a miniature diving board that methodically scans a sample and bends when it senses physical changes. With student interns Nick Savino, Emma Batson, Jeff Garcia and Jacob Beckey, Lawrie and Pooser showed that the quantum microscope they invented could measure the displacement of a microcantilever with 50% better sensitivity than is classically possible. For one-second-long measurements, the quantum-enhanced sensitivity was 1.7 femtometers - roughly the diameter of a proton.

"Squeezed light sources have been used to provide quantum-enhanced sensitivity for the detection of gravitational waves generated by black hole mergers," Pooser said. "Our work is helping to translate these quantum sensors from the cosmological scale to the nanoscale."

Their approach to quantum microscopy relies on control of waves of light. When waves combine, they can interfere constructively, meaning the amplitudes of peaks add to make the resulting wave bigger. Or they can interfere destructively, meaning trough amplitudes subtract from peak amplitudes to make the resulting wave smaller. This effect can be seen in waves in a pond or in an electromagnetic wave of light like a laser.
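
As a simple illustration of that superposition arithmetic (our sketch, not part of the ORNL study), summing two sine waves in Python shows how the relative phase decides whether they reinforce or cancel:

```python
import numpy as np

# Two unit-amplitude waves with a relative phase offset:
# in phase (0) they add constructively; out of phase (pi) they cancel.
t = np.linspace(0, 2 * np.pi, 1000)

for phase in (0.0, np.pi):
    combined = np.sin(t) + np.sin(t + phase)
    print(f"relative phase = {phase:.2f} rad -> "
          f"peak amplitude = {np.max(np.abs(combined)):.2f}")
# relative phase = 0.00 rad -> peak amplitude = 2.00
# relative phase = 3.14 rad -> peak amplitude = 0.00
```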

"Interferometers split and then mix two light beams to measure small changes in phase that affect the interference of the two beams when they are recombined," Lawrie said. "We employed nonlinear interferometers, which use nonlinear optical amplifiers to do the splitting and mixing to achieve classically inaccessible sensitivity."

The interdisciplinary study, which is published in Physical Review Letters, is the first practical application of nonlinear interferometry.

A well-known aspect of quantum mechanics, the Heisenberg uncertainty principle, makes it impossible to define both the position and momentum of a particle with absolute certainty. A similar uncertainty relationship exists for the amplitude and phase of light.

That fact creates a problem for sensors that rely on classical light sources like lasers: The highest sensitivity they can achieve minimizes the Heisenberg uncertainty relationship with equal uncertainty in each variable. Squeezed light sources reduce the uncertainty in one variable while increasing the uncertainty in the other variable, thus "squeezing" the uncertainty distribution. For that reason, the scientific community has used squeezing to study phenomena both great and small.
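
In textbook notation (one common convention; the release itself gives no formulas), the trade-off reads:

```latex
% Uncertainty relation for the amplitude (X_1) and phase (X_2)
% quadratures of light:
\Delta X_1 \, \Delta X_2 \;\ge\; \tfrac{1}{4}
% A laser (coherent state) saturates it symmetrically -- the standard
% quantum limit:
\Delta X_1 = \Delta X_2 = \tfrac{1}{2}
% A squeezed state with squeezing parameter r > 0 shifts noise from
% one quadrature into the other:
\Delta X_1 = \tfrac{1}{2}\, e^{-r}, \qquad \Delta X_2 = \tfrac{1}{2}\, e^{+r}
```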

The sensitivity in such quantum sensors is typically limited by optical losses. "Squeezed states are fragile quantum states," Pooser said. "In this experiment, we were able to circumvent the problem by exploiting properties of entanglement." Entanglement means independent objects behaving as one. Einstein called it "spooky action at a distance." In this case, the intensities of the light beams are correlated with each other at the quantum level.

"Because of entanglement, if we measure the power of one beam of light, it would allow us to predict the power of the other one without measuring it," he continued. "Because of entanglement, these measurements are less noisy, and that provides us with a higher signal to noise ratio."

ORNL's approach to quantum microscopy is broadly relevant to any optimized sensor that conventionally uses lasers for signal readout. "For instance, conventional interferometers could be replaced by nonlinear interferometry to achieve quantum-enhanced sensitivity for biochemical sensing, dark matter detection or the characterization of magnetic properties of materials," Lawrie said.

Credit: 
DOE/Oak Ridge National Laboratory

Cholesterol's effects on cellular membranes

image: In this 2018 photo, Assistant Professor Rana Ashkar sits in her Roberson Hall office.

Image: 
Virginia Tech

For more than a decade, scientists had accepted that cholesterol - a key component of cell membranes - does not uniformly affect membranes of different types. But a new study led by Assistant Professor Rana Ashkar of the Virginia Tech Department of Physics finds that cholesterol actually does adhere to biophysical principles.

The findings, published recently in the Proceedings of the National Academy of Sciences, have far-reaching implications in the general understanding of disease, the design of drug delivery methods, and many other biological applications that require specific assumptions about the role of cholesterol in cell membranes.   

"Cholesterol is known to promote tighter molecular packing in cell membranes, but reports about how it stiffens membranes have been so conflicting," said Ashkar, who is a faculty member in the Virginia Tech College of Science. "In this work, we show that, at the nanoscale level, cholesterol indeed causes membrane stiffening, as predicted by physical laws. These findings affect our understanding of the biological function of cholesterol and its role in health and disease."

According to the study, cell membranes are thin layers of fatty molecules that define cell boundaries and regulate various biological functions, including how viruses spread and how cells divide. To enable such functions, membranes should be able to bend and permit shape changes. This bending propensity is determined by how packed the molecular building blocks are; tighter packing results in stiffer membranes that cannot bend so easily, Ashkar added.
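
For readers who want the standard formula behind "bending propensity" (our addition; the paper's analysis is more detailed), membrane stiffness is commonly quantified by the bending modulus kappa in the Helfrich elastic energy:

```latex
% Helfrich elastic energy of a membrane patch (standard continuum
% model, not reproduced from the PNAS paper). A larger bending
% modulus \kappa -- e.g. from tighter lipid packing -- means a
% stiffer membrane that bends less easily.
E_{\mathrm{bend}} \;=\; \frac{\kappa}{2} \int_A \left( 2H - c_0 \right)^2 \,\mathrm{d}A
% H: local mean curvature;  c_0: spontaneous curvature;  A: membrane area.
```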

Cholesterol's impact on cell membranes at the molecular level

Cholesterol is found in high quantities in bacon, egg, cheese, and many other comfort foods. While too much cholesterol can harm the body, regulated amounts of cholesterol in cell membranes are absolutely necessary for the normal function of cells. Anomalies in cholesterol amounts are often associated with various disease conditions.

Besides cholesterol, our cell membranes are primarily formed of lipids, which are small, fatty molecules that self-assemble into bilayer structures when present in water - and nearly 60 percent of the human body is made of water. Together, lipids and cholesterol form the barriers that define our cells and regulate the cellular exchange of nutrients.

At the molecular level, cholesterol possesses a slick and rigid structure. When it interacts with our cell membranes, it jams itself right in between lipids, which results in a more densely packed membrane. According to structure-property relations, this would naturally result in a stiffer membrane.

Yet, for the past 10 or so years, physicists and biologists have assumed that cholesterol had nearly no effect on the stiffness of membranes formed of cis-unsaturated lipids, a common type of lipid found in our cells, despite its well-documented effect on lipid packing.

"It defied our understanding of what cholesterol does to cell membranes," Ashkar said. "It also contradicts standard structure-property relationships in self-assembled materials."

These perceptions are important because in ideal circumstances, cell membranes should maintain a semi-rigid structure: rigid enough to keep their form, but flexible enough to allow for the dynamic movement of signaling proteins and functional domains. Misconceptions about how cholesterol stiffens cell membranes impact our understanding of membrane function.

The data initially made little sense, but as she probed deeper, Ashkar found a clear case of how soft materials can "apparently" exhibit different properties, depending on the parameters of the observation method. She found that over the short length and time scales on which important signaling events occur -- we're talking nanometers and nanoseconds -- added cholesterol induces the membrane stiffening that one would expect.

Proving her point

To contradict an established doctrine in science requires more than just one set of data points. "We found these results a while back, but they were met with skepticism because they're so against the existing notions," Ashkar said.

Ashkar's first tests used neutron spin-echo spectroscopy, a unique probe that enables the study of materials on the nanoscale. These experiments were performed at the two major neutron scattering facilities in the United States, the NIST Center for Neutron Research and the Spallation Neutron Source at Oak Ridge National Laboratory.

Ashkar bolstered her evidence with computer simulations, in collaboration with George Khelashvili, an assistant professor at the Weill Cornell Medicine Department of Physiology & Biophysics, and further validated the experimental findings with recent nuclear magnetic resonance measurements, in collaboration with Michael Brown, a professor of chemistry and biochemistry at the University of Arizona. The consistency of the data across all three methods provided thorough evidence for Ashkar's hypothesis and confirmed standard structure-property relations in lipid membranes.

"These results call for a reassessment of existing constructs of how cholesterol affects lipid membranes," Ashkar said. "If we don't have the right assumptions, we cannot make the right predictions, and we will not have the right design for the treatment of viruses, diseases, or other biological anomalies."

Credit: 
Virginia Tech

Fossil growth reveals insights into the climate

image: Cross-section through a humerus of Panthasaurus maleriensis (above); the growth sequence is marked in blue. Below: histological thin-section of the bone, showing the periodic occurrence of zones (zo) and annuli (an).

Image: 
Elzbieta M. Teschner

Panthasaurus maleriensis lived about 225 million years ago in what is now India. It is an ancestor of today's amphibians and has been considered the most puzzling representative of the Metoposauridae. Paleontologists from the universities of Bonn (Germany) and Opole (Poland) examined the fossil's bone tissue and compared it with other representatives of the family also dating from the Triassic. They discovered phases of slower and faster growth in the bone, which apparently depended on the climate. The results have now been published in the journal PeerJ.

Temnospondyli belong to the ancestors of today's amphibians. This group of animals became extinct about 120 million years ago in the Early Cretaceous. The Temnospondyli also include the Metoposauridae, a fossil group that lived exclusively in the Late Triassic about 225 million years ago. Remains of these ancestors are present on almost every continent. In Europe, they are found mainly in Poland, Portugal and also in southern Germany.

Panthasaurus maleriensis, the most puzzling representative of the Metoposauridae to date, lived in what is now India, near the town of Boyapally. "Until now, there were hardly any investigation opportunities because the fossils were very difficult to access," explains Elzbieta Teschner from the University of Opole, who is working on her doctorate in paleontology in the research group of Prof. Dr. Martin Sander at the University of Bonn. Researchers from the Universities of Bonn and Opole, together with colleagues from the Indian Statistical Institute in Kolkata (India), have now examined the tissue of fossil bones of a metoposaur from the Southern Hemisphere for the first time. The amphibian, which resembled a crocodile, could grow up to three meters in length.

Valuable insight into the bone interior

"The investigated taxon is called Panthasaurus maleriensis and was found in the Maleri Formation in Central India," notes Teschner with regard to the name. So far, the fossil has only been examined morphologically on the basis of its external shape. "Histology as the study of tissues, on the other hand, provides us with a valuable insight into the bone interior," says Dr. Dorota Konietzko-Meier from the Institute for Geosciences at the University of Bonn. The histological findings can be used to draw conclusions about age, habitat and even climate during the animal's lifetime.

The histological examinations revealed that the young animals had very rapid bone growth and that this growth slowed with age. The Indian site where the bones were found provides evidence of both young and adult animals, in contrast to Krasiejów (south-western Poland), where only young animals were found. Geological and geochemical data show that the Late Triassic consisted of alternating dry and rainy periods, similar to the present monsoon climate of India. "This sequence is also reflected in the material examined," says Teschner. "There are phases of rapid growth, known as zones, and phases of slower growth, known as annuli." Normally, one can also observe stagnation lines in the bones, which develop during unfavorable phases of life, for example during very hot or very cold seasons.

In Panthasaurus maleriensis, however, growth never comes to a complete cessation. In comparison: the Polish Metoposaurus krasiejowensis shows the same alternation of zones and annuli in one life cycle and no stagnation lines, whereas the Moroccan representative of the metoposaurs Dutuitosaurus ouazzoui shows stagnation lines - that is, a complete stop in growth - in each life cycle.

The different growth phases in the bones allow for a comparison of climatic conditions. This means that the climate in the Late Triassic would have been milder in Central India than in Morocco, but not as mild as in the area that today belongs to Poland. Sander: "Fossil bones therefore offer a window into the prehistoric past."

Credit: 
University of Bonn

Amid fire and flood, Americans are looking for action

From wildfires in California to hurricanes battering the Gulf, the United States has been assailed by natural disasters from coast to coast. But how can the United States address, mitigate, and adapt to the widespread destruction from wildfires and floods as they intensify from unchecked climate change?

According to a new survey by researchers at Stanford University, Resources for the Future, and ReconMR, Americans overwhelmingly want leaders at the federal and state levels to enact policies to adapt to wildfires and floods.

The second in a six-part series, the natural disasters installment of Climate Insights 2020: Surveying American Public Opinion on Climate Change and the Environment explores how Americans see climate change in relation to wildfire and inland flood adaptation policies. The report gives policymakers and the public an idea of where Americans stand on prospective policies, the role of governments, and who should pay for prevention and adaptation.

“We’ve found that Americans favor action,” report author and Stanford University professor Jon Krosnick said. “Liberals and conservatives, wealthy and not, people want public policy that will protect future generations and the most vulnerable. This is a strong signal to lawmakers that the public is supportive of new policies.”

Topline Findings

The majority of Americans favor a mix of state and federal government efforts to protect people from future wildfire and flood damage. However, most Americans prefer that people in fire- and flood-prone areas shoulder the costs of prevention and adaptation policies.
Americans who believe in the existence of climate change and people who are told that there is a link between climate change and natural disasters are more likely to support adaptation policies.
People who think climate change threatens future generations are far more likely to support adaptation policies than those who do not. Belief in this threat is the strongest predictor of policy support studied in this survey.
Black and Hispanic Americans are more supportive of government efforts than white, non-Hispanic Americans. This may be explained by the fact that people in historically marginalized communities disproportionately live in areas that are and will be most affected by climate change.
Contrary to the luxury goods hypothesis, lower-income people (with incomes less than $35,000) were more likely to support government adaptation policies than people with incomes of $35,000 and more.

“While issues of climate change and the environment often feel divided—and even divisive—in the United States, it’s interesting to see that the majority of Americans support adaptation policies to help us remain resilient in the face of fire and flood,” RFF Senior Fellow Margaret Walls said. “What’s more, the relationship between support for these adaptation policies and belief in climate change drives home the fact that education and trust in science is the bedrock upon which public policy must be built.”

To learn more about these findings, read the natural disasters installment of Climate Insights 2020 by Jon Krosnick, social psychologist at Stanford University and RFF university fellow, and Bo MacInnis, lecturer at Stanford University and PhD economist. You can also try out our data tool, which allows users to explore the data in greater depth.

Future installments in the survey series will focus on green stimulus, political dynamics, electric vehicles, and an overall synthesis. The first installment of this report, which focused on overall trends, was published on August 24, 2020.

Credit: 
Resources for the Future (RFF)

A spillover effect: Medicaid expansion leads to healthier dietary choices

Besides providing healthcare to millions, Medicaid helps recipients make healthier food choices, according to UConn research published in the journal Health Economics. UConn Professor of Agricultural and Resource Economics Rigoberto Lopez, Rebecca Boehm, now an economist with the Union of Concerned Scientists, and Xi He, now a postdoctoral researcher at Iowa State University, were interested in investigating the impact of Medicaid on food choices.

Medicaid benefits recipients in a multitude of ways: it reduces emergency room visits, increases access to preventive healthcare, and lowers out-of-pocket medical costs and debt. The program is highly politicized and is often criticized as being too costly, yet research has shown that it actually saves states money.

He, Lopez, and Boehm were interested in looking at other potential benefits of the program and also hoped to bridge some gaps in the literature, says He:

"There are many studies about the impact of Medicaid on mental health or on health spending but few studies have looked at how Medicaid affects food choices."

He explains that, by virtue of spending less on healthcare, new Medicaid recipients would have more room in their budget for food and therefore might simply spend more money on the same unhealthy foods and beverages they have always purchased. On the other hand, with more access to healthcare and health education through contact with providers, the researchers surmised that purchasing patterns could improve.

To see if this was the case, the researchers looked at purchases of beverages such as carbonated soft drinks, juice, milk and other non-alcoholic beverages before and after the expansion of Medicaid, and compared purchases in states that did and did not expand the program under the Affordable Care Act. In effect, the states that did not expand Medicaid served as the control group for the study. The researchers also compared purchasing preferences based on the sugar content of these beverages.
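
The comparison just described is essentially a difference-in-differences design. A minimal sketch of that logic in Python (illustrative only; the data and the 30-gram effect below are invented, not the study's):

```python
import numpy as np
import pandas as pd

# Hypothetical household data: grams of sugar purchased per month,
# before/after expansion, in expansion vs. non-expansion states.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "expansion_state": np.repeat([1, 0], 2000),
    "post": np.tile(np.repeat([0, 1], 1000), 2),
})
# Simulated truth: purchases drop by an extra 30 g only for
# expansion-state households after expansion.
df["sugar_g"] = (500 + 20 * df["expansion_state"] - 10 * df["post"]
                 - 30 * df["expansion_state"] * df["post"]
                 + rng.normal(0, 40, len(df)))

means = df.groupby(["expansion_state", "post"])["sugar_g"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(f"difference-in-differences estimate: {did:.1f} g")  # ~ -30
```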

"We found that households in expansion states significantly increased their purchase of diet soda and bottled water, but there was no change in purchase of regular soda. But overall, these results indicate that Medicaid expansion, in states that did expand, shifted people's purchases to products with less sugar," says He.

Access to healthcare has wide-ranging positive effects on the lives and habits of recipients. The added benefit of knowledge resulting from access to healthcare is not a policy mechanism that is usually discussed, says Boehm:

"With so many people working to help people eat healthier and to reduce obesity in the US, I don't hear a lot of talk about how the provision of healthcare through this income effect we proposed in this study can help people eat and drink healthier."

Lopez says programs like Medicaid are often unfairly attacked, and those attacks are made without supporting numbers and data; it is therefore vital that research like this reaches decision makers.

"Besides the obvious benefit of subsidized healthcare, there is an additional spillover of the program in promoting a healthy diet by reducing one of the three evils of the American diet - sugar -- which is bad in all respects from calories to cancer to obesity. The program contributes not just to cover the treatment of patients but also in a more preventive way," says Lopez.

The researchers add that now, with the pandemic, prevention and access to healthcare are more vital than ever. This is especially true for those with pre-existing conditions and conditions that put people at increased risk for coronavirus, such as obesity.

Continued research on the implications of programs such as Medicaid is needed, says Lopez, who adds that policy decisions need to be made based on research, not politics.

"It's important to see if we spend this money on Medicaid, we're getting some of it back even if it's indirect," says Boehm. "Policy makers need to have this information. Not all states expanded Medicaid under the ACA, so if we have these results saying we see diet quality benefits that may help push other states to join the expansion.

Credit: 
University of Connecticut

High-intensity focused ultrasound for prostate cancer: First US study shows promising outcomes

September 8, 2020 - High-intensity focused ultrasound (HIFU) - a technology used to treat localized prostate cancer - has shown adequate control of prostate cancer while avoiding major side effects of surgery or radiation therapy, according to a new study in The Journal of Urology®, Official Journal of the American Urological Association (AUA). The journal is published in the Lippincott portfolio by Wolters Kluwer.

Approved by the US Food and Drug Administration (FDA) in 2015 for prostate tissue ablation, the HIFU technology has gained popularity and is becoming widely available in the United States. This new study - reflecting the authors' experience as "first adopters" - is the "initial and largest" series of HIFU focal therapy as primary treatment for localized prostate cancer in the United States.

With this non-invasive high-intensity focal ultrasound strategy, nearly 90 percent of men with localized prostate cancer were able to avoid or delay radical treatment (surgery or radiation), suggests the study by Andre Luis Abreu, MD, of University of Southern California, Los Angeles, and colleagues. They write, "Focal HIFU ablation is safe and provides excellent potency and continence preservation with adequate short-term cancer control."

Promising Results with HIFU in 100 Men with Localized Prostate Cancer

With several options available, men with localized prostate cancer can feel overwhelmed when making a decision about their treatment. While radical prostatectomy and radiation therapy can effectively treat the cancer, they have high rates of side effects, including impotence (partial or no erections) and incontinence (involuntary urine leak). For very carefully selected patients with low-risk and non-aggressive (indolent) prostate cancer, active surveillance may be utilized to monitor any growth of the disease.

For those seeking alternatives, HIFU is an option which enables the surgeon to precisely target the area of the prostate where the cancer is located. Using high-intensity focused ultrasound energy to rapidly heat and destroy the targeted area of the prostate, HIFU is a non-surgical and non-radiation, one-stop and outpatient treatment.

This approach, called partial gland ablation, aims "to avoid or delay radical treatment and its inherent quality of life deterioration," Dr. Abreu and coauthors write. They reviewed their experience with HIFU in 100 men (average age 65 years) with localized prostate cancer.

Outcomes were assessed a median of 20 months after hemi-gland HIFU ablation of the prostate. The primary outcome of interest was "treatment failure," including recurrent prostate cancer, the need for radical treatment, namely radiation therapy or surgery (radical prostatectomy) to remove the entire prostate, or occurrence of prostate cancer metastases or death.

At follow-up, nearly three-fourths of the men (73 percent) were free of treatment failure. With 76 percent of patients having no evidence of "clinically significant" prostate cancer, the results suggested that HIFU provides "adequate control" of the cancer within the prostate.

Researchers also noted that the use of HIFU avoided complications and side effects - including sexual (impotence) and urinary (incontinence) effects - associated with radical prostatectomy or radiation therapy. Although minor complications occurred after HIFU in 13 percent of the patients, there were no serious complications, fistulas, blood transfusions or deaths. For those who responded to validated questionnaires, sexual function (erections) was preserved and urinary symptoms improved. All patients were continent after treatment.

"We believe these data represent the actual clinical practice in the United States," Dr. Abreu and colleagues conclude. "This study provides the initial US HIFU data to prostate cancer stakeholders, including clinicians, patients, and the FDA."

Credit: 
Wolters Kluwer Health

Scientists develop low-cost chip to detect presence and quantity of COVID-19 antibodies

image: The antibody testing platform, developed by researchers from the Micro/Bio/Nanofluidics Unit at OIST.

Image: 
OIST

Robust and widespread antibody testing has emerged as a key strategy in the fight against SARS-CoV-2, the virus responsible for the COVID-19 pandemic. However, current testing methods are either too inaccurate or too expensive to be feasible on a global scale. Now, scientists at the Okinawa Institute of Science and Technology Graduate University (OIST) have developed a rapid, reliable and low-cost antibody test.

The device, described in a proof-of-concept study published this week in Biosensors and Bioelectronics, uses portable lab-on-a-chip technology to accurately measure the concentration of antibodies present in diluted blood plasma.

Antibodies are proteins produced by the immune system to neutralize the virus. Research has found that COVID-19 antibodies are present in the later stages of infection and can linger in the blood after the infection has cleared, allowing previously infected individuals to be identified. Antibody tests are thus an important means of determining the full spread of the coronavirus - information that is crucial to guide public health policies.

And yet many nations have so far failed to employ large-scale antibody testing.

"Many existing platforms for antibody tests are accurate and reliable, but they are costly and need to be carried out in a lab by trained operators. This means that it can take hours, or even days, to obtain results," said Dr. Riccardo Funari, first author and postdoctoral researcher in the Micro/Bio/Nanofluidics Unit at OIST. "Other tests are easier to use, portable and rapid, but are not sufficiently accurate, which hampers testing efforts."

The researchers avoided this trade-off between accuracy and accessibility by developing an alternative antibody testing platform that combines a powerful light-sensing technology with a microfluidic chip. The chip provides results within 30 minutes and is highly sensitive, detecting even the lowest clinically-relevant antibody concentration. Each chip is cheap to manufacture and negates the need for a lab or trained operators, increasing the feasibility of nation-wide testing.

And there's another distinctive advantage of this newly developed platform. "The test doesn't just detect whether the antibodies are present or absent - it also provides information about the quantity of antibodies produced by the immune system. In other words, it's quantitative," said Professor Amy Shen, who leads the Micro/Bio/Nanofluidics Unit. "This greatly expands its potential applications, from treating COVID-19 to use in developing vaccines."

Illuminating the antibodies

The antibody testing platform consists of a microfluidic chip integrated with a fiber optic light probe. The chip itself is made from a gold-coated glass slide with an embedded microfluidic channel. Using an electric voltage, the team fabricated tens of thousands of tiny spiky gold structures, each one smaller than the wavelength of light, on the slide.

The researchers then modified these gold nanospikes by attaching a fragment of the SARS-CoV-2 spike protein. This protein is crucial for helping the coronavirus infect cells and causes a strong reaction from an infected person's immune system.

In this proof-of-concept study, the scientists demonstrated the principle behind how the test detects antibodies by using artificial human plasma samples spiked with COVID-19 antibodies specific to the spike protein.

Using a syringe pump, the sample is drawn through the chip. As the plasma flows past the protein-coated gold nanospikes, the antibodies bind to the spike protein fragments. This binding event is then detected by the fiber optic light probe.

"The detection principle is simple but powerful," said Dr. Funari. He explained that is it based on the unique behavior of electrons on the surface of the gold nanospikes, which oscillate together when hit by light. These resonating electrons are highly sensitive to changes in the surrounding environment, such as the binding of antibodies, which causes a shift in the wavelength of light absorbed by the nanospikes.

"The more antibodies that bind, the larger the shift in the wavelength of the absorbed light," added Dr. Funari. "The fiber optic probe is connected to a light detector which measures this shift. Using that information, we can determine the concentration of antibodies within the plasma sample."

A bright future

The large-scale roll-out of a quantitative test could greatly impact how COVID-19 is treated.

For example, quantitative tests could help doctors track how effectively a patient's immune system is fighting the virus. It could also be used to help identify suitable donors for a promising experimental treatment, called plasma transfusion therapy, where a recovered patient's antibody-rich blood is donated to currently infected patients to help them fight the virus.

Being able to measure the level of immune response can also aid vaccine development, allowing researchers to determine how effectively a trial vaccine triggers the immune system.

However, the researchers emphasized that the device is still undergoing active development. The unit aims to reduce the chip size to cut manufacturing costs and is also working on improving the reliability of the test.

"We have shown that the device works to detect different concentrations of the spike protein antibody in artificial human plasma samples. We now want to expand the test so that the chip can detect multiple different antibodies at the same time," said Dr. Funari. "Once the device is optimized, we plan to collaborate with local hospitals and medical institutions to perform tests on real patient samples."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Research unravels what makes memories so detailed and enduring

image: The tiny red dots are inhibitory nerve cells within the brain's hippocampus. The optogenetic tool, shown in green, allows researchers to measure the strength of messages to other nerve cells, using flashes of light.

Image: 
Matt Udakis

In years to come, our personal memories of the COVID-19 pandemic are likely to be etched in our minds with precision and clarity, distinct from other memories of 2020. The process which makes this possible has eluded scientists for many decades, but research led by the University of Bristol has made a breakthrough in understanding how memories can be so distinct and long-lasting without getting muddled up.

The study, published in Nature Communications, describes a newly discovered mechanism of learning in the brain shown to stabilise memories and reduce interference between them. Its findings also provide new insight into how humans form expectations and make accurate predictions about what could happen in future.

Memories are created when the connections between the nerve cells which send and receive signals from the brain are made stronger. This process has long been associated with changes to connections that excite neighbouring nerve cells in the hippocampus, a region of the brain crucial for memory formation.

These excitatory connections must be balanced with inhibitory connections, which dampen nerve cell activity, for healthy brain function. The role of changes to inhibitory connection strength had not previously been considered and the researchers found that inhibitory connections between nerve cells, known as neurons, can similarly be strengthened.

Working together with computational neuroscientists at Imperial College London, the researchers showed how this allows the stabilisation of memory representations.

Their findings uncover for the first time how two different types of inhibitory connections (from parvalbumin and somatostatin expressing neurons) can also vary and increase their strength, just like excitatory connections. Moreover, computational modelling demonstrated this inhibitory learning enables the hippocampus to stabilise changes to excitatory connection strength, which prevents interfering information from disrupting memories.
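
For a rough flavor of how an inhibitory learning rule can stabilize activity, here is a minimal rate-based sketch (in the spirit of standard homeostatic inhibitory-plasticity models, not the paper's actual hippocampal model): inhibition strengthens whenever the postsynaptic neuron fires above a target rate, pulling activity back toward it.

```python
# Toy rate-based neuron with a plastic inhibitory weight; all
# parameters are hypothetical illustration values.
excitation = 10.0   # fixed excitatory drive
inh_rate = 5.0      # presynaptic inhibitory firing rate
w_inh = 0.0         # plastic inhibitory weight
target = 4.0        # desired postsynaptic rate (Hz)
eta = 0.01          # learning rate

for _ in range(2000):
    post_rate = max(0.0, excitation - w_inh * inh_rate)
    # Homeostatic rule: strengthen inhibition when the neuron fires
    # above target, weaken it when below.
    w_inh = max(0.0, w_inh + eta * inh_rate * (post_rate - target))

print(f"final postsynaptic rate: {post_rate:.2f} Hz (target {target})")
```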

First author Dr Matt Udakis, Research Associate at the School of Physiology, Pharmacology and Neuroscience, said: "We were all really excited when we discovered these two types of inhibitory neurons could alter their connections and partake in learning.

""It provides an explanation for what we all know to be true; that memories do not disappear as soon as we encounter a new experience. These new findings will help us understand why that is.

"The computer modelling gave us important new insight into how inhibitory learning enables memories to be stable over time and not be susceptible to interference. That's really important as it has previously been unclear how separate memories can remain precise and robust."

The research was funded by the UKRI's Biotechnology and Biological Sciences Research Council, which has awarded the teams further funding to develop this research and test their predictions from these findings by measuring the stability of memory representations.

Senior author Professor Jack Mellor, Professor in Neuroscience at the Centre for Synaptic Plasticity, said: "Memories form the basis of our expectations about future events and enable us to make more accurate predictions. What the brain is constantly doing is matching our expectations to reality, finding out where mismatches occur, and using this information to determine what we need to learn.

"We believe what we have discovered plays a crucial role in assessing how accurate our predictions are and therefore what is important new information. In the current climate, our ability to manage our expectations and make accurate predictions has never been more important."

"This is also a great example of how research at the interface of two different disciplines can deliver exciting science with truly new insights. Memory researchers within Bristol Neuroscience form one of the largest communities of memory-focussed research in the UK spanning a broad range of expertise and approaches. It was a great opportunity to work together and start to answer these big questions, which neuroscientists have been grappling with for decades and have wide-reaching implications."

Credit: 
University of Bristol

Investigational drug stops toxic proteins tied to neurodegenerative diseases

PHILADELPHIA -- An investigational drug that targets an instigator of the TDP-43 protein, a well-known hallmark of amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD), may reduce the protein's buildup and neurological decline associated with these disorders, suggests a pre-clinical study from researchers at Penn Medicine and Mayo Clinic. Results were published in Science Translational Medicine.

The work shows, for the first time, how toxic poly(GR) (glycine-arginine repeat) proteins produced by the mutated C9orf72 gene stimulate the clumping of TDP-43 found in ALS, also known as Lou Gehrig's disease, and FTD patients. In a mouse model, the researchers also show that treatment with a pipeline drug known as an antisense oligonucleotide (ASO) reduced the levels of poly(GR), TDP-43 clumps, and neurodegeneration along with it.

"A common genetic cause of ALS and FTD is a repeat expansion in the C9orf72 gene, which somehow leads to TDP-43 aggregation in degenerating neurons, but what remained unclear until now was how those two were connected," said co-senior author James Shorter, PhD, a professor of Biochemistry and Biophysics in the Perelman School of Medicine at the University of Pennsylvania. "We found that TDP-43 aggregates much more rapidly if these toxic poly(GR) proteins are around, suggesting a direct link between the mutation, poly(GR), and TDP-43."

ALS is the progressive degeneration of the motor neurons that control people's muscles, speech, and ability to breathe. FTD, the most common form of dementia in people under 60, results in damage to the anterior temporal and/or frontal lobes of the brain; as it progresses, it becomes increasingly difficult for people to function and even care for themselves.

"This finding presents an exciting potential therapeutic target to treat these debilitating diseases by lowering poly(GR) levels," added Hana Odeh, PhD, a post-doctoral fellow in the Shorter lab and co-first author.

After researchers in the Shorter lab demonstrated the role of poly(GR) proteins in TDP-43 accumulation at the protein level, their colleagues at Mayo Clinic in Jacksonville, Fla., studied the interactions in both human cells and mice to support the initial bench-side findings at Penn. Co-senior authors from Mayo Clinic include Yongjie Zhang, PhD, an assistant professor of Neuroscience, and Leonard Petrucelli, PhD, Ralph B. and Ruth K. Abrams Professor of Neuroscience at Mayo Clinic College of Medicine and Science.

They showed in a series of complementary experiments, including immunofluorescence staining and immuno-electron microscopy, that poly(GR) in human cells alone can sequester TDP-43 proteins, and in doing so induce the formation of dense protein clumps. This same mechanism was then demonstrated in a mouse model.

It's worth noting, the researchers said, that the burden of both TDP-43 and poly(GR) correlate with neurodegeneration in patients observed in past studies: the higher the protein levels, the worse the neurological function, providing further evidence that the two proteins are conspiring.

Next, the team delivered an ASO drug known as c9ASO, which is being investigated in clinical trials, into the brains of three-month-old mice expressing the ALS/FTD-causing repeat expansion and found that it diminished the levels of both poly(GR) and TDP-43 aggregates. c9ASO has been shown to switch off the repeat expansions in the C9orf72 gene and reduce poly(GR), but this is the first time it has been shown to reduce TDP-43 clumping.

To assess the drug's neuroprotective ability, the researchers examined the number of neurons and the level of plasma neurofilament light (NFL), a known biomarker of neurodegeneration in patients, in treated mice. The drug prevented the loss of cortical neurons and decreased levels of plasma NFL, they found, suggesting it helped confer neuroprotection. "If that extends to patients, the plasma NFL level provides a way to track how effective your therapeutic is," Odeh said.

The researchers plan to study in more detail how TDP-43 and poly(GR) and other similar toxic proteins associated with the mutated C9orf72 interact, and conduct further studies with ASO drugs to better understand their role in stopping the clumping of TDP-43.

"This exciting collaborative study sets the stage for continued teamwork in this space, which I see as being of great interest to the ALS and FTD community," Shorter said.

Credit: 
University of Pennsylvania School of Medicine

UCF researchers are developing models to predict storm surges

ORLANDO, Sept. 8, 2020 - Storm surges can sometimes increase coastal sea levels 10 feet or more, jeopardizing communities and businesses along the water, but new research from the University of Central Florida shows there may be a way to predict periods when such events are more likely to occur.

In a study published recently in the Journal of Geophysical Research: Oceans, researchers developed models to predict extreme changes in sea level by linking storm surges to large-scale climate variability that is related to changes in atmospheric pressure and the sea surface temperature, such as El Niño.

El Niño is a periodic warming of sea surface temperatures in the Pacific Ocean between Asia and South America that can affect weather around the globe.

"If we were capable to predict in advance when we go through periods of relatively higher flood risk, that would be very useful information to have, for example in order to make available and deploy resources way in advance," says Mamunur Rashid, the study's lead author and a postdoctoral research associate in UCF's Department of Civil, Environmental and Construction Engineering.

"Our analysis was only the first step in this direction, and while we show that there is some capability in predicting storm surge variability over inter-annual to decadal time scales, we are not at the point yet where such a modeling framework can be used in an operational way or for making important decisions based on the results," he says.

The study was supported by the National Oceanic and Atmospheric Administration's Climate Program Office, Climate Observations and Monitoring Program.

The study builds on previous research that showed storm surge is a major factor in extreme sea level variability, which is when water level thresholds are higher or lower than normal conditions. In addition to storm surge, factors behind extreme sea level variability also include mean sea level and low frequency tides.

Coastal flood risk assessments often omit the role of extreme sea level variations, ignoring that flood risk is higher in some periods than others, and instead focus on long-term sea level rise, says Thomas Wahl, study co-author and an assistant professor in UCF's Department of Civil, Environmental and Construction Engineering.

"Knowing how the extreme sea level variations we are investigating modulate the potential losses can help better plan and adapt to mitigate these impacts," he says.

To develop the models, the researchers linked large-scale climate variability events, such as El Niño, to variability in storm surge activity. Then they tested the models by having them predict past storm surge variability and then compared their predictions with what actually occurred.
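
In spirit, that test is a hindcast. A simplified sketch with synthetic data (our illustration, not UCF's modeling framework):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual data: an ENSO-like climate index and a
# storm-surge indicator that partly tracks it.
years = np.arange(1950, 2020)
climate_index = np.sin(2 * np.pi * years / 5.0) + rng.normal(0, 0.3, years.size)
surge_indicator = 0.8 * climate_index + rng.normal(0, 0.4, years.size)

# Fit a simple linear model on the early part of the record...
train = years < 2000
coeffs = np.polyfit(climate_index[train], surge_indicator[train], 1)

# ...then "predict" the held-out years and compare with observations.
predicted = np.polyval(coeffs, climate_index[~train])
corr = np.corrcoef(predicted, surge_indicator[~train])[0, 1]
print(f"hindcast correlation on held-out years: {corr:.2f}")
```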

The results indicated that the models matched the overall trends and variability of storm surge indicators for almost all coastal regions of the U.S. during both the tropical and extra-tropical storm seasons.

For Florida, the models reflect the difference in the variability of storm surge on the west coast compared to the east, Wahl says.

"It's a little bit larger on the west coast, and the highs and lows along the two coastlines are also not in phase," he says.

The researchers say they will continue to improve their models as the global climate models they employ continue to improve in accuracy.

Credit: 
University of Central Florida

Detecting soil-surface ozone early can help prevent damage to grapes and apples

image: Vapor-depositing conducting polymer "tattoos" on plant leaves can allow growers to accurately detect and measure such ozone damage, even at low exposure levels. UMass Amherst materials chemists say their conducting polymer film, PEDOT, is just 1 micron thick so it lets sunlight in and does not hurt leaves.

Image: 
UMass Amherst/Andrew lab

AMHERST, Mass. - Farmers and fruit growers are reporting that climate change is leading to increased ozone concentrations on the soil surface in their fields and orchards - an exposure that can cause irreversible plant damage, reduce crop yields and threaten the food supply, say materials chemists led by Trisha Andrew at the University of Massachusetts Amherst.

Writing in Science Advances, co-first authors Jae Joon Kim and Ruolan Fan show that the Andrew lab's method of vapor-depositing conducting polymer "tattoos" on plant leaves can allow growers to accurately detect and measure such ozone damage, even at low exposure levels. Their resilient polymer tattoos placed on the leaves allow for "frequent and long-term monitoring of cellular ozone damage in economically important crops such as grapes and apples," Andrew says.

They write, "We selected grapes (Vitis vinifera L.) as our model plant because the fruit yield and fruit quality of grapevines decrease significantly upon exposure to ground level ozone, leading to significant economic losses." Ground-level ozone can be produced by the interaction between the nitrates in fertilizer and the sun, for example.

UMass Amherst viniculturist Elsa Petit, who advised the chemistry team, says the sensor tattoo could be especially useful to the grape industry. "With climate change, ozone will increase and this new sensor might be extremely useful to help farmers act before the damage is recognizable by eye," she says. Ground-level ozone can be mitigated by early detection and treating the soil surface with charcoal or zeolite powders.

As Andrew explains, her lab, funded by the National Science Foundation, adapted the electrode vapor-deposition method it had developed earlier to coat fabrics for medical sensing devices for a new use - on living plants. The conducting polymer film, poly(3,4-ethylenedioxythiophene), or PEDOT, is just 1 micron thick, so it lets sunlight in and does not hurt leaves. Non-metal, carbon-based polymers that act as conducting electrodes have been increasingly used in soft materials design since their invention in the 1970s, she adds.

"Ours acts like a temporary tattoo on a human," Andrew says. "It doesn't wash away and the polymer's electrical properties don't degrade, even over a long time. We have some tattooed plants in a greenhouse on campus and a year later they are still growing fine, putting out roots and leaves as normal."

To test for early ozone damage, she and colleagues use a hand-held impedance spectrometer adapted from human medical practice. When it touches the electrode tattoo, a read-out reports the electrical impedance as a function of frequency. This value changes in the presence of various factors, including oxidative damage from ozone.

Andrew says, "You get a wave-form image; a software program fits the wave so we can extract certain tissue parameters. We can recognize patterns for different kinds of damage. It's consistent and remarkably accurate. If you use it on the same plant over a year, as long as the plant is healthy the signal doesn't really change over that time."

"The problem scientifically is that visual ozone damage looks exactly the same as if you watered the plant too little or it got too much sun. This project became intellectually interesting to us when we looked at the ozone signature of our read-outs and it was very different from drought or UV damage. Ozone produces a unique change in the high-frequency electrical impedance and phase signals of leaves."

The scientists hope their invention could be used by farmers and fruit growers who could place a few "reporter plants" among crops to periodically monitor soil ozone levels. "It gives you a picture of what is going on in your soil," Andrew suggests. "You can be alerted if the fertilizer level is wrong, for example. This can happen, especially with food crops that need a lot of sun and fertilizer to produce, like melons, grapes and orchard fruits. Some plants are very sensitive to it."

Credit: 
University of Massachusetts Amherst

CEOs with uncommon names tend to implement unconventional strategies

HOUSTON - (Sept. 8, 2020) - If you're looking for an unconventional approach to doing business, select a CEO with an uncommon name, according to new research co-authored by an expert at Rice University's Jones Graduate School of Business.

"Using 19 years of data on 1,172 public firms, we show that firms' distinctive strategies are systematically linked to their CEOs' uncommon names," wrote co-authors Yan Anthea Zhang, the Fayez Sarofim Vanguard Professor of Strategy at the Jones School, and Yungu Kang and David H. Zhu of Arizona State University's W.P. Carey School of Business.

Past studies have examined how organizational outcomes are associated with leaders' personalities, values, experiences and demographic characteristics, but not CEOs' names -- "one of the most fundamental attributes," the authors argue. A person's name influences their behavior, cognition and sense of self, according to the paper.

"Studies suggest that individuals with uncommon names tend to have a self-conception of being different from their peers," they wrote. "Although many people may not have the confidence to exhibit how unique they believe themselves to be, CEOs do -- they are generally confident individuals."

CEOs who have uncommon names are motivated to differentiate themselves from other CEOs, the authors argue, which influences strategic distinctiveness - the degree to which a business's strategy differs from that of its industry peers.

"This is consistent with findings from psychological research that successful professionals who have uncommon names tend to view themselves as more special, unique, interesting and creative," they wrote.

Developing and implementing unique business strategies is "critical for firms to obtain competitive advantage and achieve superior performance," according to the authors. They argue that CEOs with uncommon names tend to adopt strategies that deviate from the industry norm, leading to distinctive strategies.

"Our findings can help all stakeholders to better understand and predict a CEO's strategic decisions, they wrote. "Because CEOs with uncommon names tend to pursue distinctive strategies, boards that seek to enhance the distinctiveness of their firms' strategies may want to hire CEOs with uncommon names."

"Other top executives, middle-level managers and employees can also expect a higher likelihood of implementing distinctive strategies when their CEOs have more uncommon names," the authors continued. "Competitors can expect a firm to engage in unusual competitive moves when the CEO has an uncommon name."

Credit: 
Rice University

Model shows that the speed neurons fire impacts their ability to synchronize

image: Cell membranes have a voltage across them due to the uneven distribution of charged particles, called ions, between the inside and outside of the cell. Neurons can shuttle ions across their membrane through channels and pumps, which changes the voltage of the membrane. Fast firing Purkinje neurons have a higher membrane voltage than slow firing neurons.

Image: 
Image modified from

Research conducted by the Computational Neuroscience Unit at the Okinawa Institute of Science and Technology Graduate University (OIST) has shown for the first time that a computer model can replicate and explain a unique property displayed by a crucial brain cell. Their findings, published today in eLife, shed light on how groups of neurons can self-organize by synchronizing when they fire fast.

The model focuses on Purkinje neurons, which are found within the cerebellum. This dense region of the hindbrain receives inputs from the body and other areas of the brain in order to fine-tune the accuracy and timing of movement, among other tasks.

"Purkinje cells are an attractive target for computational modeling as there has always been a lot of experimental data to draw from," said Professor Erik De Schutter, who leads the Computation Neuroscience Unit. "But a few years ago, experimental research into these neurons uncovered a strange behavior that couldn't be replicated in any existing models."

These studies showed that the firing rate of a Purkinje neuron affected how it reacted to signals fired from other neighboring neurons.

The rate at which a neuron fires electrical signals is one of the most crucial means of transmitting information to other neurons. Spikes, or action potentials, follow an "all or nothing" principle - either they occur, or they don't - but the size of the electrical signal never changes, only the frequency. The stronger the input to a neuron, the quicker that neuron fires.
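
A minimal leaky integrate-and-fire sketch (a toy illustration, far simpler than the study's detailed Purkinje model) reproduces this rate coding: stronger constant input drives the model neuron across its threshold more often, while every spike stays identical:

```python
def lif_firing_rate(drive, sim_time=1.0, dt=1e-4, tau=0.02,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Spikes per second of a leaky integrate-and-fire neuron (toy)."""
    v, spikes = v_rest, 0
    for _ in range(int(sim_time / dt)):
        v += dt / tau * (-(v - v_rest) + drive)
        if v >= v_thresh:   # all-or-nothing spike
            spikes += 1
            v = v_reset     # reset after the spike
    return spikes / sim_time

for drive in (16.0, 20.0, 30.0):   # arbitrary input strengths (mV)
    print(f"input = {drive:4.1f} -> rate = {lif_firing_rate(drive):5.1f} Hz")
# Stronger input -> faster firing; the spike itself never changes size.
```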

But neurons don't fire in an independent manner. "Neurons are connected and entangled with many other neurons that are also transmitting electrical signals. These spikes can perturb neighboring neurons through synaptic connections and alter their firing pattern," explained Prof. De Schutter.

Interestingly, when a Purkinje cell fires slowly, spikes from connected cells have little effect on the neuron's spiking. But, when the firing rate is high, the impact of input spikes grows and makes the Purkinje cell fire earlier.

"The existing models could not replicate this behavior and therefore could not explain why this happened. Although the models were good at mimicking spikes, they lacked data about how the neurons acted in the intervals between spikes," Prof. De Schutter said. "It was clear that a newer model including more data was needed."

Testing a new model

Fortunately, Prof. De Schutter's unit had just finished developing an updated model - an immense task undertaken primarily by Dr. Yunliang Zang, a former postdoctoral researcher in the unit.

Once the model was complete, the team found that, for the first time, it could replicate the unique firing-rate-dependent behavior.

In the model, they saw that, in the interval between spikes, the membrane voltage of slowly firing Purkinje neurons was much lower than that of rapidly firing ones.

"In order to trigger a new spike, the membrane voltage has to be high enough to reach a threshold. When the neurons fire at a high rate, their higher membrane voltage makes it easier for perturbing inputs, which slightly increase the membrane voltage, to cross this threshold and cause a new spike," explained Prof. De Schutter.

The researchers found that these differences in membrane voltage between fast- and slow-firing neurons were due to the specific types of potassium ion channels in Purkinje neurons.

"The previous models were developed with only the generic types of potassium channels that we knew about. But the new model is much more detailed and complex, including data about many Purkinje cell-specific types of potassium channels. So that's why this unique behavior could finally be replicated and understood," said Prof. De Schutter.

The key to synchronization

The researchers then decided to use their model to explore the effects of this behavior on a larger scale, across a network of Purkinje neurons. They found that at high firing rates, the neurons started to loosely synchronize and fire together. When the firing rate slowed down, this coordination was quickly lost.

Using a simpler mathematical model, Dr. Sungho Hong, a group leader in the unit, then confirmed that this link was due to the difference in how fast- and slow-firing Purkinje neurons responded to spikes from connected neurons.

"This makes intuitive sense," said Prof. De Schutter. He explained that for neurons to be able to sync up, they need to be able to adapt their firing rate in response to inputs to the cerebellum. "So this syncing with other spikes only occurs when Purkinje neurons are firing rapidly," he added.

The role of synchrony is still controversial in neuroscience, and its exact function remains poorly understood. But many researchers believe that synchronization of neural activity plays a role in cognitive processes by allowing communication between distant regions of the brain. In Purkinje neurons, synchronization allows strong and timely signals to be sent out, which experimental studies have suggested could be important for initiating movement.

"This is the first time that research has explored whether the rate at which neurons fire affects their ability to synchronize and explains how these assemblies of synchronized neurons quickly appear and disappear," said Prof. De Schutter. "We may find that other circuits in the brain also rely on this rate-dependent mechanism."

The team now plans to continue using the model to probe deeper into how these brain cells function, both individually and as a network. And, as technology develops and computing power grows, Prof. De Schutter has an ultimate ambition.

"My goal is to build the most complex and realistic model of a neuron possible," said Prof. De Schutter. "OIST has the resources and computing power to do that, to carry out really fun science that pushes the boundary of what's possible. Only by delving into deeper and deeper detail in neurons, can we really start to better understand what's going on."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Elevated clotting factor V levels linked to worse outcomes in severe COVID-19 infections

BOSTON - Patients hospitalized with severe COVID-19 infections who have high levels of the blood clotting protein factor V are at elevated risk for serious injury from blood clots such as deep vein thrombosis or pulmonary embolism, investigators at Massachusetts General Hospital (MGH) have found.

On the other hand, critically ill patients with COVID-19 and low levels of factor V appear to be at increased risk for death from a coagulopathy that resembles disseminated intravascular coagulation (DIC) - a devastating, often fatal abnormality in which blood clots form in small vessels throughout the body, exhausting the clotting factors and proteins that control coagulation - report Elizabeth M. Van Cott, MD, investigator in the department of pathology at MGH, and colleagues.

Their findings, based on studies of patients with COVID-19 in MGH intensive care units (ICUs), point to disturbances in factor V activity as a potential cause of blood clotting disorders in COVID-19, and suggest potential methods for identifying at-risk patients so that the proper anticoagulation therapy can be selected.

The study results are published online in the American Journal of Hematology.

"Aside from COVID-19, I've never seen anything else cause markedly elevated factor V, and I've been doing this for 25 years," Van Cott says.

Patients with severe COVID-19 disease caused by the SARS-CoV-2 virus can develop blood clots in medical lines (intravenous lines, catheters, etc.) and in arteries, lungs and extremities, including the toes. Yet the mechanisms underlying coagulation disorders in patients with COVID-19 are still unknown.

In March 2020, in the early days of the COVID-19 pandemic in Massachusetts, Van Cott and colleagues found that a blood sample from a patient with severe COVID-19 on a ventilator contained factor V levels far above the normal reference range. Four days later, this patient developed a saddle pulmonary embolism, a potentially fatal blood clot lodged at the junction of the left and right pulmonary arteries.

This pointed the investigators to the activity of factor V, as well as factor VIII and factor X, two other major clotting factors. They studied the levels of these clotting factors and other parameters in 102 consecutive patients with COVID-19, and compared the results with those of contemporaneous critically ill patients without COVID-19 and with historical controls.

They found that factor V levels were significantly elevated among patients with COVID-19 compared with controls, and that the association between high factor V activity and COVID-19 was the strongest among all clinical parameters studied.

In all, 33 percent of patients with factor V activity well above the reference range had either deep vein thrombosis or a pulmonary embolism, compared with only 13 percent of patients with lower levels. Death rates were significantly higher for patients with lower levels of factor V (30 percent vs. 12 percent), with evidence that this was due to a clinical decline toward a DIC-like state.
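Put in relative terms - simple arithmetic on the percentages quoted above, not an additional analysis from the paper - both associations are roughly 2.5-fold:

    # Relative risks from the reported rates (illustrative arithmetic only)
    dvt_pe_high, dvt_pe_low = 0.33, 0.13   # DVT/PE rate: high vs. lower factor V
    death_low, death_high = 0.30, 0.12     # death rate: lower vs. higher factor V
    print(f"thrombosis: {dvt_pe_high / dvt_pe_low:.1f}x more likely with high factor V")
    print(f"death: {death_low / death_high:.1f}x more likely with low factor V")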

Van Cott and colleagues also found that the clinical decline toward DIC was foreshadowed by a measurable change in the shape, or "waveform," of a plot charting light absorbance over time as the blood coagulates (the waveform of the activated partial thromboplastin time, or aPTT).

"The waveform can actually be a useful tool to help assess patients as to whether their clinical course is declining toward DIC or not," Van Cott explains. "The lab tests that usually diagnose DIC were not helpful in these cases."

Importantly, the MGH investigators note that factor V elevation in COVID-19 could cause misdiagnosis: under normal circumstances, factor V levels are low in the presence of liver dysfunction or DIC, so an elevated level could mask those conditions, and physicians might mistakenly attribute a patient's clotting abnormalities to a deficiency in vitamin K instead.

"This investigation was spurred by the surprising case we encountered, and was conducted rapidly by an interdisciplinary pathology team at MGH during the peak of the pandemic," said Jonathan Stefely, MD, PhD, one of the study's co-authors.

Credit: 
Massachusetts General Hospital