Tech

Hyperactive immune cells accelerate heart valve disease: Study

Image: Illustration showing the path of immune cells through a healthy aortic valve (left) and stenotic aortic valve (right). (Credit: RMIT University)

A new study using organ-on-a-chip technology reveals how overactive immune cells aggravate heart valve disease and how this damaging hyperactivity could potentially be controlled.

Aortic valve stenosis is the most common type of heart valve disease in the elderly and affects more than one in eight people aged over 75. Left untreated, it has a higher mortality rate than most cancers.

The condition is typically caused by degeneration and thickening of the aortic valve, which narrows the valve opening and reduces blood flow. Blood cells that have to squeeze through the narrow valve come under intense frictional force, known as shear stress.

A team of Australian researchers and clinicians set out to investigate the effect of this shear stress on white blood cells - key players in our immune system's first line of defence.

They found the constant stress of squeezing through the narrow aortic valve activates these cells, leading to harmful inflammation that accelerates the progression of aortic stenosis.

The team have identified a potential drug target by pinpointing the receptor that controls this white blood cell overactivity.

The study, led by RMIT University and the Baker Heart and Diabetes Institute, is published in the leading international cardiovascular journal Circulation.

Co-chief investigator Dr Sara Baratchi said the research combined clinical work, such as blood samples and valve measurements, with lab experiments using organ-on-a-chip technology that replicated the pathological conditions inside the aortic valve.

"In someone with severe aortic valve stenosis, circulating blood cells come under heavy shear stress about 1500 times a day," said Baratchi, an ARC DECRA Fellow and Senior Lecturer at RMIT.

"We now know this constant frictional force makes the white blood cells hyperactive. If we can stop that inflammatory response, we can hope to slow down the disease.

"The same organ-on-a-chip technology that helped us make these discoveries will also enable us to easily test potential drugs to treat this harmful immune response."

Co-chief investigator Dr Karlheinz Peter, Deputy Director of Basic and Translational Research at the Baker Heart and Diabetes Institute, said the study helped explain why aortic valve stenosis can start to worsen dramatically, often over just a few months.

"The smaller the narrowing, the more the inflammatory cells get activated, and then they accelerate the disease," Peter said.

"Our study also shows that a valve replacement - either through open heart surgery or via a catheter-based percutaneous approach - not only improves blood flow but also acts as an anti-inflammatory measure. The latter is a novel and centrally important discovery."

Under pressure: how the study was done

Replacing the aortic valve is the most effective treatment for severe aortic valve stenosis.

For the study, researchers compared immune cells taken from 24 patients before and after replacement.

They also designed a microfluidic organ-on-a-chip system to replicate the conditions inside the aortic valve, pre- and post-replacement.

This enabled the researchers to precisely assess how the cells responded to changes in shear stress.

The team focused on the largest circulating cells - white blood cells known as monocytes - as these experience the most shear stress when passing through the narrowed aortic valve.

Importantly, these cells are known to be central drivers of the pathology of aortic stenosis, but until now it has been unclear how changes in blood flow dynamics affect the immune response.

The researchers can now confirm that high shear stress activates multiple white blood cell functions.

A membrane protein known as "Piezo-1" was identified as the mechanoreceptor primarily responsible for activating these functions, making it a potentially druggable target.

The research also revealed for the first time that replacing the aortic valve has an anti-inflammatory effect, expanding the known therapeutic benefits of the procedure.

Peter said monocytes were also known to play a role in atherosclerosis, where blood flow is obstructed due to a build-up of cholesterol plaque in the artery wall.

"A drug that targeted Piezo-1 could potentially be applied to slowing the progression of aortic valve stenosis as well as treating atherosclerosis," he said.

Experimental tech: organ-on-a-chip

Organ-on-a-chip technology is based on microfluidic chips. These are transparent devices the size of postage stamps that contain an array of miniature channels, valves and pumps to replicate the biophysical and biochemical properties of a human organ.

For this study, researchers designed a device to mimic the shear stress conditions that immune cells experience while passing through the stenotic aortic valve - effectively creating aortic-stenosis-on-a-chip.
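
The study's actual chip geometry and flow settings aren't given here, but the physics behind such a device is textbook fluid mechanics: in a shallow rectangular channel the wall shear stress is approximately τ = 6μQ/(wh²). The short sketch below applies that approximation to purely hypothetical channel dimensions and flow rates, to give a feel for the numbers involved.

```python
# Illustrative estimate of wall shear stress in a shallow rectangular
# microfluidic channel, using the standard approximation for w >> h:
#     tau = 6 * mu * Q / (w * h**2)
# All dimensions and flow rates below are hypothetical examples, not the
# parameters of the chip used in the study.

MU_BLOOD = 3.5e-3  # approximate dynamic viscosity of whole blood, Pa*s

def wall_shear_stress(q_ul_per_min: float, w_um: float, h_um: float) -> float:
    """Return wall shear stress in Pa for flow rate Q through a w x h channel."""
    q = q_ul_per_min * 1e-9 / 60.0  # uL/min -> m^3/s
    w = w_um * 1e-6                 # um -> m
    h = h_um * 1e-6
    return 6.0 * MU_BLOOD * q / (w * h * h)

for q in (10.0, 50.0, 200.0):  # hypothetical flow rates in uL/min
    tau = wall_shear_stress(q, w_um=500.0, h_um=100.0)
    # 1 Pa = 10 dyn/cm^2, the unit most often quoted in mechanobiology
    print(f"Q = {q:6.1f} uL/min -> tau = {tau:5.2f} Pa ({tau * 10:5.1f} dyn/cm^2)")
```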

Fabricated at RMIT's state-of-the-art Micro Nano Research Facility, the technology was designed and delivered by the multi-disciplinary Mechanobiology and Microfluidics research group.

The group brings together biomedical engineers from the School of Engineering and mechano-biologists and immunologists in the School of Health and Biomedical Sciences.

"With this technology, we can meticulously mimic both healthy and diseased organs in the body, at very low cost and in a highly controlled experimental environment," Baratchi said.

"We can build models to simulate different flow situations and identify drug targets, which we hope in future may reduce or even replace the need for animal models."

Credit: RMIT University

Brigham investigators develop sterilizable, alternative N95 mask

Boston, MA -- With N95 masks in short supply, a team of bioengineers and clinical experts from Brigham and Women's Hospital and the Massachusetts Institute of Technology (MIT) has been developing a new, sustainable solution for health care workers to provide protection during the pandemic. Made from sterilizable materials and known as the Injection Molded Autoclavable, Scalable, Conformable (iMASC) system, the team's N95 mask alternative is still in its prototyping stage. But early results from modeling and a feasibility study for fit testing suggest that the iMASC system could fit faces of different sizes and shapes and be sterilized for reuse. Preliminary findings are published in the journal BMJ Open.

"Like many of our colleagues, when we heard about shortages in personal protective equipment, we wanted to help. We thought that an approach that could be helpful would be to develop a mask system that could be readily sterilized in many different ways and reused," said corresponding author Giovanni Traverso, MB, BChir, PhD, a gastroenterologist and biomedical engineer in the Division of Gastroenterology at the Brigham and Department of Mechanical Engineering at MIT.

Traverso and his colleagues, including co-lead authors James Byrne, MD, PhD, and Adam Wentworth, MS, worked closely with the Massachusetts General Brigham COVID Center for Innovation on their project. The authors selected Dow Corning QP1-250 liquid silicone rubber (LSR) for their mask material. Silicone rubber can withstand heat of up to 572 degrees Fahrenheit and is used in a wide variety of products, including silicone baking sheets, undergarments, medical implants and medical devices such as respiratory masks used to deliver anesthesia. The team created the masks using injection molding -- a common manufacturing technique in which a liquid material is fed into a mold cavity to give it shape. Elastic straps secure the mask in place and two replaceable filters keep out solid particles.

"From the beginning, we were thinking about scalability. We selected materials recognized to be sterilizable and comfortable and a manufacturing process designed to be scaled," said Byrne, a resident in the Department of Radiation Oncology at the Brigham and a postdoctoral fellow in the Traverso lab.

The team tested various sterilization techniques on the masks, including autoclaving, soaking in a bleach solution and soaking in isopropanol. While 10 autoclave cycles made the masks slightly stiffer, there were no large differences in the sterilized masks compared to the masks before sterilization.

"We wanted to create a mask that could be easily sterilized and reused for several reasons. Not only is this important because of disruptions to the supply chain, but also disposable masks, gloves and other PPE can cause a tremendous amount of litter," said Wentworth, a research engineer in the Traverso lab.

Using 3D modeling, the team evaluated how the mask might fit on different wearers and how much force would be required to keep the mask secure on a range of face shapes and sizes. In addition, the team recruited health care workers from the Brigham in a small fit testing study. Of the 20 participants who performed fit testing, 100 percent completed the process successfully. When asked about their preferences, participants responded that:

60 percent would be willing to wear the iMASC system instead of a surgical mask while 20 percent had no preference;

25 percent would prefer the iMASC system instead of an N95 mask while 60 percent indicated no preference.

The authors acknowledge that their proof-of-concept study has several limitations. Fit testing and surveys were conducted among only a small number of people at a single institution. Modifications to the filter system and elastic straps would likely improve the fit and robustness of the mask. And large-scale production will require greater quality control of filter components.

Based on their initial study, the team has further refined the iMASC, and the authors have recently completed a multi-institutional trial of the new system. They continue to work with various partners from across Mass General Brigham to test the system and are considering strategies to support scaling up and deploying the iMASC.

Credit: Brigham and Women's Hospital

Bespoke catalysts for power-to-X

Image: Test setup including high-pressure cell for the Fischer-Tropsch measurement campaign at the CAT-ACT beamline of the KIT synchrotron. (Photo: Tiziana Carambia)

Suitable catalysts are of great importance for efficient power-to-X applications - but the molecular processes occurring during their use are not yet fully understood. Using X-rays from a synchrotron particle accelerator, scientists of the Karlsruhe Institute of Technology (KIT) have now been able to observe for the first time a catalyst at work under industrial conditions during the Fischer-Tropsch reaction, which facilitates the production of synthetic fuels. The results are intended to inform the development of bespoke power-to-X catalysts. The team has published its findings in the journal Reaction Chemistry & Engineering. (DOI: 10.1039/c9re00493a)

On the way to a CO2-neutral society, power-to-X (P2X) processes, i.e. processes that convert renewable energy into chemical energy carriers, help interlink different sectors. For example, synthetic fuels can be produced from wind or solar power, enabling climate-friendly mobility and goods transport without additional greenhouse gas emissions. The Fischer-Tropsch synthesis (FTS) needed for this, which yields long-chain hydrocarbons for the production of petrol or diesel from carbon monoxide and hydrogen, is an established process in the chemical industry. However, even though more than one hundred years have passed since the discovery of this technology, the processes involved are still not fully understood scientifically: "This applies in particular to the structural changes that the catalysts required for the process undergo under industrial conditions," says Professor Jan-Dierk Grunwaldt from the Institute for Chemical Technology and Polymer Chemistry (ITCP) of KIT. "During the reaction, undesirable by-products can form, or disruptive structural changes can occur in the catalyst. So far, it has not been sufficiently explained how exactly this happens during the reaction and what the effects on the overall process are."

In a transdisciplinary project carried out in cooperation with P2X experts from the Institute for Micro Process Engineering (IMVT) and the Institute of Catalysis Research and Technology (IKFT) of KIT, the team has now achieved a breakthrough in understanding the FTS at the atomic level. "For the analysis, we use methods of synchrotron research, i.e. X-ray absorption spectroscopy and X-ray diffraction," explains Marc-André Serrer (IKFT), one of the authors of the study. "This was the first time that we were able to watch, so to speak, an FTS catalyst at work at the atomic level under real process conditions." While catalytic reactions had been studied before at a synchrotron - a special particle accelerator for generating particularly intense X-ray radiation - reactions that run over long periods and at high temperatures and pressures, as in real operation at a P2X facility, had so far presented an obstacle. For the experiment at KIT, a novel high-pressure infrastructure was added to the CAT-ACT beamline (CATalysis and ACTinide beamline) dedicated to catalyst studies at the KIT synchrotron. With this infrastructure - built as part of the German Federal Government's Kopernikus projects for the energy transition - it was possible to monitor a commercial cobalt-nickel catalyst operando, at 250 °C and 30 bar, for more than 300 hours of FTS. It was also the first time that such an experiment produced a quantity of hydrocarbons sufficient for subsequent analysis.

Catalyst development at the computer

The experiment allowed the scientists to identify hydrocarbon deposits that hinder the diffusion of the reactive gases towards the active catalyst particles. "In the next step, these insights can be used to protect the catalyst specifically against these deactivation mechanisms," says Grunwaldt. "This is done, for example, by modifying the catalyst with promoters, i.e. substances that improve the properties of the catalyst." In the future, the novel atomic understanding of catalytic reactions will contribute to computer simulations for a fast, resource-saving and cost-effective development of bespoke catalysts for P2X processes.

Credit: Karlsruher Institut für Technologie (KIT)

Reducing radioactive waste in processes to dismantle nuclear facilities

Image: Researcher at the UPV/EHU's Department of Nuclear Engineering and Fluid Mechanics. (Credit: UPV/EHU)

Recent years have seen a move into a phase of decommissioning and dismantling nuclear power stations and facilities, above all in Europe. By 2015, 156 reactors at nuclear facilities across the world had been shut down or were being decommissioned, and by 2050 over half of the world's current nuclear capacity of 400 GW is scheduled to be decommissioned and dismantled. "In Europe this will result in an increase in radioactive waste while current storage facilities have limited capacity. Optimizing this management is crucial," said UPV/EHU professor Margarita Herranz.

The European H2020 INSIDER project - with funding of nearly five million euros over four years - is tackling the specification of the best strategy for minimizing the radioactive waste produced during the dismantling of nuclear facilities. It focuses on the characterization strategy and on improving the methodology, above all in constrained environments, and works to propose new and better solutions for dismantling nuclear and radioactive facilities, including power stations that generate electricity, and for environmental remediation, taking post-accident situations into consideration as well.

In situ measurements in constrained environments

"The dismantling of facilities of this type is a very costly process, the waste takes up a huge amount of space and, what is more, people do not like having repositories of this type on their doorstep. And if we also talk about dismantling many nuclear facilities, it is crucial to specify what has to be regarded as radioactive waste inside a nuclear power station and what does not; this is because the cost of managing this waste increases significantly in terms of its level of activity, and the dismantling of a nuclear power station may result in the extracting of tonnes upon tonnes of waste," explained the researcher in the UPV/EHU's Department of Nuclear Engineering and Fluid Mechanics. Although the dismantling conducted so far has exhaustively complied with the regulations in force, "a considerable part of what has been regarded as nuclear and radioactive waste does not in fact fit into that category", she said. "Erring too much on the side of caution has occurred in this respect."

Margarita Herranz, who leads the working group responsible for organising and carrying out the in situ measurements and conducting the subsequent analysis of the results, said that "it is essential to optimize the in situ measurements of radioactivity in walls, partition walls, machinery, metal shields, etc., owing to the impossibility of moving them in their entirety to the lab". These are difficult measurements, she noted, "because you have to see what equipment has been adapted for this purpose and obtain good results under the conditions existing in each environment: radiation, temperature, pressure, humidity, etc.". In this context, "we have specified the constrained environments from the standpoint of in situ measurements in nuclear and radioactive facilities, how these constraints affect the type of equipment that is going to be used, and how they may end up affecting the results or the assessment of the results obtained," she said. The group is also working to describe the different zones of a nuclear/radioactive facility and the problems that may arise in them, and to recommend the types of instruments to be used in each zone.

Herranz pointed out that this project "is contributing towards optimizing the dismantling processes and towards improving the public perception of these processes. In other words, to show that they are being monitored and that work is being done in this respect. A lot of technology has been placed at the service of this aim. Basically, it is a social aim". Within the framework of the European INSIDER project, many scientific articles are being published and are being used to compile an extensive methodological guide which can be accessed via the INSIDER website. The project is hoping to improve EU policy: "We hope this work will end up influencing the drawing up of international regulations," concluded the researcher.

Credit: University of the Basque Country

Physical activity of older people requires tailored monitoring

Image: Commercial activity monitors may underestimate the exertion level of older adults' activity. (Credit: University of Jyväskylä)

The ability to move about may deteriorate with ageing, a phenomenon that needs to be considered when assessing physical activity in older people. A study on active ageing at the University of Jyväskylä examined movement that exceeds the intensity of a person's preferred walking speed.

Improving physical performance requires exercising regularly beyond one's usual level of exertion. The body then adapts to the new level of exertion by improving performance. Many activity monitors on the market have been developed for young and middle-aged people who have higher physical performance than older adults. Therefore, activity monitors may underestimate the exertion level of older adults' activity.

In the study at the University of Jyväskylä, preferred walking speed was measured in a six-minute walking test. In addition, the participants wore an activity monitor while living their day-to-day life.

"By measuring their preferred walking speed we were able to assess the time that our participants exercised more strenuously than what is their usual exertion level and what is beyond their comfort zone," explains postdoctoral researcher Laura Karavirta from the Gerontology Research Center and Faculty of Sport and Health Sciences.

The participants in the study accumulated 62 minutes of activity, on average, beyond the intensity of their preferred walking speed. Interestingly, the amount of activity was similar among the 75-, 80- and 85-year-old participants.

"The new method enables us to investigate physical activity as individual behaviour, which is not influenced by fitness level," Karavirta explains. "A physically active lifestyle is about challenging oneself according to one's own abilities. Light intensity movement is also important, but at least moderate exertion is required for improving physical performance."

The prevailing recommendation for all adults is a minimum of 150 minutes of at least moderate-intensity physical activity per week. Moderate intensity is generally defined as exceeding three times the energy consumption at rest. How strenuous this intensity feels varies with a person's fitness level.

"For most young adults, it feels easy and corresponds to slow walking but for some older adults it may be the hardest effort they can perform," Karavirta says.

The study is part of a larger AGNES study for 75-, 80-, and 85-year-old people living independently in Jyväskylä, which is funded by the Academy of Finland and European Research Council. Out of 1,021 participants, 444 took part in this study, where a motion sensor was attached to the thigh for a week and preferred walking speed was measured in the laboratory as the average speed in a self-paced six-minute walking test.
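
Conceptually, the analysis reduces to a simple tally: count the minutes in which a wearer's measured intensity exceeds the intensity of their own preferred walking speed. The sketch below shows that idea in miniature; the per-minute trace and the threshold values are invented, and the AGNES study's actual accelerometer processing is considerably more sophisticated.

```python
# Minimal sketch of the individualized-threshold idea: tally the minutes in
# which a wearable's per-minute intensity estimate exceeds the intensity of
# the wearer's own preferred walking speed. The trace and thresholds are
# invented; the AGNES study's actual accelerometer processing is far richer.

from typing import Sequence

def minutes_above_preferred(intensity_per_min: Sequence[float],
                            preferred_walk_intensity: float) -> int:
    """Count the minutes whose intensity exceeds the personal threshold."""
    return sum(1 for x in intensity_per_min if x > preferred_walk_intensity)

# One fake day of per-minute, MET-like intensity values.
day_trace = [0.8, 1.2, 3.5, 4.0, 2.9, 5.1, 1.0] * 20

# Two hypothetical people: a fitter walker has a higher personal threshold,
# so fewer of the same minutes count as "beyond preferred walking speed".
print(minutes_above_preferred(day_trace, preferred_walk_intensity=3.2))  # 60
print(minutes_above_preferred(day_trace, preferred_walk_intensity=4.5))  # 20
```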

Credit: University of Jyväskylä - Jyväskylän yliopisto

Milking algae mechanically: Progress to succeed petroleum derived chemicals

Image: Micrograph of algae. (Credit: Masaki Ihara, Ph.D., Interdisciplinary Cluster for Cutting Edge Research, Institute for Biomedical Sciences, Shinshu University)

Algae holds a lot of untapped potential for use in industry. So far algae has provided invaluable nutrition in the health food sector but has struggled to compete with petroleum-derived chemical production. Algae is preferable to petroleum from an environmental standpoint, but the costs of culturing, collecting, extracting and refining add up, making algal products too expensive for practical use. Production efficiency needs to improve to bring down the cost of algae-derived products and make them a viable alternative to petroleum-derived products.

A research team led by Alice Uchida and Masaki Ihara of Shinshu University succeeded in developing a method of producing compounds from microalgae that solves three issues: cultivation, collection/recovery of compounds, and extraction/purification of products. First, it was essential not to kill the algal cells during extraction; because the algae are preserved, there is no need to repeatedly cultivate and multiply them. Second, the algae they chose naturally aggregate, which makes collection easy. Third, the compounds targeted for harvest - polysaccharides (carbohydrates) and phycobiliproteins - are released outside the cells and bound to the cell surface, so no solvent is needed for extraction or purification, dramatically simplifying processing and cutting its cost. This non-destructive, continuous milking system is a practical and effective method of algae-derived chemical production.

At the beginning of the study, the researchers struggled to find a type of algae that could withstand mechanical shearing; they were not sure such an alga existed. After an extensive search, however, they found the filamentous cyanobacterium Tolypothrix and were able to cultivate it continuously for two years with little cell damage, despite mechanical shearing of the compounds bound to the cell surface. They grew the algae in non-sterile agricultural water and ran 87-day milking cycles that yielded 90 to 140 mg/L of extracellular carbohydrates every three weeks. Phycobiliproteins, meanwhile, are currently in demand as food additives and in cosmetic applications.

The Ihara lab hopes to enable petroleum-based products to be replaced by algae-derived products that put less strain on the environment. For that to happen, algae production needs to happen on a much, much larger scale. Ihara continues to look for tough algae that can survive in a variety of environments, and hopes to collaborate with researchers from a variety of fields - fermentation engineering, chemical engineering, polymer chemistry (specifically algal biomass conversion technology), and environmental and forest conservation studies - in order to study the effects of large-scale algae culture on the environment.

The realization of a post-petroleum society would cause the landscape to be altered, similar to how rice cultivation changed the landscape of Japan through the introduction of rice paddy fields. Although the researchers are optimistic about the future potential of algae, they proceed with caution to consider all the potential effects of change.

Credit: Shinshu University

Enhancing chemotherapy by RNA interference - BIO Integration

Announcing a new article publication for BIO Integration journal. In this review article the authors Shuwen Cao, Chunhao Lin, Shunung Liang, Chee Hwee Tan, Xiaoding Xu and Phei Er Saw from Sun Yat-sen University, Guangzhou, China, and Guangzhou University of Chinese Medicine, Guangzhou, China, consider enhancing chemotherapy by RNA interference.

Small interfering RNA (siRNA) has shown tremendous potential for treating human diseases over the past decades. siRNA can selectively silence a pathological pathway through the targeting and degradation of a specific mRNA, significantly reducing the off-target side effects of anticancer drugs. However, the poor pharmacokinetics of RNA have significantly restricted the clinical use of RNAi technology.

In this review, the authors examine in depth the siRNA therapeutics currently in preclinical and clinical trials; the multiple challenges faced in siRNA therapy; the feasibility of combining siRNA with anticancer drugs, whether co-loaded in nanoparticles or modified into parent drugs; and sequential therapy, in which siRNA treatment precedes treatment with drug-loaded nanoparticles. The review focuses on the combinatorial activation of apoptosis by different pathways, namely Bcl-2, survivin, and P-gp. Taken together, the review serves to establish effective and efficient combination therapy of siRNA and drugs as a new strategy.

BIO Integration is a fully open access journal, allowing for the rapid dissemination of multidisciplinary views driving the progress of modern medicine.
Article reference: Shuwen Cao, Chunhao Lin, Shunung Liang, Chee Hwee Tan, Xiaoding Xu and Phei Er Saw, Enhancing Chemotherapy by RNA Interference. BIO Integration, 2020, https://doi.org/10.15212/bioi-2020-0003

As part of its mandate to help bring interesting work and knowledge from around the world to a wider audience, BIOI will actively support authors through open access publishing and through waiving author fees in its first years. Also, publication support for authors whose first language is not English will be offered in areas such as manuscript development, English language editing and artwork assistance.

Credit: Compuscript Ltd

Graphene: It is all about the toppings

Graphene consists of a single layer of carbon atoms. Exceptional electronic, thermal, mechanical and optical properties have made graphene one of the most studied materials at the moment. For many applications in electronics and energy technology, however, graphene must be combined with other materials: Since graphene is so thin, its properties drastically change when other materials are brought into direct contact with it.

However, combining graphene with other materials at the molecular level is difficult: the way graphene interacts with another material depends not only on which material you choose, but also on how the two are brought into contact. Rather than sticking a finished material layer onto the graphene, the appropriate atoms are therefore brought into contact with the graphene in such a way that they "grow" on it in the desired crystal structure.

Until now, the mechanisms by which such materials "grow" on graphene have often remained unclear. A new joint study by research teams from TU Wien and the University of Vienna now observes for the first time how indium oxide grows on graphene. The combination of indium oxide with graphene is important, for example, for displays and sensors. The results have been presented in the scientific journal "Advanced Functional Materials".

Graphene pizza

"As with a pizza, graphene technology is not only dependent on the graphene pizza base but also on its toppings," explains Bernhard C. Bayer from the Institute of Materials Chemistry at the TU Wien, who led the study. "How these toppings are applied to the graphene is, however, crucial."

In most cases, atoms in the gaseous state are condensed on the graphene. In the case of indium oxide, these are indium and oxygen. "But there are many parameters such as background pressure, temperature or the speed at which these atoms are directed at the graphene that influence the result drastically," says Bernhard Bayer. "It is therefore important to develop a fundamental understanding of the chemical and physical processes that actually take place. But to do this, you have to watch the growth process as it proceeds."

This is exactly what the research team has now succeeded in doing: for the first time, the individual steps of growing indium oxide on graphene were observed in the electron microscope at atomic resolution.

Randomly distributed or perfectly aligned

"What was particularly interesting for us was the observation that, depending on the background pressure, the indium oxide crystallites either arrange themselves randomly on the graphene's crystal lattice or snap perfectly on one another like Lego bricks. This difference in arrangement can have a major impact on the application properties of the combined materials," says Kenan Elibol, first author of the study. The new findings will be useful to make the integration of graphene with other materials more predictable and controllable with respect to future application requirements.

Credit: Vienna University of Technology

Spectroscopy approach poised to improve treatment for serious heart arrhythmia

Video: Researchers showed that a new mapping approach based on near-infrared spectroscopy can distinguish various types of tissue in the heart. The movie shows the sampling sites as well as 3D renderings of the adipose contrast index (ACI) and lesion optical index (LOI1). (Credit: Christine P. Hendon, Columbia University)

WASHINGTON -- Researchers have demonstrated that a new mapping approach based on near-infrared spectroscopy can distinguish between fat and muscle tissue in the heart. This distinction is critical when using radiofrequency ablation to treat a serious heart rhythm problem known as ventricular tachycardia.

Radiofrequency ablation, the only treatment for ventricular tachycardia, involves identifying areas of the heart that are triggering abnormal signals and then heating them to the point that abnormal signals can no longer be transmitted. During the procedure, it's important, yet challenging, to identify precisely where to deliver energy while avoiding healthy tissue.

In The Optical Society (OSA) journal Biomedical Optics Express, research team leader Christine P. Hendon from Columbia University and a multidisciplinary group of colleagues show, for the first time, that an ablation catheter incorporating near-infrared spectroscopy mapping can successfully distinguish various tissue types in hearts donated from patients with cardiovascular disease.

"Ventricular tachycardia is the single largest cause of sudden death in the U.S., with an estimated 300,000 deaths per year occurring from the condition," said Hendon. "We hope that our technology can be translated to the clinic to increase the efficacy of radiofrequency ablation therapy and reduce related complications for ventricular tachycardia patients."

An optical option

Today, most clinical heart mapping systems are based on functional measurements such as voltage. "An optical measurement that provides information about the underlying tissue composition has the potential to be used with standard functional methods to improve ablation success rates," said Hendon.

The researchers used near-infrared spectroscopy, which works by shining light with a broad range of wavelengths onto the tissue and then detecting the light that is reflected back. This reflectance spectrum provides information about tissue composition based on its absorption and scattering properties.

"By using near-infrared wavelengths in addition to visible wavelengths, we can probe deeper into the tissue," said Hendon. "The technique lets us distinguish various types of tissue within human hearts because fat, muscle and ablation lesions all have different scattering and absorption wavelength dependent properties."

The approach could not only be used to guide ablation procedures and evaluate how well they worked, but might also provide information that could be used to develop new computational models that would help advance the understanding of mechanisms involved in arrhythmia.

"Once an abnormal area has been identified and heated to form an ablation lesion, it is important for the operator to know if that lesion was placed successfully and had the desired effect," said Hendon. "Direct measurement of tissue characteristics affords the possibility of improved ability both to find abnormal tissue and to determine how well it has been treated."

Inside the heart

Using near-infrared spectroscopy during radiofrequency ablation required the researchers to develop new ablation catheters that incorporated optical fibers for emitting and detecting light, as well as a custom tip for tracking the instrument. They also developed new signal and data processing techniques, a workflow for rendering anatomical tissue maps and a catheter tracking system to enable spatial mapping of the tissue.

Using the new catheter, the researchers tracked the position of the instrument as it moved along the heart surface. At each location they recorded reflectance spectra and used these to compute an optical index for both fat and lesion tissue. The experiments were performed on donor hearts from deceased people with cardiovascular disease to replicate what would likely be encountered in the clinic.
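
The paper's ACI and LOI definitions aren't reproduced here, but a band-ratio index of this general kind is easy to sketch: compare reflectance in a band where lipid absorbs strongly (near 930 nm) against a reference band. In the illustrative code below, the band limits, the index formula and the synthetic spectrum are all assumptions, not the published method.

```python
import numpy as np

# Illustrative band-ratio contrast index from an NIRS reflectance spectrum.
# Lipid has an absorption feature near 930 nm, so lower reflectance there,
# relative to a reference band, suggests fattier tissue. The band limits,
# index formula, and synthetic spectrum are assumptions for illustration,
# not the ACI/LOI definitions of the Biomedical Optics Express paper.

def band_mean(wavelengths_nm, reflectance, lo, hi):
    """Mean reflectance over the band [lo, hi] nanometres."""
    mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    return float(reflectance[mask].mean())

def adipose_contrast(wavelengths_nm, reflectance):
    """Ratio-style contrast: reference band vs. lipid absorption band."""
    ref = band_mean(wavelengths_nm, reflectance, 850, 890)    # reference band
    lipid = band_mean(wavelengths_nm, reflectance, 915, 945)  # lipid feature
    return ref / lipid - 1.0  # > 0 suggests stronger lipid absorption

# Synthetic spectrum with a reflectance dip around 930 nm as stand-in data.
wl = np.linspace(800, 1000, 201)
spectrum = 0.6 - 0.1 * np.exp(-((wl - 930) / 12.0) ** 2)
print(f"contrast index: {adipose_contrast(wl, spectrum):.3f}")
```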

"So far we have extremely encouraging results," said Hendon. "Our work shows that optics can have a large and impactful role within the field of cardiac electrophysiology."

The researchers are now working on a new catheter prototype that would more fully integrate the mapping processes. They also plan to demonstrate the method in large animals to test how well it works with heart muscles moving and blood circulating through the heart.

Credit: Optica

Researchers work to better measure delirium severity in older patients

In a study published in the journal Dementia and Geriatric Cognitive Disorders, researchers reported on their effort to improve and validate tools used to assess the severity of a condition called delirium, an acute confusional state often experienced by older hospitalized patients. The aim was to more accurately define methods for detecting and measuring delirium symptom severity, which could in turn lead to improved prevention and treatment for patients at risk. Sharon K. Inouye, M.D., M.P.H., Director of the Aging Brain Center at the Hinda and Arthur Marcus Institute for Aging Research at Hebrew SeniorLife, was principal investigator. Lead author was Sarinnapha M. Vasunilashorn, Ph.D., Assistant Professor of Medicine at Harvard Medical School (HMS) and the Division of General Medicine at Beth Israel Deaconess Medical Center (BIDMC), and member of the Aging Brain Center Working Group at the Marcus Institute.

Delirium is a clinical syndrome characterized by acute decline in cognition, which can present as inattention, disorientation, lethargy or agitation, and perceptual disturbance. Delirium is common among older hospitalized patients, and can lead to poor outcomes, including prolonged hospital stays, deep psychological stress for patients and their families, functional decline, and in worst cases, death. With in-hospital mortality rates for patients with delirium of 25-33 percent and annual health care costs in excess of $182 billion in the U.S. alone, delirium has garnered increasing attention as a worldwide public health and patient safety priority.

In large part because of Dr. Inouye's pioneering research on delirium, the condition has been shown to be preventable - or at minimum its severity mitigated - with proper patient assessment and effective protocols in place to detect and treat symptoms. Although several delirium severity assessment tools currently exist, most have been developed without the use of advanced measurement methods and have not been rigorously validated.

Accurately identifying the severity of delirium a patient experiences is critical to developing effective treatment. For any medical disorder, severity is a complex topic and may mean different things to different stakeholders. From a clinical perspective, severity may reflect the likelihood of an adverse outcome or the urgency for symptom treatment. For patients and their families, severity may impact the level of distress they experience or impair patient function and recovery.

Researchers in this study performed a literature review and used an expert panel process and advanced data analytic techniques to identify a set of items for use in developing a new delirium severity instrument. The process revealed several characteristics of an ideal instrument: it should address a broad spectrum of delirium symptoms, be demonstrably reliable, yield a severity rating tied to diagnostic criteria, and be quick and easy for minimally trained raters to administer.

Using this information, the researchers developed a 17-item set of criteria that they, along with the panel of experts, agreed captures the severity of delirium. This study indicates that high-quality delirium severity instruments should ultimately have immediate relevant application to clinical care and quality improvement efforts.

"Moving beyond consideration of delirium as present or absent, delirium severity represents an important outcome for evaluating preventive and treatment interventions, and tracking the course of patients," said Dr. Inouye, who also holds appointments as Professor of Medicine at HMS and BIDMC.

Dr. Vasunilashorn, who is also Assistant Professor in the Department of Epidemiology at the Harvard T. H. Chan School of Public Health, added, "As a result of this study, we have more fully conceptualized delirium severity and have identified characteristics of an ideal delirium severity instrument."

Credit: Hebrew SeniorLife Hinda and Arthur Marcus Institute for Aging Research

Rock 'n' control

Image: Artist's impression of the phase transition of indium atoms on a silicon crystal, controlled by light pulses. (Credit: Dr Murat Sivis)

The goal of "Femtochemistry" is to film and control chemical reactions with short flashes of light. Using consecutive laser pulses, atomic bonds can be excited precisely and broken as desired. So far, this has been demonstrated for selected molecules. Researchers at the University of Göttingen and the Max Planck Institute for Biophysical Chemistry have now succeeded in transferring this principle to a solid, controlling its crystal structure on the surface. The results have been published in the journal Nature.

The team, led by Jan Gerrit Horstmann and Professor Claus Ropers, evaporated an extremely thin layer of indium onto a silicon crystal and then cooled the crystal down to -220 degrees Celsius. While the indium atoms form conductive metal chains on the surface at room temperature, at such low temperatures they spontaneously rearrange themselves into electrically insulating hexagons. This transition between two phases - the metallic and the insulating - can be switched by laser pulses. In their experiments, the researchers illuminated the cold surface with two short laser pulses and immediately afterwards observed the arrangement of the indium atoms using an electron beam. They found that the rhythm of the laser pulses has a considerable influence on how efficiently the surface can be switched to the metallic state.

This effect can be explained by oscillations of the atoms on the surface, as first author Jan Gerrit Horstmann explains: "In order to get from one state to the other, the atoms have to move in different directions and in doing so overcome a sort of hill, similar to a roller coaster ride. A single laser pulse is not enough for this, however, and the atoms merely swing back and forth. But like a rocking motion, a second pulse at the right time can give just enough energy to the system to make the transition possible." In their experiments the physicists observed several oscillations of the atoms, which influence the conversion in very different ways.
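
The rocking analogy can be made concrete with a toy calculation: a weakly damped harmonic oscillator given two velocity kicks. If the second kick arrives in phase with the motion (after a full period), it pumps energy in; if it arrives out of phase (after half a period), it largely cancels the first. The sketch below is exactly that generic toy model, in arbitrary units - it is not a simulation of the indium/silicon surface.

```python
import numpy as np

# Toy model of double-pulse excitation: a weakly damped harmonic oscillator
# receives two velocity kicks separated by a delay. A kick arriving in phase
# with the motion pumps energy in; an out-of-phase kick cancels it. Generic
# illustration only - not a simulation of the indium/silicon surface.

OMEGA = 2 * np.pi  # angular frequency; the period is T = 1 (arbitrary units)
GAMMA = 0.05       # weak damping
KICK = 1.0         # velocity change per pulse
DT = 1e-4          # integration time step

def energy_after_two_kicks(delay: float) -> float:
    x, v = 0.0, KICK  # first kick at t = 0, starting from rest
    t = 0.0
    while t < delay:  # semi-implicit Euler integration up to the second kick
        v += (-OMEGA**2 * x - 2 * GAMMA * v) * DT
        x += v * DT
        t += DT
    v += KICK         # second kick
    return 0.5 * v**2 + 0.5 * OMEGA**2 * x**2  # energy per unit mass

for delay in (0.5, 0.75, 1.0, 1.25):  # delays in units of the period
    print(f"delay = {delay:4.2f} T -> energy = {energy_after_two_kicks(delay):6.3f}")
```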

Their findings not only contribute to the fundamental understanding of rapid structural changes, but also open up new perspectives for surface physics. "Our results show new strategies to control the conversion of light energy at the atomic scale," says Ropers from the Faculty of Physics at the University of Göttingen, who is also a Director at the Max Planck Institute for Biophysical Chemistry. "The targeted control of the movements of atoms in solids using laser pulse sequences could also make it possible to create previously unobtainable structures with completely new physical and chemical properties."

Credit: University of Göttingen

The best (and worst) materials for masks

It's intuitive and scientifically shown that wearing a face covering can help reduce the spread of the novel coronavirus that causes COVID-19. But not all masks are created equal, according to new University of Arizona-led research.

Amanda Wilson, an environmental health sciences doctoral candidate in the Department of Community, Environment and Policy in the Mel and Enid Zuckerman College of Public Health, is lead author on a recent study published in the Journal of Hospital Infection. The study assessed the ability of a variety of nontraditional mask materials to protect a person from infection after 30 seconds and after 20 minutes of exposure in a highly contaminated environment.

When the researchers compared wearing masks with wearing no protection, they found that masks reduced infection risk by 24-94% for a 20-minute exposure and by 44-99% for a 30-second exposure, depending on the material. Risk reduction decreased as exposure duration increased, they found.

"N99 masks, which are even more efficient at filtering airborne particles than N95 masks, are obviously one of the best options for blocking the virus, as they can reduce average risk by 94-99% for 20-minute and 30-second exposures, but they can be hard to come by, and there are ethical considerations such as leaving those available for medical professionals," Wilson said.

The next best options, according to the research, are N95 and surgical masks and, perhaps surprisingly, vacuum cleaner filters, which can be inserted into filter pockets in cloth masks. The vacuum filters reduced infection risk by 83% for a 30-second exposure and 58% for a 20-minute exposure. Of the other nontraditional materials evaluated by the researchers, tea towels, cotton-blend fabrics and antimicrobial pillowcases were the next best for protection.

Scarves, which reduced infection risk by 44% after 30 seconds and 24% after 20 minutes, and similarly effective cotton t-shirts are only slightly better than wearing no mask at all, they found.

"We knew that masks work, but we wanted to know how well and compare different materials' effects on health outcomes," said Wilson, who specializes in quantitative microbial risk assessment.

Wilson and her team collected data from various studies of mask efficacy and created a computer model to simulate infection risk, taking various factors into consideration.

"One big component of risk is how long you're exposed. We compared risk of infection at both 30 seconds and 20 minutes in a highly contaminated environment," she said.

Other conditions that impact risk of infection are the number of people around you and their distance from you, she said.

The size of virus-transporting droplets from sneezes, coughs or even speech is also a very important factor. Larger, heavier droplets carrying the virus drop out of the air faster than smaller, lighter ones. That's one reason distance helps reduce exposure.

"Aerosol size can also be affected by humidity," Wilson said. "If the air is drier, then aerosols become smaller faster. If humidity is higher, then aerosols will stay larger for a longer period of time, dropping out faster. That might sound good at first, but then those aerosols fall on surfaces, and that object becomes another potential exposure route."

The study also showed that the more time a person spends in an environment where the virus is present, the less effective a mask becomes.

"That doesn't mean take your mask off after 20 minutes," Wilson said, "but it does mean that a mask can't reduce your risk to zero. Don't go to a bar for four hours and think you're risk free because you're wearing a mask. Stay home as much as possible, wash your hands often, wear a mask when you're out and don't touch your face."

Masks protect the wearer and others in a number of different ways. Wilson said there are two "intuitive ways" that masks filter larger aerosols: mechanical interception and inertial impaction.

"The denser the fibers of a material, the better it is at filtering. That's why higher thread counts lead to higher efficacy. There's just more to block the virus," she said. "But some masks (such as those made from silk) also have electrostatic properties, which can attract smaller particles and keep them from passing through the mask as well."

The model developed by Wilson and her colleagues included parameters such as inhalation rate - the volume of air inhaled over time - and virus concentration in the air.

"We took a lot of research data, put it into a mathematical model and related those data points to each other," Wilson said. "For example, if we know people's inhalation rates vary by this much and know this much virus is in the air and these materials offer this much efficiency in terms of filtration, what does that mean for infection risk? We provide a range, in part, because everyone is different, such as in how much air we breathe over time."

Wilson also said it's important for a mask to have a good seal that pinches at the nose, and she noted that people shouldn't wear a mask beneath the nose or tuck it under the chin when not in use.

"Proper use of masks is so important," Wilson said. "Also, we were focusing on masks protecting the wearer, but they're most important to protect others around you if you're infected. If you put less virus out into the air, you're creating a less contaminated environment around you. As our model shows, the amount of infectious virus you're exposed to has a big impact on your infection risk and the potential for others' masks to protect them as well."

Credit: University of Arizona

New method for simulating yarn-cloth patterns to be unveiled at ACM SIGGRAPH

Image: A team will present a method for animating yarn-level cloth effects using a thin-shell solver at ACM SIGGRAPH 2020. (Credit: © Georg Sperl, Rahul Narain, and Chris Wojtan)

The simulation of woven and knitted fabrics is an ongoing problem in computer graphics. Simulating the way a fabric drapes or moves while being worn, while accurately modeling low-level effects such as the stiffness and stretch of individual yarns, is a complicated challenge that requires sophisticated computational modeling.

In this new work, a team of computer scientists from the Institute of Science and Technology (IST) Austria and the Indian Institute of Technology Delhi (IITD) has developed a method for animating yarn-level cloth effects, accurately capturing the physics of the material, including its stretching and bending response. The results of the team's homogenized computational modeling framework accurately mimic the appearance of knitted and woven yarn fabrics.

The team, comprising Georg Sperl and Chris Wojtan from IST Austria and Rahul Narain from IITD, is set to present the work at SIGGRAPH 2020. The conference, which takes place virtually this year starting 17 August, gathers a diverse network of professionals who approach computer graphics and interactive techniques from different perspectives. SIGGRAPH continues to serve as the industry's premier venue for showcasing forward-thinking ideas and research.

"Yarn-level cloth techniques produce beautifully detailed and realistic results, but it can become impractically slow to simulate full garments," Sperl, lead author of the research and Ph.D. student in the Wojtan lab at IST Austria, says. "Our method helps make such simulations more feasible by precomputing the mechanical properties of yarn patterns and fitting continuum materials that can be used in existing mesh-based simulators. This can greatly speed up the simulation of cloth while preserving the material's overall stretching and bending resistance and capturing characteristic phenomena of fabrics such as curling."

Picture the level of detail that makes up yarn cloth: the varying lengths and widths of yarn, the different appearances of spun thread, and the wide array of patterns created by knitting, crocheting, or weaving. The team's approach is able to capture all of these specific details and material properties of yarn-level cloth in a realistic and precise way.

They validated their method using stretching and draping tests that reproduced the fabric's behavior accurately in a virtual setting. Visual examples included animations of a variety of detailed yarn-cloth patterns and techniques, e.g., slip-stitch honeycomb and satin weave, showcasing the folds, curling, and wrinkles of the cloth while draped over an object or worn as a sweater or scarf.

One of the important challenges the team addressed was the ability to simulate yarn-level cloth faster and at scale. Their method captures the complex physics emerging from yarn patterns at a fraction of the cost of direct yarn-level cloth simulation.

"Our technique allows us to capture the resistance of multiple deformations at the same time," the authors say. "We additionally developed a procedure to fit a material model capable of capturing these effects from homogenized data. With this, we were able to automatically reproduce characteristic behaviors of different fabrics, including subtle phenomena like the interaction between stretching and curling in knitted patterns, which have not been captured in previous cloth models."

In the future, this method could expand to animate other complicated multi-physics materials like layered quilts, layered elastic material, skin tissue, and deployable shells. The team's method opens up exciting avenues for future studies by applying it to homogenization of layered or composite materials, for estimating material properties of new materials constructed from simpler components, or for inverse design problems in the manufacturing of knitted cloth.

Credit: Association for Computing Machinery

Study reveals how bacteria build essential carbon-fixing machinery

Image: Illustration of a complex carboxysome structure and Rubisco enzymes. (Credit: Luning Liu)

Scientists from the University of Liverpool have revealed new insight into how cyanobacteria construct the organelles that are essential for their ability to photosynthesise. The research, which was carried out in collaboration with the University of Science and Technology of China, has been published in PNAS.

Cyanobacteria are an ancient group of photosynthetic microbes that occur in the ocean and most inland waters. They have evolved a protein organelle, called the carboxysome, to convert environmental carbon dioxide into sugar in an efficient way.

A key step of this conversion is catalysed by the carbon-fixing enzyme Rubisco. However, Rubisco is poorly 'designed', because it is inefficient at fixing CO2 when a high level of O2 is present. Cyanobacterial carboxysomes sequester and concentrate Rubisco enzymes within a separate compartment and provide a low-O2 environment for Rubisco, improving carbon fixation.

"It is a mystery how cyanobacterial cells generate the complex carboxysome structure and pack Rubisco enzymes in the organelle to have biological functions," said Luning Liu, a Professor at the University of Liverpool, and a senior author on this paper. "My research group has interest in addressing the key questions in this biological process."

The formation of the Rubisco complex involves a few 'helping' proteins called chaperones, including a protein named Rubisco assembly factor 1 (Raf1). To understand the exact roles of Raf1, the team used state-of-the-art microscopies, such as confocal fluorescence microscopy, electron microscopy, and cryo-electron microscopy, combined with molecular biology and biochemical techniques, to study how Raf1 interacts with Rubisco subunits to promote the assembly of Rubisco, and how carboxysome formation is affected when cells do not produce Raf1.

The researchers proved that Raf1 is vital for building the Rubisco complex. Without Raf1, Rubisco complexes are assembled less efficiently and cannot be densely packed inside the carboxysomes. This could greatly affect the construction of carboxysomes and therefore the growth of cyanobacterial cells.

"This is the first time that we have determined the function of Rubisco assembly chaperones in the biosynthesis of carboxysomes in cyanobacterial cells," said Dr Fang Huang, a Leverhulme Trust Early Career Fellow, and the first author on this paper. "We are very excited about this finding. It also allowed us to propose a new working model of carboxysome biogenesis, which teach us in detail how Rubisco complexes are generated, how Raf1 drive Rubisco packing, and how the entire carboxysome structure is constructed."

Currently, there is a tremendous interest in transferring carboxysomes into crop plants to improve crop yields and food production. This study may provide important information required for producing intact and functional carbon-fixing machinery.

Credit: University of Liverpool

Spider silk made by photosynthetic bacteria

Image: Spiders produce amazingly strong and lightweight threads made from silk proteins, which can be used to manufacture useful materials. Researchers succeeded in producing the spider silk using photosynthetic bacteria. (Credit: RIKEN)

Spiders produce amazingly strong and lightweight threads called draglines, which are made from silk proteins. Although draglines could be used to manufacture a number of useful materials, getting enough of the protein is difficult because each tiny spider can produce only a small amount. In a new study published in Communications Biology, a research team led by Keiji Numata at the RIKEN Center for Sustainable Resource Science (CSRS) reported that they succeeded in producing spider dragline silk using photosynthetic bacteria. The work could open a new era in which photosynthetic bio-factories stably produce spider silk in bulk.

In addition to being tough and lightweight, silks derived from arthropod species are biodegradable and biocompatible. In particular, spider silk is ultra-lightweight and is as tough as steel. "Spider silk has the potential to be used in the manufacture of high-performance and durable materials such as tear-resistant clothing, automobile parts, and aerospace components," explains Choon Pin Foong, who conducted this study. "Its biocompatibility makes it safe for use in biomedical applications such as drug delivery systems, implant devices, and scaffolds for tissue engineering." Because only a trace amount can be obtained from one spider, and because breeding large numbers of spiders is difficult, attempts have been made to produce artificial spider silk in a variety of species.

The CSRS team focused on the marine photosynthetic bacterium Rhodovulum sulfidophilum. This bacterium is ideal for establishing a sustainable bio-factory because it grows in seawater, requires only carbon dioxide and nitrogen from the atmosphere, and uses solar energy, all of which are abundant and inexhaustible.

The researchers genetically engineered the bacterium to produce MaSp1, the main protein component of the dragline of the Nephila spider, which is thought to play an important role in the strength of spider silk. Optimizing the gene sequence they inserted into the bacterium's genome maximized the amount of silk protein that could be produced. They also found that a simple recipe - artificial seawater, bicarbonate salt, nitrogen gas, yeast extract, and irradiation with near-infrared light - allows R. sulfidophilum to grow well and produce the silk protein efficiently. Further observations confirmed that the surface and internal structures of the fibers produced in the bacteria were very similar to those produced naturally by spiders.
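
Gene-sequence optimization of the kind mentioned above typically starts with recoding the protein using codons the host prefers. A deliberately naive version is sketched below; the codon table is a made-up fragment, not the actual R. sulfidophilum usage table, and real optimization also balances GC content, mRNA secondary structure and repeat avoidance.

```python
# Naive codon optimization: back-translate a protein sequence using a single
# "preferred" codon per amino acid. The table is a made-up fragment, not the
# actual Rhodovulum sulfidophilum usage table, and real optimization also
# weighs GC content, mRNA secondary structure, and repeat avoidance.

PREFERRED_CODON = {  # hypothetical host-preferred codons
    "M": "ATG", "G": "GGC", "A": "GCG", "S": "AGC",
    "Q": "CAG", "Y": "TAT", "P": "CCG", "L": "CTG",
}

def naive_codon_optimize(protein: str) -> str:
    """Back-translate a protein using one preferred codon per residue."""
    try:
        return "".join(PREFERRED_CODON[aa] for aa in protein)
    except KeyError as err:
        raise ValueError(f"no codon listed for residue {err}") from err

# MaSp1-like sequences are rich in glycine/alanine motifs; this is a toy input.
print(naive_codon_optimize("MGGAGQGGYGGLGSQG"))
```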

"Our current study shows the initial proof of concept for producing spider silk in photosynthetic bacteria. We are now working to mass produce spider-silk dragline proteins at higher molecular weights in our photosynthetic system," Numata says. "The photosynthetic microbial cell factories, which produce bio-based and bio-degradable materials via a carbon neutral bioprocess, could help us in accomplishing some of the Sustainable Development Goals (SDGs) adopted by the United Nations such as Goal #12 'Responsible Production and Consumption, and Goal #13 'Climate Action'. Our results will help provide feasible solutions for energy, water and food crises, solid waste problems, and global warming."

Credit: RIKEN