Tech

New model predicts the peaks of the COVID-19 pandemic

image: Fits of the data for active cases available on 08 May 2020 for various severely affected countries around the world.

Image: 
Constantino Tsallis and Ugur Tirnakli, Frontiers

As of late May, COVID-19 has killed more than 325,000 people around the world. Even though the worst seems to be over for countries like China and South Korea, public health experts warn that cases and fatalities will continue to surge in many parts of the world. Understanding how the disease evolves can help these countries prepare for an expected uptick in cases.

This week in the journal Frontiers, researchers describe a single function that accurately reproduces all available data on active cases and deaths--and predicts forthcoming peaks. The tool uses q-statistics, a set of functions and probability distributions developed by Constantino Tsallis, a physicist and member of the Santa Fe Institute's external faculty. Tsallis worked on the new model together with Ugur Tirnakli, a physicist at Ege University in Turkey.

"The formula works in all the countries in which we have tested," says Tsallis.

Neither physicist ever set out to model a global pandemic. But Tsallis says that when he saw the shape of published graphs representing China's daily active cases, he recognized shapes he'd seen before--namely, in graphs he'd helped produce almost two decades ago to describe the behavior of the stock market.

"The shape was exactly the same," he says. For the financial data, the function described probabilities of stock exchanges; for COVID-19, it described daily the number of active cases--and fatalities--as a function of time.

Modeling financial data and tracking a global pandemic may seem unrelated, but Tsallis says they have one important thing in common. "They're both complex systems," he says, "and in complex systems, this happens all the time." Disparate systems from a variety of fields--biology, network theory, computer science, mathematics--often reveal patterns that follow the same basic shapes and evolution.

The financial graph appeared in a 2004 volume co-edited by Tsallis and the late Nobelist Murray Gell-Mann. Tsallis developed q-statistics, also known as "Tsallis statistics," in the late 1980s as a generalization of Boltzmann-Gibbs statistics to complex systems.

In the new paper, Tsallis and Tirnakli used data from China, where the active case rate is thought to have peaked, to set the main parameters for the formula. Then, they applied it to other countries including France, Brazil, and the United Kingdom, and found that it matched the evolution of the active cases and fatality rates over time.
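
The article does not reproduce the formula itself. As a rough illustration of the approach, the central object of q-statistics is the q-exponential, and a rise-and-fall curve built from it can be fit to case counts along the following lines; the specific functional form, parameter names, and starting values below are illustrative assumptions, not the authors' published expression.

    # Illustrative sketch only: a q-exponential-based rise-and-fall curve
    # fit to daily active-case counts. The exact form and parameter values
    # used by Tsallis and Tirnakli are not reproduced here.
    import numpy as np
    from scipy.optimize import curve_fit

    def q_exp(x, q):
        # Tsallis q-exponential; reduces to the ordinary exponential as q -> 1
        if abs(q - 1.0) < 1e-9:
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        safe = np.where(base > 0.0, base, 1.0)
        return np.where(base > 0.0, safe ** (1.0 / (1.0 - q)), 0.0)

    def active_cases(t, amp, alpha, beta, gamma, q, t0):
        # Power-law rise damped by a q-exponential tail, giving a single peak
        dt = np.clip(t - t0, 0.0, None)
        return amp * dt ** alpha * q_exp(-beta * dt ** gamma, q)

    # Hypothetical usage with a time series of daily active cases:
    # days, cases = np.loadtxt("active_cases.csv", delimiter=",", unpack=True)
    # popt, _ = curve_fit(active_cases, days, cases,
    #                     p0=[1e3, 1.0, 1e-3, 1.0, 1.3, 0.0], maxfev=20000)
    # peak_day = days[np.argmax(active_cases(days, *popt))]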

The model, says Tsallis, could be used to create useful tools like an app that updates in real-time with new available data, and can adjust its predictions accordingly. In addition, he thinks that it could be fine-tuned to fit future outbreaks as well.

"The functional form seems to be universal," he says, "Not just for this virus, but for the next one that might appear as well."

"Despite the promise and excitement of universality, other complex systems researchers emphasize the need for further statistical work. William Farr, an early statistician, first explored fitting bell-shaped curves to epidemics during a smallpox outbreak in the mid-1800s. The approach has since failed to bear out for many real-world epidemics, and stands as a cautionary tale for physicists who seek to apply their models to epidemiology.”

Credit: 
Santa Fe Institute

Study: Paper-thin gallium oxide transistor handles more than 8,000 volts

BUFFALO, N.Y. -- People love their electric cars. But not so much the bulky batteries and related power systems that take up precious cargo space.

Help could be on the way from a gallium oxide-based transistor under development at the University at Buffalo.

In a study published in the June edition of IEEE Electron Device Letters, electrical engineers describe how the tiny electronic switch can handle more than 8,000 volts, an impressive feat considering it's about as thin as a sheet of paper.

The transistor could lead to smaller and more efficient electronic systems that control and convert electric power -- a field of study known as power electronics -- in electric cars, locomotives and airplanes. In turn, this could help improve how far these vehicles can travel.

"To really push these technologies into the future, we need next-generation electronic components that can handle greater power loads without increasing the size of power electronics systems," says the study's lead author, Uttam Singisetti, who adds that the transistor could also benefit microgrid technologies and solid-state transformers.

Singisetti, PhD, associate professor of electrical engineering at the UB School of Engineering and Applied Sciences, and students in his lab have been studying the potential of gallium oxide, including previous work exploring transistors made from the material.

Perhaps the chief reason researchers are exploring gallium oxide's potential for power electronics is a property known as "bandgap."

Bandgap measures how much energy is required to jolt an electron into a conducting state. Systems made with wide-bandgap materials can be thinner, lighter and handle more power than systems made of materials with lower bandgaps.

Gallium oxide's bandgap is about 4.8 electron volts, which places it among an elite group of materials considered to have an ultrawide bandgap.

The bandgap of these materials exceeds that of silicon (1.1 electron volts), the most common material in power electronics, as well as potential replacements for silicon, including silicon carbide (about 3.4 electron volts) and gallium nitride (about 3.3 electron volts).

A key innovation in the new transistor revolves around passivation, which is a chemical process that involves coating the device to reduce the chemical reactivity of its surface. To accomplish this, Singisetti added a layer of SU-8, an epoxy-based polymer commonly used in microelectronics.

The results were impressive.

Tests conducted just weeks before the COVID-19 pandemic temporarily shuttered Singisetti's lab in March show the transistor can handle 8,032 volts before breaking down, which is more than similarly designed transistors made of silicon carbide or gallium nitride that are under development.

"The higher the breakdown voltage, the more power a device can handle," says Singisetti. "The passivation layer is a simple, efficient and cost-effective way to boost the performance of gallium oxide transistors."

Simulations suggest the transistor has a field strength of more than 10 million volts (or 10 megavolts) per centimeter. Field strength here describes the electric field the material can withstand before breaking down, and it ultimately helps determine the size and weight of power electronics systems.

"These simulated field strengths are impressive. However, they need to be verified by direct experimental measurements," Singisetti says.

Credit: 
University at Buffalo

When COVID-19 meets flu season

CHICAGO --- As if the COVID-19 pandemic isn't scary enough, the flu season is not far away. How severe will the flu season be as it converges with the COVID-19 outbreak? What can we do to prepare?

Dr. Benjamin Singer, a Northwestern Medicine pulmonologist who treats COVID-19 patients in the intensive care unit, outlines the best defense against influenza, which also may protect against coronavirus.

In an editorial that will be published May 29 in the journal Science Advances, Singer, an assistant professor of pulmonary and critical care and biochemistry and molecular genetics at Northwestern University Feinberg School of Medicine, examines the epidemiology and biology of SARS-CoV-2 and influenza to help inform preparation strategies for the upcoming flu season.

He outlines the following four factors that could determine the severity of the upcoming flu season:

1. Transmission: Social distancing policies designed to limit the spread of COVID-19 are also effective against the flu. If COVID-19 cases begin to spike in the fall of 2020, re-tightening social distancing measures could help mitigate early spread of the flu to flatten the curves for both viruses.

2. Vaccination: As we await vaccine trials for COVID-19, we should plan to increase rates of vaccination against the flu, particularly among older adults who are more susceptible to both the flu and COVID-19.

3. Co-infection: We need widespread availability of rapid diagnostics for COVID-19 and other respiratory pathogens because co-infection with another respiratory pathogen, including the flu, occurred in more than 20% of COVID-19-positive patients who presented with a respiratory viral syndrome early in the pandemic.

4. Disparities: The COVID-19 pandemic has highlighted unconscionable disparities among African Americans, Latinx and Native Americans, so we must galvanize public health efforts aimed at limiting viral spread, increasing vaccination rates, deploying rapid diagnostics and expanding other health care services for vulnerable populations, including communities of color, the poor and older adults.

The Centers for Disease Control and Prevention estimated that the 2019-2020 seasonal influenza epidemic resulted in tens of millions of cases and tens of thousands of deaths.

"Even in non-pandemic years, the flu and other causes of pneumonia represent the eighth-leading cause of death in the United States, and respiratory viruses are the most commonly identified pathogens among hospitalized patients with community-acquired pneumonia," Singer said.

Credit: 
Northwestern University

How well do Germans understand weather risks?

Although the current focus is on coronavirus, it is important not to forget a crisis that poses an even greater threat in the long term: climate change. As climate change unfolds, the number of extreme weather events is increasing worldwide. These events require effective responses not only on the part of the authorities, but also on the part of every individual. Only those who can gauge weather risks correctly are able to take the necessary precautions. But how savvy is the general population when it comes to weather risks? How well do we understand the uncertainty of weather forecasts? And how aware are we of climate change, which will further intensify weather risks in the future?

To answer these questions, researchers from the Max Planck Institute for Human Development and the Hans Ertel Centre for Weather Research surveyed 1,004 Germans aged between 14 and 93 years. The respondents answered 62 factual questions about weather conditions such as heat, UV radiation, thunderstorms, heavy rain, and ground frost and their impacts, as well as on forecast uncertainty and climate change in Germany to date.

Respondents had difficulties judging weather risks in several areas. For example, 44% of participants believed that ground frost, which may cause icy conditions on roads and pavements, is only possible at air temperatures of 0 degrees Celsius and below--a misconception that can be treacherous. In fact, the temperature just above ground level can drop below zero even when the air temperature reported in the weather forecast is above zero: Air temperature is typically measured two meters above the ground. What's more, 66% of respondents falsely believed that higher temperatures mean higher UV radiation levels. UV radiation is actually highest around midday, whereas temperatures tend to continue rising over the course of the day. And if a thunderstorm were approaching, many respondents would probably not take shelter in time: Only one fifth of respondents correctly estimated that a 30-second gap between a lightning flash and the sound of thunder means that a thunderstorm is about 10 kilometers away. More than a quarter of respondents thought it was about 30 kilometers away, thus severely underestimating how close the storm was.
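
For reference, the arithmetic behind the lightning estimate is straightforward: light from the flash arrives essentially instantly, while sound travels at roughly 343 meters per second in air, so a 30-second gap corresponds to a distance of about 343 m/s × 30 s ≈ 10,300 m, or roughly 10 kilometers (about 3 seconds of delay per kilometer).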

At the same time, there was uncertainty about how to interpret probabilistic forecasts. Only one fifth of respondents knew that a forecast predicting a 30% chance of rain in Berlin means that it will rain in Berlin on 30% of all days with that forecast. Many respondents mistakenly thought it meant that it will rain in 30% of the area or for 30% of the day. According to the study's authors, it is up to weather communicators to resolve this uncertainty. It is their responsibility to make clear and transparent what the probabilities refer to.

With regard to evidence for climate change in Germany since 1880, 70% of respondents were aware that the average temperature in Germany has risen. But 80% believed that storm intensity has increased, whereas in fact there is no evidence for any long-term change in Germany in this respect. "This perception could be influenced by recent extreme events and the broad media coverage of them," says lead author Nadine Fleischhut, researcher at the Max Planck Institute for Human Development and principal investigator of the WEXICOM project on the communication of weather warnings at the Hans Ertel Centre for Weather Research. Co-author Ralph Hertwig, Director at the Max Planck Institute for Human Development, adds: "If people don't properly understand weather risks in the here and now, it is unlikely that they will be able to grasp the impact that climate change will have in the future. Daily weather forecasts could be an opportunity for a literacy offensive, helping us all to become a little smarter every day in our understanding of weather, climate, and uncertainty."

The study's authors call for efforts to further improve the communication of extreme weather events and their impacts. Forecasts should not focus exclusively on the weather event itself, but also predict its impacts, such as traffic jams or economic damage to buildings. At the same time, the uncertainty of forecasts should be communicated more transparently. "Impact forecasts must be carefully designed and tested to avoid unintended consequences, such as overreaction or trivialization of risks," says co-author Stefan Herzog, head of the Boosting Decision Making research area in the Center for Adaptive Rationality at the Max Planck Institute for Human Development. The authors call on experts from meteorology, psychology, and journalism to cooperate in designing effective communication formats.

Credit: 
Max Planck Institute for Human Development

New streamlined assay can improve prenatal detection of alpha-thalassemia

image: Comparison of the applicability and performance of Sanger sequencing, RDB, and the new method for nondeletional α-thalassemia genotyping.

Image: 
The Journal of Molecular Diagnostics

Philadelphia, May 29, 2020 - In a report in The Journal of Molecular Diagnostics, published by Elsevier, researchers describe a rapid, accurate novel assay for nondeletional alpha-thalassemia genotyping based on one-step nested asymmetric PCR melting curve analysis, which may enhance prenatal diagnosis, newborn screening, and large-scale population screening.

Thalassemia is a group of inherited blood disorders that reduce the blood's ability to carry oxygen throughout the body. The severity can vary from benign to life threatening; therefore, it is important to identify as early as possible infants who may develop thalassemia-associated symptoms, as well as parents who are carriers. This requires the availability of practical and precise molecular diagnostic tools.

"The nondeletional alpha-thalassemia genotyping assay developed in this study has the advantages of one-step closed-tube operation, high-throughput, speed, and automation, which can meet the methodological needs of a control program for thalassemia in large-scale populations," explained Wanjun Zhou, PhD, of the Department of Medical Genetics, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.

Dr. Zhou noted that the strategy of one-step nested asymmetric PCR melting analysis overcomes the bottlenecks of high homology and GC-rich secondary structure that limited previous types of analyses.

Thalassemia affects up to five percent of the world's population. These disorders are characterized by low levels of hemoglobin, decreased red cell production, and anemia. Patients with thalassemia report fatigue, weakness, shortness of breath, dizziness, or headaches. One subtype, alpha-thalassemia, is caused by one or more mutations in two different genes (HBA1 and HBA2) associated with production of the alpha-globin subunits of hemoglobin. Every individual has two copies of these genes, so up to four genes can be affected; this can determine the severity of symptoms and carrier status. Though the most common type of genetic mutation associated with alpha-thalassemia is deletional (removal of a section of the gene sequence), the assay in this case focuses on point, or nondeletional, mutations.

The researchers tested the ability of the new assay to detect five nondeletional alpha-thalassemia mutations. All five mutations were accurately identified with a concordance rate of 100 percent in a blind analysis of 255 samples with known genotypes, as determined by other analytic methods including gap-PCR, PCR-reverse dot blot (RDB), or Sanger sequencing.

The investigators also tested the capability of the new assay to screen large populations. After testing 1,250 blood samples, the assay showed 100 percent sensitivity and specificity for all of the targeted mutations.

The overall analysis time with the new assay was just under 2.5 hours. This is considerably faster than other molecular genetic testing methods, such as Sanger sequencing, which requires 380 minutes, or RDB, which takes 300 minutes.

"These other methods are unsuitable for use in large-scale screening programs because they have limitations such as cumbersome operation, low throughput, subjective interpretation, and possible laboratory contamination caused by post-PCR open-tube operation," commented Dr. Zhou. "Our results prove that this new assay is accurate, reliable, simple, and rapid and can meet the requirements for clinical diagnosis and mass screening of nondeletional alpha-thalassemia." He believes the same strategy may be used in the future for rapid genotyping of other genetic mutations.

Credit: 
Elsevier

Theoretical breakthrough shows quantum fluids rotate by corkscrew mechanism

image: Merging dynamics of two BECs, one rotating and one stationary. Density evolution of each drop is shown in the top row, and angular momentum transfer is shown in the bottom row. Angular momentum is transferred due to the spontaneous emergence of a corkscrew structure at the interface.

Image: 
Image by the Center for Nanoscale Materials.

If a drop of creamer falls from a spoon into a swirling cup of coffee, the whirlpool drags the drop into rotation. But what would happen if the coffee had no friction — no way to pull the drop into a synchronized spin?

Superfluids — also called quantum fluids — appear in a wide range of systems and applications. For example, cosmological superfluids meld with each other during neutron star mergers, and scientists use superfluid helium to cool magnetic resonance imaging (MRI) machines.

The fluids have unique and useful properties governed by quantum mechanics — a framework usually used to describe the realm of the very small. For superfluids, however, these quantum mechanical properties dominate on a larger, macroscopic scale. For example, superfluids lack viscosity, a sort of internal friction that lets a fluid resist flow and drag its surroundings along with it.

“It doesn’t just look like a corkscrew — its functionality is similar, too.” — Dafei Jin, scientist at DOE’s Center for Nanoscale Materials at Argonne

This lack of viscosity grants the liquids unusual abilities, like traveling freely through pipes with no loss of energy or remaining still inside a spinning container. But when it comes to rotational motion, scientists struggle to understand how rotating superfluids transfer angular momentum — a quality that speaks to how fast the liquids will spin.

In a recent study, scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory collaborated with scientists from the National High Magnetic Field Laboratory (MagLab) in Tallahassee, Florida, and Osaka City University in Japan to perform advanced computer simulations of merging rotating superfluids, revealing a peculiar corkscrew-shaped mechanism that drives the fluids into rotation without the need for viscosity.

When a rotating raindrop falls into a pond, viscosity enables the drop to drive the surrounding water into rotation, generating vortices or eddy currents in the process. This viscous drag reduces the difference in motion between the two bodies. A superfluid, however, lacks this drag, allowing the difference in motion to persist.

“The atoms stay roughly in the same place when superfluids transfer angular momentum, unlike with eddy currents in classical fluids,” said Dafei Jin, a scientist at Argonne’s Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility. “Rather than through the convection of particles, it’s more efficient for superfluid atoms to transfer angular momentum through quantum mechanical interactions.”

These quantum mechanical interactions give rise to a mesmerizing effect, exhibited in the team’s simulations performed using the Carbon computer cluster at CNM. The scientists simulated the merging of rotating and stationary drops of a superfluid state of matter called a Bose-Einstein Condensate (BEC).
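
The article does not spell out the equations behind these simulations. As background only (an assumption about the standard approach, not a statement of the authors' exact numerical scheme), rotating BEC drops of this kind are conventionally described by a condensate wavefunction ψ evolving under the Gross-Pitaevskii equation,

    i\hbar \frac{\partial \psi}{\partial t} = \left( -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}) + g|\psi|^2 \right) \psi

where V(r) is the trapping potential and g sets the strength of interactions between atoms; the density and angular momentum evolution described in the image caption above can both be computed directly from ψ.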

“We chose to simulate Bose-Einstein Condensates because they’re relatively general superfluid systems that display characteristics shared by various other quantum fluids,” said Wei Guo, a professor at Florida State University (FSU) and a researcher at the MagLab.

Toshiaki Kanai, a graduate student of Guo’s in FSU’s Physics Department, led the design of the simulations, which model the interaction between two BEC drops from the moment they come into contact until they merge completely. Tsubota Makoto, a professor at Osaka City University and expert in quantum fluid simulation, also contributed to the project design and interpretation of the results.

“We were particularly fortunate to work with Dafei Jin at CNM, who helped us solve many technical challenges,” said Guo, a long-time collaborator with Jin, “and Argonne has computer clusters and other computational resources that allowed us to efficiently perform the simulation many times under different conditions to obtain systematic results.”


As the drops draw close to each other, the corkscrew shape spontaneously appears and extends into both drops, growing in size and influence until the two drops are mixed and rotating at the same speed.

“It doesn’t just look like a corkscrew — its functionality is similar, too,” said Jin. “It transfers angular momentum by twisting into the samples, causing them to speed up or slow down their rotation.”

The simulation result is applicable to many laboratory BEC systems of various sizes, from tens of nanometers to hundreds of microns — or millionths of meters. The results also hold true for larger superfluid systems. Despite differences in scale, all superfluid systems exhibit common fundamental properties linked to their quantum nature.

“Although we focused on a very small system, the results are general,” said Guo. “The insight we gained into how these interactions occur can help physicists inform models of systems from nanoscale ultracold atoms to cosmological-scale superfluids in astrophysical systems.”

For example, superfluid helium can exist at the centimeter and meter scales, and BECs in neutron stars can be, well, astronomical in size. When neutron stars merge, they act as two very large, rotating superfluid drops in some respects, and the discovery of the corkscrew mechanism could inform astrophysical models of these mergers.

The scientists hope to test their theoretical discovery of the corkscrew mechanism through experiment. Quantum liquids appear in cold atom systems, superfluids, superconductors and more, and basic research on their behavior will aid the development of applications for these systems.

Credit: 
DOE/Argonne National Laboratory

New view on how tissues flow in the embryo

video: Movie of epithelial cells during dramatic tissue flows, which elongate the head-to-tail body axis of the Drosophila embryo.

Image: 
Xun Wang/Columbia Engineering

New York, NY--May 29, 2020--As embryos develop, tissues flow and reorganize dramatically on timescales as brief as minutes. This reorganization includes epithelial tissues that cover outer surfaces and inner linings of organs and blood vessels. As the embryo develops, these tissues often narrow along one axis and extend along a perpendicular axis through cellular movement caused by external or internal forces acting differently along various directions in the tissue (anisotropies). Researchers have long wondered how simple clusters of cells inside developing embryos transform into tissues and organs--how do tissues physically change shape in the embryo? Might they turn from "solids" into "fluids" at specific times in development to make it easier to rapidly sculpt functional tissues and organs?

Watching and measuring what happens in tissues inside the human embryo is currently not possible, and it is still very difficult to do this in mammalian models like mice. Because humans and the fruit fly Drosophila share so many biological similarities, researchers from Columbia Engineering and Syracuse University decided to tackle this problem by focusing on fruit flies. In a paper published online May 29 in PNAS, the team reports that they can predict when the tissue will begin to rapidly flow just by looking at cell shapes in the tissue.

"Thanks to earlier theoretical work from our colleagues at Syracuse, we thought we might be able to learn something about whether the embryonic tissues are solid or fluid by just looking at the shapes of cells in the tissue," says the study's lead PI Karen Kasza, Clare Boothe Luce Assistant Professor of Mechanical Engineering. "So we decided to try this in the fly. We're really excited about our results, which could reveal fundamental mechanisms underlying human development and point to where things can go wrong, causing birth defects."

The challenge was how to apply traditional engineering approaches to measure the mechanical properties of cells and tissues inside the flies' tiny embryos to see which tissues behave like solids, maintaining their shape and resisting flow, and which tissues behave like fluids, flowing easily and changing shape. The researchers used high-resolution confocal fluorescence imaging to take movies of embryonic development in which they could see in great detail the shape and packings of cells in tissues inside the fly embryo. They focused on a very fast-moving developmental event in which the embryonic tissue rapidly changes shape to elongate the head-to-tail body axis of the fly (something that also happens in most animal embryos).

By combining experimental studies in the fruit fly embryo at Columbia with theoretical modeling approaches at Syracuse, the researchers demonstrated that the shapes and alignment of cells within tissues can help to both explain and predict how tissues change shape during development and how defects in these processes can result in abnormalities in embryo shape. "It was a fantastic collaboration between experiment and theory," Kasza observes.

Adds Lisa Manning, co-author of the study and William R. Kenan, Jr. Professor of Physics at Syracuse, "From the theory side, it was really unclear what collective mechanisms allow the cells to easily rearrange during tissue elongation. With Professor Kasza's group, who has some of the best tools in the world to study mechanical properties of fruit fly tissue, we were really able to nail down precisely how changes to cell shapes drive changes to tissue mechanics. It is amazing that we can now just look at a snapshot of cell shapes in the fruit fly and predict how cells will move, with no fit parameters."

A surprise for the researchers was that they could anticipate when the tissue would begin to flow by looking at cell shapes in the tissue without any adjustable parameters in the theoretical model. But, unlike previous studies and predictions, they needed to include a new parameter--anisotropy--that described the alignment of cells within the tissue because the forces acting on and in the tissue were highly anisotropic, or varied along different directions in the tissue. What they found particularly interesting was that their findings suggest that embryonic tissue seems to become more fluid-like just before the onset of the rapid tissue flows during body axis elongation.
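
The article does not reproduce the shape measure itself. In the earlier vertex-model work it builds on, the prediction is usually phrased in terms of a dimensionless cell shape index (roughly the cell perimeter divided by the square root of its area), with solid-like behavior below a threshold near 3.81 for isotropic tissues; the new study shifts that threshold as a function of cell alignment. The following minimal sketch uses hypothetical measurements and omits the anisotropy correction, so it only illustrates the kind of snapshot-based check involved.

    # Hedged sketch: classify a tissue snapshot as solid-like or fluid-like
    # from segmented cell outlines. The 3.81 value comes from earlier
    # isotropic vertex-model work; the anisotropy-corrected threshold used
    # in the PNAS study is not reproduced here.
    import numpy as np

    ISOTROPIC_THRESHOLD = 3.81

    def mean_shape_index(perimeters, areas):
        # Dimensionless shape index: mean of P / sqrt(A) over all cells
        p = np.asarray(perimeters, dtype=float)
        a = np.asarray(areas, dtype=float)
        return float(np.mean(p / np.sqrt(a)))

    # Hypothetical measurements (arbitrary units) from one imaging frame:
    perims = [25.1, 27.3, 24.8, 26.0]
    areas = [42.0, 45.5, 40.2, 43.1]
    p_bar = mean_shape_index(perims, areas)
    state = "fluid-like" if p_bar > ISOTROPIC_THRESHOLD else "solid-like"
    print(f"mean shape index = {p_bar:.2f} -> {state}")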

"This is really exciting" says Kasza, "because it suggests that the mechanical properties of the cells might be regulated biologically during embryonic development, i.e. in the genetic instructions encoded in DNA, to make it easier for tissues to change shape dramatically during brief time windows during development. This adds to a growing body of research revealing that mechanics is really crucial to understanding life."

The team is now looking at how the instructions for changes in tissue fluidity are genetically encoded. They are also exploring the mechanical properties of tissues to build quantitative models of tissue morphogenesis that will enable them to predict, design, build, and control tissue shape and tissue movements, both in developing embryos and in engineered tissues in the lab.

Credit: 
Columbia University School of Engineering and Applied Science

New report discusses coffee's effect on digestion and digestive disorders

A new report from the Institute for Scientific Information on Coffee (ISIC), entitled 'Coffee and its effect on digestion', reviews the latest research into coffee's effect on digestion and indicates a potential protective effect against gallstones and gallstone disease [1-3] and pancreatitis [4,5]. The report also highlights other beneficial effects that coffee consumption may have on the process of digestion [6-11], including supporting gut microflora [17-19] and promoting gut motility [12-16].

The report was authored by Professor Carlo La Vecchia, at the Department of Clinical Sciences and Community Health, University of Milan, Italy, who commented: "The effect of coffee on digestion is an evolving area of research. Data indicates benefits against common digestive complaints such as constipation, as well as a potential reduction in the risk of more serious conditions like chronic liver diseases, from non-alcoholic fatty liver disease (NAFLD), gallstones and related pancreatitis".

Gallstone disease is a common digestive disorder, caused by the accumulation of gallstones in the gallbladder or bile duct, which affects approximately 10-15% of the adult population [20]. While the mechanism by which coffee may protect against gallstone disease is not yet known [1-3], it has been observed that the risk for the condition declines with increasing daily consumption of coffee [1,2]. Caffeine is thought to play a role in these associations, as the same effect is not observed with decaffeinated coffee [3].

A common question among consumers and focus area for research is whether coffee is associated with heartburn or gastro-oesophageal reflux disease (GORD). Heartburn is a mild form of acid reflux that can affect most people on occasion, while GORD is a chronic and severe acid reflux condition that affects up to one in five adults [21], and is characterised by frequent heartburn, regurgitation of food or liquid, and difficulty swallowing. While a small number of studies have suggested an association between coffee drinking and GORD [22-24], the majority of studies reviewed suggest that coffee is not a major trigger of these conditions [12, 25-31].

The report also reviewed a growing area of health and nutrition research, namely the effect of coffee on the gut microflora (microorganism populations) [17-19]. Recent studies suggest that populations of the beneficial gut bacteria Bifidobacterium spp. increase after drinking coffee [19,32]. It is thought that the dietary fibre and polyphenols found in coffee support the healthy growth of microflora populations [18,19].

Additional research findings highlighted in the report include:

Coffee can stimulate gut motility [12-16].

Coffee consumption is thought to stimulate digestion by encouraging the release of gastric acid, bile and pancreatic secretions [6-11].

Coffee is already one of the most widely researched components of the diet, and its effect on digestion remains a growing area of research. While this report highlights a number of the more interesting findings that have emerged in recent years, it also provides insight into areas where further research would be beneficial, to better understand the mechanisms behind some of the beneficial effects observed.

Credit: 
Kaizo

Researchers identify mechanisms that make skin a protective barrier

A Mount Sinai research team has identified one of the mechanisms that establish the skin as a protective barrier, a breakthrough that is critical to understanding and treating common skin conditions including eczema and psoriasis, according to a study published Thursday, May 28, in the scientific journal Genes & Development.

One of the most important roles of the skin is to act as a barrier that prevents water loss and protects the body from pathogens. Failure of this protective function contributes to dermatological diseases. The research team led by Sarah E. Millar, PhD, Director of the Black Family Stem Cell Institute at the Icahn School of Medicine at Mount Sinai, found that the scaffolding protein histone deacetylase 3 (HDAC3) is essential for proper skin development and barrier formation.

The group found that mice lacking HDAC3 specifically in the epidermis--the outermost layer of the skin--fail to develop a functional skin barrier and die shortly after birth due to dehydration. The team's extensive research describes a complex process in which HDAC3 regulates expression of its target genes in the epidermis by interacting with multiple DNA-binding proteins.

"HDAC3 is particularly interesting to us, as it associates with different proteins in different tissue types to regulate its target genes," says Katherine Szigety, an MD/PhD student in the Millar Lab and first author of the study. "While HDAC3 has been studied in diverse contexts, its role and transcriptional partners in the developing epidermis had not been identified until now."

HDAC3 is a member of a family of epigenetic regulators, known as histone deacetylases (HDACs), which control gene expression by changing the structure of genetic material. Understanding the biology of epigenetic regulation is an area of active scientific investigation, as many new therapeutics are designed to modify this process. With a focus on skin biology, Dr. Millar's lab is studying HDACs because a group of drugs called HDAC inhibitors are used to treat cutaneous T-cell lymphoma (CTCL), a rare cancer that affects the skin.

The lab's research on HDAC3 builds on their previous studies of the related proteins HDAC1 and HDAC2 in skin development. The team discovered that the mechanisms by which HDAC3 regulates target gene expression are distinctly different from those involving HDAC1 and HDAC2.

"Unlike HDACs 1 and 2, HDAC3's functions in regulating epidermal development appear to be independent of its enzyme activity. Because clinically available HDAC inhibitors specifically block enzyme function, our findings suggest that the effects of treatment with an HDAC inhibitor might resemble loss of HDACs 1 and 2 in the skin, but perhaps not HDAC3," said Dr. Millar. "This may have important implications for the use of HDAC inhibitors in managing CTCL and other skin conditions. An exciting next step for our group will be to characterize the role of HDAC3 in skin disease."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Cancer drugs cause large cells that resist treatment; scientist aims to stop it

image: Human tumor lymphoma cells. Daruka Mahadevan, MD, PhD, a researcher at the Mays Cancer Center, home to UT Health San Antonio MD Anderson, studies high-grade lymphoma and says many chemotherapies cause some cells to become large, with more than the normal two sets of chromosomes. He said the goal is to introduce other drugs that stop these cells from acquiring extra chromosome sets.

Image: 
National Cancer Institute

A cancer therapy may shrink a patient's tumor, and the patient may feel better. But unseen on a CT scan or MR image, some of the cells are undergoing ominous changes. Fueled by new genetic changes caused by the cancer therapy itself, these rogue cells are becoming very large, with double or quadruple the number of chromosomes found in healthy cells. Some of the cells may grow to eight, 16 or even 32 times the correct number. Quickly, they will become aggressive and resistant to treatment. They will eventually cause cancer recurrence.

Daruka Mahadevan, MD, PhD, professor and chief of the Division of Hematology-Oncology in the Long School of Medicine at UT Health San Antonio, has studied this progression for 20 years. In a paper published in April 2020 in the journal Trends in Cancer, he and co-author Gregory C. Rogers, PhD, explain a rationale for stopping it.

"When you give therapy, some cells don't die," explained Dr. Mahadevan, leader of hematology and medical oncology care at the Mays Cancer Center, home to UT Health San Antonio MD Anderson. "These cells don't die because they've acquired a double complement of the normal chromosomes plus other genetic changes. Many types of chemotherapy actually promote this."

Dr. Mahadevan found that two cancer-causing genes, called c-Myc and BCL2, are operative in "double-hit" high-grade lymphomas, which are incurable. "These genes are part of the problem, because when they are present, they help the lymphoma cells to live longer and prime them to become large cells with treatment," he said.

Although the drugs seem to be working, once therapy is stopped, the large rogue cells (called tetraploid cells) start to divide again and become smaller but faster-growing cells, driven by c-Myc and BCL2.

"It's a double hit, a double whammy," Dr. Mahadevan said.

To counter this, Dr. Mahadevan seeks to find drugs that prevent or treat the rogue cells' acquisition of multiple chromosomes. He has identified a small-molecule inhibitor that shows promise in cell experiments in the laboratory. "We have data to show that it works," he said.

A drug that suppresses large cells with multiple copies of chromosomes could be used in combination with existing chemotherapies to prevent large cell resistance, not only in lymphoma but in many other types of cancer, he said.

Credit: 
University of Texas Health Science Center at San Antonio

Algorithm quickly simulates a roll of loaded dice

The fast and efficient generation of random numbers has long been an important challenge. For centuries, games of chance have relied on the roll of a die, the flip of a coin, or the shuffling of cards to bring some randomness into the proceedings. In the second half of the 20th century, computers started taking over that role, for applications in cryptography, statistics, and artificial intelligence, as well as for various simulations -- climatic, epidemiological, financial, and so forth.

MIT researchers have now developed a computer algorithm that might, at least for some tasks, churn out random numbers with the best combination of speed, accuracy, and low memory requirements available today. The algorithm, called the Fast Loaded Dice Roller (FLDR), was created by MIT graduate student Feras Saad, Research Scientist Cameron Freer, Professor Martin Rinard, and Principal Research Scientist Vikash Mansinghka, and it will be presented next week at the 23rd International Conference on Artificial Intelligence and Statistics.

Simply put, FLDR is a computer program that simulates the roll of dice to produce random integers. The dice can have any number of sides, and they are "loaded," or weighted, to make some sides more likely to come up than others. A loaded die can still yield random numbers -- as one cannot predict in advance which side will turn up -- but the randomness is constrained to meet a preset probability distribution. One might, for instance, use loaded dice to simulate the outcome of a baseball game; while the superior team is more likely to win, on a given day either team could end up on top.

With FLDR, the dice are "perfectly" loaded, which means they exactly achieve the specified probabilities. With a four-sided die, for example, one could arrange things so that the numbers 1, 2, 3, and 4 turn up exactly 23 percent, 34 percent, 17 percent, and 26 percent of the time, respectively.

To simulate the roll of loaded dice that have a large number of sides, the MIT team first had to draw on a simpler source of randomness -- that being a computerized (binary) version of a coin toss, yielding either a 0 or a 1, each with 50 percent probability. The efficiency of their method, a key design criterion, depends on the number of times they have to tap into this random source -- the number of "coin tosses," in other words -- to simulate each dice roll.
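
As a rough illustration of that idea (a minimal rejection-based sketch, not the FLDR data structure described in the paper), the following Python function turns fair coin flips into exact rolls of a loaded die specified by integer weights:

    import random

    def roll_loaded_die(weights, coin=lambda: random.getrandbits(1)):
        # Return index i with probability weights[i] / sum(weights), using
        # only fair coin flips. A simple rejection sampler for illustration;
        # FLDR achieves the same exactness far more efficiently.
        m = sum(weights)
        k = max(1, (m - 1).bit_length())   # smallest k with 2**k >= m
        while True:
            r = 0
            for _ in range(k):             # k coin flips -> integer in [0, 2**k)
                r = (r << 1) | coin()
            if r < m:                      # accept: r is uniform on [0, m)
                for i, w in enumerate(weights):
                    if r < w:
                        return i
                    r -= w
            # otherwise reject the draw and flip again

    # The four-sided die from the article, loaded to 23%, 34%, 17% and 26%:
    counts = [0, 0, 0, 0]
    for _ in range(100_000):
        counts[roll_loaded_die([23, 34, 17, 26])] += 1
    print([round(c / 100_000, 3) for c in counts])   # roughly [0.23, 0.34, 0.17, 0.26]

For the four-sided example the weights sum to 100, so each attempt draws seven coin flips to form a number in [0, 128) and succeeds with probability 100/128; FLDR's contribution is to deliver this kind of exact sampling while using the random bits near-optimally and requiring far less memory than the classic Knuth-Yao construction.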

In a landmark 1976 paper, the computer scientists Donald Knuth and Andrew Yao devised an algorithm that could simulate the roll of loaded dice with the maximum efficiency theoretically attainable. "While their algorithm was optimally efficient with respect to time," Saad explains, meaning that literally nothing could be faster, "it is inefficient in terms of the space, or computer memory, needed to store that information." In fact, the amount of memory required grows exponentially, depending on the number of sides on the dice and other factors. That renders the Knuth-Yao method impractical, he says, except for special cases, despite its theoretical importance.

FLDR was designed for greater utility. "We are almost as time efficient," Saad says, "but orders of magnitude better in terms of memory efficiency." FLDR can use up to 10,000 times less memory storage space than the Knuth-Yao approach, while taking no more than 1.5 times longer per operation.

For now, FLDR's main competitor is the Alias method, which has been the field's dominant technology for decades. When analyzed theoretically, according to Freer, FLDR has one clear-cut advantage over Alias: It makes more efficient use of the random source -- the "coin tosses," to continue with that metaphor -- than Alias. In certain cases, moreover, FLDR is also faster than Alias in generating rolls of loaded dice.

FLDR, of course, is still brand new and has not yet seen widespread use. But its developers are already thinking of ways to improve its effectiveness through both software and hardware engineering. They also have specific applications in mind, apart from the general, ever-present need for random numbers. Where FLDR can help most, Mansinghka suggests, is by making so-called Monte Carlo simulations and Monte Carlo inference techniques more efficient. Just as FLDR uses coin flips to simulate the more complicated roll of weighted, many-sided dice, Monte Carlo simulations use a dice roll to generate more complex patterns of random numbers.

The United Nations, for instance, runs simulations of seismic activity that show when and where earthquakes, tremors, or nuclear tests are happening on the globe. The United Nations also carries out Monte Carlo inference: running random simulations that generate possible explanations for actual seismic data. This works by conducting a second series of Monte Carlo simulations, which randomly test out alternative parameters for an underlying seismic simulation to find the parameter values most likely to reproduce the observed data. These parameters contain information about when and where earthquakes and nuclear tests might actually have occurred.

"Monte Carlo inference can require hundreds of thousands of times more random numbers than Monte Carlo simulations," Mansinghka says. "That's one big bottleneck where FLDR could really help. Monte Carlo simulation and inference algorithms are also central to probabilistic programming, an emerging area of AI with broad applications."

Despite its seemingly bright future, FLDR almost did not come to light. Hints of it first emerged from a previous paper the same four MIT researchers published at a symposium in January, which introduced a separate algorithm. In that work, the authors showed that if a predetermined amount of memory were allocated for a computer program to simulate the roll of loaded dice, their algorithm could determine the minimum amount of "error" possible -- that is, how close one comes toward meeting the designated probabilities for each side of the dice.

If one doesn't limit the memory in advance, the error can be reduced to zero, but Saad noticed a variant with zero error that used substantially less memory and was nearly as fast. At first he thought the result might be too trivial to bother with. But he mentioned it to Freer, who assured Saad that this avenue was worth pursuing. FLDR, which is error-free in this same respect, arose from those humble origins and now has a chance of becoming a leading technology in the realm of random number generation. That's no trivial matter given that we live in a world that's governed, to a large extent, by random processes -- a principle that applies to the distribution of galaxies in the universe, as well as to the outcome of a spirited game of craps.

Credit: 
Massachusetts Institute of Technology

Global environmental changes leading to shorter, younger trees

video: Researchers led by the U.S. Department of Energy's Pacific Northwest National Laboratory have found that rising temperatures and carbon dioxide have been altering the world's forests through increased stress, carbon dioxide fertilization, and other factors. The Earth has witnessed a dramatic decrease in the age and stature of forests.

Image: 
Graham Bourque/PNNL

RICHLAND, Wash. - Ongoing environmental changes are transforming forests worldwide, resulting in shorter and younger trees with broad impacts on global ecosystems, scientists say.

In a global study published in the May 29 issue of the journal Science, researchers led by the U.S. Department of Energy's Pacific Northwest National Laboratory found that rising temperatures and carbon dioxide have been altering the world's forests through increased stress and carbon dioxide fertilization and through increasing the frequency and severity of disturbances such as wildfire, drought, wind damage and other natural enemies. Combined with forest harvest, the Earth has witnessed a dramatic decrease in the age and stature of forests.

"This trend is likely to continue with climate warming," said Nate McDowell, a PNNL Earth scientist and the study's lead author. "A future planet with fewer large, old forests will be very different than what we have grown accustomed to. Older forests often host much higher biodiversity than young forests and they store more carbon than young forests."

Carbon storage and rich biodiversity are both key to mitigating climate change.

The study, "Pervasive shifts in forest dynamics in a changing world," determined that forests have already been altered by humans and will mostly likely continue to be altered in the foreseeable future, resulting in a continued reduction of old-growth forests globally.

Three conditions generate a deforestation loop

The researchers used satellite imagery along with a detailed literature review to conclude that the globally averaged tree size has declined over the last century and is likely to continue declining due to continuing environmental changes.

Several factors have led to the loss of trees through human activity and natural causes - clear-cutting, wildfire, insects and disease are leading causes. Known as deforestation, the phenomenon has led to an imbalance of three important characteristics of a diverse and thriving forest: (1) recruitment, which is the addition of new seedlings to a community; (2) growth, the net increase in biomass or carbon; and (3) mortality, the death of forest trees.

"Mortality is rising in most areas, while recruitment and growth are variable over time, leading to a net decline in the stature of forests," said McDowell. "Unfortunately, mortality drivers like rising temperature and disturbances such as wildfire and insect outbreaks are on the rise and are expected to continue increasing in frequency and severity over the next century. So, reductions in average forest age and height are already happening and they're likely to continue to happen."

Vegetation dynamics are changing

The conditions promoting deforestation will likely accelerate, drastically altering the living conditions for plants and animals, McDowell said.

"Over the last hundred years we've lost a lot of old forests," McDowell said." And they've been replaced in part by non-forests and in part by young forests. This has consequences on biodiversity, climate mitigation, and forestry."

Wide-ranging impact

The study also reveals that other mechanisms of deforestation -- "chronically changing drivers" -- are underway. They include:

Atmospheric carbon dioxide: Carbon dioxide levels in the atmosphere have increased dramatically since the Industrial Revolution and are projected to continue rising over the next century. Higher levels of carbon dioxide can increase a tree's growth rate and seed production. However, such carbon dioxide fertilization appears to only happen in younger forests with abundant nutrients and water. Most forests globally are exposed to limitations in nutrients and water, which drastically reduces the carbon dioxide benefits to trees.

Temperature: Rising temperatures limit life-giving photosynthesis, leading to lower growth, higher mortality, and reduced regeneration. This is one key to shorter trees, the study determined.

Droughts: They're expected to increase in frequency, duration and severity globally. Drought can directly cause tree death or indirectly lead to mortality through associated increases in insect or pathogen attack.

Other factors are altering the face of the world's forests:

Wildfire is increasing in many forests worldwide and future fires may be more frequent than they have been in the past 10,000 years in some regions, the study found. Plant growth following forest fires may be slow or absent due to elevated temperatures.

Biotic disturbances--by insects, fungi and choking vines--have been on the increase. The carbon storage lost to insects each year is the same as the amount of carbon emitted by 5 million vehicles, a study published last year says. This is expected to continue with warming, along with other biotic disturbances, such as fungi and bacteria. In the tropics, vines that use other plants as host structures are choking trees to death.

Wood harvest alone has had a huge impact on the shift of global forests towards younger ages or towards non-forest land, reducing the amount of forest, and of old-growth forest, globally. Where forests are re-established on harvested land, the trees are smaller and biomass is reduced.

Deforestation study born of collaboration

McDowell collaborated with more than 20 scientists to produce the deforestation study, which included data and observations made in more than 160 previous studies.

"Environmental changes are making disturbances worse, and this is causing a change in vegetation dynamics towards shorter, younger forests," McDowell said.

Credit: 
DOE/Pacific Northwest National Laboratory

Global environmental changes are leading to shorter, younger trees -- new study

Ongoing environmental changes are transforming forests worldwide, resulting in shorter and younger trees with broad impacts on global ecosystems, scientists say.

In a global study published in the 29 May 2020 issue of Science magazine, researchers, including experts at the University of Birmingham, showed how rising temperatures and carbon dioxide have been altering the world's forests.

These alterations are caused by increased stress and carbon dioxide fertilization and through increasing the frequency and severity of disturbances such as wildfire, drought, wind damage and other natural enemies. Combined with forest harvest, the Earth has witnessed a dramatic decrease in the age and stature of forests.

The study was led by the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL), with analysis on changes in forest age carried out by the Birmingham Institute of Forest Research (BIFoR).

Dr Tom Pugh, of BIFoR, said: "This study reviews mounting evidence that climate change is accelerating tree mortality, increasingly pushing the world's forests towards being both younger and shorter. This implies a reduction in their ability to store carbon and potentially large shifts in the mix of species that compose and inhabit these forests.

"This is likely to have big implications for the services those forest provide, such as mitigating climate change. Increasing rates of tree mortality driven by climate and land-use change, combined with uncertainty in the mix of species that will form the next generation, pose big challenges for conservationists and forest managers alike."

Dr Nate McDowell, a PNNL Earth scientist and the study's lead author added: "This trend is likely to continue with climate warming. A future planet with fewer large, old forests will be very different than what we have grown accustomed to. Older forests often host much higher biodiversity than young forests and they store more carbon than young forests."

The study, 'Pervasive shifts in forest dynamics in a changing world,' concluded that forests have already been altered by humans and will most likely continue to be altered in the foreseeable future, resulting in a continued reduction of the area and stature of old-growth forests globally.

The researchers used a detailed literature review along with analysis of data on land-use change to conclude that the globally averaged tree size has declined over the last century and is likely to continue declining due to continuing environmental changes.

Several factors have led to the widespread loss of trees through both human activity and natural causes - including clear-cutting, wildfire, insects and disease. High levels of tree loss lead to an imbalance in three important characteristics of a diverse and thriving forest: (1) recruitment, which is the addition of new seedlings to a community; (2) growth, the net increase in biomass or carbon; and (3) mortality, the death of forest trees.

"Examination of the global patterns of those three parameters over recent decades shows that mortality is rising in most areas, while recruitment and growth are variable over time, leading to a net decline in the stature of forests," said Dr McDowell. "Unfortunately, mortality drivers like rising temperature and disturbances such as wildfire and insect outbreaks are on the rise and are expected to continue increasing in frequency and severity over the next century. So, reductions in average forest age and height are already happening and they're likely to continue to happen."

Wide-ranging impact

The study also shows that other mechanisms of change in forests are underway too. These include:

Atmospheric carbon dioxide: Carbon dioxide levels in the atmosphere have increased dramatically since the Industrial Revolution and are projected to continue rising over the next century. Higher levels of carbon dioxide can increase a tree's growth rate and seed production. However, such carbon dioxide fertilization appears to only happen in younger forests with abundant nutrients and water. Most forests globally are exposed to limitations in nutrients and water, which drastically reduces the carbon dioxide benefits to trees.

Temperature: Rising temperatures limit life-giving photosynthesis, leading to lower growth, higher mortality, and reduced regeneration. This is one key to shorter trees, the study determined.

Droughts: They're expected to increase in frequency, duration and severity globally. Drought can directly cause tree death or indirectly lead to mortality through associated increases in insect or pathogen attack.

Other factors are altering the face of the world's forests:

Wildfire is increasing in many forests worldwide and future fires may be more frequent than they have been in the past 10,000 years in some regions, the study found. Plant growth following forest fires may be slow or absent due to elevated temperatures.

Biotic disturbances, caused by insects, fungi and choking vines, have been on the increase. According to a study published last year, the carbon storage lost to insects each year equals the amount of carbon emitted by 5 million vehicles. This trend is expected to continue with warming, along with other biotic disturbances such as fungi and bacteria. In the tropics, vines that use other plants as host structures are choking trees to death.

Wood harvesting alone has had a huge impact, shifting global forests towards younger ages or towards non-forest land and reducing the extent of forests, and of old-growth forests in particular, worldwide. Where forests are re-established on harvested land, the trees are smaller and biomass is reduced.

The US Department of Energy's Office of Science, the European Research Council (TreeMort project) and several other organisations funded the work. The study's authors include researchers from more than a dozen institutions.

Credit: 
University of Birmingham

New 'whirling' state of matter discovered in an element of the periodic table

image: Contrary to regular magnets, spin glasses have randomly placed atomic magnets that point in all kinds of directions. Self-induced spin glasses are made of whirling magnets circulating at different speeds and constantly evolving over time.

Image: 
Courtesy of Daniel Wegner

The strongest permanent magnets today contain a mix of the elements neodymium and iron. However, neodymium on its own does not behave like any known magnet, confounding researchers for more than half a century. Physicists at Radboud University and Uppsala University have shown that neodymium behaves like a so-called 'self-induced spin glass,' meaning that it is composed of a rippled sea of many tiny whirling magnets circulating at different speeds and constantly evolving over time. This newly observed type of magnetic behaviour refines our understanding of elements on the periodic table and could eventually pave the way for new materials for artificial intelligence. The results will be published on 29 May in Science.

"In a jar of honey, you may think that the once clear areas that turned milky yellow have gone bad. But rather, the jar of honey starts to crystallize. That's how you could perceive the 'aging' process in neodymium." Alexander Khajetoorians, professor in Scanning probe microscopy, together with professor Mikhail Katsnelson and assistant professor Daniel Wegner, found that the material neodymium behaves in a complex magnetic way that no one ever saw before in an element on the periodic table.

Whirling magnets and glasses

Magnets are defined by a north and south pole. Dissecting a regular fridge magnet, one finds many atomic magnets, so-called 'spins', that are all aligned along the same direction and define the north and south pole. Quite differently, some alloys can be a 'spin glass,' in which randomly placed spins point in all kinds of directions. Spin glasses derive their name from the amorphous, evolving structure of the atoms in a piece of glass. In this way, spin glasses link magnetic behaviour to phenomena in softer matter, like liquids and gels.
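
As a toy illustration (not the model used in the Science paper), each spin can be represented as a unit vector in the plane: in an ordinary magnet the aligned spins add up to a large net magnetization, whereas the randomly oriented spins of a spin glass nearly cancel.

```python
# Toy comparison of a ferromagnet (aligned spins) and a spin glass
# (randomly oriented, frozen spins), with each spin modelled as a 2D unit
# vector. Purely illustrative; not the model used in the Science paper.
import numpy as np

rng = np.random.default_rng(0)
n_spins = 10_000

# Ferromagnet: every spin points along the same direction.
aligned = np.tile([1.0, 0.0], (n_spins, 1))

# Spin glass: spins frozen in random directions.
angles = rng.uniform(0.0, 2.0 * np.pi, n_spins)
random_spins = np.column_stack([np.cos(angles), np.sin(angles)])

# Net magnetization per spin: close to 1 for the magnet, close to 0 for the glass.
print("ferromagnet:", np.linalg.norm(aligned.mean(axis=0)))
print("spin glass: ", np.linalg.norm(random_spins.mean(axis=0)))
```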

Spin glasses have been known to sometimes occur in alloys, which are combinations of metals with one or more other elements and with an amorphous structure, but never in pure elements of the periodic table. Surprisingly, Radboud researchers found that the atomic spins of a perfectly ordered piece of the rare-earth element neodymium form patterns that whirl like a helix but constantly change the exact pattern of the helix. This is the manifestation of a new state of matter called a 'self-induced spin glass'.

Seeing the magnetic structure

"In Nijmegen, we are specialists in scanning tunnelling microscopy (STM). It allows us to see the structure of individual atoms, and we can resolve the north and south poles of the atoms", Wegner explains. "With this advancement in high-precision imaging, we were able to discover the behaviour in neodymium, because we could resolve the incredibly small changes in the magnetic structure. That's not an easy thing to do."

A material that behaves like neurons

This finding opens up the possibility that this complex and glassy magnetic behaviour could also be observed in countless new materials, including other elements on the periodic table. Khajetoorians: "It will refine textbook knowledge of the basic properties of matter. But it will also provide a proving ground to develop new theories where we can link physics to other fields, for example, theoretical neuroscience."

"The complex evolution of neodymium may be a platform to mimic basic behaviour used in artificial intelligence", Khajetoorians continues. "All the complex patterns which can be stored in this material can be linked to image recognition."

With the advancement of AI and its large energy footprint, there is increasing demand to create materials that can perform brain-like tasks directly in hardware. "You could never build a brain-inspired computer with simple magnets, but materials with this complex behaviour could be suitable candidates", Khajetoorians says.

Credit: 
Radboud University Nijmegen

The effectiveness of a heating system that warms air using solar radiation is validated

image: This is an image of the device installed by the research team.

Image: 
University of Córdoba

Heating and air conditioning in buildings make up almost half of the total energy consumption in the European Union. What is more, nearly 75% of that consumption relies on fossil fuels, according to data from the European Commission. Hence, reducing this consumption and integrating renewable energy into heating and air conditioning in buildings is one of today's priorities for scientific research.

The research team from the Thermal Machines and Engines Section at the University of Cordoba performed an experimental study that validated the effectiveness of a heating system that heats air in buildings using solar radiation. The device uses a series of heat collectors, known as unglazed transpired collectors (UTCs), that absorb the heat generated in the outer layer of the facade when it is struck by sunlight. This energy is then used to preheat the ventilation air before it enters the residences to heat them.
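
A rough sketch of the underlying energy balance follows; this is not the team's validated model, and the collector area, efficiency, air flow rate and weather values are assumed purely for illustration. The idea is that the solar heat absorbed by the facade collector raises the temperature of the ventilation air before it enters the building.

```python
# Back-of-the-envelope sketch of an unglazed transpired collector (UTC)
# preheating ventilation air. All values are hypothetical, not from the study.

irradiance = 500.0    # solar radiation on the facade (W/m^2), assumed
area       = 8.0      # collector area (m^2), e.g. a 4 x 2 m module
efficiency = 0.5      # assumed fraction of radiation transferred to the air
mass_flow  = 0.1      # ventilation air mass flow rate (kg/s), assumed
cp_air     = 1005.0   # specific heat of air (J/(kg*K))
t_ambient  = 8.0      # outdoor air temperature (deg C), assumed

useful_heat = efficiency * irradiance * area        # heat picked up by the air (W)
delta_t     = useful_heat / (mass_flow * cp_air)    # temperature rise of the air (K)
t_supply    = t_ambient + delta_t

print(f"Useful heat:     {useful_heat:.0f} W")
print(f"Air temperature: {t_ambient:.1f} -> {t_supply:.1f} deg C")
```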

The system, patented decades ago, "has not had widespread use in Europe, other than a few experimental buildings", explains the lead author, Fernando Peci. The team installed the device on a test module of 4x2 meters subjected to the normal conditions of a home. Over the course of a winter month, the heat collector was monitored under varying conditions, recording solar radiation, room temperature, wind and the angle of incidence of the solar radiation.

According to the study's results, the building's heating demand would be covered on 75% of the days included in the study, showing that "this technology could offer great performance in order to heat buildings using solar energy", especially buildings with south-facing facades and little glazing, since in the Northern Hemisphere that orientation receives the most natural light during the day.
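
To make the 75% figure concrete, here is a hedged illustration of the kind of day-by-day comparison involved; the daily energy values below are invented placeholders, not the study's measurements.

```python
# Illustrative calculation of the fraction of days on which the collector
# meets the building's heating demand. The daily values are invented.

daily_solar_gain = [14, 9, 18, 22, 5, 16, 20, 13]    # kWh delivered per day (assumed)
daily_demand     = [12, 12, 15, 15, 12, 15, 15, 12]  # kWh required per day (assumed)

covered = sum(gain >= need for gain, need in zip(daily_solar_gain, daily_demand))
coverage = covered / len(daily_demand)
print(f"Heating demand covered on {coverage:.0%} of days")
```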

A strategy to refurbish buildings

As the lead author points out, this heating system could be advisable when renovating old buildings, since installing it does not alter the original facade. The heat collector consists of a perforated metal plate covered with a dark material, which connects to the ventilation fans that carry the heat inside, so the building itself is barely affected.

Furthermore, as Fernando Peci underscores, the team suggests using these ventilation systems in social housing, "in homes where most families can't afford to pay for heating costs". In this vein, the device would not only benefit the environment, but also translate into financial savings on the electricity bill, one of the most feared bills in the winter.

Credit: 
University of Córdoba