Tech

Exposure to ultrafine aerosol particles in homes depends primarily on people themselves

image: The measurements clearly showed that significant amounts of ultra-fine particles are also released during cooking, baking and toasting.

Image: 
Tilo Arnhold, TROPOS

Leipzig/Berlin. Residents of large German cities largely have it in their own hands how high the concentrations of ultrafine dust in their homes are. The level of pollution indoors depends only partly on the air quality outside; it also depends very much on activities inside the home, such as cooking or heating with solid fuels. This is the conclusion of a study led by the Leibniz Institute for Tropospheric Research (TROPOS) and commissioned by the German Federal Environment Agency (UBA). For the study, fine and ultrafine aerosol particles were measured indoors and outdoors for about two weeks in each of 40 apartments in Leipzig and Berlin, during different seasons of the year. The study was published in English in the journal Aerosol and Air Quality Research. It is the first long-term study on aerosol particles in the size range from 10 nanometres to 10 micrometres to be conducted in such detail in so many apartments in Germany over such a long period of time.

Fine and ultrafine aerosol particles matter for public health because of their links to respiratory and cardiovascular diseases. How many particles remain in the body depends, among other things, on particle size. Among the most important sources of ultrafine particles, which are smaller than 100 nanometres and can therefore penetrate deep into the body, are combustion engines in road and air traffic, small combustion plants, power stations and forest fires. For this reason, many industrialized countries now have extensive measures in place to reduce particulate matter in the outside air. According to estimates, however, people in so-called developed countries spend more than two thirds of their lives inside buildings, and most of that time in their own homes. At home, they are exposed to a mixture of pollutants originating from indoor sources such as cooking or heating, but also from the outside air.

To find out what kinds of fine and ultrafine particles people are exposed to in their own homes, the Federal Environment Agency commissioned TROPOS to investigate indoor pollution in 40 non-smoking houses and apartments in Leipzig and Berlin between 2016 and 2019. In parallel to the indoor measurements, identical measurements were taken on the balcony, on the terrace or in the garden. To assess the effects of road traffic, about half of the apartments were located within 150 meters of busy roads. The other apartments were selected in the urban background and in outlying districts to cover different levels of outdoor air quality. For the research project, the TROPOS team developed special measuring instruments to determine high-resolution particle number size distributions inside and outside the buildings. Over the course of two years, each of the 40 apartments was visited twice, with a measuring period of one week, in different seasons.

Since it was assumed that the activities of the residents have a major influence on the air quality in the apartment, they were asked to keep a digital logbook in which activities such as airing, cooking, lighting candles or vacuuming were noted. A total of around 10,000 measuring hours were recorded in summer and winter. This was important for the evaluation, as apartments are actively ventilated to varying degrees depending on the outside temperature.

The measurements showed that 90 percent of the particles in the houses and apartments (by number) were ultrafine and thus smaller than 100 nanometres. Surprisingly clear conclusions could be drawn about indoor activities: besides burning candles, significant amounts of ultrafine particles were also released during cooking, baking and toasting. The particles could also be measured in rooms outside the kitchen.

In terms of time, the number of ultrafine particles was lowest during the night and peaked in the morning and evening. Especially in winter, when there is less active ventilation, a very clear daily profile emerged: "The particle number concentration in indoor rooms shows strong peaks at 8:00 a.m., 12 noon and 7:00 p.m., which are typical times for breakfast, lunch and dinner," explains Jiangyue Zhao from TROPOS, who evaluated the data as part of her doctoral thesis.

In summer, the peaks in ultrafine particles were less pronounced because of more active ventilation through open windows. While the largest amounts of particles were observed in the evening around 8 p.m. in both summer and winter, the morning peak shifted from around 8 a.m. in summer to around 9 a.m. in winter, which could be related to people becoming active later in the day because of the later sunrise in winter.
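The diurnal pattern described here is essentially a matter of grouping time-stamped particle counts by hour of day and season. The following sketch assumes a hypothetical CSV layout and column names, not the study's actual data format, and illustrates how such a seasonal profile could be computed:

```python
# Sketch: computing seasonal diurnal profiles of indoor particle number
# concentration from time-stamped measurements. The CSV layout and column
# names are hypothetical placeholders, not the study's data format.
import pandas as pd

def diurnal_profile(csv_path: str) -> pd.DataFrame:
    """Return median particle number concentration per season and hour of day."""
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    df["hour"] = df["timestamp"].dt.hour
    # Simple meteorological seasons: Dec-Feb = winter, Jun-Aug = summer, etc.
    season_map = {12: "winter", 1: "winter", 2: "winter",
                  3: "spring", 4: "spring", 5: "spring",
                  6: "summer", 7: "summer", 8: "summer",
                  9: "autumn", 10: "autumn", 11: "autumn"}
    df["season"] = df["timestamp"].dt.month.map(season_map)
    # The median is more robust than the mean against short, very high cooking peaks.
    return df.groupby(["season", "hour"])["particle_number_conc"].median().unstack("hour")

# profile = diurnal_profile("indoor_measurements.csv")
# profile.loc["winter"] would then show the 8 a.m., noon and 7 p.m. peaks described above.
```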

Prof. Alfred Wiedensohler from TROPOS summarized the results: "The approximately 500 measurement days enabled us to obtain a representative daily and seasonal variation pattern of exposure to fine and ultrafine particles in homes and to analyze the corresponding relationships between indoor and outdoor pollution. Concentrations of ultrafine particles were associated with the activities of residents and showed significantly higher concentrations and greater variability than those from outdoors".

From a scientific point of view, exposure to ultrafine particles in German homes cannot be described by outdoor measurements alone. One reason for this is that apartments are usually well insulated by modern energy-saving windows, and air is exchanged only briefly through manual ventilation. In general, a robust dose-effect relationship for ultrafine particulate matter, both indoors and outdoors, is still lacking. The scientific community will therefore be called upon in the coming years to conduct targeted studies investigating the health effects of indoor ultrafine particles. Tilo Arnhold

Credit: 
Leibniz Institute for Tropospheric Research (TROPOS)

SARS-CoV-2 antibody tests are useful for population-level assessments

In this Focus, Juliet Bryant and colleagues highlight the potential power of population-level serological, or antibody, testing to provide snapshots of infection history and immunity in populations as the COVID-19 pandemic progresses. In contrast, they emphasize the risks of using current serological tests to assess individual immunity to the SARS-CoV-2 virus. While the WHO recommends restricting antibody testing to research use only, the scientists argue that these tests - even with moderate sensitivity and specificity levels - could provide highly valuable information to address critical public health questions, such as when to relax stay-at-home orders or school closures. In theory, antibody tests can examine whether a person has ever been exposed to a certain virus over their lifespan. However, what SARS-CoV-2 antibody test results mean for protection and immunity - and how this may vary across diverse populations from different genetic backgrounds - is still poorly understood. Thus, as tools to issue "immune passports" that certify an individual's immunity, current serological tests are insufficient and even harmful, the authors argue; the tests would, in fact, need near-perfect specificity to provide a reliable gauge of immune protection. By contrast, as tools to ascertain population-level epidemiological trends, serological surveys could help officials estimate the risk of future waves of disease, measure the impact of interventions, and confirm the absence of transmission after the pandemic has subsided, as long as the tests' sensitivity and specificity are well-defined during data interpretation. Moreover, serological surveys - which could be offered to anyone regardless of the presence or absence of symptoms - could provide a less biased picture of the infection fatality rate than PCR testing of viral RNA. The latter exhibits considerable variation in testing practices and may be biased, since PCR tests are conducted mostly in symptomatic individuals who seek diagnosis and care. Serological surveys could inform public health initiatives in much the same way that data from a national census is translated into policy decisions regarding infrastructure investments, the authors say, and would similarly require an efficient data-gathering system - incorporating broad consent - governed at national and international levels. As well, population-level serological sampling could enable screening of multiple biomarkers of public health concern, extending the utility of such a framework beyond SARS-CoV-2 alone, the researchers note.
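The authors' distinction between individual "immune passports" and population-level surveys comes down to a base-rate calculation: at low prevalence, even a fairly specific test yields many false positives per true positive. The sketch below illustrates the arithmetic with purely illustrative numbers, not figures from the Focus article:

```python
# Sketch: why individual-level interpretation of antibody tests demands very high
# specificity. The prevalence, sensitivity and specificity values are illustrative
# assumptions, not numbers reported by the authors.

def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability that a positive antibody test reflects true past infection (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 5% prevalence, a test with 95% sensitivity and 95% specificity:
print(round(positive_predictive_value(0.05, 0.95, 0.95), 2))   # ~0.5: half of positives are false
# The same test with 99.9% specificity:
print(round(positive_predictive_value(0.05, 0.95, 0.999), 2))  # ~0.98: far more trustworthy per person
```

For population-level estimates, by contrast, a known sensitivity and specificity can simply be corrected for statistically, which is why moderately imperfect tests remain useful for surveys.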

Credit: 
American Association for the Advancement of Science (AAAS)

Computer model can process disparate sources of clinical data to predict brain age

Scientists have trained a computer to analyse different types of brain scan and predict the age of the human brain, according to a new study in the open-access journal eLife.

Their findings suggest that it may be possible to use the model clinically to combine different types of tests of brain function to predict other patient outcomes, such as cognitive decline or depression.

Non-invasive tests of brain function, such as magnetoencephalography (MEG), magnetic resonance imaging (MRI) and positron emission tomography (PET), play a crucial role in clinical neuroscience. But because these tests all measure different aspects of brain function, none of them are optimal on their own. Training computers to analyse data from different tests and predict a clinical outcome would provide a more complete picture of brain function.

"Computer models that have been trained to predict age of a person from brain data of healthy populations have provided useful clinical information," explains lead author Denis Engemann, a research scientist at Inria, the French national research institute for the digital sciences. "The problem is that, in the clinic, it is not always possible to obtain every type of data necessary for this analysis."

In this study, the team set out to see if they could develop a model that combines the anatomical information provided by MRI scans with information about brain rhythms that is powerfully captured by MEG. Most importantly, they wanted to see whether the model would still work if some of the data was missing.

They trained their computer model with a subset of data from the Cam-CAN database, which holds MEG, MRI and neuropsychological data for 650 healthy people aged between 17 and 90. They then compared a version of the model based on the standard anatomical MRI scan alone with versions that had additional information from functional MRI (fMRI) scans and MEG tests. They found that adding either the MEG or the fMRI data to the standard MRI led to a more accurate prediction of brain age. When both were added, the model was enhanced even further.

Next, they looked at a marker of brain age (called brain age delta) and studied how this related to different brain functions that are measured by MEG and fMRI. This confirmed that MEG and fMRI were each providing unique insights about the brain's function, adding further power to the overall model.

However, when they tested their model against the full Cam-CAN database of 650 people, some of whom did not have MRI, fMRI and MEG data available, they found that, even with the missing data, the computer model using what was available was still more accurate than MRI alone. This is important, because in hospital neurology clinics, it is not always possible to get patients booked in for every type of scan.
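To make the "opportunistic" idea concrete, the sketch below shows one simple way a model can keep predicting when a modality is missing: fit one regressor per modality on the subjects that have it, then average whichever predictions are available for each person. This is an illustration of the general approach under assumed data structures, not the authors' published pipeline:

```python
# Minimal sketch of an "opportunistic" brain-age predictor that still works when
# some modalities are missing. Feature layout, model choice and the averaging
# scheme are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV

def fit_per_modality(features: dict, age: np.ndarray) -> dict:
    """Fit one regressor per modality (e.g. MRI, fMRI, MEG) on subjects that have it."""
    models = {}
    for name, X in features.items():          # X: (n_subjects, n_features); NaN rows = missing
        has_data = ~np.isnan(X).any(axis=1)
        models[name] = RidgeCV().fit(X[has_data], age[has_data])
    return models

def predict_age(models: dict, features: dict) -> np.ndarray:
    """Average the predictions of whichever modalities are available per subject."""
    n_subjects = next(iter(features.values())).shape[0]
    preds = np.full((len(models), n_subjects), np.nan)
    for i, (name, model) in enumerate(models.items()):
        X = features[name]
        has_data = ~np.isnan(X).any(axis=1)
        preds[i, has_data] = model.predict(X[has_data])
    return np.nanmean(preds, axis=0)          # subjects missing a modality just use the rest
```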

In fact, as most hospitals use electroencephalography (EEG) rather than MEG tests, another important finding was that the most powerful brain function measurement that MEG tests provide to the model can also be accurately measured by EEG. This means that in the clinic, EEG could potentially be substituted for MEG without an impact on the predictive power of the model.

"We have used an opportunistic approach to train a computer model to learn from the data available at hand and predict brain age," concludes senior author Alexandre Gramfort, Research Director at Inria. "We anticipate that similar performance can be unlocked using simpler EEG tests that are routinely used alongside MRI in the clinic and could easily be applied to other clinical end points, such as drug dosage, survival or diagnosis."

Credit: 
eLife

How does an increase in nitrogen application affect grasslands?

image: The PaNDiv Experiment in the small Swiss town of Münchenbuchsee, near the capital Bern.

Image: 
© H. Vincent

Virtually all of the grasslands in Europe are managed by farmers and whilst traditional management involved periodic cutting and grazing, modern intensive management involves applications of large amounts of nitrogen fertiliser to increase grass production. Traditionally managed grasslands contained many plant species, but intensively managed ones contain only a few fast-growing ones that profit from the high nutrient levels. The number of disease-causing plant pathogens also increases with fertilisation. All of these changes are occurring simultaneously; however, ecologists do not know which are most important or what happens when several change at the same time.

The PaNDiv (Pathogens, Nitrogen and Diversity) experiment of the Institute of Plant Sciences at the University of Bern is unique because it manipulates many of these factors to see how they interact with each other. In their first article, "Decomposition disentangled", the authors focussed on litter decomposition, i.e. the speed at which plant matter rots down, which is critical for maintaining a healthy and fertile soil.

More than 800 nylon bags

"Nitrogen shifted the plant community towards more fast-growing plant species and this in turn made the leaves decompose faster (feeding back more nitrogen to the soil). This indicates that fertilisation effects on functioning are underestimated if we don't consider the changes in species composition", explains Dr. Noémie Pichon, first author of the study. Litter from fast growing plants decomposes faster because fast growing species invest less in structural tissue and build thinner larger leaves with higher nitrogen content, which are efficient at capturing light but which don't live long. These leaves rot down faster than the small thick leaves produced by slow growing plants. Small nylon bags were filled with litter and left to decompose on each plot to test how fast the biomass produced breaks down. "We ended up sewing more than 800 bags with a sewing machine, but the quality of the results was worth the amount of work" continues Dr. Pichon.

Testing different factors at the same time

Eric Allan, project leader and professor at the Institute of Plant Sciences, says: "The results of our first study show why this kind of experiment is needed: understanding ecosystem functioning is complicated and only by testing many different factors at the same time can we understand their importance and therefore predict how our ecosystems will change in the future".

Several experiments have shown that ecosystems with more plant species have higher levels of ecosystem functioning. However, the PaNDiv experiment is unique because it not only manipulates plant diversity, on small plots of 2 x 2 m, but also varies the type of plant species present. This is done by creating plant communities consisting of only fast-growing plants (which thrive in fertile soil) or only slow-growing plants (which prosper in poorer conditions). It also combines these treatments with nitrogen fertilisation and the application of fungicide to remove plant pathogens. On each plot, the researchers measured several ecosystem functions. These included: how much plant matter the plots produced; how active the soil organisms were; the levels of nutrients and carbon in the soil; as well as the number of insects and plant pathogens present. The PaNDiv experiment is located in the small Swiss town of Münchenbuchsee, near the capital Bern. Pichon et al. (2020) is the first article in a long series that will be published using PaNDiv results.

Credit: 
University of Bern

Research shows that the combined production of fish and vegetables can be profitable

image: Greenhouse of the aquaponics facility of the 'Mueritzfischer' in Waren, Germany.

Image: 
Hendrik Monsees, IGB

When it comes to future food production, the combined farming of fish and vegetables through aquaponics is currently a hotly debated topic. But how realistic is the idea? Publicly available data and analysis on the economic feasibility of professional aquaponics are at present very limited. Researchers from the Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB) have just published an extensive profitability analysis of a facility that already produces fish and vegetables on a large scale. The result: aquaponics may have both environmental and cost benefits - if produced according to good agricultural practice and under suitable conditions.

The subject of analysis was the aquaponic system of the "Mueritzfischer", located in Waren (Mueritz). This 540 square metre facility produces fish and vegetables in a combined recirculating system. The aquaponic system was built within "INAPRO", an EU-funded project led by IGB.

The researchers carried out extensive analysis based on real one-year production data. Although the aquaponic system was not profitable at the research stage, the very extensive and valuable set of data it produced enabled the researchers to develop two scenarios for production practice. One scenario showed that the aquaponics approach is profitable if facilities are sufficiently large. On the basis of this scenario, the scientists developed a model case with defined economic key indicators, enabling them to calculate the figures for different sized facilities.
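To illustrate the kind of scale-dependent calculation such a model case enables, the sketch below uses entirely hypothetical cost and revenue figures, not values from the IGB study, to show why fixed costs make small facilities unprofitable while larger ones can break even:

```python
# Sketch: a toy scale-dependent profitability estimate for an aquaponics facility.
# All numbers are hypothetical placeholders, not figures from the IGB analysis;
# the point is only that fixed costs weigh less per square metre as facilities grow.

def annual_profit(area_m2: float) -> float:
    """Very rough aquaponics profit model (EUR/year) for a facility of a given size."""
    revenue_per_m2 = 150.0            # fish + vegetable sales, assumed
    variable_cost_per_m2 = 80.0       # feed, energy, labour, assumed
    fixed_costs = 50_000.0            # administration, planning, base staff, assumed
    investment = 600.0 * area_m2      # construction cost, assumed
    depreciation = investment / 15.0  # written off over an assumed 15-year lifetime
    return area_m2 * (revenue_per_m2 - variable_cost_per_m2) - fixed_costs - depreciation

for area in (540, 2000, 5000):
    print(area, round(annual_profit(area)))
# With these made-up inputs the small facility runs at a loss, while facilities of
# roughly 2,000 square metres and above turn a profit, mirroring the qualitative
# conclusion described above.
```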

"It is a good thing that there is a social, political and economic interest in aquaponics as a future technology. The aim of our study is to offer a research-based contribution to this debate, pointing out the opportunities and the challenges involved. This is one of the reasons why we decided to publish our findings cost-free, in open access format," explained Professor Werner Kloas, leader of the project.

According to the IGB researchers, the main obstacles to commercial aquaponics are the high investment costs and, especially in Germany, the high operating costs for items such as fish feed, labour and energy. They also state that operators must have the necessary expertise in both aquaculture and horticulture. Furthermore, the margin reportedly depends to a considerable extent on the market environment and on production risks, which in some cases are very difficult to forecast.

Urban farming: aquaponics in the city

The lead author of the study, Goesta Baganz, sees great potential in the system, despite the risks. Citing the example of urban spaces, he stated: "The already profitable model case would cover an overall space of about 2,000 square metres. This would mean that professional aquaponics would also be possible in urban and peri-urban areas, where space is scarce and often relatively expensive. If, therefore, urban aquaponics can make a profit on such a scale, there is even greater opportunity for local food production, which is becoming increasingly important throughout the world as urbanisation progresses."

"Considering current problems like climate change, population growth, urbanisation as well as overexploitation and pollution of natural resources, global food production is the largest pressure caused by humans on Earth, threatening ecosystems and the stability of societies. Consequently, one of the key societal goals is to achieve eco-friendly, efficient food production," explained Werner Kloas, putting aquaponics research into the global context.

How IGB aquaponics - known also as "Tomatofish" - works:

A wide range of aquaponics approaches exist, many of which originated from amateur settings. The approach developed by IGB researchers is based on two recirculating systems in which fish and plants are produced in separate units. Smart software and sensors continuously take measurements and interconnect the two cycles, whenever needed, to make optimum use of synergies, whilst still creating the best growth conditions for both units.

Credit: 
Forschungsverbund Berlin

Retrofitting of VW Diesel engines was successful

image: A Volkswagen Passat Diesel from 2011. Software updates after the Dieselgate scandal in 2015 significantly reduced the NOx emissions of such cars.

Image: 
Volkswagen, www.vwpress.co.uk

The VW diesel scandal began with a bang on September 18, 2015, the opening day of the Frankfurt International Motor Show (IAA): the US Environmental Protection Agency (EPA) published its "Notice of Violation", stating that VW diesel engines with 1.6 and 2.0 litres displacement (type code EA 189) contained illegal software designed to manipulate emissions. It quickly became clear that 11 million vehicles of the VW Group were affected worldwide. Company boss Martin Winterkorn resigned. Expensive lawsuits followed. In many countries, the EA 189 engines manufactured by the VW Group had to be retrofitted with software or hardware updates.

Now - almost five years later - a study by the University of York and Empa shows that the retrofitting was successful from an environmental point of view. Retrofitted VW diesel engines emit up to a third less of the harmful nitrogen oxides (NOx) in everyday use than engines running the original software from the Dieselgate era.

Exhaust gas measurement from the roadside

Stuart Grange works in Empa's Air Pollution/Environmental Technology Laboratory and also at the Wolfson Atmospheric Chemistry Laboratories at the University of York. Together with his colleagues, he used a special instrument to examine the exhaust plumes of 23,000 passing cars and analyzed the levels of NOx and CO2. The measurements took place in England between May 2012 and April 2018 - before and after the Dieselgate scandal.

At each measurement, the vehicle's registration number was also recorded and the vehicle data were retrieved from the British registration database MVRIS (Motor Vehicle Registration Information System). Among the 23,000 correctly measured exhaust plumes, Grange captured emissions from the VW EA 189 diesel engine 4,053 times, and these measurements formed the basis for his analysis. It was published in April 2020 in Environmental Science & Technology Letters, a journal of the American Chemical Society.
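Roadside remote-sensing studies of this kind typically express NOx relative to the CO2 in the same plume, so that emissions are normalised per unit of fuel burned, and then compare group averages before and after an intervention. The sketch below outlines that calculation with hypothetical column names; it is not the study's code:

```python
# Sketch: comparing fuel-normalised NOx emissions before and after the recall,
# as roadside remote-sensing data are typically analysed. Column names are
# hypothetical placeholders, not taken from the published dataset.
import pandas as pd

def percent_change_in_nox(df: pd.DataFrame) -> float:
    """df columns: 'nox_ppm', 'co2_percent', 'period' ('pre' or 'post' the retrofit era)."""
    df = df.copy()
    df["nox_per_co2"] = df["nox_ppm"] / df["co2_percent"]   # emission ratio per unit fuel burned
    means = df.groupby("period")["nox_per_co2"].mean()
    return 100.0 * (means["post"] - means["pre"]) / means["pre"]

# A negative result (e.g. roughly -36 for the 1.6-litre cars and -22 for the
# 2.0-litre cars) would correspond to the reductions reported below.
```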

Significant improvement

The results of the measurements show a clear effect: NOx emissions from the small 1.6-liter engines of the EA 189 series had decreased by more than 36 percent. VW had offered software and hardware retrofitting for this engine: In addition to updating the engine software, a small supplementary component was installed in the engine's intake duct, allowing the air mass sensor to work more precisely.

For the larger 2.0-liter engine of the EA 189 series only the software was modified. Here the measured NOx emissions fell by an average of almost 22 percent. The improvements for each individual car are even greater: in the UK, the retrofitting of the engines was voluntary and was only carried out by around 70 percent of VW owners. This means that a certain number of diesel engines that had not been retrofitted also passed the measuring device, thus worsening the average value.

For commercial vehicles with EA 189 engines - i.e. VW Caddy and VW Van - the results were significantly less impressive. The NOx values for the 1.6-litre diesel were just 22 percent better than before (compared with 36 percent for passenger cars), and for the 2.0-litre diesel the emissions were even 53 percent worse. The researchers suspect that fewer commercial vehicle operators had voluntarily carried out the retrofitting.

In Switzerland, retrofitting the EA 189 engine was mandatory. According to Amag company spokesman Dino Graf, all vehicles - both commercial vehicles and passenger cars - have meanwhile been retrofitted.

The good, the bad and the ugly

For comparison, Stuart Grange and his colleagues also examined the exhaust plumes of other vehicles before and after the diesel scandal, which did not need to be retrofitted. The ambient temperature plays a very important role here: the measurements before the diesel scandal were taken at an average of 20 degrees Celsius, the measurements after the diesel scandal at an average of 11 degrees Celsius. In another study last year, the researchers had already found dramatically higher NOx emissions caused by diesel cars on cold days. This effect has now reappeared - but not for all manufacturers.

Cars from General Motors (Opel, Vauxhall, Chevrolet), Renault-Nissan and Fiat Chrysler Automobiles emitted almost twice as much NOx into the environment on cold days as on warm days. VW's 3.0-litre diesel engine also showed 55% higher NOx values. But it can be done differently, as vehicles from the BMW Group (BMW, Mini), Volvo, PSA (Peugeot, Citroën) and the Indian brand Tata demonstrated: they did not emit more NOx on cold days. Their engine management was obviously programmed more carefully.

What is possible when engineers are allowed to make a real effort, however, was shown by VW itself: after the diesel scandal and the software update, the NOx values improved significantly, despite the markedly cooler weather!

Have all vehicles retrofitted

The researchers provide the approval and environmental authorities with a straightforward tip: the EU's NOx limits are still being violated in many European cities. But depending on the European country, only between 30 and 90 percent of Dieselgate engines have been retrofitted. Since VW Group vehicles are very widespread, mandatory retrofitting could certainly make the NOx limits easier to comply with.

The legislator has already tightened up in another area: today's vehicles must pass the stricter WLTP cycle. Exhaust gases are now measured in the laboratory at 23 and 14 degrees Celsius; during road tests, outside temperatures down to -7 degrees are permitted. A car that emits significantly more NOx in winter would no longer receive EU type approval today.

Credit: 
Swiss Federal Laboratories for Materials Science and Technology (EMPA)

Why pancreatic ductal adenocarcinoma is so lethal

image: Photomicrograph of a human pancreatic cancer tumor showing a mixture of different kinds of cells in close proximity to each other. Tumor-promoting, p63-positive cancer cells are stained red, neutrophils orange, fibroblasts white, and other p63-negative cancer cells are green. Given that p63-positive pancreatic cancer cells secrete inflammatory factors, this image demonstrates how these cancer cells can communicate with surrounding cells and promote inflammatory processes in human patients. The sample was stained by multiplexed immunofluorescence.

Image: 
Vakoc lab/CSHL, 2020

Pancreatic ductal adenocarcinoma (PDA) is a deadly cancer, often killing patients within a year of diagnosis. CSHL Professor Christopher Vakoc and his former postdoc Timothy Somerville discovered how pancreatic cells lose their identity, acquire a deadly new identity, and recruit nearby cells to help them grow, promote inflammation, and invade nearby tissues. This understanding could lead to new therapies similar to ones developed for other cancers.

Vakoc says, "We think part of the reason why these tumors are so aggressive is that they exploit normal cells. The normal cells that are in the vicinity of these tumors, are actually co-conspirators in this disease, and are being co-opted to kind of create a community of cells that are kind of teaming up with one another to drive this aggressive cancer to expand and metastasize. Ultimately, we think we sort of learned why this tumor is so aggressive through understanding these two mechanisms."

Somerville found two transcription factors that were highly abundant in PDA but not in a normal pancreas: ZBED2 (pronounced Z-bed too) and p63.

ZBED2 confuses the pancreas cell about its own identity. It displaces another transcription factor that is required for the pancreas cell to perform its normal functions as a pancreas cell. ZBED2 turns pancreas cells into squamous cells--a type of cell found in the skin. Patients with the worst outcomes have the highest levels of squamous cells in their tumors.

Little was known about ZBED2 when Somerville began his research. He says, "ZBED2 is a gene. It makes a protein, which is transcription factor ZBED2. What was completely unknown was what this protein ZBED2 was actually doing. We were able to demonstrate that it is a transcription factor, which means that it can bind to DNA and regulate other genes. And we were able to show what types of genes it regulates."

p63 recruits nearby cells--mostly neutrophils and fibroblasts--to support the cancerous squamous cells. They "alter the tumor microenvironment, making it more inflammatory and more aggressive. This is what we think is contributing to the particularly poor outcomes of this group of pancreatic patients," says Somerville.

PDA is notoriously resistant to chemotherapy. The wall of inflammatory cells makes it difficult for anti-tumor drugs to access the tumor. Somerville believes that understanding what ZBED2 and p63 are doing to make this cancer so aggressive will uncover ways that scientists can prevent or at least slow its growth. Somerville notes, "It's about exploiting transcription factors. If we understand their functions, we can use them to show us how to think about different ways to treat this disease."

The FDA has already approved drugs that target transcription factors in breast cancer, leukemia, and prostate cancer. Vakoc's lab is seeking to advance this concept for other types of cancer, such as PDA.

Credit: 
Cold Spring Harbor Laboratory

How the mouse conquered the house

Like humans, the house mouse, or Mus musculus sp., is widespread throughout the world, making it the most invasive rodent species. An international study involving eight countries and led by Thomas Cucchi of the 'Archaeozoology, Archaeobotany: Societies, Practices and Environments' laboratory (CNRS/Muséum national d'Histoire naturelle) reveals how human activities have favoured the emergence and spread of this animal over the last 20,000 years, from its origins in the Middle East to its arrival in Europe 4,000 years ago. To reconstruct the history of this biological invasion, the researchers analysed more than 800 mouse remains from 43 archaeological sites. The study, published in Scientific Reports on 19 May 2020, also reveals that the dates of the mouse's spread into Europe coincide with the first appearance of the domestic cat on the continent, suggesting that the introduction of this predator may have been motivated by the need to control mouse populations in order to protect grain and food stocks.

Credit: 
CNRS

Mount Sinai first in US using artificial intelligence to analyze COVID-19 patients

image: For each pair of images, the left image is a CT image showing the segmented lung used as input for the CNN (convolutional neural network algorithm) model trained on CT images only, and the right image shows the heatmap of pixels that the CNN model classified as having SARS-CoV-2 infection (red indicates higher probability). (a) A 51-year-old female with fever and history of exposure to SARS-CoV-2. The CNN model identified abnormal features in the right lower lobe (white color), whereas the two radiologists labeled this CT as negative. (b) A 52-year-old female who had a history of exposure to SARS-CoV-2 and presented with fever and productive cough. Bilateral peripheral ground-glass opacities (arrows) were labeled by the radiologists, and the CNN model predicted positivity based on features in matching areas. (c) A 72-year-old female with exposure history to the animal market in Wuhan presented with fever and productive cough. The segmented CT image shows ground-glass opacity in the anterior aspect of the right lung (arrow), whereas the CNN model labeled this CT as negative. (d) A 59-year-old female with cough and exposure history. The segmented CT image shows no evidence of pneumonia, and the CNN model also labeled this CT as negative.

Image: 
BioMedical Engineering and Imaging Institute (BMEII) at the Icahn School of Medicine at Mount Sinai

Mount Sinai researchers are the first in the country to use artificial intelligence (AI) combined with imaging and clinical data to analyze patients with coronavirus disease (COVID-19). They have developed a unique algorithm that can rapidly detect COVID-19 based on how lung disease looks in computed tomography (CT scans) of the chest, in combination with patient information including symptoms, age, bloodwork, and possible contact with someone infected with the virus. This study, published in the May 19 issue of Nature Medicine, could help hospitals across the world quickly detect the virus, isolate patients, and prevent it from spreading during this pandemic.

"AI has huge potential for analyzing large amounts of data quickly, an attribute that can have a big impact in a situation such as a pandemic. At Mount Sinai, we recognized this early and were able to mobilize the expertise of our faculty and our international collaborations to work on implementing a novel AI model using CT data from coronavirus patients in Chinese medical centers. We were able to show that the AI model was as accurate as an experienced radiologist in diagnosing the disease, and even better in some cases where there was no clear sign of lung disease on CT," says one of the lead authors, Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute (BMEII) at the Icahn School of Medicine at Mount Sinai. "We're now working on how to use this at home and share our findings with others--this toolkit can easily be deployed worldwide to other hospitals, either online or integrated into their own systems."

This research expands on a previous Mount Sinai study that identified a characteristic pattern of disease in the lungs of COVID-19 patients and showed how it develops over the course of a week and a half.

The new study involved scans of more than 900 patients that Mount Sinai received from institutional collaborators at hospitals in China. The patients were admitted to 18 medical centers in 13 Chinese provinces between January 17 and March 3, 2020. The scans included 419 confirmed COVID-19-positive cases (most either had recently traveled to Wuhan, China, where the outbreak began, or had contact with an infected COVID-19 patient) and 486 COVID-19-negative scans. Researchers also had patients' clinical information, including blood test results showing any abnormalities in white blood cell counts or lymphocyte counts as well as their age, sex, and symptoms (fever, cough, or cough with mucus). They focused on CT scans and blood tests since doctors in China use both of these to diagnose patients with COVID-19 if they come in with fever or have been in contact with an infected patient.

The Mount Sinai team integrated data from those CT scans with the clinical information to develop an AI algorithm. It mimics the workflow a physician uses to diagnose COVID-19 and gives a final prediction of positive or negative diagnosis. The AI model produces separate probabilities of being COVID-19-positive based on CT images, clinical data, and both combined. Researchers initially trained and fine-tuned the algorithm on data from 626 out of 905 patients, and then tested the algorithm on the remaining 279 patients in the study group (split between COVID-19-positive and negative cases) to judge the test's sensitivity; higher sensitivity means better detection performance. The algorithm was shown to have statistically significantly higher sensitivity (84 percent) compared to 75 percent for radiologists evaluating the images and clinical data. The AI system also improved the detection of COVID-19-positive patients who had negative CT scans. Specifically, it recognized 68 percent of COVID-19-positive cases, whereas radiologists interpreted all of these cases as negative due to the negative CT appearance. Improved detection is particularly important for keeping patients isolated when scans show no lung disease at the time patients first present with symptoms, since the previous study showed that lung disease doesn't always show up on CT in the first few days. Moreover, COVID-19 symptoms are often nonspecific, resembling the flu or a common cold, so the disease can be difficult to diagnose.
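The general fusion idea, combining an image branch with a branch for clinical variables before making a joint prediction, can be sketched as follows. The layer sizes and backbone are placeholders chosen for brevity, not the published Mount Sinai architecture:

```python
# Sketch of a joint image + clinical-data classifier. The tiny 3D CNN and layer
# sizes are placeholders for illustration; this is not the published model.
import torch
import torch.nn as nn

class JointCovidClassifier(nn.Module):
    def __init__(self, n_clinical: int = 10):
        super().__init__()
        # Minimal 3D CNN stand-in for the CT image branch.
        self.image_branch = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (batch, 8)
        )
        # Small MLP for age, sex, symptoms and blood-test values.
        self.clinical_branch = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.joint_head = nn.Sequential(nn.Linear(8 + 16, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, ct_volume: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.image_branch(ct_volume),
                              self.clinical_branch(clinical)], dim=1)
        return torch.sigmoid(self.joint_head(features))  # probability of COVID-19 positivity

# model = JointCovidClassifier()
# prob = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 10))  # two example patients
```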

CT scans are not widely used for diagnosis of COVID-19 in the United States; however, Dr. Fayad explains that imaging can still play an important role.

"Imaging can help give a rapid and accurate diagnosis--lab tests can take up to two days, and there is the possibility of false negatives--meaning imaging can help isolate patients immediately if needed, and manage hospital resources effectively. The high sensitivity of our AI model can provide a 'second opinion' to physicians in cases where CT is either negative (in the early course of infection) or shows nonspecific findings, which can be common. It's something that should be considered on a wider scale, especially in the United States, where currently we have more spare capacity for CT scanning than in labs for genetic tests," said Dr. Fayad, who is also a Professor of Diagnostic, Molecular and Interventional Radiology at the Icahn School of Medicine at Mount Sinai.

"This study is important because it shows that an artificial intelligence algorithm can be trained to help with early identification of COVID-19, and this can be used in the clinical setting to triage or prioritize the evaluation of sick patients early in their admission to the emergency room," says Matthew Levin, MD, Director of the Mount Sinai Health System's Clinical Data Science Team, and a member of the Mount Sinai COVID Informatics Center. "This is an early proof concept that we can apply to our own patient data to further develop algorithms that are more specific to our region and diverse populations."

Mount Sinai researchers are now focused on further developing the model to find clues about how well patients will do based on subtleties in their CT data and clinical information. They say this could be important to optimize treatment and improve outcomes.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Pretty as a peacock: The gemstone for the next generation of smart sensors

Scientists have taken inspiration from the structural colour of butterfly wings and peacock feathers to develop an innovative opal-like material that could be the cornerstone of next-generation smart sensors.

An international team of scientists, led by the Universities of Surrey and Sussex, has developed colour-changing, flexible photonic crystals that could be used to develop sensors that warn when an earthquake might strike next.

The wearable, robust and low-cost sensors can respond sensitively to light, temperature, strain or other physical and chemical stimuli making them an extremely promising option for cost-effective smart visual sensing applications in a range of sectors including healthcare and food safety.

In a study published by the journal Advanced Functional Materials, researchers outline a method to produce photonic crystals containing a minuscule amount of graphene resulting in a wide range of desirable qualities with outputs directly observable by the naked eye.

Intensely green under natural light, the extremely versatile sensors change colour to blue when stretched or turn transparent after being heated.

Dr. Izabela Jurewicz, Lecturer in Soft Matter Physics at the University of Surrey's Faculty of Engineering and Physical Sciences, said: "This work provides the first experimental demonstration of mechanically robust yet soft, free-standing and flexible, polymer-based opals containing solution-exfoliated pristine graphene. While these crystals are beautiful to look at, we're also very excited about the huge impact they could make to people's lives."

Alan Dalton, Professor of Experimental Physics at the University of Sussex's School of Mathematical and Physical Sciences, said: "Our research here has taken inspiration from the amazing biomimicry abilities in butterfly wings, peacock feathers and beetle shells where the colour comes from structure and not from pigments. Whereas nature has developed these materials over millions of years we are slowly catching up in a much shorter period."

Among their many potential applications are:

Time-temperature indicators (TTI) for intelligent packaging - The sensors are able to give a visual indication if perishables, such as food or pharmaceuticals, have experienced undesirable time-temperature histories. The crystals are extremely sensitive to even a small rise in temperature between 20 and 100 degrees C.

Fingerprint analysis - Their pressure-responsive shape-memory characteristics are attractive for biometric and anti-counterfeiting applications. Pressing the crystals with a bare finger can reveal fingerprints with high precision, showing well-defined ridges from the skin.

Bio-sensing - The photonic crystals can be used as tissue scaffolds for understanding human biology and disease. If functionalised with biomolecules, they could act as highly sensitive point-of-care testing devices for respiratory viruses, offering inexpensive, reliable, user-friendly biosensing systems.

Bio/health monitoring - The sensors' mechanochromic response allows for their application as body sensors which could help improve technique in sports players.

Healthcare safety - Scientists suggest the sensors could be used in a wrist band which changes colour to indicate to patients if their healthcare practitioner has washed their hands before entering an examination room.

The research draws on the Materials Physics Group's (University of Sussex) expertise in the liquid processing of two-dimensional nanomaterials, Soft Matter Group's (University of Surrey) experience in polymer colloids and combines it with expertise at the Advanced Technology Institute in optical modelling of complex materials. Both universities are working with the Sussex-based company Advanced Materials Development (AMD) Ltd to commercialise the technology.

Joseph Keddie, Professor of Soft Matter Physics at the University of Surrey, said: "Polymer particles are used to manufacture everyday objects such as inks and paints. In this research, we were able to finely distribute graphene at distances comparable to the wavelengths of visible light and showed how adding tiny amounts of the two-dimensional wonder-material leads to emerging new capabilities."

John Lee, CEO of Advanced Materials Development (AMD) Ltd, said: "Given the versatility of these crystals, this method represents a simple, inexpensive and scalable approach to produce multi-functional graphene infused synthetic opals and opens up exciting applications for novel nanomaterial-based photonics. We are very excited to be able to bring it to market in the near future."

Credit: 
University of Surrey

Parent-led discussion about mutual strengths benefits parent-teen communication

Philadelphia, May 19, 2020--A primary care-based intervention to promote parent-teen communication led to less distress and increased positive emotions among adolescents, as well as improved communication for many teens, according to a new study by researchers at the Center for Parent and Teen Communication at Children's Hospital of Philadelphia (CHOP). The findings, which were published today in The Journal of Pediatrics, highlight the potential impact of engaging parents in the primary care setting to improve parent-teen communication, which could lead to better adolescent health outcomes.

"These findings underscore the promise of this parent-directed intervention delivered in primary care to promote parent-teen communication and adolescent health outcomes," said Victoria A. Miller, PhD, a psychologist and Director of Research in the Craig-Dalsimer Division of Adolescent Medicine at CHOP and first author of the study. "Given the evidence that parents have a significant influence on their children during adolescence, supporting healthy parent-adolescent relationships should be a critical part of adolescent preventive care."

The intervention developed by the research team consisted of an eight-page booklet that addressed three main messages about parenting adolescents: adolescence is a time of change and opportunity, and parents matter now more than ever; teens need to remain connected to parents and at the same time develop a separate identity; and parents need to recognize and talk with teens about their strengths. To help promote discussions about strengths, the booklet offered prompts to help parents and their teens identify and discuss the strengths they see in themselves and each other, a unique approach that emphasized reciprocity, rather than one-way communication from parent to teen.

In order to assess the effectiveness of the materials on parent-teen communication, the researchers conducted a randomized controlled trial, in which 120 adolescents and an accompanying parent were placed either in an intervention group, which received the booklet and discussion instructions during their well check-up, or a control group, which did not receive the materials. The adolescents who enrolled in the study were 13- to 15-year-old established patients at a CHOP primary care practice. Parents and teens in both groups took a survey before their well visit and two months later.

The research team found that adolescents whose parents had received the booklet and discussion materials reported a decrease in distress after two months, while teens in the control group reported an increase. Patients in the intervention arm also demonstrated increased feelings of happiness and calm, while those in the control group showed a decrease in those emotions.

The researchers found that the materials had a positive impact on teens who had difficulty communicating openly with their parents before the trial period. The intervention did not, however, change the extent to which adolescents reported problematic communication with their parents or alter parental beliefs about typical adolescents being risky, moody, or friendly.

Although the intervention materials did not impact adolescent reports of well-being, the researchers were surprised to find that the parents in the control group, who did not receive the materials, reported a marginal increase in well-being after two months, whereas parents who received the materials did not. The researchers acknowledge this could be a spurious finding, but they surmise the materials might have raised concerns among certain parents about the status of their relationship with their teen or instigated discussions that led to disagreements or further tension.

"Given what we know about other communication interventions that have shown a positive impact on adolescent behavior, this study provides strong support for future research to further evaluate the potential impact and reach of interventions that target parents of adolescents in the context of pediatric primary care," Miller said.

Credit: 
Children's Hospital of Philadelphia

A spreadable interlayer could make solid state batteries more stable

image: Solid state batteries are of great interest to the electric vehicle industry. Scientists at Chalmers University of Technology, Sweden, and Xi'an Jiaotong University, China now present a new way of taking this promising concept closer to large-scale application. An interlayer, made of a spreadable, 'butter-like' material helps improve the current density tenfold, while also increasing performance and safety.

Image: 
Yen Strandqvist/Chalmers University of Technology

Solid state batteries are of great interest to the electric vehicle industry. Scientists at Chalmers University of Technology, Sweden, and Xi'an Jiaotong University, China now present a new way of taking this promising concept closer to large-scale application. An interlayer, made of a spreadable, 'butter-like' material helps improve the current density tenfold, while also increasing performance and safety.

"This interlayer makes the battery cell significantly more stable, and therefore able to withstand much higher current density. What is also important is that it is very easy to apply the soft mass onto the lithium metal anode in the battery - like spreading butter on a sandwich," says researcher Shizhao Xiong at the Department of Physics at Chalmers.

Alongside Chalmers Professor Aleksandar Matic and Professor Song's research group in Xi'an, Shizhao Xiong has been working for a long time on crafting a suitable interlayer to stabilise the interface for solid state batteries. The new results were recently presented in the prestigious scientific journal Advanced Functional Materials.

Solid state batteries could revolutionise electric transport. Unlike today's lithium-ion batteries, solid-state batteries have a solid electrolyte and therefore contain no environmentally harmful or flammable liquids.

Simply put, a solid-state battery can be likened to a dry sandwich. A layer of the metal lithium acts as a slice of bread, and a ceramic substance is laid on top like a filling. This hard substance is the solid electrolyte of the battery, which transports lithium ions between the electrodes of the battery. But the 'sandwich' is so dry, it is difficult to keep it together - and there are also problems caused by the compatibility between the 'bread' and the 'topping'. Many researchers around the world are working to develop suitable solutions to this problem.

The material which the researchers in Gothenburg and Xi'an are now working with is a soft, spreadable, 'butter-like' substance, made of nanoparticles of the ceramic electrolyte, LAGP, mixed with an ionic liquid. The liquid encapsulates the LAGP particles and makes the interlayer soft and protective. The material, which has a similar texture to butter from the fridge, fills several functions and can be spread easily.

Although the potential of solid-state batteries is very well known, there is as yet no established way of making them sufficiently stable, especially at high current densities, when a lot of energy is extracted from a battery cell very quickly, that is, during fast charging or discharging. The Chalmers researchers see great potential in the development of this new interlayer.

"This is an important step on the road to being able to manufacture large-scale, cost-effective, safe and environmentally friendly batteries that deliver high capacity and can be charged and discharged at a high rate," says Aleksandar Matic, Professor at the Department of Physics at Chalmers, who predicts that solid state batteries will be on the market within five years.

Credit: 
Chalmers University of Technology

South Asia faces increased threat of extreme heat, extreme pollution, study shows

Scientists know that extreme heat has a negative impact on the human body -- causing distress in the respiratory and cardiovascular systems -- and they know that extreme air pollution can also have serious effects.

But as climate change impacts continue globally, how often will humans be threatened by both of those extremes when they occur simultaneously? A Texas A&M University professor has led a regional research study, recently published in the new journal AGU Advances, answering that question for South Asia.

"South Asia is a hot-spot for future climate change impacts," said Yangyang Xu, an assistant professor in the Department of Atmospheric Sciences in the College of Geosciences at Texas A&M. Extreme heat occurrences worldwide have increased in recent decades, and at the same time, many cities are facing severe air pollution problems, featuring episodes of high particulate matter (PM) pollution, he said. This study provides an integrated assessment of human exposure to rare days of both extreme heat and high PM levels.

"Our assessment projects that occurrences of heat extremes will increase in frequency by 75% by 2050, that is an increase from 45 days a year to 78 days in a year. More concerning is the rare joint events of both extreme heat and extreme PM will increase in frequency by 175% by 2050," Xu said.

Climate change is not just a global average number - it is something you can feel in your neighborhood, he said, and that's why regional-scale climate studies are important.

The study's regional focus was South Asia: Afghanistan, Bangladesh, Bhutan, India, Myanmar, Nepal and Pakistan. The scientists used a high-resolution, decadal-long model simulation, using a state-of-the-science regional chemistry-climate model.

Xu led the first-of-its-kind research project, and scientists from the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, led the development of the fully coupled chemistry-climate model and performed the model simulations for present-day and future conditions.

"These models allow chemistry and climate to affect each other at every time step," said Rajesh Kumar, a project scientist at NCAR and co-author on the study.

The study was also co-authored by Mary Barth and Gerald A. Meehl, both senior scientists at NCAR, with most of the analysis done by Texas A&M atmospheric sciences graduate student Xiaokang Wu.

As climate change impacts continue to become reality, it is important for scientists to consider human impacts of multiple extreme conditions happening simultaneously, Xu said. Projected increases in humidity and temperature are expected to cause extreme heat stress for the people of South Asia, where the population is projected to increase from 1.5 billion people to 2 billion by 2050.

"It is important to extend this analysis on the co-variability of heat and haze extremes in other regions of the world, such as the industrial regions of the U.S., Europe, and East Asia," Barth said.

The analysis also showed that the fraction of land exposed to prolonged dual-extreme days increases more than tenfold by 2050.

"I think this study raises a lot of important concerns, and much more research is needed over other parts of the world on these compounded extremes, the risks they pose, and their potential human health effects," Xu said.

Credit: 
Texas A&M University

Cancer researchers locate drivers of tumor resistance

image: Cancer biologists at the Mays Cancer Center, home to UT Health San Antonio MD Anderson, have identified important drivers that enable tumors to change their behavior and evade anticancer therapies. Zhijie (Jason) Liu, Ph.D., is the senior author of a study on this topic in the journal Nature Cell Biology.

Image: 
UT Health San Antonio

Cancer biologists at the Mays Cancer Center, home to UT Health San Antonio MD Anderson, have identified important drivers that enable tumors to change their behavior and evade anticancer therapies.

By studying tumors in cell lines, mice and human samples, the team documented genetic signals that promote the conversion of cancer cells from one stage to another. The journal Nature Cell Biology published the research May 18.

“Although we focused on breast cancer in this study, we believe the identified mechanism can apply to all treatment-resistant cancers,” said study senior author Zhijie “Jason” Liu, Ph.D., assistant professor of molecular medicine in the Long School of Medicine at UT Health San Antonio. He is a research member of the Mays Cancer Center.

“The same phenomenon is happening in lung cancer, prostate cancer and many other cancers,” Dr. Liu said.

More dangerous

The ability of cancer cells to take different shapes, to grow faster or slower, and to vary in size is called “phenotypic plasticity.” Cancers that acquire plasticity often are more dangerous, becoming metastatic and resistant to many targeted therapies, Dr. Liu said.

The team’s next step is to screen new drugs, in the form of small molecules, that disrupt the genetic signals underlying tumor plasticity. Such a drug could be administered along with current targeted therapies to eliminate the problem of resistance to those treatments, Dr. Liu said.

“If we target the drivers of phenotypic plasticity, we may increase the effectiveness of many therapies and cure more cancers,” Dr. Liu said.

The team is led by Dr. Liu and his long-term collaborator Lizhen Chen, Ph.D., an assistant professor in the Sam and Ann Barshop Institute for Longevity and Aging Studies and the Department of Cell Systems and Anatomy at UT Health San Antonio. They collaborated with researchers in Paris, France, and Shanghai, China, who provided human patient tumor samples for the project.

Acknowledgments

Funding is from the Cancer Prevention and Research Institute of Texas, the V Foundation, the Max and Minnie Tomerlin Voelcker Fund, Susan G. Komen, the National Cancer Institute, the National Institute of General Medical Sciences and The University of Texas System.

Bi, M., Zhang, Z., Jiang, Y. et al. Enhancer reprogramming driven by high-order assemblies of transcription factors promotes phenotypic plasticity and breast cancer endocrine resistance. Nat Cell Biol (2020). https://www.nature.com/articles/s41556-020-0514-z

About us

The Long School of Medicine at The University of Texas Health Science Center at San Antonio is named for Texas philanthropists Joe R. and Teresa Lozano Long. The school is the largest educator of physicians in South Texas, many of whom remain in San Antonio and the region to practice medicine. The school teaches more than 900 students and trains 800 residents each year. As a beacon of multicultural sensitivity, the school annually exceeds the national medical school average of Hispanic students enrolled. The school’s clinical practice is the largest multidisciplinary medical group in South Texas with 850 physicians in more than 100 specialties. The school has a highly productive research enterprise where world leaders in Alzheimer’s disease, diabetes, cancer, aging, heart disease, kidney disease and many other fields are translating molecular discoveries into new therapies. The Long School of Medicine is home to a National Cancer Institute-designated cancer center known for prolific clinical trials and drug development programs, as well as a world-renowned center for aging and related diseases.

The University of Texas Health Science Center at San Antonio, also referred to as UT Health San Antonio, is one of the country’s leading health sciences universities and is designated as a Hispanic-Serving Institution by the U.S. Department of Education. With missions of teaching, research, patient care and community engagement, its schools of medicine, nursing, dentistry, health professions and graduate biomedical sciences have graduated more than 37,000 alumni who are leading change, advancing their fields, and renewing hope for patients and their families throughout South Texas and the world. To learn about the many ways “We make lives better®,” visit www.uthscsa.edu.


Journal

Nature Cell Biology

DOI

10.1038/s41556-020-0514-z

Credit: 
University of Texas Health Science Center at San Antonio

Early Bird uses 10 times less energy to train deep neural networks

image: Rice University's Early Bird method for training deep neural networks finds key connectivity patterns early in training, reducing the computations and carbon footprint for the increasingly popular form of artificial intelligence known as deep learning. (Graphic courtesy of Y. Lin/Rice University)

Image: 
Y. Lin/Rice University

HOUSTON -- (May 18, 2020) -- Rice University's Early Bird couldn't care less about the worm; it's out to save megatons of greenhouse gas emissions.

Early Bird is an energy-efficient method for training deep neural networks (DNNs), the form of artificial intelligence (AI) behind self-driving cars, intelligent assistants, facial recognition and dozens more high-tech applications.

Researchers from Rice and Texas A&M University unveiled Early Bird April 29 in a spotlight paper at ICLR 2020, the International Conference on Learning Representations. A study by lead authors Haoran You and Chaojian Li of Rice's Efficient and Intelligent Computing (EIC) Lab showed Early Bird could use 10.7 times less energy to train a DNN to the same level of accuracy or better than typical training. EIC Lab director Yingyan Lin led the research along with Rice's Richard Baraniuk and Texas A&M's Zhangyang Wang.

"A major driving force in recent AI breakthroughs is the introduction of bigger, more expensive DNNs," Lin said. "But training these DNNs demands considerable energy. For more innovations to be unveiled, it is imperative to find 'greener' training methods that both address environmental concerns and reduce financial barriers of AI research."

Training cutting-edge DNNs is costly and getting costlier. A 2019 study by the Allen Institute for AI in Seattle found that the number of computations needed to train a top-flight deep neural network increased 300,000-fold between 2012 and 2018. A separate 2019 study by researchers at the University of Massachusetts Amherst found that the carbon footprint of training a single, elite DNN was roughly equivalent to the lifetime carbon dioxide emissions of five U.S. automobiles.

DNNs contain millions or even billions of artificial neurons that learn to perform specialized tasks. Without any explicit programming, deep networks of artificial neurons can learn to make humanlike decisions -- and even outperform human experts -- by "studying" a large number of previous examples. For instance, if a DNN studies photographs of cats and dogs, it learns to recognize cats and dogs. AlphaGo, a deep network trained to play the board game Go, beat a professional human player in 2015 after studying tens of thousands of previously played games.

"The state-of-art way to perform DNN training is called progressive prune and train," said Lin, an assistant professor of electrical and computer engineering in Rice's Brown School of Engineering. "First, you train a dense, giant network, then remove parts that don't look important -- like pruning a tree. Then you retrain the pruned network to restore performance because performance degrades after pruning. And in practice you need to prune and retrain many times to get good performance."

Pruning is possible because only a fraction of the artificial neurons in the network can potentially do the job for a specialized task. Training strengthens connections between necessary neurons and reveals which ones can be pruned away. Pruning reduces model size and computational cost, making it more affordable to deploy fully trained DNNs, especially on small devices with limited memory and processing capability.
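For readers who want a concrete picture, the sketch below shows what the progressive prune-and-train loop Lin describes can look like in PyTorch, using built-in magnitude pruning. It is a minimal illustration, not the authors' actual code: the model, data loader, pruning ratio and epoch counts are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model, loader, epochs, lr=0.01):
    """Ordinary supervised training (the expensive step)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def prune_conv_layers(model, amount=0.2):
    """Zero out the smallest-magnitude weights in every conv layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)

def progressive_prune_and_train(model, loader, rounds=3):
    # 1. Train the dense, giant network first (the most expensive step).
    train(model, loader, epochs=30)
    # 2. Repeatedly prune a fraction of the weights, then retrain to
    #    recover the accuracy lost in pruning.
    for _ in range(rounds):
        prune_conv_layers(model, amount=0.2)
        train(model, loader, epochs=10)
    return model

Early Bird's contribution, described below, is to avoid paying the full cost of that first dense-training step.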

"The first step, training the dense, giant network, is the most expensive," Lin said. "Our idea in this work is to identify the final, fully functional pruned network, which we call the 'early-bird ticket,' in the beginning stage of this costly first step."

By looking for key network connectivity patterns early in training, Lin and colleagues were able both to discover the existence of early-bird tickets and to use them to streamline DNN training. In experiments on various benchmark data sets and DNN models, they found that early-bird tickets could emerge within the first tenth, or less, of the initial training phase.

"Our method can automatically identify early-bird tickets within the first 10% or less of the training of the dense, giant networks," Lin said. "This means you can train a DNN to achieve the same or even better accuracy for a given task in about 10% or less of the time needed for traditional training, which can lead to more than one order savings in both computation and energy."

Developing techniques to make AI greener is the main focus of Lin's group. Environmental concerns are the primary motivation, but Lin said there are multiple benefits.

"Our goal is to make AI both more environmentally friendly and more inclusive," she said. "The sheer size of complex AI problems has kept out smaller players. Green AI can open the door enabling researchers with a laptop or limited computational resources to explore AI innovations."

Credit: 
Rice University