
Urban dogs are more fearful than their cousins from the country

Fearfulness is one of the most common behavioural disorders in dogs. As an emotion, fear is a normal and vital reaction that helps individuals survive in threatening circumstances. When the fearfulness is excessive and disturbs the dog's life, it is referred to as a behavioural problem. Excessive fearfulness can significantly impair the dog's welfare, and it is also known to weaken the relationship between dog and owner.

In dogs, social fearfulness refers particularly to fear of unfamiliar people and other dogs. At the University of Helsinki, risk factors predisposing dogs to social fearfulness were investigated using a dataset of nearly 6,000 dogs, selected from a larger body of data, a behavioural survey encompassing almost 14,000 dogs.

Based on the survey, inadequate socialisation of puppies to various situations and stimuli had the strongest link with social fearfulness. The living environment also appears to make a difference, as dogs that live in urban environments were observed to be more fearful than dogs living in rural environments.

"This has not actually been previously investigated in dogs. What we do know is that human mental health problems occur more frequently in the city than in rural areas. However, further studies are needed before any more can be said about causes pertaining to the living environment," says Jenni Puurunen, a postdoctoral researcher at the Faculty of Veterinary Medicine, University of Helsinki.

Supporting prior research evidence, social fearfulness was demonstrated to be more common among neutered females and small dogs.

Alongside size and gender, activity is another factor associated with fearfulness. Fearful dogs were less active than bolder ones, and their owners also involved them in training and other activities significantly less often. Professor Hannes Lohi from the University of Helsinki asks whether this is a cause or a consequence.

"Activity and stimuli have already been found to have a positive effect on behaviour, in both dogs and humans. Of course, the lesser activity of fearful dogs can also be down to their owners wanting to avoid exposing their dogs to stressful situations. It may be that people just are not as active with fearful dogs," Lohi points out.

Furthermore, significant differences between breeds were identified in the study. Spanish Water Dogs and Shetland Sheepdogs expressed social fearfulness the most, while Wheaten Terriers were among the bravest breeds. The Cairn Terrier and the Pembroke Welsh Corgi expressed little fearfulness towards other dogs.

"Differences between breeds support the notion that genes have an effect on fearfulness, as well as on many other mental health problems. This encourages us to carry out further research especially in terms of heredity. All in all, this study provides us with tools to improve the welfare of our best friend: diverse socialisation in puppyhood, an active lifestyle and carefully made breeding choices can significantly decrease social fearfulness," Lohi sums up.

Professor Lohi's group investigates the epidemiology of canine behaviour, as well as related environmental and genetic factors and metabolic changes.

Credit: 
University of Helsinki

Consumption of 3-6 eggs/week lowers the risk of cardiovascular disease and death

image: Associations of egg consumption with risk of CVD endpoints and all-cause mortality

Image: 
©Science China Press

Eggs are acknowledged as a good source of high-quality protein and contain bioactive components beneficial to health, but their yolks are also rich in cholesterol, which makes the public hesitant about consuming whole eggs. Up to now, most studies exploring the association of egg consumption with incident cardiovascular disease (CVD) or total death have been conducted in high-income countries, and findings have been inconsistent across populations and CVD subtypes. Accordingly, no consensus has been reached on recommendations for egg consumption around the world.

The current study, conducted by Xia and her colleagues from Fuwai Hospital, Chinese Academy of Medical Sciences, suggested that there were U-shaped relationships between egg consumption and the risks of incident CVD and total death among the general Chinese population, with those who consumed 3-6 eggs/week at the lowest risk. More specifically, consumption of ≥10 eggs/week was associated with 39% and 13% higher risk of incident CVD and total death, respectively.

In addition, the researchers pointed out that the influence of egg consumption seemed to differ across CVD subtypes. Individuals with higher egg consumption were more likely to have an increased risk of coronary heart disease (CHD) and ischemic stroke, while an elevated risk of hemorrhagic stroke was found only among those with lower consumption.

The current study was based on the Prediction for Atherosclerotic Cardiovascular Disease Risk in China (China-PAR) project, which was established to estimate the burden of CVD and identify related risk factors in the general Chinese population. A total of 102,136 participants from 15 provinces across China were included, all of whom were free of CVD, cancer and end-stage renal disease at baseline. During up to 17 years of follow-up, 4,848 cases of incident CVD (including 1,273 of CHD and 2,919 of stroke) and 5,511 deaths were identified, with a follow-up rate of over 90%.

Previous Chinese evidence from the China Kadoorie Biobank (CKB) study indicated that low to moderate intake of eggs (about 5 eggs/week) was significantly associated with a lower risk of CVD in comparison with never or rare consumption (about 2 eggs/week). However, the lack of participants consuming ≥1 egg/day limited its ability to assess the influence of higher egg consumption. In the China-PAR project, about 25% of participants consumed 3-6 eggs/week, while 12% and 24% of participants fell into the lower and the ≥10 eggs/week consumption categories, respectively. Benefiting from this wide range of egg consumption, the present study is the first to demonstrate the potential adverse effects of excessive egg intake in the Chinese population.

The removal of limits on dietary cholesterol in the most recent US and Chinese dietary guidelines has provoked considerable reaction. Both the American Heart Association and the Chinese Preventive Medicine Association subsequently released scientific reports emphasizing that "dietary cholesterol should not be given a free pass to be consumed in unlimited quantities". Considering the rapid increase in both cholesterol intake and the prevalence of hypercholesterolemia in China, measures should be taken to encourage the public to limit dietary cholesterol intake. Meanwhile, those who rarely consume eggs could be advised to eat a few more in the future. This novel evidence should be considered when updating guidelines on dietary cholesterol and CVD prevention for the general Chinese population, and probably for other populations in low- and middle-income countries.

Credit: 
Science China Press

Possible lives for food waste from restaurants

image: The research team at the University of Cordoba

Image: 
University of Cordoba

More than a third of the food produced ends up being wasted. This creates environmental, ethical and financial problems and also undermines food security. The negative effects of waste management, such as bad smells and the emission of greenhouse gases, make the bioeconomy one of the best options for reducing these problems.

Research into the bioeconomy and the search for strategies to valorize waste, such as agricultural by-products, is the focus of the BIOSAHE (a Spanish acronym for biofuels and energy-saving systems) research group at the University of Cordoba. Led by Professor Pilar Dorado, the group is now taking a step further: it aims to establish the best valorization paths for restaurant food waste. Among the possible lives for restaurant scraps, the researchers are looking to find which is most effective and which provides the most value.

Along these lines, researcher Miguel Carmona and the rest of the BIOSAHE group, including Javier Sáez, Sara Pinzi, Pilar Dorado and Isabel López García, developed a methodology that assesses food waste and selects the best valorization path.

After analyzing food waste from restaurants of different kinds and calibers, the team characterized its main chemical components: starches, proteins, lipids and fibers. The aim was to find out which compounds the waste contains, and in what amounts, in order to link it to the best option for its transformation.

Once the chemical compounds in the scraps were identified, a statistical study was performed to analyze their variability, that is, how the compounds and their amounts vary from one batch of waste to another.
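
A minimal sketch of this kind of variability analysis is shown below, in Python; the compound values and the simple decision rule are hypothetical, meant only to illustrate how per-compound variability across restaurant samples could be summarized and linked to a valorization path.

```python
# Illustrative sketch only: compound names, values and the decision rule are hypothetical,
# not the BIOSAHE group's actual methodology.
import statistics

# Hypothetical composition measurements (% dry weight) for food-waste samples
# collected from three restaurants.
samples = {
    "starch":  [32.0, 41.5, 28.3],
    "protein": [18.2, 15.9, 22.4],
    "lipid":   [21.7, 12.8, 30.1],
    "fiber":   [9.4, 11.2, 8.7],
}

for compound, values in samples.items():
    mean = statistics.mean(values)
    cv = statistics.stdev(values) / mean  # coefficient of variation between restaurants
    print(f"{compound:8s} mean={mean:5.1f}%  CV={cv:.2f}")

# A simple, hypothetical decision rule: route the waste toward the process that uses its
# dominant component, e.g. lipid-rich waste to biodiesel, starch-rich waste to bioplastics.
dominant = max(samples, key=lambda c: statistics.mean(samples[c]))
print("Dominant component across samples:", dominant)
```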

Identifying compound typology and variability makes it possible to predict the optimal valorization process for a given waste, thus helping industries within the circular economy and the resource valorization sector to make decisions.

In this way, restaurant scraps can be given new lives as biodiesel, electricity or bioplastic. Specifically, the project that Pilar Dorado heads is developing a biorefinery that would, just as oil refineries do, generate biofuel, bioplastic, biolubricants and products with added value for the chemical, electrical and heat industries from restaurant food waste. In this project, in addition to the methodology that characterizes scraps and chooses the best paths, the team has produced bioplastic that can be used as surgical sutures.

Credit: 
University of Córdoba

Uncertain climate future could disrupt energy systems

Extreme weather events - such as severe drought, storms, and heat waves - have been forecast to become more commonplace and are already starting to occur. What has been less studied is the impact on energy systems and how communities can avoid costly disruptions, such as partial or total blackouts.

Now an international team of scientists has published a new study proposing an optimization methodology for designing climate-resilient energy systems, helping ensure that communities will be able to meet future energy needs despite weather and climate variability. Their findings were recently published in Nature Energy.

"On one side is energy demand - there are different types of building needs, such as heating, cooling, and lighting. Because of long-term climate change and short-term extreme weather events, the outdoor environment changes, which leads to changes in building energy demand," said Tianzhen Hong, a Berkeley Lab scientist who helped design the study. "On the other side, climate can also influence energy supply, such as power generation from hydro, solar and wind turbines. Those could also change because of weather conditions."

Working with collaborators from Switzerland, Sweden, and Australia, and led by a scientist at the Ecole Polytechnique Fédérale de Lausanne (EPFL), the team developed a stochastic-robust optimization method to quantify impacts and then use the data to design climate-resilient energy systems. Stochastic optimization methods are often used when variables are random or uncertain.

"Energy systems are built to operate for 30 or more years. Current practice is just to assume typical weather conditions today; urban planners and designers don't commonly factor in future uncertainties," said Hong, a computational scientist leading multi-scale energy modeling and simulation at Berkeley Lab. "There is a lot of uncertainty around future climate and weather."

"Energy systems," as defined in the study, provide energy needs, and sometimes energy storage, to a group of buildings. The energy supplied could include gas or electricity from conventional or renewable sources. Such community energy systems are not as common in the U.S. but may be found on some university campuses or in business parks.

The researchers investigated a wide range of scenarios for 30 Swedish cities. They found that under some scenarios the energy systems in some cities would not be able to generate enough energy. Notably, climate variability could create a 34% gap between total energy generation and demand and a 16% drop in power supply reliability - a situation that could lead to blackouts.
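
As a toy illustration of the general idea behind scenario-based, robustness-aware sizing of an energy system, the Python sketch below sizes renewable generation capacity against a set of weather scenarios; the scenarios, cost figures and 5% reliability cut-off are invented for illustration and are not taken from the Nature Energy study.

```python
# Toy illustration of scenario-based (stochastic-robust) sizing; all numbers are invented.
import random

random.seed(0)

# Hypothetical weather scenarios: each scales renewable output and demand differently.
scenarios = [
    {"renewable_factor": random.uniform(0.5, 1.0), "demand": random.uniform(90, 130)}
    for _ in range(200)
]

COST_PER_MW_INSTALLED = 1.0   # capital cost units per MW of renewable capacity
COST_PER_MWH_BACKUP = 0.2     # cost of covering any shortfall with backup generation

def evaluate(capacity_mw):
    """Return (expected cost, worst-case unmet demand fraction) across all scenarios."""
    costs, unmet = [], []
    for s in scenarios:
        generated = capacity_mw * s["renewable_factor"]
        shortfall = max(0.0, s["demand"] - generated)
        costs.append(capacity_mw * COST_PER_MW_INSTALLED + shortfall * COST_PER_MWH_BACKUP)
        unmet.append(shortfall / s["demand"])
    return sum(costs) / len(costs), max(unmet)

# Robust criterion: among designs whose worst-case unmet demand stays below 5%,
# pick the one with the lowest expected cost.
feasible = []
for capacity in range(80, 301, 5):
    expected_cost, worst_unmet = evaluate(capacity)
    if worst_unmet <= 0.05:
        feasible.append((expected_cost, capacity))

print("Chosen capacity (MW):", min(feasible)[1] if feasible else "none feasible")
```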

"We observed that current energy systems are designed in a way that makes them highly susceptible to extreme weather events such as storms and heat waves," said Dasun Perera, a scientist at EPFL's Solar Energy and Building Physics Laboratory and lead author of the study. "We also found that climate and weather variability will result in significant fluctuations in renewable power being fed into electric grids as well as energy demand. This will make it difficult to match the energy demand and power generation. Dealing with the effects of climate change is going to prove harder than we previously thought."

The authors note that 3.5 billion people live in urban areas, consuming two-thirds of global energy, and by 2050 urban areas are expected to hold more than two-thirds of the world's population. "Distributed energy systems that support the integration of renewable energy technologies will support the energy transition in the urban context and play a vital role in climate change adaptation and mitigation," they wrote.

Hong leads an urban science research group at Berkeley Lab that studies energy and environmental issues at the city scale. The group is part of Berkeley Lab's Building Technology and Urban Systems Division, which for decades has been at the forefront of research into advancing energy efficiency in the built environment.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Most of Earth's carbon was hidden in the core during its formative years

image: A team of scientists reports March 30 in the journal Proceedings of the National Academy of Sciences how carbon behaved during Earth's violent formative period. The findings can help scientists understand how much carbon likely exists in the planet's core and the ways it influences chemical and dynamic activities that shape the world, including the convective motion that powers the magnetic field that protects Earth from cosmic radiation. The team's lab experiments compared carbon's compatibility with the silicates that comprise Earth's mantle (outer circle) to its compatibility with the iron that comprises the planet's core (inner circle). The lab experiments were conducted under conditions mimicking Earth's interior during its formative period. They found that more carbon would have stayed in the mantle than previously thought.

Image: 
Rebecca Fischer

Carbon is an essential building block for all living things on Earth and plays a vital role in many of the geologic processes that shape life on the planet, including climate change and ocean acidification. But the total amount of carbon on Earth remains a mystery, because more than 90% of Earth's carbon is inaccessible to direct observation and measurement, deep within the planet at extreme temperature and pressure.

Now, a team of scientists reports March 30 in the journal Proceedings of the National Academy of Sciences how carbon behaved during Earth's violent formative period. The findings can help scientists understand how much carbon likely exists in the planet's core and the ways it influences chemical and dynamic activities that shape the world, including the convective motion that powers the magnetic field that protects Earth from cosmic radiation.

The research team included Elizabeth Cottrell, the Smithsonian's National Museum of Natural History curator of rocks and ores; former museum fellow Marion Le Voyer; Harvard University professor and former museum fellow Rebecca Fischer; Yale University's Kanani Lee; and Carnegie Institution for Science's late Erik Hauri, to whom the paper is dedicated.

Earth's core is composed mostly of iron and nickel, but its density indicates the presence of other lighter elements, such as carbon, silicon, oxygen, sulfur or hydrogen. It has long been suspected that there is a tremendous reservoir of carbon hidden in the core. In an effort to quantify the amount of carbon in the core, the research team turned to laboratory experiments that mimic the conditions of Earth's formation to understand how carbon got there in the first place.

"To understand present day Earth's carbon content, we went back to our planet's babyhood, when it accreted from material surrounding the young sun and eventually separated into chemically distinct layers--core, mantle and crust," Fischer said. "We set out to determine how much carbon entered the core during these processes."

This was accomplished by lab experiments that compared carbon's compatibility with the silicates that comprise the mantle to its compatibility with the iron that comprises the core. These lab experiments were conducted under the extreme pressures and temperatures found deep inside the Earth during its formative period.

"We found that more carbon would have stayed in the mantle than we previously suspected," Cottrell said. "This means the core must contain significant amounts of other lighter elements, such as silicon or oxygen, both of which become more attracted to iron at high temperatures."

Despite this surprising discovery, the vast majority of Earth's total carbon inventory likely does reside in the core. Even so, carbon makes up only a negligible fraction of the core's overall composition.

Tim McCoy, the museum's curator of meteorites who was not involved in the study, said, "This study provides compelling evidence that carbon-rich meteorites like those known from our collections delivered most of Earth's carbon when our planet first formed, with only a small percentage of carbon perhaps delivered by late-impacting asteroids or comets."

Credit: 
Smithsonian

Surprising hearing talents in cormorants

image: Biologist Jakob Christensen-Dalsgaard is an expert in animal hearing with a focus on the evolution of hearing and directional hearing. He has worked on animals like frogs, alligators, turtles, lizards, and birds.

Image: 
SDU

Many aquatic animals like frogs and turtles spend a big part of their lives under water and have adapted to this condition in various ways, one being that they have excellent hearing under water.

A new study shows that the same goes for a diving bird, the great cormorant.

- This is surprising because the great cormorant spends most of its time out of the water. It is the first time we see such extensive hearing adaptations in an animal that does not spend most of its time under water, says biologist Jakob Christensen-Dalsgaard, University of Southern Denmark.

Human noise is a problem for animals at sea

Researchers are increasingly paying attention to the living conditions of animals living in or near the oceans.

Oceans are no longer the quiet habitats they used to be. Human activities such as ship traffic, fishing and wind turbine construction produce noise, and this noise may pose a threat to the oceans' animals.

- We need more knowledge about how animals are affected by this noise - does it impair their hearing or their hunting and fishing abilities? We have studied the effect on whales for some time now, but we don't know very much about diving birds. There are many vulnerable animal species living or foraging at sea that may be negatively affected by human noise, says Jakob Christensen-Dalsgaard.

Listening for fish?

- Even though the great cormorant is not an aquatic animal, it does frequently visit the water column, so it makes sense that it, too, has adapted its ears for hearing under water, Jakob Christensen-Dalsgaard says about the new study.

Whereas the great cormorant spends about 30 seconds foraging under water in active pursuit of prey, approximately 150 other species of diving birds spend up to several minutes in pursuit of fish and squid.

Foraging under water is challenging for the sensory apparatus of the birds, however, and for most birds, their visual acuity under water is no better than that of humans. So, the birds may use other sensory modalities.

We know very little about birds' hearing under water

Apart from a few behavioral studies, the hearing of birds under water is unknown.

Previously, researchers from the University of Southern Denmark have documented that great cormorants and gentoo penguins respond to sound under water, but this is the first study of the physiology of underwater hearing in any bird.

The study shows that the cormorant ear has been specialized for underwater hearing.

How was the study done?

To study the cormorant's hearing in air and under water, the scientists measured auditory evoked responses and neural activity in response to airborne and underwater sound in anesthetized birds.

The neural responses to airborne and underwater sounds were measured using electrodes under the skin. In this way, the scientists could measure hearing thresholds to sound in air and under water.

Thresholds in water and air proved to be similar, with almost the same sensitivity to sound pressure in the two media. This is surprising, because similar sound pressures in air and water mean that the threshold sound intensity (the energy radiated by the sound wave) is much lower in water, so the ear is more sensitive to underwater than to airborne sound.
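
A back-of-the-envelope relation helps explain why equal pressure thresholds imply much greater sensitivity under water. The figures below are standard acoustics values, not measurements from the study: for a plane wave, the sound intensity I depends on the sound pressure p and the characteristic acoustic impedance ρc of the medium.

```latex
% Standard plane-wave relation (not data from the study):
I = \frac{p^{2}}{\rho c}, \qquad
(\rho c)_{\mathrm{air}} \approx 4.1 \times 10^{2}\ \mathrm{Pa\,s\,m^{-1}}, \qquad
(\rho c)_{\mathrm{water}} \approx 1.5 \times 10^{6}\ \mathrm{Pa\,s\,m^{-1}}
```

Because water's impedance is roughly 3,600 times that of air, the same sound pressure carries roughly 3,600 times less intensity (about 35 dB less), which is why matching pressure thresholds in the two media point to an ear that is far more sensitive to underwater sound energy.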

The cost: Stiffer and heavier eardrums

- We found anatomical changes in the ear structures compared to terrestrial birds. These changes may explain the good sensitivity to underwater sound. The adaptations also may provide better protection of the eardrums from the water pressure, says Jakob Christensen-Dalsgaard.

But there is - as always in Nature - a cost to these benefits:

Their hearing in air is not as sensitive as that of many other birds, and their eardrums are stiffer and heavier.

How has the ear adapted?

The cormorant eardrum shows large vibrations in response to underwater sound, so the sensitivity likely is mediated by the eardrum and middle ear.

Underwater eardrum vibrations and anatomical features of the cormorant ear are similar to features found in turtles and aquatic frogs, which also appear to be specialized for underwater hearing.

The data suggest convergent modifications of the tympanic ear in these three distantly related species, and similar modifications may be found in other diving birds.

Credit: 
University of Southern Denmark

Tiny fly from Los Angeles has a taste for crushed invasive snails

image: Female phorid fly (Megaselia steptoeae) feeding on a crushed Draparnaud's glass snail (Oxychilus draparnaudi).

Image: 
Kat Halsey

As part of their project BioSCAN - devoted to the exploration of the unknown insect diversity in and around the city of Los Angeles - the scientists at the Natural History Museum of Los Angeles County (USA) have already discovered numerous insects that are new to science, but they are still only guessing about the lifestyles of these species.

"Imagine trying to find a given 2 mm long fly in the environment and tracking its behavior: it is the smallest imaginable needle in the largest haystack. So when researchers discover new life histories, it is something worth celebrating," explains Dr. Brian Brown, lead author of a recent paper, published in the scholarly open-access Biodiversity Data Journal.

However, Brown and Maria Wong, a former BioSCAN technician, while doing field work at the L.A. County Arboretum, were quick to reveal a curious peculiarity about one particular species discovered as part of the project a few years ago. They successfully lured female phorid flies by crushing tiny, invasive snails and using them as bait. By comparison, the majority of phorid flies whose lifestyles have been observed are parasitoids of social insects like ants.

Within mere seconds after the team crushed tiny invasive snails (Oxychilus draparnaudi), females representing the fly species Megaselia steptoeae arrived at the scene and busied themselves feeding. Brown and Wong then collected some and brought them home alive along with some dead snails. One of the flies even laid eggs. After hatching, the larvae were observed feeding upon the rotting snails and soon they developed to the pupal stage. However, none was reared to adulthood.

Interestingly, the host species - which the fly both feeds on and lays its eggs in - commonly known as Draparnaud's glass snail, is a European species that has been introduced into many parts of the world. Meanwhile, the studied fly is native to L.A. So far, it is unknown when and how the mollusc appeared on the menu of the insect.

To make things even more curious, species of other snail genera failed to attract the flies, which hints at a peculiar interaction worthy of further study, point out the scientists behind the study, Brown and Jann Vendetti, curator of the NHM Malacology collection. They also hope to lure in other species of flies by crushing other species of snails.

Credit: 
Pensoft Publishers

Guidelines on caring for ICU patients with COVID-19

image: Waleed Alhazzani is assistant professor of medicine at McMaster University and an intensive care physician at St. Joseph's Healthcare Hamilton.

Image: 
Photo courtesy McMaster University

Hamilton, ON (April 1, 2020) - An international team including McMaster University researchers has come together to issue guidelines for health-care workers treating intensive care unit (ICU) patients with COVID-19.

The Surviving Sepsis Campaign COVID-19 panel has released 54 recommendations on such topics as infection control, laboratory diagnosis and specimens, hemodynamic (blood flow) support, ventilation support, and COVID-19 therapy.

The panel of 36 experts, with six from McMaster, telescoped what would have been more than a year of work into less than three weeks.

The guidelines were co-published in the journals Critical Care Medicine and Intensive Care Medicine.

"Previously there was limited guidance on acute management of critically ill patients with COVID-19, although the World Health Organization and the United States Centers for Disease Control and Prevention have issued preliminary guidance on infection control, screening and diagnosis in the general population," said first author Waleed Alhazzani, assistant professor of medicine at McMaster. He is also an intensive care physician at St. Joseph's Healthcare Hamilton.

"Usually, it takes a year or two to develop large clinical practice guidelines such as these ones. Given the urgency and the huge need for these guidelines, we assembled the team, searched the literature, summarized the evidence, and formulated recommendations within 18 days. Everyone worked hard to make this guideline available to the end user rapidly while maintaining methodological rigour."

Alhazzani added that the guidelines will be used by frontline clinicians, allied health professionals and policy makers involved in the care of patients with COVID-19.

The Surviving Sepsis Campaign COVID-19 panel included experts in guideline development, infection control, infectious diseases and microbiology, critical care, emergency medicine, nursing, and public health. The corresponding author of the guidelines is Andrew Rhodes of St. George's Healthcare NHS Trust in the United Kingdom.

Members of the panel came from Australia, Canada, China, Denmark, Italy, Korea, the Netherlands, United Arab Emirates, United Kingdom, United States and Saudi Arabia.

The panel started off by proposing 53 questions they considered to be relevant to the management of COVID-19 in the intensive care unit (ICU). The team then searched the literature for direct and indirect evidence on the management of COVID-19 in the ICU. They found relevant and recent systematic reviews on most questions relating to supportive care.

The group then assessed the certainty in the evidence using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach, which itself was developed at McMaster. GRADE is a transparent framework for developing and presenting summaries of evidence, and it provides a systematic approach to making clinical practice recommendations for health-care professionals.

The resulting 54 recommendations include four best practice statements, nine strong recommendations, and 35 weak recommendations. No recommendation was provided for six questions. The four best practice statements based on high-quality evidence include:

Health-care workers performing aerosol-generating procedures, such as intubation, bronchoscopy and open suctioning, on patients with COVID-19 should wear fitted respirator masks, such as N95, FFP2 or equivalent - instead of surgical masks - in addition to other personal protective equipment, such as gloves, gowns and eye protection.

Aerosol-generating procedures should be performed on ICU patients with COVID-19 in a negative pressure room, if available. Negative pressure rooms are engineered to prevent the spread of contagious pathogens from room to room.

Endotracheal intubation of patients with COVID-19 should be performed by health-care workers with experience in airway management to minimize the number of attempts and risk of transmission.

Adults with COVID-19 who are being treated with non-invasive positive pressure ventilation or a high flow nasal cannula should be closely monitored for worsening respiratory status and intubated early if needed.

The Surviving Sepsis Campaign COVID-19 panel said it plans to issue further guidelines in order to update the recommendations, if needed, or formulate new ones.

Credit: 
McMaster University

Purdue innovators moving to fast-track COVID-19 diagnostic, therapeutic solutions

WEST LAFAYETTE, Ind. - As the coronavirus pandemic spreads across the U.S. and the world, Purdue University scientists are working to move solutions to diagnose and treat the virus to the marketplace as soon as possible.

The Purdue Research Foundation Office of Technology Commercialization is working with innovators from across the university to patent and license technologies that focus on the diagnostic and therapeutic aspects of treating COVID-19.

Among the technologies being developed:

Paper-based and microfluidic, rapid-detection systems that are cheap and portable solutions to diagnose COVID-19 in the field.

Drugs and vaccine options for treating and preventing coronavirus infections.

Environmental and surface decontamination technologies to sterilize surfaces, water and air.

"We are expediting the review and processing components of our pipeline to move these inventions from Purdue to a world in need," said Brooke Beier, vice president of OTC. "We are intensely seeking industry partners to help us accomplish this mission."

Industry partners and others interested in licensing Purdue technologies can contact OTC at otcip@prf.org.

Purdue innovators are drawing on their strengths in areas such as environmental engineering, health sciences and chemistry to create solutions to help diagnose and treat people infected with the novel strain of coronavirus.

"I think it is important to note that Purdue investigators are not only thinking about today during this pandemic, but we are pragmatically providing long-term solutions for tomorrow so that we are better prepared for these types of challenges in the future," said Thomas Sors, who is assistant director of the Purdue Institute of Inflammation, Immunology and Infectious Disease, which is currently leading the COVID-19 task force effort at Purdue. "I'm impressed by the way our scientists and experts are responding swiftly to pivot their focus on this current outbreak. Many of them have already been developing solutions in related areas that can now be re-tooled and directly applied to COVID-19."

Credit: 
Purdue University

Covid-19 deaths in Italian hospitals are today increasing at maximum rate and significant numbers will continue to die until at least mid-April

A new report on Covid-19 data up to March 30 from Italy, prepared by an Italian expert for the European Society of Anaesthesiology (ESA), says that the number of daily deaths in Italian hospitals is today still accelerating at the maximum rate, and significant numbers of deaths in hospital are likely to continue until at least mid-April and could go on until early June. The report is by Davide Manca, Professor of Process Systems Engineering at Politecnico di Milano, Milan, Italy.

The data suggest that the daily increase in the number of patients in intensive care (ICU) in both the Lombardy region and Italy as a whole is likely to have peaked, but that the number of deaths in hospital will continue to increase at the maximum rate for several days to come.

March 31, 2020, is classed as day 39 of the pandemic in Italy, with day 1 classed as February 22. For deaths, it is important to note that patients dying now and during the days to come were mostly infected around two weeks ago. Models identify the maximum daily increase of deaths in hospital as being likely to occur during days 36-40 (that is, March 28-April 1) in Lombardy and days 36-41 (March 28-April 2) in Italy.

Professor Manca has explored two different modelling techniques called logistic and Gompertz modelling to prepare his report. According to the more optimistic logistic model, 98% of total expected deaths in hospital would have occurred in both Lombardy and Italy by April 15. Conversely, the more pessimistic Gompertz model predicts 98% of deaths to occur by June 3 in Lombardy and by June 4 in Italy.
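
For readers unfamiliar with the two models, these are their standard functional forms for cumulative deaths D(t); the parameters are fitted to the Italian hospital data in the report and are not reproduced here.

```latex
% Standard cumulative growth curves (parameter values fitted in the report, not shown here):
D_{\mathrm{logistic}}(t) = \frac{K}{1 + e^{-k\,(t - t_{0})}}, \qquad
D_{\mathrm{Gompertz}}(t) = K\, e^{-b\, e^{-c\, t}}
```

In both curves, K is the total number of hospital deaths eventually expected and the remaining parameters set the timing and steepness of the rise; the Gompertz curve approaches its plateau more slowly and asymmetrically than the logistic curve, which is why it pushes the 98% threshold from mid-April out to early June.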

For the number of patients in ICU, the data show the day of maximum increase was reached at Day 22 (15 March) in Lombardy and Day 25 (18 March) in Italy. "The difference between Lombardy and Italy is due to the social-distancing measures adopted first in Lombardy and then all over the country. Every day counted," explains Professor Manca.

In the past few days, the number of patients in ICU has increased by less than 10 persons per day in Lombardy, due to its intensive care units being filled to capacity. Across Italy (including the South and Central regions), the number of patients in ICU has increased by 50-75 patients per day in recent days, compared with a much steeper increase of 180 to 240 patients per day across the period 13 to 23 March. It is important to remember, says Prof Manca, that space in ICU becomes available as patients recover and are discharged, or sadly die from Covid-19. Also, more ICU capacity is being created in Italy as the pandemic progresses.

"We expect to reach the date on which there will be little or no further increase of Covid-19 patients in ICU to be around day 45 (April 6) in Lombardy and day 47 (April 8) in Italy. The data suggest that numbers of patients in intensive care should begin to fall across Lombardy and Italy after these dates, depending on the continued implementation and enforcement of Italy's strict quarantine measures," says Professor Manca.

The new report contains several additional observations from the data:

Resuscitation and intensive care doctors are reporting extended periods of viral shedding. One infected ICU doctor has experienced persistent virus shedding for more than 30 days (in quarantine since February 23), while doctors are also reporting longer-than-average periods of virus shedding by their patients.

About 15 days are necessary to achieve effective weaning from respiratory support in ICU. Indeed, experience shows that about one-third (33%) of patients worsen after the first extubation and require further (but non-invasive) respiratory treatment.

The time spent in ICU by patients who tragically go on to die is generally rather long (about 10-12 days), longer than reported in the Chinese literature (9-10 days).

Resuscitation doctors should keep in mind that new ICU beds added under 'wartime' conditions cannot meet the same high standards as in peacetime. That is why, when ICU beds are saturated, the proportion of successful treatments decreases and, consequently, the number of deaths in hospital increases.

Not all patients who require intubation can receive it when needed. Sometimes that treatment must be postponed because of ICU bed saturation, and this may cause deterioration in the patient's pulmonary status, delay weaning, and extend their time in ICU.

CPAP (continuous positive airway pressure) breathing devices may delay respiratory failure and help medical staff (including non-ICU specialists) avoid ICU admission when intubation is not feasible.

Credit: 
The European Society of Anaesthesiology and Intensive Care (ESAIC)

Assessing forests from afar

image: University of Delaware assistant professor Pinki Mondal recently had a paper published in the journal Remote Sensing of Environment showing the importance of using finer-scale satellite data in protected areas to ensure they are maintaining their health and are being reported on accurately.

Image: 
Photo courtesy of Pinki Mondal

While using large swaths of coarse satellite data can be an effective tool for evaluating forests on a national scale, the resolution of that data is not always well suited to indicate whether or not those forests are growing or degrading.

A new study led by the University of Delaware's Pinki Mondal recommends that in addition to using this broad scale approach, it is important for countries to prioritize areas such as national parks and wildlife refuges and use finer scale data in those protected areas to make sure that they are maintaining their health and are being reported on accurately.

To help create an easy-to-implement reporting framework for forest ecosystems in six South Asian countries -- Bangladesh, Bhutan, India, Nepal, Pakistan, and Sri Lanka -- Mondal led a study that first looked at those countries using a broad-brush approach and then used higher-resolution data to focus on two specific protected areas to show how coarse satellite data can sometimes overlook or misinterpret temporal changes in forest cover.

Sustainable Development Goals

The work was conducted to develop a reporting framework that can help the countries with their Sustainable Development Goal (SDG) reporting to the United Nations.

In 2015, the United Nations General Assembly set forth 17 SDGs to serve as a blueprint to achieve a better and more sustainable future for all, with the hope to achieve these goals by the year 2030. Among these, goal No. 15 -- Life on Land -- is to protect the world's forests to strengthen natural resource management and increase land productivity. To help with reporting SDG 15, Mondal and her research group have been using remote sensing to look at forests around the world.

Mondal, an assistant professor in the Department of Geography and Spatial Sciences in UD's College of Earth, Ocean and Environment, recently had a paper published in the journal Remote Sensing of Environment looking at SDG 15.

Coarse Satellite Data

Most countries, especially the ones with limited access to computing resources and finer scale remote sensing data, use freely available remote sensing assets such as those from coarse-scale satellite sensors.

"Depending on the scale of a study, people tend to use coarser resolution data because generally, those satellite images have a larger footprint," said Mondal. "Only a few satellite images can cover an entire country and it's easier to use or analyze that kind of data."

The researchers used a broad-brush approach with coarser resolution satellite data to calculate vegetation trends in response to rainfall changes in the six countries.

At the country level since 2001, vegetation trends fluctuated: the researchers found instances of localized greening in Pakistan, India and Nepal, and browning in Bangladesh and Sri Lanka, with Bhutan showing almost no trend. The greening in India and Nepal was localized, and the forests showed localized browning in the northeastern states of India and in parts of Nepal and Sri Lanka.

While the coarse-resolution data could indicate an overall greening trend for an area, when they looked at two specific protected areas using finer scale data, they found that there was a lot more going on.

Protected Areas

Using finer-resolution satellite data, the researchers looked at intact versus non-intact forests that were located in two protected areas, the Sanjay National Park in India and the Ruhuna National Park in Sri Lanka. Since both test cases are national parks, they are expected to host mostly intact, or undisturbed forests that would not be impacted by human populations.

"Protected areas are supposed to host and maintain quality forest. But by using this finer scale data, we were able to see non-intact forests that could be a result of factors such as fire, disease, or human activities. If we cannot maintain a healthy forest even within protected areas, then that's a problem," said Mondal.

When using a broad-brush approach, the Sanjay National Park showed an overall greening trend, but when using the finer-scale data, the researchers found almost one-third of the park to be non-intact forest. They were also able to identify spots in the national parks that had no forests at all. Maintaining the balance between healthy forests and other ecosystems such as grasslands within these protected areas, and minimizing degradation, should be a high priority for land managers moving forward.

This finer-scale data allowed the researchers to generate maps with 87 percent and 91 percent overall accuracy for the Indian and Sri Lankan protected areas, respectively.

Challenges in reporting

Mondal said one of the challenges facing researchers has been developing a broad definition for a forest, as depending on a country's ecosystem, their forests can be very different.

"If you work in a country like India, it's so diverse that by definition, you can't have one uniform forest," said Mondal. "In the land change science community, we have been debating the definition for a forest, but an acceptable measure is the one with 10 percent canopy cover."

This indicator of a forest can be tracked with satellites, and researchers use satellite images over time to measure how much of a particular mapping unit is covered by forest canopy.
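
As a minimal illustration of this kind of bookkeeping, the sketch below (Python with NumPy) applies the 10 percent canopy-cover threshold to two hypothetical grids of per-unit canopy fraction and counts how many mapping units changed status; the numbers are placeholders, not data from the study.

```python
# Minimal sketch, not the study's actual pipeline: classify each mapping unit as "forest"
# using the 10% canopy-cover definition, then compare two dates. Canopy fractions are
# hypothetical placeholders for values that would be derived from satellite imagery.
import numpy as np

def forest_mask(canopy_fraction, threshold=0.10):
    """Boolean grid: True where canopy cover meets the forest definition."""
    return canopy_fraction >= threshold

# Hypothetical canopy-cover fractions (0-1) per mapping unit for two dates.
canopy_2001 = np.array([[0.55, 0.08, 0.32],
                        [0.12, 0.67, 0.05],
                        [0.41, 0.22, 0.09]])
canopy_2018 = np.array([[0.48, 0.06, 0.11],
                        [0.13, 0.52, 0.04],
                        [0.35, 0.07, 0.08]])

before, after = forest_mask(canopy_2001), forest_mask(canopy_2018)
lost = before & ~after  # units that dropped below the 10% threshold between the two dates
print("Forest units in 2001:", int(before.sum()))
print("Forest units in 2018:", int(after.sum()))
print("Units degraded below the threshold:", int(lost.sum()))
```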

"If you're working in a country with a diverse landscape, the status of forest cover might change pretty rapidly over time. But you cannot capture that change with this coarse-level, broad-brush input approach, which is what most of the national level studies use," said Mondal.

Overall, Mondal said that the goal of the paper was to encourage people to realize that there is not a one-size-fits-all approach to monitoring and reporting progress toward the SDGs.

"Our goal is to encourage landscape managers to think more deeply about the methods they are using in terms of reporting these SDGs because depending on what data you're using, your result might look completely different than what you're reporting at the U.N. level," said Mondal.

Credit: 
University of Delaware

Atomic magnetometer points to better picture of heart conductivity

image: A pair of coils induces a magnetic field response (labeled BEC) in a low-conductivity solution contained in a petri dish, detected by a radio-frequency atomic magnetometer, based on laser manipulation and interrogation of atomic spins contained in a cubic glass chamber.

Image: 
Cameron Deans

WASHINGTON, March 31, 2020 -- Mapping the electrical conductivity of the human heart would be a valuable tool in the diagnosis and management of diseases, such as atrial fibrillation. But doing so would require invasive procedures, none of which are capable of directly mapping dielectric properties.

Significant advances have been made recently that leverage atomic magnetometers, which are quantum devices, to provide a direct picture of electric conductivity of biological tissues. In this week's Applied Physics Letters, from AIP Publishing, new work in quantum sensors points to ways such technology could be used to examine the heart.

Researchers at University College London have modified approaches used in electromagnetic induction imaging to image the electrical conductivity of models that resemble the human heart. Using a radio-frequency atomic magnetometer that relies on rubidium-87, the group achieved the level of performance required to image the dielectric properties of the supporting structures that drive cardiac function.

"For the first time, we have achieved, in unshielded environments, the sensitivity and stability for imaging low conductivity in small volumes that are comparable to the expected size of the anomalies seen in atrial fibrillation," said author Luca Marmugi. "Thus, we have demonstrated that noninvasive electromagnetic induction imaging of the heart is technically possible."

The device the UCL group has developed applies a small oscillating magnetic field that induces a signal in the heart and is detected by an ultrasensitive detector based on laser manipulation of atomic spins.

To map conductivity anomalies in the human heart, such a device would need to detect conductivities on the order of 0.7 to 0.9 siemens per meter. When tested on laboratory solutions with the same conductivity as the human heart, the group's device was able to detect signals at that level.

The results mark a fiftyfold improvement over previous attempts to capture such small specimens.

Marmugi said the group hopes to continue developing its magnetometer system for clinical use and looks to improve on machine learning techniques to better map heart conductivity data.

"Our work has demonstrated the feasibility of our idea proposed in 2016. Mission accomplished!" Marmugi said. "However, we know we cannot rest. In this sense, I hope it will trigger increased interested in this kind of applications, hopefully encouraging more and more groups to work in the same field and fostering new collaboration and ideas."

Credit: 
American Institute of Physics

Cancer treatment with immune checkpoint inhibitors may lead to thyroid dysfunction

WASHINGTON--Thyroid dysfunction following cancer treatment with new treatments called immune checkpoint inhibitors is more common than previously thought, according to research that was accepted for presentation at ENDO 2020, the Endocrine Society's annual meeting, and will be published in a special supplemental section of the Journal of the Endocrine Society.

Cancer immunotherapy, particularly treatment with immune checkpoint inhibitors, has become an important part of treating some types of cancer and offers some patients sustained remissions. Immune checkpoint inhibitors are drugs that take the 'brakes' off the immune system, which helps it recognize and attack cancer cells. Immune checkpoint inhibitors are approved to treat some patients with a variety of cancers, including breast, bladder, cervical, colon, head and neck, liver, lung, skin, stomach and rectal cancers.

Along with the benefits of these treatments, possible side effects include immune-related adverse events, when the immune system attacks normal, noncancerous cells. One of the more common, but mild, side effects is abnormal thyroid levels, particularly hypothyroidism (an underactive thyroid).

"It was unclear to what extent this side effect occurred outside the clinical trial setting and so we used information from electronic health records to determine how common it was in practice," said lead researcher Zoe Quandt, M.D., of the University of California, San Francisco in San Francisco, Calif. "Understanding who gets these immune-related adverse events, why they get them and what impact they have on response to therapy is an essential part of optimizing our use of immune checkpoint inhibitors."

The researchers analyzed electronic health record data from the University of California, San Francisco on every patient who had received immune checkpoint inhibitors for treatment of cancer between 2012 and 2018.

They excluded anyone with thyroid cancer, whether or not that was the indication for the immune checkpoint inhibitor, or pre-existing thyroid disease. For the remaining 1,146 patients, they looked for those who had some type of thyroid dysfunction--either abnormal levels of thyroid hormones or a prescription for thyroid medication. Melanoma was the most common cancer treated (32%), followed by non-small cell lung cancer (13%).

Overall, 19% of subjects exposed to immune checkpoint inhibitors developed thyroid dysfunction. In contrast, a review of clinical trials found a much lower rate--6.6% of immune checkpoint inhibitor patients developed hypothyroidism, and 2.9% had hyperthyroidism, or an overactive thyroid.

The new study found thyroid problems varied by the type of cancer. The rate of thyroid dysfunction ranged from 10% of patients with the brain tumor glioblastoma to 40% in renal cell cancer, a type of kidney cancer. While there was no significant association between thyroid dysfunction and specific immune checkpoint inhibitors, thyroid dysfunction was more common in patients who received a combination of nivolumab (Opdivo) and ipilimumab (Yervoy) (31%) compared with either pembrolizumab (Keytruda) (18%), nivolumab (18%) or ipilimumab (15%) alone.

Credit: 
The Endocrine Society

Survey finds physicians struggle to communicate positive thyroid cancer prognosis

WASHINGTON--Despite excellent prognosis with most thyroid cancers, many newly diagnosed patients have cancer-related worry, and physicians vary in their responses to patients' worry, according to new research accepted for presentation at ENDO 2020, the Endocrine Society's annual meeting, and publication in a special supplemental section of the Journal of the Endocrine Society.

"In this large population-based study, we found that physicians use different strategies to address this worry, with around half of them telling their patients that thyroid cancer is a 'good cancer,'" said lead study author Maria Papaleontiou, M.D., an assistant professor of medicine in the Division of Metabolism, Endocrinology & Diabetes (MEND) at the University of Michigan in Ann Arbor, Mich. "Although treating physicians are likely trying to emphasize optimistic outcomes, no evidence currently exists to support the notion that telling patients they have a 'good cancer' is helpful."

To investigate how physicians manage thyroid cancer-related worry, the researchers identified patients with differentiated thyroid cancer diagnosed in 2014 and 2015 from the Surveillance, Epidemiology, and End Results (SEER) registries of the State of Georgia and Los Angeles County, California, and they identified the endocrinologists and surgeons involved in their care. In a 2018-2019 survey, the authors asked the doctors to describe their thyroid cancer patients' general worry at the time of diagnosis and what they told their worried patients. Multivariable logistic regression analysis identified physician characteristics associated with reporting thyroid cancer as a "good cancer."

Of those who responded, 40% were endocrinologists, 30% were general surgeons, and 30% were otolaryngologists. The research team found that 65% of physicians reported that in general their patients were quite or very worried at the time of diagnosis, 27% that they were somewhat worried, and 8% that their patients were not worried or were a little worried. Almost all physicians (91%) reported that they provided their worried patients with details on prognosis, including recurrence and death.

Overall, 60% of physicians told their worried patients that their doctors are experienced in managing thyroid cancer, and 49% told them that thyroid cancer is a "good cancer." Otolaryngologists were more likely to use this terminology than endocrinologists. Clinicians in private practice were more likely to describe thyroid cancer as a "good cancer" than those in academic settings, and the phrasing was more common among individuals in Los Angeles County than the Georgia respondents. Physicians who perceived that in general their patients were quite or very worried at time of diagnosis were less likely to report describing thyroid cancer as a "good cancer" than were those whose patients were not or somewhat worried.

Cancer-related worry is a major issue for thyroid cancer patients. This study highlights the multiple strategies physicians use for addressing this worry. In particular, one strategy relates to terminology that although meant to be helpful, may not be effective. "Despite physicians' good intentions, currently no evidence suggests that telling thyroid cancer patients they have a 'good cancer' is helpful or reassuring," Papaleontiou said. "On the contrary, patients report that being told by doctors that they have a 'good cancer' invalidates their fears of having cancer and creates mixed and confusing emotions."

"Our findings emphasize the need for physician education and intervention to address thyroid cancer patient worry. Efforts should focus on helping physicians understand that calling thyroid cancer a 'good cancer' may not always alleviate worry," she said.

Credit: 
The Endocrine Society

First FDA-approved drug for thyroid eye disease effective regardless of age, gender

WASHINGTON--Teprotumumab, the first FDA-approved medicine for thyroid eye disease, provides significant improvement in eye bulging, regardless of patient gender, age or smoking status, according to a study accepted for presentation at ENDO 2020, the Endocrine Society's annual meeting, and publication in a special supplemental section of the Journal of the Endocrine Society.

Though it is a rare condition, thyroid eye disease is devastating to those who have it, affecting their relationships with family, friends and co-workers. Until recently, no medicine was approved by the U.S. Food and Drug Administration for the treatment of thyroid eye disease. Teprotumumab was approved by the FDA in January 2020.

"Teprotumumab decreases inflammation in the eye and the build-up of tissues behind the eye that produce the long-term symptoms that reduce quality of life for patients with thyroid eye disease," said lead researcher George J. Kahaly, M.D., Ph.D., of Johannes Gutenberg University Medical Center in Mainz, Germany. "It offers thyroid eye disease patients new hope."

The disease is associated with the outward bulging of the eye that can cause a variety of symptoms, such as eye pain, double vision, light sensitivity or difficulty closing the eye. The symptoms can lead to the progressive inability to perform important daily activities, such as driving or working. Thyroid eye disease affects more women than men.

The researchers analyzed data from two 24-week studies, with a total of 171 patients with thyroid eye disease. Prior analyses of these studies showed 77.4% of patients had a reduction in eye bulging, compared with 14.9% of those receiving a placebo, after 24 weeks of therapy. The researchers performed the new analysis to see whether patients' gender, smoking status and age influenced the drug's response rate.

The participants were randomly assigned to receive teprotumumab or a placebo. At week 24, significantly more patients receiving teprotumumab had improvements in their eye bulging compared with those who received a placebo, regardless of their gender, smoking status or age: (male: 73.1% vs. 5.0%, female: 79.3% vs. 17.9%; smokers: 70.0% vs. 23.1%, non-smokers 79.7% vs. 11.5%; younger than age 65: 76.1% vs. 16.2%, age 65 or older: 84.6% vs. 7.7%).

The average reduction in eye bulging was also significantly greater after 24 weeks of treatment in all subgroups of patients treated with teprotumumab compared with the placebo group.

Credit: 
The Endocrine Society