Tech

Can financial stress lead to physical pain in later years?

Financial stress can have an immediate impact on well-being, but can it lead to physical pain nearly 30 years later? The answer is yes, according to new research from University of Georgia scientists.

The study, published in Stress & Health, reveals that family financial stress in midlife is associated with a depleted sense of control, which is related to increased physical pain in later years.

"Physical pain is considered an illness on its own with three major components: biological, psychological and social," said Kandauda A.S. Wickrama, first author and professor in the College of Family and Consumer Sciences. "In older adults, it co-occurs with other health problems like limited physical functioning, loneliness and cardiovascular disease."

Most pain research is neurological, but it's important to also connect it to stressful family experiences, according to the researchers.

"Dr. Wickrama and I are both interested in the context surrounding families and how that context impacts the relational, physical and mental health of the individuals in the family," said lead author Catherine Walker O'Neal, associate research scientist in the College of Family and Consumer Sciences. "Finances are an important component of our work because it's such a relevant contextual stressor families face."

The authors used data from the Iowa Youth and Family Project, a longitudinal study that provides 27 years of data on rural families from a cluster of eight counties in north-central Iowa. The data was collected in real time from husbands and wives in 500 families who experienced financial problems associated with the late 1980s farm crisis. Most of the individuals are now over 65 years old, and the couples are in enduring marriages--some as long as 45 years.

Even after the researchers controlled for concurrent physical illnesses, family income and age, they found a connection between family financial hardship in the early 1990s and physical pain nearly three decades later. Additional findings from their study show it's more likely that financial strain influences physical pain, though physical pain can in turn influence financial strain through additional health care costs.

Physical pain is a biopsychosocial phenomenon, according to Wickrama. The research suggests that stressful experiences like financial strain erode psychological resources like a sense of control. This depletion of resources activates brain regions that are sensitive to stress, launching pathological, physiological and neurological processes that lead to health conditions like physical pain, physical limitations, loneliness and cardiovascular disease.

"In their later years, many complain about memory loss, bodily pain and lack of social connections," he said. "Nearly two-thirds of adults complain of some type of bodily pain, and nearly that many complain of loneliness. That percentage is going up, and the health cost for that is going up. That is a public health concern."

Credit: 
University of Georgia

FSU engineers improve performance of high-temperature superconductor wires

image: Abiola Temidayo Oloye, left, a fifth-year doctoral candidate and the lead author of a study published in Superconductor Science and Technology, at an electron microscope with Fumitake Kametani, an associate professor of mechanical engineering and principal investigator for the study at the FAMU-FSU College of Engineering.

Image: 
Mark Wallheiser/FAMU-FSU College of Engineering

Florida State University researchers have discovered a novel way to improve the performance of electrical wires used as high-temperature superconductors (HTS), findings that have the potential to power a new generation of particle accelerators.

Researchers used high-resolution scanning electron microscopy to understand how processing methods influence grains in bismuth-based superconducting wires (known as Bi-2212). Those grains form the underlying structures of high-temperature superconductors, and scientists viewing the Bi-2212 grains at the atomic scale successfully optimized their alignment in a process that makes the material more efficient in carrying a superconducting current, or supercurrent. Their work was published in the journal Superconductor Science and Technology.

The researchers found that the individual grains have a long rectangular shape, with their longer side pointing along the same axis as the wire -- a so-called biaxial texture. They are arranged in a circular pattern following the path of the wire, so that orientation is only apparent at very small scale. Those two properties together give the Bi-2212 grains a quasi-biaxial texture, which turned out to be an ideal configuration for supercurrent flow.

"By understanding how to optimize the structure of these grains, we can fabricate the HTS round wires that carry higher currents in the most efficient way," said Abiola Temidayo Oloye, a doctoral candidate at the FAMU-FSU College of Engineering, researcher at the National High Magnetic Field Laboratory (MagLab) and the paper's lead author.

Superconductors, unlike conventional conductors such as copper, can transport electricity with perfect efficiency because electrons encounter no friction while traveling in the superconducting wire. Bi-2212 wires belong to a new generation of high-field superconductors for building superconducting magnets, which are crucial tools for scientific research at labs around the world, including the National High Magnetic Field Laboratory where the team of researchers conducted their experiments.

High-temperature superconductors like Bi-2212 can conduct current at much higher magnetic fields than low-temperature superconductors (LTS) and are a key part of the designs for even more powerful particle accelerators at the Large Hadron Collider at the European Organization for Nuclear Research (CERN).

"We optimized the Bi-2212 round wires to carry more current, while keeping in mind the scale difference between the lab and manufacturer," Oloye said. "The process we develop in the lab has to scale to the manufacturing level for the technology to be commercially viable and we were able to do that in the study."

Previous work by Fumitake Kametani, an associate professor of mechanical engineering at the FAMU-FSU College of Engineering, MagLab researcher and principal investigator for the study, showed the importance of quasi-biaxial texture for current flow in Bi-2212 round wires. This paper built on that work and demonstrated the processing factors needed to achieve optimal quasi-biaxial texture.

"The microstructural characterization used is unique in analyzing the crystal structure of Bi-2212 round wires," Kametani said."The technique is usually used for analyzing metals and alloys, and we have adapted it to develop novel sample preparation methods to further the optimization of Bi-2212 HTS wire technologies."

The big-picture goal is to be able to use Bi-2212 round wires in future high-field magnet applications.

"Since it is the only high-temperature superconductor available in round wire form, the material can more easily replace existing technologies using LTS wires made from other materials," Oloye said. "Other HTS such as REBCO and Bi-2223 are only available in tape form, which adds a layer of complexity to magnet design."

Credit: 
Florida State University

Water crisis took toll on Flint adults' physical, mental health

ITHACA, N.Y. - Since state austerity policies initiated a potable water crisis seven years ago in Flint, Michigan, public health monitoring has focused on potential developmental deficits associated with lead exposure in adolescents or fetuses exposed in utero.

New research from Cornell and the University of Michigan offers the first comprehensive evidence that the city's adult residents suffered a range of adverse physical and mental health symptoms potentially linked to the crisis in the years during and following it, with Black residents affected disproportionately.

In a survey of more than 300 residents, 10% reported having been diagnosed by a clinician with elevated blood lead levels - well above national averages - after a state-appointed city manager, as part of a cost-saving measure, switched the city's water source on April 25, 2014, to one that became contaminated with lead and harmful bacteria.

Nearly half the survey respondents reported experiencing skin rashes and more than 40% experienced hair loss, among physical symptoms associated with elevated levels of bacteria and heavy metals in water. More than a quarter of respondents reported symptoms of depression or anxiety, and nearly a third had PTSD symptoms specifically related to the water crisis.

"If you don't trust your water and you actively avoid it over persistent concerns on its safety, that's a stark form of psychological trauma in and of itself," said Jerel Ezell, assistant professor in the Africana Studies and Research Center in the College of Arts and Sciences.

Ezell and Elizabeth Chase, a doctoral student at the University of Michigan School of Public Health, are co-authors of "A Population-Based Assessment of Physical Symptoms and Mental Health Outcomes Among Adults Following the Flint Water Crisis," published March 31 in the Journal of Urban Health.

The researchers conducted surveys in late 2019 as part of the Flint Community Engagement Project, a longitudinal study started in 2017 for which Ezell, a native of the Flint area, serves as principal investigator. Even several years after the city switched back to its original water source in 2016, the researchers said, federal, state and local government guidance, and guidance from healthcare practitioners in the city, about tap water safety remained ambiguous and often contradictory.

The surveys were administered at nine public sites - including libraries, a laundromat, a café and a bus station - in an effort to capture the racial and socioeconomic diversity across the low-income, predominantly Black city.

Ezell and Chase found that more than half the respondents were never screened for elevated blood lead levels, but that Black residents were nearly twice as likely to seek screening as whites - possibly an indication that they perceived a higher threat level, Ezell said, similar to the gap in threat perception seen across race in relation to COVID-19's severity.

Nearly 60% of Black respondents reported skin rashes beyond what they considered normal before the crisis, compared with 33.9% of whites. Black residents also reported significantly higher percentages of hair loss, nausea and emotional agitation. The more physical symptoms one reported, the study determined, the more likely they were to report psychological symptoms.

The study used validated surveys to measure feelings of depression or anxiety and of post-traumatic stress disorder, as has been done in New Orleans after Hurricane Katrina and more recently in Puerto Rico after Hurricane Maria. They asked, for example, if respondents had persistent and ongoing thoughts about the quality of their tap water, or if they blamed themselves or someone else for the city's water crisis.

The results - 26.3% of residents exhibited depressive or anxious symptoms, and 29% met criteria for trauma - revealed "a steep and broad mental health toll," the researchers said.

The authors acknowledged limitations to the study, including that the survey sample was not randomly selected and that symptoms were self-reported and could have been affected by recall bias. Factors other than water contamination, they cautioned, could have contributed to elevated blood lead levels and other reported symptoms.

The data nonetheless suggests, Ezell said, that Flint's adult residents experienced significantly more adverse health symptoms during and in the years after the water crisis' initiation than would be expected from the city's population.

"Flint adults, particularly Blacks," Ezell and Chase concluded, "experienced deleterious physical and mental health outcomes following the city's water crisis that appear to represent a substantial burden of excess cases."

The findings, they said, point to the need for continued testing of Flint's water quality and any potential negative health impacts, and a broader imperative to restore civic trust by addressing "macrosocial forces, many of which have racist and classist antecedents," that contributed to the crisis.

"It is these forces," they wrote, "that ultimately laid the groundwork for the devaluation of Flint's water and negligence towards residents' health."

Credit: 
Cornell University

From smoky skies to a green horizon: Scientists convert fire-risk wood waste into biofuel

image: Author Carolina Araujo Barcelos preparing the woody biomass for deconstruction into fermentable sugars.

Image: 
Berkeley Lab

Reliance on petroleum fuels and raging wildfires: Two separate, large-scale challenges that could be addressed by one scientific breakthrough.

Teams from Lawrence Berkeley National Laboratory (Berkeley Lab) and Sandia National Laboratories have collaborated to develop a streamlined and efficient process for converting woody plant matter like forest overgrowth and agricultural waste - material that is currently burned either intentionally or unintentionally - into liquid biofuel. Their research was published recently in the journal ACS Sustainable Chemistry & Engineering.

"According to a recent report, by 2050 there will be 38 million metric tons of dry woody biomass available each year, making it an exceptionally abundant carbon source for biofuel production," said Carolina Barcelos, a senior process engineer at Berkeley Lab's Advanced Biofuels and Bioproducts Process Development Unit (ABPDU).

However, efforts to convert woody biomass to biofuel are typically hindered by the intrinsic properties of wood that make it very difficult to break down chemically, added ABPDU research scientist Eric Sundstrom. "Our two studies detail a low-cost conversion pathway for biomass sources that would otherwise be burned in the field or in slash piles, or increase the risk and severity of seasonal wildfires. We have the ability to transform these renewable carbon sources from air pollution and fire hazards into a sustainable fuel."

In a study led by Barcelos and Sundstrom, the scientists used non-toxic chemicals, commercially available enzymes, and a specially engineered strain of yeast to convert wood into ethanol in a single reactor, or "pot." Furthermore, a subsequent technological and economic analysis helped the team identify the necessary improvements required to reach ethanol production at $3 per gasoline gallon equivalent (GGE) via this conversion pathway. The work is the first-ever end-to-end process for ethanol production from woody biomass featuring both high conversion efficiency and a simple one-pot configuration. (As any cook knows, one-pot recipes are always easier than those requiring multiple pots, and in this case, it also means lower water and energy usage.)

In a complementary study, led by John Gladden and Lalitendu Das at the Joint BioEnergy Institute (JBEI), a team fine-tuned the one-pot process so that it could convert California-based woody biomass - such as pine, almond, walnut, and fir tree debris - with the same level of efficiency as existing methods used to convert herbaceous biomass, even when the input is a mix of different wood types.

"Removing woody biomass from forests, like the overgrown pines of the Sierra, and from agricultural areas like the almond orchards of California's Central Valley, we can address multiple problems at once: disastrous wildfires in fire-prone states, air pollution hazards from controlled burning of crop residues, and our dependence on fossil fuels," said Das, a postdoctoral fellow at JBEI and Sandia. "On top of that, we would significantly reduce the amount of carbon added to the atmosphere and create new jobs in the bioenergy industry."

Ethanol is already used as an emissions-reducing additive in conventional gasoline, typically constituting about 10% of the gas we pump into our cars and trucks. Some specialty vehicles are designed to operate on fuel with higher ethanol compositions of up to 83%. In addition, the ethanol generated from plant biomass can be used as an ingredient for making more complex diesel and jet fuels, which are helping to decarbonize the difficult-to-electrify aviation and freight sectors. Currently, the most common source of bio-based ethanol is corn kernels - a starchy material that is much easier to break down chemically, but requires land, water, and other resources to produce.

These studies indicate that woody biomass can be efficiently broken down and converted into advanced biofuels in an integrated process that is cost-competitive with starch-based corn ethanol. These technologies can also be used to produce "drop-in" biofuels that are chemically identical to compounds already present in gasoline and diesel.

The next step in this effort is to develop, design, and deploy the technology at the pilot scale, which is defined as a process that converts 1 ton of biomass per day. The Berkeley Lab teams are working with Aemetis, an advanced renewable fuels and biochemicals company based in the Bay Area, to commercialize the technology and launch it at larger scales once the pilot phase is complete.

Credit: 
DOE/Lawrence Berkeley National Laboratory

AI pinpoints local pollution hotspots using satellite images

image: A new AI algorithm picked out these city-block-sized satellite images as local hotspots (top) and cool spots (bottom) for air pollution in Beijing.

Image: 
Tongshu Zheng, Duke University

DURHAM, N.C. - Researchers at Duke University have developed a method that uses machine learning, satellite imagery and weather data to autonomously find hotspots of heavy air pollution, city block by city block.

The technique could be a boon for finding and mitigating sources of hazardous aerosols, studying the effects of air pollution on human health, and making better informed, socially just public policy decisions.

"Before now, researchers trying to measure the distribution of air pollutants throughout a city would either try to use the limited number of existing monitors or drive sensors around a city in vehicles," said Mike Bergin, professor of civil and environmental engineering at Duke. "But setting up sensor networks is time-consuming and costly, and the only thing that driving a sensor around really tells you is that roads are big sources of pollutants. Being able to find local hotspots of air pollution using satellite images is hugely advantageous."

The specific air pollutants that Bergin and his colleagues are interested in are tiny airborne particles called PM2.5. These are particles that have a diameter of less than 2.5 micrometers -- about three percent of the diameter of a human hair -- and have been shown to have a dramatic effect on human health because of their ability to travel deep into the lungs.

The Global Burden of Disease study ranked PM2.5 fifth on its list of mortality risk factors in 2015. The study indicated that PM2.5 was responsible in one year for about 4.2 million deaths and 103.1 million years of life lost or lived with disability. A recent study from the Harvard University T.H. Chan School of Public Health also found that areas with higher PM2.5 levels are associated with higher death rates due to COVID-19.

But the Harvard researchers could only access PM2.5 data on a county-by-county level within the United States. While a valuable starting point, county-level pollution statistics can't drill down to a neighborhood next to a coal-fired power plant versus one next to a park that is 30 miles upwind. And most countries outside of the Western world don't have that level of air quality monitoring.

"Ground stations are expensive to build and maintain, so even large cities aren't likely to have more than a handful of them," said Bergin. "So while they might give a general idea of the amount of PM2.5 in the air, they don't come anywhere near giving a true distribution for the people living in different areas throughout that city."

In previous work with doctoral student Tongshu Zheng and colleague David Carlson, assistant professor of civil and environmental engineering at Duke, the researchers showed that satellite imagery, weather data and machine learning could provide PM2.5 measurements on a small scale.

Building off that work and focusing on Beijing, the team has now improved their methods and taught the algorithm to automatically find hotspots and cool spots of air pollution with a resolution of 300 meters -- about the length of a New York City block.

The advancement was made by using a technique called residual learning. The algorithm first estimates the levels of PM2.5 using weather data alone. It then measures the difference between these estimates and the actual levels of PM2.5 and teaches itself to use satellite images to make its predictions better.

"When predictions are made first with the weather, and then satellite data is added later to fine-tune them, it allows the algorithm to take full advantage of the information in satellite imagery," said Zheng.

The researchers then used an algorithm initially designed to adjust uneven illumination in an image to find areas of high and low levels of air pollution. Called local contrast normalization, the technique essentially looks for city-block-sized pixels that have higher or lower levels of PM2.5 than others in their vicinity.

"These hotspots are notoriously difficult to find in maps of PM levels because some days the air is just really bad across the entire city, and it is really difficult to tell if there are true differences between them or if there's just a problem with the image contrast," said Carlson. "It's a big advantage to be able to find a specific neighborhood that tends to stay higher or lower than everywhere else, because it can help us answer questions about health disparities and environmental fairness."

While the exact methods the algorithm teaches itself can't transfer from city to city, the algorithm could easily teach itself new methods in different locations. And while cities might evolve over time in both weather and pollution patterns, the algorithm shouldn't have any trouble evolving with them. Plus, the researchers point out, the number of air quality sensors is only going to increase in coming years, so they believe their approach will only get better with time.

"I think we'll be able to find built environments in these images that are related to the hot and cool spots, which can have a huge environmental justice component," said Bergin. "The next step is to see how these hotspots are related to socioeconomic status and hospital admittance rates from long-term exposures. I think this approach could take us really far and the potential applications are just amazing."

Credit: 
Duke University

Medically savvy smartphone imaging systems

image: Smartphone-based imaging for various biomedical applications grouped into four clinical workflows.

Image: 
Hunt et al., doi 10.1117/1.JBO.26.4.040902

Smartphones get smarter every day. These "Swiss Army knives" of mobile computing become even more useful with specialized attachments and applications to improve healthcare. Based on inherent capabilities like built-in cameras, touchscreens, and 3D sensing, as well as wearable peripheral devices, custom interfaces for smartphones can yield portable, user-friendly biomedical imaging systems to guide and facilitate diagnosis and treatment in point-of-care settings.

What are the most effective ways to leverage and augment smartphone capabilities? Helpful guidelines are provided in a critical review of emerging smartphone-based imaging systems recently published in the Journal of Biomedical Optics (JBO).

According to author Brady Hunt, a research scientist at Dartmouth College's Thayer School of Engineering, "The ubiquity of the smartphone is frequently cited as a justification that smartphone-based systems are inherently low-cost, easy-to-use, and scalable biomedical imaging solutions." But, as Hunt and his co-authors point out, most systems developed are limited to a single phone model, like an iPhone 12 or the ultra-rugged Caterpillar S61, and involve manual, often fragmented image acquisition and analysis pipelines.

Focusing specifically on live (in vivo) use across a diverse array of point-of-care imaging applications, Hunt and his co-authors survey and assess recent research, identifying numerous design challenges as well as areas with strong potential. Their focus on real-world usability provides meaningful direction for prospective designers of custom hardware and software for smartphone interfaces. Generally, the most effective use-case scenarios for medically savvy smartphone imaging systems are those in which handheld, noninvasive image guidance is needed and accommodated by the clinical workflow.

Among the top emerging technologies identified for diagnostic and treatment guidance applications are handheld systems for multispectral and quantitative fluorescence imaging. These applications often require embedded electronics to control light delivery, and the authors note that wireless communication to embedded electronics is an underutilized yet promising way to improve and customize control.

Ways to improve

Three high-priority areas are proposed to advance research in smartphone-based imaging systems for healthcare:

Improved hardware design to accommodate the rapidly changing smartphone ecosystem

Creation of open-source image acquisition and analysis pipelines

Adoption of robust calibration techniques to address phone-to-phone variability

Variability among smartphone platforms is a significant problem for reproducibility, which the authors suggest may be addressed by the creation of templates that support the core functionality necessary for biomedical imaging. These platform-specific templates would ideally include support for RAW image acquisition and standardized processing routines for common biomedical image analysis tasks.
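
As one simplified example of the kind of routine such a template could standardize, the sketch below applies dark-frame subtraction and flat-field correction to a RAW-derived image array so that readings from different phone cameras become more comparable. The function, array names and synthetic data are hypothetical illustrations, not a pipeline described in the review.

```python
# Simplified per-device calibration: dark-frame subtraction + flat-field correction.
# Intended only to illustrate the kind of routine a shared template could standardize.
import numpy as np

def calibrate(raw_frame, dark_frame, flat_frame, eps=1e-6):
    """Return a calibrated image normalized by the device's flat-field response."""
    corrected = raw_frame.astype(float) - dark_frame.astype(float)
    flat = flat_frame.astype(float) - dark_frame.astype(float)
    flat /= max(flat.mean(), eps)          # normalize flat field to unit mean
    return np.clip(corrected / np.maximum(flat, eps), 0, None)

# Hypothetical 12-bit sensor frames captured on a given phone model.
rng = np.random.default_rng(2)
raw = rng.integers(0, 4096, size=(1024, 1024))
dark = rng.integers(0, 64, size=(1024, 1024))
flat = rng.integers(2000, 2200, size=(1024, 1024))
calibrated = calibrate(raw, dark, flat)
```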

As smartphones grow smarter, their built-in capabilities will render them increasingly versatile and better able to contribute to biomedical imaging. Harnessing those smarts to benefit healthcare invites creative, collaborative biomedical engineering.

Credit: 
SPIE--International Society for Optics and Photonics

Snake species from different terrains surrender surface secrets behind slithering success

WASHINGTON, April 15, 2021 -- Some snake species slither across the ground, while others climb trees, dive through sand or glide across water. Today, scientists report that the surface chemistry of snake scales varies among species that negotiate these different terrains. The findings could have implications for designing durable materials, as well as robots that mimic snake locomotion to cross surfaces that would otherwise be impassable.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30. The meeting features nearly 9,000 presentations on a wide range of science topics.

The research began as a collaboration with Woodland Park Zoo in Seattle, explains Tobias Weidner, Ph.D., the project's principal investigator. One of the zoo's biologists told Weidner that not much was known about the chemistry of snake surfaces. "Biologists typically don't have techniques that can identify molecules on the outermost layer of a surface such as a snake scale," he says. "But I'm a chemist -- a surface scientist -- so I felt I could add something to the picture with my lab's methods."

In that initial project, the researchers discovered that land snakes are covered with a lipid layer. This oily layer is so thin -- a mere one or two nanometers -- that no one had noticed it before. The team also found that the molecules in this layer are disorganized on the snake's back scales but highly organized and densely packed on belly scales, an arrangement that provides lubrication and protection against wear.

"Some people are afraid of snakes because they think they're slimy, but biologists tell them snakes aren't slimy; they're dry to the touch," Weidner says. "That's true, but it's also not true because at the nanoscale we found they actually are greasy and slimy, though you can't feel it. They're 'nanoslimy.'"

In the new study, the team wanted to find out if this nanoslimy surface chemistry differs in species adapted to various habitats, says Mette H. Rasmussen, a graduate student who is presenting the latest findings at the meeting. Both Weidner and Rasmussen are at Aarhus University in Denmark.

Working with recently shed skins, Rasmussen compared the surface chemistry of ground, tree and sand snakes. She used laser spectroscopy and an electron spectroscopy technique that probes the chemistry of the surface by knocking electrons out of it with X-rays. The project was a collaboration with Joe Baio, Ph.D., at Oregon State University; Stanislav Gorb, Ph.D., at Kiel University; and researchers at the U.S. National Institute of Standards and Technology.

Rasmussen found that the tree snake has a layer of ordered lipid molecules on its belly, just like the ground snake. But the sand snake, which dives through sand, has an ordered lipid layer on both its front and back. "From a snake's point of view, it makes sense," she says. "You would like to have this friction reduction and wear resistance on both sides if you're surrounded by your environment instead of only moving across it." Next, the researchers want to find out where the lipids come from and to look at variations across other snake species, including those that live in water. They would also like to identify the lipids, though Weidner suspects the chemical makeup of the lipid layer is less important than the organization and density of the lipid molecules it contains.

The work could have broad applications. "A snake's slithering locomotion requires constant contact with the surface it's crossing, which poses stringent requirements for friction, wear and mechanical stability," Rasmussen says. Learning how snakes maintain the integrity of their skin when encountering sharp rocks, hot sand and other challenges could help in the design of more durable materials.

In addition, the researchers say, multiple groups are developing robots that mimic a snake's slithering or sidewinding locomotion and -- unlike robots with wheels -- can therefore negotiate difficult terrain such as steep, sandy slopes. These groups have recently begun taking into account the microstructure of snake scales, Rasmussen notes, but scales' surface chemistry is also critical to their performance. Bringing these fields together could one day lead to snakelike robots capable of helping in rescue operations or freeing a Mars rover stuck in sand, she says.

Credit: 
American Chemical Society

Good dental health may help prevent heart infection from mouth bacteria

DALLAS, April 15, 2021 - For some heart patients, maintaining good oral health is more important than taking antibiotics before dental procedures to prevent a heart infection caused by bacteria around the teeth, according to a new American Heart Association (AHA) scientific statement published today in the association's flagship journal, Circulation.

Infective endocarditis (IE), also called bacterial endocarditis, is a heart infection caused by bacteria that enter the bloodstream and settle in the heart lining, a heart valve or a blood vessel. It is uncommon, but people with heart valve disease or previous valve surgery, congenital heart disease or recurrent infective endocarditis have a greater risk of complications if they develop IE. Intravenous drug use also increases risk for IE. Viridans group streptococcal infective endocarditis (VGS IE) is caused by bacteria that collect in plaque on the tooth surface and cause inflammation and swelling of the gums. There's been concern that certain dental procedures may increase the risk of developing VGS IE in vulnerable patients.

The new guidance affirms previous recommendations that only four categories of heart patients should be prescribed antibiotics prior to certain dental procedures to prevent VGS IE due to their higher risk for complications from the infection:

those with prosthetic heart valves or prosthetic material used for valve repair;

those who have had a previous case of infective endocarditis;

adults and children with congenital heart disease; or

people who have undergone a heart transplant.

"Scientific data since the 2007 AHA guidelines support the view that limited use of preventive antibiotics for dental procedures hasn't increased cases of endocarditis and is an important step at combating antibiotic overuse in the population," said Walter R. Wilson, M.D., chair of the statement writing group and a consultant for the Division of Infectious Diseases, Department of Internal Medicine at Mayo Clinic in Rochester, Minn.

It has been over a decade since recommendations for preventing infective endocarditis were updated amid concerns of antibiotic resistance due to overprescribing. The American Heart Association's 2007 guidelines, which presented the biggest shift in recommendations from the Association on the prevention of infective endocarditis in more than 50 years, more tightly defined which patients should receive preventive antibiotics before certain dental procedures to the four high-risk categories. This change resulted in about 90% fewer patients requiring antibiotics.

The scientific statement writing group reviewed data on VGS IE since the 2007 guidelines to determine if the guidelines had been accepted and followed, whether cases of and mortality due to VGS IE have increased or decreased, and if the guidance might need to be adjusted.

The writing committee reports their extensive review of related research found:

There was good general awareness of the changes in the 2007 guidelines; however, adherence to the guidelines was variable. There was about a 20% overall reduction in prescribing preventive antibiotics among high-risk patients, a 64% decrease among moderate-risk patients, and a 52% decrease among patients at low or unknown risk.

In a survey of 5,500 dentists in the U.S., 70% reported prescribing preventive antibiotics to patients even though the guidelines no longer recommend it, most often for patients with mitral valve prolapse and five other cardiac conditions. The dentists reported that about 60% of the time the antibiotic regimen was recommended by the patient's physician, and one-third of the time it was according to patient preference.

Since the stricter 2007 antibiotic guidelines, there is no convincing evidence of an increase in cases of VGS IE or increased mortality due to VGS IE.

The writing group supports the 2007 recommendation that only the highest risk groups of patients receive antibiotics prior to certain dental procedures to help prevent VGS IE.

In the presence of poor oral hygiene and gingival disease, VGS IE is far more likely to develop from bacteria attributable to routine daily activities such as toothbrushing than from a dental procedure.

Maintenance of good oral hygiene and regular access to dental care are considered as important in preventing VGS IE as taking antibiotics before certain dental procedures.

It is important to connect patients with services to facilitate access to dental care and assistance with insurance for dental coverage, especially in those patients at high risk for VGS IE.

It is still appropriate to follow the recommendation to use preventive antibiotics with high-risk patients undergoing dental procedures that involve manipulation of the gum tissue or infected areas of the teeth, or perforation of the membrane lining the mouth.

The scientific statement was prepared by the volunteer writing committee on behalf of the American Heart Association's Young Hearts Rheumatic Fever, Endocarditis and Kawasaki Disease Committee; the Council on Lifelong Congenital Heart Disease and Heart Health in the Young; the Council on Cardiovascular and Stroke Nursing; and the Council on Quality of Care and Outcomes Research.

Credit: 
American Heart Association

With the right carbon price path there is no need for excessive CO2 removal

Technologies to remove CO2 from the atmosphere, such as reforestation or bioenergy with carbon capture and storage (BECCS), are an indispensable part in most scenarios to limit climate change. However, excessive deployment of such technologies would carry risks such as land conflicts or enhanced water scarcity due to a high demand for bioenergy crops. To tackle this trade-off, a team of researchers from Potsdam and Berlin has now identified requirements for a dynamic, long-term carbon price pathway to reduce the demand for CO2 removal technologies and thus effectively limit long-term risks. The approach minimizes governance and sustainability concerns by proposing a market-based and politically feasible approach.

"The CO2 price needs to be high enough at the outset, to make sure that emissions are reduced quickly and to achieve emissions neutrality relatively fast," explains lead author Jessica Strefler from the Potsdam-Institute for Climate Impact Research PIK. "Once we have achieved this, the price curve should flatten to avoid excessive CO2 removal (carbon dioxide removal - CDR). It can be a real win-win: Such a price path reduces both the risks associated with increasing reliance on CO2 removals and the economic risks of very high CO2 prices in the second half of the century."

Costs, eco-systems, land-use conflicts

Carbon removal technologies that are currently discussed and in part already implemented, such as reforestation, direct air capture and bioenergy, the latter two combined with geological carbon storage, could be promising ways to complement emissions reduction efforts. These technologies are necessary to compensate for the remaining few percent of emissions and achieve emissions neutrality. However, if rolled out on a planetary scale, substantial risks such as high economic costs, enhanced water scarcity, or land-use conflicts could arise.

Such a large-scale deployment would only be necessary if emissions were reduced too little or too late, such that net-negative emissions would become necessary to reduce global mean temperature again after the target has been reached. Both effects could be avoided with a high enough carbon price early on. Even if not necessary, excessive CDR could still be incentivized if the carbon price continues to increase after emission neutrality.

After steep increase, carbon pricing must remain constant

"Carbon pricing is key to reach net zero greenhouse gas emissions - there is frankly no other way to reach that target," says co-author Ottmar Edenhofer, Director of both PIK and the Mercator Research Institute on Global Commons and Climate Change. "After a high start and a rather steep increase, the price curve should flatten once emission neutrality is achieved, but it needs to remain on a high level if we want to maintain both a fossil-free world and a reasonable amount of carbon dioxide removal. Our calculations in fact show that we need a substantial pricing of CO2 emissions throughout the 21st century - with beneficial effects for both the economy and the people."

Credit: 
Potsdam Institute for Climate Impact Research (PIK)

German National HPC Centre provides resources to look for cracks in the standard model

image: Does the magnetic moment of muons fit into our understanding of the laws governing the physical world around us?

Image: 
Uni Wuppertal / thavis gmbh

Since the 1970s, the Standard Model of Physics has served as the basis from which particle physics is investigated. Both experimentalists and theoretical physicists have tested the Standard Model's accuracy, and it has remained the law of the land when it comes to understanding how the subatomic world behaves.

This week, cracks formed in that foundational set of assumptions. Researchers of the "Muon g-2" collaboration from the Fermi National Accelerator Laboratory (FNAL) in the United States published further experimental findings that show that muons--heavy subatomic relatives of electrons--may have a larger "magnetic moment" than earlier Standard Model estimates had predicted, indicating that an unknown particle or force might be influencing the muon. The work builds on anomalous results first uncovered 20 years ago at Brookhaven National Laboratory (BNL), and calls into question whether the Standard Model needs to be rewritten.

Meanwhile, researchers in Germany have used Europe's most powerful high-performance computing (HPC) infrastructure to run new and more precise lattice quantum chromodynamics (lattice QCD) calculations of muons in a magnetic field. The team found a different value for the Standard Model prediction of muon behaviour than what was previously accepted. The new theoretical value is in agreement with the FNAL experiment, suggesting that a revision of the Standard Model is not needed. The results are now published in Nature.

The team primarily used the supercomputer JUWELS at the Jülich Supercomputing Centre (JSC), with computational time provided by the Gauss Centre for Supercomputing (GCS), as well as JSC's JURECA system, along with extensive computations performed at the other two GCS sites--on Hawk at the High-Performance Computing Center Stuttgart (HLRS) and on SuperMUC-NG at the Leibniz Supercomputing Centre (LRZ).

Both the experimentalists and theoretical physicists agreed that further research must be done to verify the results published this week. One thing is clear, however: the HPC resources provided by GCS were essential for the scientists to achieve the precision necessary to get these groundbreaking results.

"For the first time, lattice results have a precision comparable to these experiments. Interestingly our result is consistent with the new FNAL experiment, as opposed to previous theory results, that are in strong disagreement with it," said Prof. Kalman Szabo, leader of the Helmholtz research group, "Relativistic Quantum Field Theory" at JSC and co-author of the Nature publication. "Before deciding the fate of the Standard Model, one has to understand the theoretical differences, and new lattice QCD computations are inevitable for that."

Minor discrepancies, major implications

When BNL researchers recorded unexplained muon behaviour in 2001, the finding left physicists at a loss--the muon, a subatomic particle 200 times heavier than an electron, showed stronger magnetic properties than predicted by the Standard Model of Physics. While the initial finding suggested that muons may be interacting with previously unknown subatomic particles, the results were still not accurate enough to definitively claim a new discovery.
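
For reference, the quantity at stake in both the experiments and the lattice QCD calculations is the muon's anomalous magnetic moment, a dimensionless number defined from the g-factor. The relations below are the standard textbook definitions, not results specific to this work.

```latex
\vec{\mu} = g_\mu \,\frac{q}{2 m_\mu}\,\vec{S},
\qquad
a_\mu \equiv \frac{g_\mu - 2}{2}
```

A pointlike Dirac particle would have g_mu = 2 exactly; quantum corrections, including the hadronic contributions computed with lattice QCD, shift a_mu to a small nonzero value, and it is this small anomaly that experiment and theory compare.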

Over the next 20 years, heavy investments in new, hyper-sensitive experiments at particle accelerator facilities, as well as increasingly sophisticated theoretical approaches, have sought to confirm or refute the BNL group's findings. During this time, a research group led by the University of Wuppertal's Prof. Zoltan Fodor, another co-author of the Nature paper, was making major strides in lattice QCD simulations on the supercomputers provided by GCS. "Though our results on the muon g-2 are new, and have to be thoroughly scrutinized by other groups, we have a long record of computing various physical phenomena in quantum chromodynamics," said Prof. Fodor. "Our previous major achievements were computing the mass of the proton, the proton-neutron mass difference, the phase diagram of the early universe and a possible solution for the dark matter problem. These paved the way to our most recent result."

Lattice QCD calculations allow researchers to accurately plot subatomic particle movements and interactions with extremely fine time resolution. However, they are only as precise as computational power allows--in order to perform these calculations in a timely manner, researchers have had to limit some combination of simulation size, resolution, or time. As computational resources have gotten more powerful, researchers have been able to do more precise simulations.

"This foundational work shows that Germany's world-class HPC infrastructure is essential for doing world-class science in Europe", said Prof. Thomas Lippert, Director of the Jülich Supercomputing Centre, Professor for Quantum Computing and Modular Supercomputing at Goethe University Frankfurt, current Chairman of the GCS Board of Directors, and also co-author of the Nature paper. "The computational resources of GCS not only play a central role in deepening the discourse on muon measurements, but they help European scientists and engineers become leaders in many scientific, industrial, and societal research areas."

While Fodor, Lippert, Szabo, and the team who published the Nature paper currently use their calculations to cool the claims of physics beyond the Standard Model, the researchers are also excited to continue working with international colleagues to definitively solve the mystery surrounding muon magnetism. The team anticipates that even more powerful HPC systems will be necessary to prove the existence of physics beyond the Standard Model. "The FNAL experiment will increase the precision by a factor of four in two years. We theorists have to keep up with this pace if we want to fully exploit the new physics discovery potential of muons," Szabo said.

Credit: 
Gauss Centre for Supercomputing

TPU scientists find method to more effectively predict properties of ClO2 isotopologues

Scientists at Tomsk Polytechnic University have studied the 35ClO2 isotopologue and developed a mathematical model and software that predict its characteristics about 10 times more accurately than previously reported results. The work was carried out by a team of Russian, German and Swiss scientists. The findings are published in the academic journal Physical Chemistry Chemical Physics (IF: 3.4; Q1) and were selected as one of its best articles.

The ClO2 molecule is extremely important for medicine and biophysics, as well as for the Earth's atmosphere. It is used in medicine for disinfection and sterilization. On a global scale, ClO2 plays one of the crucial roles in the formation and migration of ozone holes.

"The theoretical background for nonlinear molecules in so-called non-singlet electronic states, including ClO2, has been poorly developed until very recently. To study such molecules, scientists use a mathematical apparatus for linear molecules. As the molecule and its structure are different, there are large observational errors.

"We created a mathematical model that takes into account subtle effects - the interaction of rotations and spin-rotational interactions in nonlinear molecules. The mathematical model gives results with high accuracy, which allows us to obtain unique data and, most importantly, to predict the properties of molecules with high accuracy," says Oleg Ulenekov, Professor of the TPU Research School of High-Energy Physics and a co-author of the article.

The TPU scientists built the mathematical model of the 35ClO2 molecule for doublet electronic states and implemented it in computer codes. The software can reproduce and predict experimental data, that is, the properties of the molecule in a given spectral range and its state transitions. Spectral analysis of the molecule based on the model yields results about 10 times more accurate than those previously known.

Based on the created model, the scientists analyzed rotational-vibrational spectra in a degenerate electronic state. The experimental part of the work was carried out in the Laboratory for Molecular Spectroscopy at the Technical University of Braunschweig (Germany) and at ETH Zurich (Switzerland).

According to the scientists, the model is quite versatile and can be further developed and adapted to other spectral ranges.

"Having published the results, the editorial staff of the journal reported that the article was selected and put in the hot topic section, the so-called pool of the best articles. Such recognition of the work of the international research team is very important and valuable. We are planning to continue the research work and apply the model for analysis of the 37ClO2 isotope," Elena Bekhtereva and Olga Gromova, Professors of the TPU Research School of High-Energy Physics, the co-authors of the article, add.

Credit: 
Tomsk Polytechnic University

New benefits from anti-diabetic drug metformin

image: Low dose metformin was found to have a nephroprotective effect similar to losartan.

Image: 
Professor Hirofumi Kai

Researchers from Kumamoto University (Japan) have found that the anti-diabetic drug metformin significantly prolongs the survival of mice in a model that simulates the pathology of non-diabetic chronic kidney disease (ND-CKD) by ameliorating pathological conditions like reduced kidney function, glomerular damage, inflammation and fibrosis. Metformin's mechanism is different from existing therapeutics which only treat symptoms, such as the blood pressure drug losartan, so the researchers believe that a combination of these medications at low dose will be highly beneficial.

CKD (chronic kidney disease) is a general term for kidney damage that results from persistent decline in kidney function due to proteinuria, kidney inflammation, or fibrosis. As CKD progresses, patients are forced to undergo dialysis, and diabetes is one of its biggest risk factors. CKD can also occur in association with lifestyle-related conditions such as hypertension, insufficient exercise, smoking, hyperuricemia, and mutations in kidney-related genes. This type of CKD is classified as non-diabetic chronic kidney disease (ND-CKD) and has limited treatment options.

Alport syndrome is an inherited kidney disease that falls under the ND-CKD umbrella. In Alport syndrome, abnormalities in type 4 collagen, a constituent of the membrane responsible for urine filtration in the kidney, cause abnormal glomerular filtration which results in chronic loss of kidney function. It is a serious disease that eventually progresses to end-stage renal failure, requiring dialysis or kidney transplant. As with diabetic kidney disease and ND-CKD, Alport syndrome is currently treated by maintaining kidney function using blood pressure-lowering drugs but patients eventually transition to end-stage renal failure. Therefore, a new therapeutic agent that is effective and safe enough to be administered to patients for a long period of time is needed.

Metformin is used as a treatment for type 2 diabetes because it improves insulin sensitivity. It is an inexpensive and safe drug that has been used by diabetics for many years. Interestingly, because of its mechanism of action, metformin was also known to be protective against many diseases involving inflammation and fibrosis, and was known to improve the renal pathology of diabetic kidney disease. However, it was unclear whether metformin also had a protective effect on ND-CKD, which is not caused by diabetes.

Researchers selected an Alport syndrome mouse model for their ND-CKD experiments and worked to identify novel therapeutic targets based on pathogenic mechanisms. They focused on drugs traditionally used for CKD patients, metformin and losartan--which works by lowering blood pressure and inhibiting proteinuria caused by increased glomerular filtration.

Administration of metformin or losartan to ND-CKD model mice significantly suppressed proteinuria and serum creatinine, which are indicators of CKD. Inflammation and fibrosis, which also reduce kidney function, significantly improved. Furthermore, metformin was found to have a nephroprotective effect similar to losartan.

The results of a detailed gene expression analysis found that the renal pathology of the ND-CKD mouse model was caused by abnormal expression of genes related to glomerular epithelial cell podocytes (cells responsible for kidney filtering) and genes involved in intracellular metabolism. Interestingly, the improvement caused by losartan was limited to genes involved in podocyte abnormalities. Metformin, on the other hand, improved the expression of genes related to podocyte abnormalities as well as those related to intracellular metabolism. In other words, metformin clearly has a different target of action from losartan, additionally addressing metabolic abnormalities.

Finally, they found that administration of low-dose metformin and losartan to model mice significantly prolonged their survival. The researchers also found that, at doses at which metformin alone was not effective, the combination of metformin and losartan significantly prolonged the mice's survival. Put plainly, this study showed that an appropriate combination of the two therapeutic drugs could effectively treat the ND-CKD (Alport syndrome) mouse model.

This study raises the possibility that metformin, a proven and inexpensive diabetic drug, may delay the progression of kidney pathology in ND-CKD, including Alport syndrome. Metformin is currently available for use in patients with diabetes in clinical practice, but not in non-diabetic patients.

"This study appears to show that metformin has therapeutic effects for both diabetic and non-diabetic kidney disease," said Professor Hirofumi Kai, who led the research project. "However, metformin is contraindicated in patients with severe renal dysfunction (eGFR

This research found that the appropriate combination of metformin and losartan significantly improved renal pathology and prolonged survival in a ND-CKD mouse model. This suggests that the old inexpensive drug metformin could become a new inexpensive drug for patients with chronic kidney disease.

Credit: 
Kumamoto University

First 3D-printed proton-conductive membrane paves way for tailored energy storage devices

image: An example of charge-discharge behavior of capacitor

Image: 
Tohoku University

The advent and increased availability of 3D printing is leading to more customizable parts at lower costs across a spectrum of applications, from wearable smart devices to autonomous vehicles. Now, a research team based at Tohoku University has 3D printed the first proton exchange membrane, a critical component of batteries, electrochemical capacitors and fuel cells. The achievement also brings the possibility of custom solid-state energy devices closer to reality, according to the researchers.

The results were published on March 29 in ACS Applied Energy Materials, a journal of the American Chemical Society.

"Energy storage devices whose shapes can be tailored enable entirely new possibilities for applications related, for example, to smart wearable, electronic medical devices, and electronic appliances such as drones," said Kazuyuki Iwase, paper author and assistant professor in professor Itaru Honma's group at the Institute of Multidisciplinary Research for Advanced Materials at Tohoku University. "3D printing is a technology that enables the realization of such on-demand structures."

Current 3D printing fabrication focuses on structural parts contributing to a final product's function, rather than imbuing parts with their own function.

"However, 3D printing of energy storage devices requires specialized, functional inks," Iwase said. "We developed a fabrication process and synthesized functionalized nano inks that enables the realization of quasi-solid-state energy storage devices based on 3D printing."

The team mixed inorganic silica nanoparticles with photo-curable resins and a proton-conducting liquid, paying close attention to the viscosity of the resulting ink. Previous studies, the researchers said, resulted in inks that could not be 3D printed. By adjusting the ratios of the ingredients, the researchers developed inks that could be employed in a dispensing 3D printer and still retain their properties even after being cured with ultraviolet irradiation. To test the properties, the researchers assembled a printed membrane between two carbon electrodes to make an operational quasi-solid-state electrochemical capacitor - a key component needed to facilitate energy storage and discharge in electronic devices.
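
The paper's exact evaluation protocol is not detailed here, but devices of this kind are commonly characterized by galvanostatic charge-discharge curves like the one in the accompanying image; under that standard (assumed) approach, the capacitance follows from

```latex
C = \frac{I\,\Delta t}{\Delta V}
```

where I is the constant discharge current, Δt the discharge time, and ΔV the corresponding voltage change (excluding any initial IR drop).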

"As we can freely choose the inorganic materials or resins for curing, we hypothesize that this technique can be applied to various types of quasi-solid-state energy conversion devices," Iwase said.

"Compared to conventional fabrication techniques, the ability to 3D print such devices opens up new possibilities for proton-conducting devices, such as shapes that can be adjusted to fit to the devices they power or that can be adapted to the personal needs of a patient wearing a smart medical device," Iwase said.

The team plans to improve the ink formulas with the goal of fully 3D printing energy storage devices with more complex shapes, and to look for industrial partners interested in applying the technique or otherwise commercializing it.

Credit: 
Tohoku University

BIO Integration Journal, Volume 2, Issue Number 1, publishes

Guangzhou, April 8, 2021: The new journal BIO Integration (BIOI) has published its fifth issue, volume 2, issue 1. BIOI is a peer-reviewed, open-access, international journal dedicated to spreading multidisciplinary views that drive the advancement of modern medicine. Aimed at bridging the gap between the laboratory, the clinic, and the biotechnology industry, it offers a cross-disciplinary platform devoted to communicating advances in biomedical research and offering insights into different areas of life science, encouraging cooperation and exchange among scientists, clinical researchers, and health care providers.

The issue contains an editorial, two mini review articles, two opinion articles, and an interview, offering insights into different areas of life science in China and internationally:

The first opinion article is "CT Imaging Features of Patients Infected with 2019 Novel Coronavirus" (http://ow.ly/lOsu30rDNnF) by authors Tianhong Yao, Huirong Lin, Jingsong Mao, Shuaidong Huo and Gang Liu. Novel coronavirus pneumonia is an acute, infectious pneumonia caused by infection with a novel coronavirus, and computed tomographic (CT) imaging is one of the main methods used to screen and diagnose patients with the disease. The authors discuss the importance and clinical value of chest CT examination in the diagnosis of COVID-19 and briefly summarize the pulmonary CT findings of COVID-19 patients at different stages, providing a reference for the CT diagnosis of COVID-19 patients.

The first mini review is entitled "Modifying an Implant: A Mini-review of Dental Implant Biomaterials" (http://ow.ly/rICT30rDNq8) by authors Oliver K. Semisch-Dieter, Andy H. Choi and Martin P. Stewart. Biomaterials have become essential for modern implants. A suitable implant biomaterial integrates into the body to perform a key function while minimizing negative immune response. Focusing on dentistry, the use of dental implants for tooth replacement requires a balance between bodily response, mechanical structure and performance, and aesthetics. The authors address the use of biomaterials in dental implants, with significant comparisons drawn between titanium (Ti) and zirconia.

The second opinion article in this issue is "Ferroptosis Resistance in Cancer: An Emerging Crisis of New Hope" (http://ow.ly/uU5230rDNti) by Daiyun Xu, Yonghui Lü, Yongxiao Li, Shengbin Li, Zhe Wang and Junqing Wang. Ferroptosis is a lethal consequence of accumulated lipid peroxidation catalyzed by ferrous iron and oxygen. This unique cell death process appears to be involved in many diseases, such as neurodegeneration, ischemia/reperfusion injury and kidney disease, and to be a druggable target in therapy-resistant cancers. Ferroptosis may therefore offer hope for the treatment of as-yet incurable diseases; however, ferroptosis susceptibility is linked to various regulatory pathways. In this article the authors integrate the current understanding of the signaling mechanisms behind ferroptotic defenses, with a view to developing novel cancer therapeutic strategies.

The second mini review is entitled "Sonoporation: Underlying Mechanisms and Applications in Cellular Regulation" (http://ow.ly/raPx30rDNtt) by Yue Li, Zhiyi Chen and Shuping Ge. Ultrasound combined with microbubble-mediated sonoporation has been applied to enhance the intracellular delivery of drugs or genes. Sonoporation creates openings in the cell membrane, triggered by ultrasound-mediated oscillation and destruction of microbubbles, and its occurrence depends on multiple factors, including the ultrasonic parameters, microbubble size, and the distance between microbubbles and cells. Recent advances are beginning to extend applications through the assistance of contrast agents, which allow ultrasound to connect directly to cellular functions such as gene expression, cellular apoptosis, differentiation, and even epigenetic reprogramming.

Other articles include:

Editorial

Looking Back and Forward: From the Perspective of BIO Integration (http://ow.ly/Nmjt30rDNm8)

Voice Series

Interview with Prof. Kwang Soo Kim, Harvard Medical School (http://ow.ly/g7tS30rDNne)

Credit: 
Compuscript Ltd

eBird data used to shape eagle management

ITHACA, N.Y. - Millions of people donate billions of dollars' worth of their time to citizen-science projects each year. While these efforts have broadened our understanding of everything from birds to bees to bracken ferns, rarely has citizen-science data informed policy at the highest levels of government. But that may be changing.

One of the world's largest citizen-science efforts, the Cornell Lab of Ornithology's eBird, is now helping the federal government streamline and refine its process for assessing eagle populations and informing eagle management.

New research out this week in the Journal of Applied Ecology finds that citizen-science data collected by 180,000 birdwatchers through eBird is the most accurate and reliable data source available to the U.S. Fish and Wildlife Service for identifying areas of low and high Bald Eagle abundance as the Service fine-tunes its eagle permitting policy.

The research team, including scientists at the U.S. Fish and Wildlife Service and the Cornell Lab of Ornithology, found that eBird data and the advanced statistical models generated by eBird Status and Trends provided the best available picture of Bald Eagles across space and time when compared with other datasets.

"It is important to account for eagle use throughout the entire year, at a refined spatial scale, in order to have confidence that activities in the low-exposure zones would pose less risk to eagles overall," says Emily Bjerre, wildlife biologist at the National Raptor Program at the USFWS.

Brian Millsap, national raptor coordinator at the U.S. Fish and Wildlife Service, says that the "wall-to-wall" coverage provided by eBird was critically important. "The other data or surveys we evaluated generally cover a specific 'season'--for example, winter or breeding," says Millsap, "but then you don't have any information of what eagle abundance in that area is the rest of the year."

Thanks to advances in quantitative methods, researchers can now overcome biases, such as observer error, that are often associated with citizen-science data. Viviana Ruiz Gutierrez, assistant director of the Cornell Lab's Center for Avian Population Studies and lead author of the study, says the key is the validation process--cross-checking eBird models with existing bird population data to confirm their accuracy.
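
As a purely illustrative sketch of that validation idea (this is not the study's actual workflow, and the numbers are invented), one could compare model-predicted relative abundance against independent survey counts at the same sites using a rank correlation:

```python
# Illustrative only: cross-checking model predictions against an
# independent survey, in the spirit of the validation step described above.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired values for eight survey sites:
# model-predicted relative abundance vs. independently observed counts.
predicted = np.array([0.10, 0.40, 0.20, 0.90, 0.05, 0.70, 0.30, 0.60])
observed = np.array([1, 5, 2, 12, 0, 9, 4, 6])

rho, p_value = spearmanr(predicted, observed)
print(f"Spearman rank correlation: rho = {rho:.2f} (p = {p_value:.3f})")
# A high rank correlation would indicate that the model orders sites by
# abundance consistently with the independent survey data.
```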

The research suggests that eBird data can be the primary data source used to define areas of low abundance for Bald Eagles, paving the way for this kind of citizen-science data to potentially be used in the future to shape policy decisions at the federal level.
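
To make the "areas of low abundance" idea concrete, here is a minimal sketch, assuming gridded week-by-week relative-abundance estimates; the grid size, values, and cutoff are hypothetical and are not the Service's actual criteria. It flags cells that stay below a chosen cutoff in every week, echoing the point above about accounting for eagle use throughout the entire year:

```python
# Minimal sketch: delineating candidate low-abundance ("low-exposure")
# cells from hypothetical weekly relative-abundance grids.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 52 weekly 10 x 10 grids of relative abundance,
# scaled so abundance declines from one side of the grid to the other.
gradient = np.linspace(1.0, 0.01, 10)                        # per-column scaling
relative_abundance = rng.random((52, 10, 10)) * gradient

threshold = 0.05                                             # hypothetical cutoff
low_exposure = (relative_abundance < threshold).all(axis=0)  # below cutoff every week

print(f"{low_exposure.sum()} of {low_exposure.size} grid cells "
      "stay below the cutoff year-round")
```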

Millsap says that using eBird for these kinds of assessments could provide significant cost-savings in the future, for example potentially reducing or eliminating the need for multiyear survey periods to inform energy-related eagle permitting decisions--a win-win for Bald Eagles and green energy.

According to Ruiz Gutierrez, the collaboration between the Cornell Lab and the U.S. Fish and Wildlife Service resulted in a framework that other agencies and governing bodies can follow to make use of citizen-science data going forward. "This case study could help convince other agencies and governments around the world to use citizen-science data to complement existing methods of assessing and safeguarding bird populations," she says.

Credit: 
Cornell University