Tech

Doctoral thesis introduces a scale to measure human trust in technology

image: Siddharth Nakul Gulati

Image: 
Tallinn University

Today's headlines regularly chronicle technology-based issues such as security hacks, inappropriate or illegal surveillance, misuse of personal data, the spread of misinformation, algorithmic bias, and a lack of transparency.

Siddharth Nakul Gulati explains why these aspects of trust matter: "Quite recently, elections in the US have sparked debate over whether or not the systems put in place for voting can be trusted. So, trust is an important concept binding together different facets of human interaction with technology, and it is important to be able to both study and measure it." To develop the scale, he ran a sequence of studies with different kinds of technology, analyzed with an empirical technique called structural equation modelling. Each study built upon the results of the previous one.
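Structural equation modelling of this kind is typically run on survey responses, with latent factors each measured by several questionnaire items. As a rough illustration only, here is a minimal sketch using the Python package semopy; the factor and item names are invented placeholders, not the thesis's actual seven-factor model.

    # Minimal structural-equation-modelling sketch (semopy package).
    # Factor and item names are invented placeholders, not the model
    # from the thesis.
    import pandas as pd
    from semopy import Model

    desc = """
    perceived_risk =~ q1 + q2 + q3
    benevolence    =~ q4 + q5 + q6
    trust          =~ q7 + q8 + q9
    trust ~ perceived_risk + benevolence
    """

    survey = pd.read_csv("responses.csv")  # one row of Likert ratings per respondent

    model = Model(desc)
    model.fit(survey)        # estimates factor loadings and path coefficients
    print(model.inspect())   # estimates, standard errors, p-values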

Siddharth further explains the importance of trust in human-technology interactions.

Intelligent algorithms, made possible by advances in artificial intelligence (AI) and machine learning (ML), drive most of our day-to-day interactions with digital technology. From a simple Google search, to product and movie recommendations on Amazon and Netflix, to more complex tasks such as managing an electric power grid or helping doctors make crucial diagnostic decisions, intelligent algorithms are encroaching ever further into our daily lives and have the capacity to reason and make decisions on behalf of their human counterparts.

Developments in AI and ML have also turned concepts once thought of as science fiction, such as driverless cars, autonomous drone delivery systems, collaborative robots as teammates, and robotic concierges and hosts, into realities in daily use across the globe. As these intelligent technologies become the norm and encroach further into our daily lives, they are making more and more decisions on people's behalf. This naturally raises questions about trust in such autonomous systems. Can the advice and recommendations they offer us be trusted? How can this trust be measured? These were some of the questions guiding the doctoral thesis.

Four studies were conducted to develop the scale. Before carrying them out, Siddharth specified an initial model of seven factors thought to affect trust. The first study tested this model on individuals' trust perceptions of the Estonian e-voting service. Although it showed that some factors in the initial model do not predict trust, Siddharth ran further studies to identify, with a high degree of statistical certainty, which of the seven initial factors actually do.

He therefore carried out a second study, this time examining trust in Siri, Apple's intelligent personal assistant, and identified four of the initial seven factors as affecting trust.

To make these claims with a high degree of statistical certainty, a third study was run using a technique called design fiction: instead of studying trust with actual technical artefacts, Siddharth used fictional scenarios to gauge users' trust perceptions of technologies and devices that do not yet exist but lie on plausible future trajectories. Using two such scenarios, he identified, with a high degree of statistical certainty, three factors that predict trust in human-technology interactions. Building on these three factors, the final scale developed as part of Siddharth's PhD consists of 12 statements that can be used to measure trust in human-technology interactions.

The scale has several use cases. For example, researchers and practitioners can use it to calculate a trust score for an individual product or service, compare trust scores for two different products or services (or two versions of the same one), or compare trust scores across multiple products. "If there is a research project which involves understanding and measuring how much individuals trust Covid-19 tracing applications, the scale developed during my thesis could be used. The results obtained can then help researchers and practitioners to better design these applications should user trust levels with them be low," he adds.
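For instance, here is a minimal sketch of such a comparison, assuming each respondent rates the 12 statements on a 1-to-5 Likert scale; the data and the mean-rating scoring convention are illustrative assumptions, not prescribed by the thesis.

    import numpy as np

    # Hypothetical responses: rows = respondents, columns = the 12 scale
    # statements, each rated 1 (strongly disagree) to 5 (strongly agree).
    rng = np.random.default_rng(0)
    product_a = rng.integers(3, 6, size=(40, 12))  # skews agreeable
    product_b = rng.integers(1, 4, size=(40, 12))  # skews disagreeable

    # A respondent's score = mean of their 12 ratings; a product's
    # trust score = mean over respondents.
    score_a = product_a.mean(axis=1).mean()
    score_b = product_b.mean(axis=1).mean()
    print(f"Trust score, product A: {score_a:.2f}; product B: {score_b:.2f}")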

Credit: 
Estonian Research Council

Engineers go microbial to store energy, sequester CO2

ITHACA, N.Y. - By borrowing nature's blueprints for photosynthesis, Cornell University bioengineers have found a way to efficiently absorb and store large-scale, low-cost renewable energy from the sun - while sequestering atmospheric carbon dioxide to use later as a biofuel.

The key: Let bioengineered microbes do all the work.

Buz Barstow, assistant professor of biological and environmental engineering at Cornell University, and doctoral candidate Farshid Salimijazi have assembled theoretical solutions and models that calculate efficiency in microbes, which could take in electricity and store carbon dioxide at least five times more efficiently than photosynthesis, the process by which plants turn sunlight into chemical energy.
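To see why a factor of five is plausible, consider a back-of-envelope sunlight-to-fuel comparison; every number below is an illustrative assumption for this sketch, not a value from the Joule paper.

    # Back-of-envelope sunlight-to-fuel efficiency comparison.
    # All numbers are illustrative assumptions, not results from the paper.
    photosynthesis = 0.01        # ~1% sunlight-to-biomass, typical crop

    solar_pv = 0.20              # commercial photovoltaic module
    electricity_to_fuel = 0.30   # assumed electromicrobial conversion
    electromicrobial = solar_pv * electricity_to_fuel  # 6% overall

    print(f"Electromicrobial vs. photosynthesis: "
          f"{electromicrobial / photosynthesis:.0f}x")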

"Soon, we will be living in a world with plentiful renewable electricity," Barstow said. "But in order to bring the bountiful energy to the grid, we will need energy storage with a capacity thousands of times greater than we have today."

The research, "Constraints on the Efficiency of Engineered Electromicrobial Production," was published in October in the journal Joule. Salimijazi is lead author.

Electromicrobial production technologies fuse biology and electronics so that renewable electricity gathered from wind, sun and water can be stored as chemical energy in energy-storage polymers made by engineered microbes. Solving a storage problem, these polymers can then be used on demand or turned into low-carbon transportation fuels.

"We need think about how we can store energy for rainy days or for when the wind doesn't gust," he said, noting that battery or fuel-cell technology can take up a lot of space. "We need solutions on how to store this large amount of energy in a cheap and clean way."

In the paper, the researchers suggest taking advantage of microbial electrosynthesis, in which incoming electrons are fed directly to an engineered microbe, which would convert carbon dioxide into non-carbon molecules. More research is necessary to determine the best microbes for the job.

Postdoctoral researcher Alexa Schmitz, a member of Barstow's lab, said the engineered microbes both store energy and absorb carbon dioxide. The CO2 can be converted into a hydrocarbon fuel - effectively neutralizing the carbon cycle, resulting in net-zero carbon emissions.

"While the hydrocarbon fuel would not be carbon negative, carbon neutrality is still very good in this case," Schmitz said. "For a lot of machinery or in aviation, society may still need low-density hydrocarbon fuels for that sector."

That scenario is much better than carbon expansion, she said. "We want to be able to make low-carbon fuel without digging for oil or getting gas out of the ground," she said, "and then releasing the carbon into the atmosphere."

"The microbes act as an efficient microscopic fuel cell," said Barstow, a Cornell Atkinson fellow. "That's why we're offering this road map for the best ways to exploit this potential. More research is necessary to determine the best microbes for the job, as everything comes down to efficiency at the end of the day."

Credit: 
Cornell University

Weak force has strong impact on nanosheets

image: A transmission electron microscope image at left and a color map version at right highlights deformations in silver nanosheets laid over iron oxide nanospheres. Rice University scientists determined that van der Waals forces between the spheres and sheets are sufficient to distort the silver, opening defects in their crystalline lattices that could be used in optics or catalysis.

Image: 
The Jones Lab/Rice University

HOUSTON - (Dec. 15, 2020) - You have to look closely, but the hills are alive with the force of van der Waals.

Rice University scientists found that nature's ubiquitous "weak" force is sufficient to indent rigid nanosheets, extending their potential for use in nanoscale optics or catalytic systems.

Changing the shape of nanoscale particles changes their electromagnetic properties, said Matt Jones, the Norman and Gene Hackerman Assistant Professor of Chemistry and an assistant professor of materials science and nanoengineering. That makes the phenomenon worth further study.

"People care about particle shape, because the shape changes its optical properties," Jones said. "This is a totally novel way of changing the shape of a particle."

Jones and graduate student Sarah Rehn led the study, published in the American Chemical Society journal Nano Letters.

The van der Waals force is a weak, distance-dependent attraction that arises between neutral molecules through randomly fluctuating dipoles. Though small, its effects can be seen in the macro world, as when geckos walk up walls.

"Van der Waals forces are everywhere and, essentially, at the nanoscale everything is sticky," Jones said. "When you put a large, flat particle on a large, flat surface, there's a lot of contact, and it's enough to permanently deform a particle that's really thin and flexible."

In the new study, the Rice team decided to see if the force could be used to manipulate 8-nanometer-thick sheets of ductile silver. After a mathematical model showed them it was possible, they placed 15-nanometer-wide iron oxide nanospheres on a surface and sprinkled prism-shaped nanosheets over them.

Without applying any other force, they saw through a transmission electron microscope that the nanosheets acquired permanent bumps where none existed before, right on top of the spheres. As measured, the distortions were about 10 times larger than the width of the spheres.

The hills weren't very high, but simulations confirmed that the van der Waals attraction between the sheet and the substrate surrounding the spheres was sufficient to influence the plasticity of the silver's crystalline atomic lattice. They also showed that the same effect would occur in silicon dioxide and cadmium selenide nanosheets, and perhaps other compounds.

"We were trying to make really thin, large silver nanoplates and when we started taking images, we saw these strange, six-fold strain patterns, like flowers," said Jones, who earned a multiyear Packard Fellowship in 2018 to develop advanced microscopy techniques.

"It didn't make any sense, but we eventually figured out that it was a little ball of gunk that the plate was draped over, creating the strain," he said. "We didn't think anyone had investigated that, so we decided to have a look.

"What it comes down to is that when you make a particle really thin, it becomes really flexible, even if it's a rigid metal," Jones said.

In further experiments, the researchers saw nanospheres could be used to control the shape of the deformation, from single ridges when two spheres are close, to saddle shapes or isolated bumps when the spheres are farther apart.

They determined that sheets less than about 10 nanometers thick and with aspect ratios of about 100 are most amenable to deformation.

The researchers noted their technique creates "a new class of curvilinear structures based on substrate topography" that "would be difficult to generate lithographically." That opens new possibilities for electromagnetic devices that are especially relevant to nanophotonic research.

Straining the silver lattice also turns the inert metal into a possible catalyst by creating defects where chemical reactions can happen.

"This gets exciting because now, most people make these kinds of metamaterials through lithography," Jones said. "That's a really powerful tool, but once you've used that to pattern your metal, you can never change it.

"Now we have the option, perhaps someday, to build a material that has one set of properties and then change it by deforming it," he said. "Because the forces required to do so are so small, we hope to find a way to toggle between the two."

Credit: 
Rice University

To the brain, reading computer code is not the same as reading language

CAMBRIDGE, MA -- In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

However, although reading computer code activates the multiple demand network, it appears to rely more on different parts of the network than math or logic problems do, suggesting that coding does not precisely replicate the cognitive demands of mathematics either.

"Understanding computer code seems to be its own thing. It's not the same as language, and it's not the same as math and logic," says Anna Ivanova, an MIT graduate student and the lead author of the study.

Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in eLife. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory and Tufts University were also involved in the study.

Language and cognition

A major focus of Fedorenko's research is the relationship between language and other cognitive functions. In particular, she has been studying the question of whether other functions rely on the brain's language network, which includes Broca's area and other regions in the left hemisphere of the brain. In previous work, her lab has shown that music and math do not appear to activate this language network.

"Here, we were interested in exploring the relationship between language and computer programming, partially because computer programming is such a new invention that we know that there couldn't be any hardwired mechanisms that make us good programmers," Ivanova says.

There are two schools of thought regarding how the brain learns to code, she says. One holds that in order to be good at programming, you must be good at math. The other suggests that because of the parallels between coding and language, language skills might be more relevant. To shed light on this issue, the researchers set out to study whether brain activity patterns while reading computer code would overlap with language-related brain activity.

The two programming languages the researchers focused on in this study are known for their readability: Python and ScratchJr, a visual programming language designed for children ages 5 and older. The subjects in the study were all young adults proficient in the language they were being tested on. While the programmers lay in a functional magnetic resonance imaging (fMRI) scanner, the researchers showed them snippets of code and asked them to predict what action the code would produce.
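A typical trial might show a short snippet and ask the participant to work out its output; the example below is a hypothetical stimulus in the spirit of the study, not one of its actual materials.

    # Hypothetical Python stimulus: before reading on, predict what
    # this snippet prints.
    words = ["tree", "sun", "apple", "sky"]
    result = []
    for w in words:
        if len(w) > 3:
            result.append(w.upper())
    print(result)
    # Prints ['TREE', 'APPLE']: only words longer than three letters
    # are kept, in uppercase.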

The researchers saw little to no response to code in the language regions of the brain. Instead, they found that the coding task mainly activated the so-called multiple demand network. This network, whose activity is spread throughout the frontal and parietal lobes of the brain, is typically recruited for tasks that require holding many pieces of information in mind at once, and is responsible for our ability to perform a wide variety of mental tasks.

"It does pretty much anything that's cognitively challenging, that makes you think hard," Ivanova says.

Previous studies have shown that math and logic problems seem to rely mainly on the multiple demand regions in the left hemisphere, while tasks that involve spatial navigation activate the right hemisphere more than the left. The MIT team found that reading computer code appears to activate both the left and right sides of the multiple demand network, and ScratchJr activated the right side slightly more than the left. This finding goes against the hypothesis that math and coding rely on the same brain mechanisms.

Effects of experience

The researchers say that while they didn't identify any regions that appear to be exclusively devoted to programming, such specialized brain activity might develop in people who have much more coding experience.

"It's possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system," Fedorenko says. "In people who are familiar with coding and can efficiently do these tasks, but have had relatively limited experience, it just doesn't seem like you see any specialization yet."

In a companion paper appearing in the same issue of eLife, a team of researchers from Johns Hopkins University also reported that solving code problems activates the multiple demand network rather than the language regions.

The findings suggest there isn't a definitive answer to whether coding should be taught as a math-based skill or a language-based skill. In part, that's because learning to program may draw on both language and multiple demand systems, even if -- once learned -- programming doesn't rely on the language regions, the researchers say.

"There have been claims from both camps -- it has to be together with math, it has to be together with language," Ivanova says. "But it looks like computer science educators will have to develop their own approaches for teaching code most effectively."

Credit: 
Massachusetts Institute of Technology

Plastics pose threat to human health

WASHINGTON, DC--Plastics contain and leach hazardous chemicals, including endocrine-disrupting chemicals (EDCs) that threaten human health. An authoritative new report, Plastics, EDCs, & Health, from the Endocrine Society and IPEN (the International Pollutants Elimination Network), presents a summary of international research on the health impacts of EDCs and describes the alarming health effects of widespread contamination from EDCs in plastics.

EDCs are chemicals that disturb the body's hormone systems and can cause cancer, diabetes, reproductive disorders, and neurological impairments of developing fetuses and children. The report describes a wealth of evidence supporting direct cause-and-effect links between the toxic chemical additives in plastics and specific health impacts to the endocrine system.

Conservative estimates point to more than a thousand manufactured chemicals in use today that are EDCs. Known EDCs that leach from plastics and threaten health include bisphenol A and related chemicals, flame retardants, phthalates, per- and polyfluoroalkyl substances (PFAS), dioxins, UV-stabilizers, and toxic metals such as lead and cadmium. Plastic containing EDCs is used extensively in packaging, construction, flooring, food production and packaging, cookware, health care, children's toys, leisure goods, furniture, home electronics, textiles, automobiles and cosmetics.

Key findings in the report include:

One hundred forty-four chemicals or chemical groups known to be hazardous to human health are actively used in plastics for functions ranging from antimicrobial activity to colorants, flame retardants, solvents, UV-stabilizers, and plasticizers.

Exposure can occur across the entire life span of plastic products, from manufacturing through consumer contact to recycling, waste management and disposal.

EDC exposure is a universal problem. Testing of human samples consistently shows nearly all people have EDCs in their bodies.

Microplastics contain chemical additives, which can leach out of the microplastic and expose the population. They can also bind and accumulate toxic chemicals from the surrounding environment, such as seawater and sediment, functioning as carriers for toxic compounds.

Bioplastics and biodegradable plastics, promoted as more ecological alternatives, contain chemical additives similar to those in conventional plastics and likewise have endocrine-disrupting effects.

"Many of the plastics we use every day at home and work are exposing us to a harmful cocktail of endocrine-disrupting chemicals," said the report's lead author, Jodi Flaws, Ph.D., of the University of Illinois at Urbana-Champaign in Urbana, Ill. "Definitive action is needed on a global level to protect human health and our environment from these threats."

The Swiss Ambassador for the Environment, Franz Xaver Perrez, commented, "Plastics, EDCs, and Health synthesizes the science on EDCs and plastics. It is our collective responsibility to enact public policies to address the clear evidence that EDCs in plastics are hazards threatening public health and our future."

In May, the Swiss Government submitted a proposal to list the plastic additive UV-328, the first ultraviolet (UV) stabilizer so nominated, under the Stockholm Convention. UV stabilizers, a subset of the EDCs described in this report, are a common additive to plastics. The Stockholm Convention is the definitive global instrument for assessing, identifying, and controlling the most hazardous chemical substances on the planet.

The need for effective public policy to protect public health from EDCs in plastics is all the more urgent given the industry's dramatic growth projections. Pamela Miller, IPEN Co-Chair, commented, "This report clarifies that the current acceleration of plastic production, projected to increase by 30-36% in the next six years, will greatly exacerbate EDC exposures and rising global rates of endocrine diseases. Global policies to reduce and eliminate EDCs from plastic and reduce exposures from plastic recycling, plastic waste, and incineration are imperative. EDCs in plastics are an international health issue that is felt acutely in the global south where toxic plastic waste shipments from wealthier countries inundate communities."

"Endocrine-disrupting chemical exposure is not only a global problem today, but it poses a serious threat to future generations," said co-author Pauliina Damdimopoulou, Ph.D., of the Karolinska Institutet in Stockholm, Sweden. "When a pregnant woman is exposed, EDCs can affect the health of her child and eventual grandchildren. Animal studies show EDCs can cause DNA modifications that have repercussions across multiple generations."

Credit: 
The Endocrine Society

'Peecycling' payoff: Urine diversion shows multiple environmental benefits when used at city scale

Diverting urine away from municipal wastewater treatment plants and recycling the nutrient-rich liquid to make crop fertilizer would result in multiple environmental benefits when used at city scale, according to a new University of Michigan-led study.

The study, published online Dec. 15 in the journal Environmental Science & Technology, modeled large-scale, centralized urine-diversion and fertilizer-processing systems--none of which currently exist--and compared their expected environmental impacts to conventional wastewater treatment and fertilizer production methods.

The researchers found that urine diversion and recycling led to significant reductions in greenhouse gas emissions, energy use, freshwater consumption and the potential to fuel algal blooms in lakes and other water bodies. The reductions ranged from 26% to 64%, depending on the impact category.

"Urine diversion consistently had lower environmental impacts than conventional systems," said lead author Stephen Hilton, who conducted the study for his master's thesis at U-M's School for Environment and Sustainability.

"Our analyses clearly indicate that the well-defined benefits--reduced wastewater management requirements and avoided synthetic fertilizer production--exceed the environmental impacts of urine collection, processing and transport, suggesting that further efforts to develop such systems are warranted."

Urine contains the essential nutrients nitrogen, phosphorus and potassium and has been used as a crop fertilizer for thousands of years. In recent years, urine recycling has been studied as a way to produce renewable fertilizers while reducing the amount of energy and chemicals needed to treat wastewater.

While no city-scale urine-diversion and recycling systems exist, several small-scale demonstration projects are underway, including one at U-M and a Vermont project led by the Rich Earth Institute. Hilton used data from both projects to model the likely environmental impacts of city-scale urine diversion and recycling.

Wastewater treatment was a major focus of the study, and data from treatment plants in Michigan, Vermont and Virginia were used in the analysis. The Virginia plant is located in the Chesapeake Bay region and served as an example of treatment plants with strict requirements for nitrogen and phosphorus removal.

Using a technique called life-cycle assessment, which provides a comprehensive evaluation of multiple environmental impacts, Hilton and his colleagues compared the performance of large-scale, centralized urine-diversion and fertilizer-production facilities to conventional wastewater treatment plants and the production of synthetic fertilizers using non-renewable resources.
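In outline, a comparison of this kind tallies each impact category for both systems and reports the relative change; the sketch below is schematic, with invented impact values rather than data from the study (which found reductions of 26% to 64%).

    import pandas as pd

    # Schematic life-cycle comparison. Impact values are invented
    # placeholders (arbitrary units), not results from the U-M study.
    impacts = pd.DataFrame({
        "conventional":    [100.0, 100.0, 100.0, 100.0],
        "urine_diversion": [55.0, 70.0, 40.0, 74.0],
    }, index=["greenhouse_gas", "energy_use",
              "eutrophication", "freshwater_use"])

    impacts["reduction_%"] = 100 * (1 - impacts["urine_diversion"]
                                    / impacts["conventional"])
    print(impacts)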

Urine diversion and recycling was the clear winner in most categories and in some cases eliminated the need for certain wastewater-treatment chemicals. On the downside, one method for making urine-derived fertilizer led to consistent increases in acidification.

A few previous life-cycle assessments have compared the environmental impacts of urine recycling to conventional systems. But the new U-M study is the first to include detailed modeling of wastewater treatment processes, allowing the researchers to compare the amount of energy and chemicals used in each method.

"This is the first in-depth analysis of the environmental performance and benefits of large-scale urine recycling relative to conventional wastewater treatment and fertilizer production," said Greg Keoleian, senior author of the ES&T paper and director of the Center for Sustainable Systems at the U-M School for Environment and Sustainability. He also chaired Hilton's thesis committee.

About half of the world's food supply depends on synthetic fertilizers produced from nonrenewable resources. Phosphate rock is mined and processed to make phosphate fertilizer. The production of nitrogen fertilizer is an energy-intensive process that uses natural gas and is responsible for 1.2% of world energy use and associated greenhouse gas emissions.

At the same time, water and wastewater systems consume 2% of U.S. electricity, with nutrient removal being one of the most energy-intensive processes.

Diversion of urine to recover and recycle nitrogen and phosphorus has been advocated as a way to improve the sustainability of both water management and food production. It has the potential to reduce the amount of energy and chemicals needed to treat wastewater while decreasing the flow of nutrients that fuel harmful algal blooms in lakes.

However, large-scale diversion and recycling would require systems to collect and transport urine, process it into fertilizer, then ship the end product to customers. Each of those steps has environmental impacts.

In 2016, U-M researchers were awarded a $3 million grant from the National Science Foundation to study the potential of converting human urine into safe crop fertilizer. The project is led by Nancy Love and Krista Wigginton of the U-M Department of Civil and Environmental Engineering and involves testing advanced urine-treatment methods and investigating attitudes people hold about the use of urine-derived fertilizers. Love is also a co-author of the new Environmental Science & Technology paper.

As part of the NSF-funded effort, urine-diverting demonstration toilets were installed on U-M's North Campus, along with a lab where the urine is converted to fertilizer. Hilton, who was a dual-degree master's student at the U-M School for Environment and Sustainability and the Department of Civil and Environmental Engineering, used data from the project to help model a large-scale system that diverts urine to make fertilizer.

"These new findings are encouraging because they demonstrate the potential environmental benefits of large-scale urine-diversion and recycling systems, suggesting that we're on the right track and should continue to develop these technologies," said study co-author Glen Daigger, a U-M professor of civil and environmental engineering and a member of Hilton's thesis committee.

Credit: 
University of Michigan

Novel MRI contrast agent sidesteps toxic effects of current products

BOSTON - Adding a contrast-enhancing agent to magnetic resonance imaging (MRI) significantly improves image quality and allows radiologists who interpret MRI scans to pick up subtle anatomic details and abnormalities that might otherwise be missed.

But this important diagnostic tool is often denied to patients with chronic kidney disease because all commercially available contrast agents are gadolinium-based contrast agents (GBCAs). Gadolinium, a heavy metal, is associated with the devastating condition nephrogenic systemic fibrosis that has been observed in renally impaired patients. Gadolinium from GBCAs is also retained in the brain, bones, skin and other organs, even in patients with normal kidney function.

Now, researchers at Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) are developing an alternative MRI contrast agent based on manganese, an essential element in human nutrition -- found in nuts, legumes, seeds, leafy green vegetables and whole grains -- that is easily processed and eliminated by the body. Manganese has magnetic properties similar to those of gadolinium, but without gadolinium's toxicity.

Their work is described in a study published in the journal Investigative Radiology.

"This manganese-based contrast agent Mn-PyC3A does everything a GBCA would do," says Eric M. Gale, PhD, an investigator in Biomedical Engineering at MGH and assistant professor of Radiology at HMS, who is co-inventor of Mn-PyC3A.

"This is obviously important for patients with chronic kidney disease and other forms of renal insufficiency that might require careful risk/benefit analysis before undergoing a GBCA-enhanced MRI, but we can also envision giving Mn-PyC3A to any patient requiring a contrast-enhanced MRI," he says. "There are patients who require many GBCA-enhanced MRI examinations over the course of years for disease surveillance or screening."

Previous preclinical imaging studies demonstrated that Mn-PyC3A is diagnostically equivalent to a GBCA for visualization of blood vessels and tumors.

MRI contrast agents belong to a class of molecules called chelates in which a metal ion (charged particle) is wrapped up by an organic molecule in order to avoid patient exposure to the metal ion, which may deposit in tissues. In the case of manganese, it is very difficult to develop a chelate that binds the metal ion tightly without compromising the MRI-signal-generating properties of manganese. Mn-PyC3A was optimized to hold manganese very tightly and to generate MRI contrast as effectively as commercial GBCAs, Gale explains.

In their study, Gale and co-authors used simultaneous positron emission tomography and MRI (PET-MRI) to compare Mn-PyC3A against an older manganese-based contrast agent called Mn-DPDP, which is approved for use in liver imaging but is no longer marketed. The lead author of the study, Iris Yuwen Zhou, PhD, who is an MGH researcher and instructor in Radiology at HMS, explains that labeling the manganese-based contrast agents with a positron-emitting isotope of manganese enabled the authors to use PET-MRI to "visualize how Mn-PyC3A and Mn-DPDP move about and are eliminated from the body in real time and then to identify and quantify trace levels of residual manganese hours and days after injection."

The PET-MRI data highlight key differences between Mn-PyC3A and Mn-DPDP. One key finding is that substantial amounts of residual manganese are identified in organs like the bone, salivary glands, liver and gastrointestinal tract after Mn-DPDP injection, whereas manganese injected as Mn-PyC3A is rapidly and completely eliminated from the body and does not accumulate in any tissue. Peter Caravan, PhD, who is co-inventor of Mn-PyC3A, co-director of the MGH Institute for Innovation in Imaging and professor of Radiology at HMS, points out: "PET-MRI spotlights major differences in manganese biodistribution between Mn-PyC3A and Mn-DPDP, and demonstrates how robust Mn-PyC3A is against releasing the manganese ion."

PET imaging showed that Mn-PyC3A is eliminated predominantly through the kidneys, but that a fraction is also eliminated through the liver and excreted into the feces. To understand how renal impairment could affect the body's ability to clear Mn-PyC3A, the authors also used PET-MRI to study Mn-PyC3A in a rat model of renal impairment. Their data demonstrate that Mn-PyC3A is also rapidly and efficiently eliminated from renally impaired rats, the major difference being that a greater fraction is cleared through the liver. "Clinical GBCAs are eliminated only through the kidneys, and thus remain in renally impaired patients for longer periods, resulting in increased gadolinium exposure. Our imaging data show that, for Mn-PyC3A, the liver compensates for diminished renal function and ensures rapid and complete elimination of Mn-PyC3A," says Zhou.

Lastly, the authors performed an experiment to quantify manganese and gadolinium retained in tissues seven days after an equal dose of Mn-PyC3A and gadoterate, the state-of-the-art GBCA with respect to tissue gadolinium retention, in renally impaired rats. The experiment showed significantly more efficient whole-body elimination of manganese, further underscoring how efficiently Mn-PyC3A is eliminated.

Credit: 
Massachusetts General Hospital

Discovering gaps in food safety practices of small Texas farms

Editor's Note: This study and its survey were conducted prior to Institutional Review Board (IRB) approval, a necessary step in the University's compliance protocols to ensure federal regulations and ethical principles are followed. Because the survey was conducted prior to the required approval, the principal investigator has voluntarily retracted the manuscript. An internal review will follow.

Credit: 
University of Houston

Oceanographers have an explanation for the Arctic's puzzling ocean turbulence

image: This image shows the activity of eddies simulated in the Arctic Ocean. The left panel shows seasonal changes in eddy activity at the surface of the ocean, compared to the right panel, where eddy behavior is unaffected by the seasons, and remains the same at deeper levels of the ocean.

Image: 
Courtesy of Gianluca Meneghello

Eddies are often seen as the weather of the ocean. Like large-scale circulations in the atmosphere, eddies swirl through the ocean as slow-moving sea cyclones, sweeping up nutrients and heat, and transporting them around the world.

In most oceans, eddies are observed at every depth and are stronger at the surface. But since the 1970s, researchers have observed a peculiar pattern in the Arctic: In the summer, Arctic eddies resemble their counterparts in other oceans, popping up throughout the water column. However, with the return of winter ice, Arctic waters go quiet, and eddies are nowhere to be found in the first 50 meters beneath the ice. Meanwhile, deeper layers continue to stir up eddies, unaffected by the abrupt change in shallower waters.

This seasonal turn in Arctic eddy activity has puzzled scientists for decades. Now an MIT team has an explanation. In a paper published today in the Journal of Physical Oceanography, the researchers show that the main ingredients for driving eddy behavior in the Arctic are ice friction and ocean stratification.

By modeling the physics of the ocean, they found that wintertime ice acts as a frictional brake, slowing surface waters and preventing them from speeding into turbulent eddies. This effect only goes so deep; between 50 and 300 meters deep, the researchers found, the ocean's salty, denser layers act to insulate water from frictional effects, allowing eddies to swirl year-round.

The results highlight a new connection between eddy activity, Arctic ice, and ocean stratification, that can now be factored into climate models to produce more accurate predictions of Arctic evolution with climate change.

"As the Arctic warms up, this dissipation mechanism for eddies, i.e. the presence of ice, will go away, because the ice won't be there in summer and will be more mobile in the winter," says John Marshall, professor of oceanography at MIT. "So what we expect to see moving into the future is an Arctic that is much more vigorously unstable, and that has implications for the large-scale dynamics of the Arctic system."

Marshall's co-authors on the paper include lead author Gianluca Meneghello, a research scientist in MIT's Department of Earth, Atmospheric and Planetary Sciences, along with Camille Lique, Pal Erik Isachsen, Edward Doddridge, Jean-Michel Campin, Heather Regan, and Claude Talandier.

Beneath the surface

For their study, the researchers assembled data on Arctic ocean activity that were made available by the Woods Hole Oceanographic Institution. The data were collected between 2003 and 2018, from sensors measuring the velocity of the water at different depths throughout the water column.

The team averaged the data into a time series representing a typical year of the Arctic Ocean's velocities with depth. From these observations, a clear seasonal trend emerged: during the summer months, with very little ice cover, they saw high velocities and more eddy activity at all depths of the ocean. In the winter, as ice grew and thickened, shallow waters ground to a halt and eddies disappeared, whereas deeper waters continued to show high-velocity activity.

"In most of the ocean, these eddies extend all the way to the surface," Marshall says. "But in the Arctic winter, we find that eddies are kind of living beneath the surface, like submarines hanging out at depth, and they don't get all the way up to the surface."

To see what might be causing this curious seasonal change in eddy activity, the researchers carried out a "baroclinic instability analysis." This model uses a set of equations describing the physics of the ocean, and determines how instabilities, such as weather systems in the atmosphere and eddies in the ocean, evolve under given conditions.
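One classical ingredient of such an analysis is the Eady growth rate, sigma = 0.31 f (dU/dz) / N, which says that stronger stratification (a larger buoyancy frequency N) slows eddy growth. The sketch below evaluates that textbook formula for illustrative high-latitude values; the study's actual model also includes the frictional effect of ice, which this simplification omits.

    # Eady growth rate, sigma = 0.31 * f * (dU/dz) / N: a textbook
    # estimate of baroclinic eddy growth. Values are illustrative
    # high-latitude numbers, not parameters from the paper, and ice
    # friction is omitted here.
    f = 1.4e-4     # Coriolis parameter, 1/s
    shear = 1e-3   # vertical shear dU/dz, 1/s

    for N in (2e-3, 5e-3, 1e-2):   # buoyancy frequency, 1/s
        sigma = 0.31 * f * shear / N
        print(f"N = {N:.0e} 1/s -> eddy e-folding time "
              f"{1 / sigma / 86400:.1f} days")
    # Stronger stratification (larger N) -> slower-growing eddies.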

An icy rub

The researchers plugged various conditions into the model, and for each condition they introduced small perturbations similar to ripples from surface winds or a passing boat, at various ocean depths. They then ran the model forward to see whether the perturbations would evolve into larger, faster eddies.

The researchers found that when they plugged in both the frictional effect of sea ice and the effect of stratification, as in the varying density layers of the Arctic waters, the model produced water velocities that matched what the researchers initially saw in actual observations. That is, they saw that without friction from ice, eddies formed freely at all ocean depths. With increasing friction and ice thickness, waters slowed and eddies disappeared in the ocean's first 50 meters. Below this boundary, where the water's density, i.e. its stratification, changes dramatically, eddies continued to swirl.

When they plugged in other initial conditions, such as a stratification that was less representative of the real Arctic ocean, the model's results were a weaker match with observations.

"We're the first to put forward a simple explanation for what we're seeing, which is that subsurface eddies remain vigorous all year round, and surface eddies, as soon as ice is around, get rubbed out because of frictional effects," Marshall explains.

Now that they have confirmed that ice friction and stratification have an effect on Arctic eddies, the researchers speculate that this relationship will have a large impact on shaping the Arctic in the next few decades. There have been other studies showing that summertime Arctic ice, already receding faster year by year, will completely disappear by the year 2050. With less ice, waters will be free to swirl up into eddies, at the surface and at depth. Increased eddy activity in the summer could bring in heat from other parts of the world, further warming the Arctic.

At the same time, the wintertime Arctic will be ice covered for the foreseeable future, notes Meneghello. Whether a warming Arctic will result in more ocean turbulence throughout the year or in a stronger variability over the seasons will depend on sea ice's strength.

Regardless, "if we move into a world where there is no ice at all in the summer and weaker ice during winter, the eddy activity will increase," Meneghello says. "That has important implications for things moving around in the water, like tracers and nutrients and heat, and feedback on the ice itself."

Credit: 
Massachusetts Institute of Technology

Supporting renewable electricity: EU member states should coordinate reform efforts

The European Union recently adopted more ambitious climate goals for 2030 - their implementation is now the focus of debate. What do the Member States need to consider? A new study shows how important it is that governments coordinate policy reforms to support renewable electricity. Otherwise, many investors are likely to shift their focus to technologies that will continue to be subsidized or to countries where subsidies are still available. This outcome would increase the overall costs of expanding renewable electricity generation in Europe.

Many European countries have phased out fixed remuneration feed-in tariffs in recent years, replacing these with auction schemes that award supply contracts to the lowest bidder. With the cost of renewable electricity falling significantly in recent years, political pressure has been building to abolish fixed-price tariffs altogether and to push renewables onto the free market. This issue has been the subject of debate among scientists. But how would different models of support influence the decision-making of investors? IASS researchers Marc Melliger and Johan Lilliestam have investigated this question.

Large investors respond more flexibly to political reforms

Investors' preferences are clear: if they were free to choose, most would invest in their home country, in photovoltaic or onshore wind projects with the lowest possible price risks. But many would rather invest in a different technology or abroad if doing so enabled them to tap into fixed-price support schemes. Larger investors are more willing and able to shift their activities to new countries should they identify an attractive, low-risk market situation.

"In other words: larger projects would relocate. These shifts could skew the European energy mix in a way that fosters dependency on a single, less mature technology or a specific generation region. For example, photovoltaics first became competitive in the sunnier countries of southern Europe. If these countries were to phase out their support schemes, investors would favour photovoltaic plants in northern European countries that still provide subsidies. This would increase the overall cost of the European energy transition," explains lead author Marc Melliger. Under these circumstances, it is vital that reforms are coordinated across Europe.

Strengthening coordination can keep investment "on track"

While policy coordination across the European Union has increased in recent years, countries retain a high degree of freedom in policy design and implementation. "Increased coordination between countries would add complexity and raise the required policy effort, but it could also help keep investments on track," points out co-author Johan Lilliestam. Marc Melliger adds: "Policy changes seeking to expose renewables to the free market aim to reduce costs. But if these reforms are not coordinated, there is a risk that costs will ultimately be higher."

Credit: 
Research Institute for Sustainability (RIFS) – Helmholtz Centre Potsdam

How long do doctor visits last? Electronic health records provide new data on time with patients

December 15, 2020 - How much time do primary care physicians actually spend one-on-one with patients? Analysis of timestamp data from electronic health records (EHRs) provides useful insights on exam length and other factors related to doctors' use of time, reports a study in the January issue of Medical Care. The journal is published in the Lippincott portfolio by Wolters Kluwer.

"By using timestamps recorded when information is accessed or entered, EHR data allow for potentially more objective and reliable measurement of how much time physicians spend with their patients," according to the new research by Hannah T. Neprash, PhD, of University of Minnesota School of Public Health and colleagues. That may help to make appointment scheduling and other processes more efficient, optimizing use of doctors' time.

More precise estimates of primary care visit times

Using a national source of EHR data for primary care practices, the researchers analyzed exam lengths for more than 21 million doctor visits in 2017. The study focused on exam lengths and discrepancies between scheduled and actual visit times.

Based on EHR timestamps, the mean exam time was 18 minutes, with a median of 15 minutes. "The mean exam lasted 1.2 minutes longer than scheduled, while the median exam ran 1 minute short of its scheduled duration," Dr. Neprash and coauthors write. The longer the scheduled visit, the longer the exam time.

"However, shorter scheduled appointments tended to run over while longer appointments often ended early," the researchers add. Scheduled 10-minute visits ran over by an average of 5 minutes; in contrast, scheduled 30-minute visits averaged less than 24 minutes.

More than two-thirds of visits deviated from the schedule by 5 minutes or more. About 38 percent of scheduled 10-minute visits ran at least 5 minutes over, while 60 percent of scheduled 30-minute visits lasted less than 25 minutes.

The findings suggest "scheduling inefficiencies in both directions," according to the authors. "Primary care offices' overuse of brief appointment slots may lead to appointment overrun, increasing wait time for patients and overburdening providers." In contrast, "longer appointments are critical for clinically complex patients, but misallocation of these extended visits represents potentially inefficient use of clinical capacity."

The time doctors spend with patients has a major impact on care. Average visit times seem to have increased over the years - yet physicians may still feel pressed to do more in the available time, including documentation, patient monitoring, and prevention/screening steps.

Estimates of medical visit times have been largely based on national surveys, which rely on information reported by office-based practices. For several reasons, these estimates may not accurately reflect the actual time doctors spend with patients in the examination room.

Routine data collected by EHRs provide a new way to measure length of physician visits, Dr. Neprash and colleagues write. Their method excluded visits where EHR data didn't seem to be recorded in real time and accounted for overlapping visits due to "double-booking."
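A minimal sketch of the timestamp arithmetic, assuming a table with visit start/end times and scheduled durations (the column names are hypothetical, not those of the study's data source):

    import pandas as pd

    # Hypothetical EHR extract; column names are illustrative.
    visits = pd.DataFrame({
        "exam_start": pd.to_datetime(["2017-03-01 09:00",
                                      "2017-03-01 09:20"]),
        "exam_end":   pd.to_datetime(["2017-03-01 09:15",
                                      "2017-03-01 09:48"]),
        "scheduled_minutes": [10, 30],
    })

    # Exam length from timestamps, and deviation from the schedule.
    visits["exam_minutes"] = ((visits["exam_end"] - visits["exam_start"])
                              .dt.total_seconds() / 60)
    visits["deviation_minutes"] = (visits["exam_minutes"]
                                   - visits["scheduled_minutes"])
    print(visits[["exam_minutes", "scheduled_minutes",
                  "deviation_minutes"]])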

Health systems could use EHR data to track discrepancies between schedules and actual visit lengths, enabling more efficient scheduling for patients with different needs. While acknowledging some limitations and challenges of this approach, the researchers believe their findings "support the development of a scalable approach to measure exam length using EHR data."

Credit: 
Wolters Kluwer Health

AI model shows promise to generate faster, more accurate weather forecasts

video: On the left is the new paper's "Deep Learning Weather Prediction" forecast. The middle is the actual weather for the 2017-18 year, and at right is the average weather for that day.

Image: 
Weyn et al./ Journal of Advances in Modeling Earth Systems

Today's weather forecasts come from some of the most powerful computers on Earth. The huge machines churn through millions of calculations to solve equations to predict temperature, wind, rainfall and other weather events. A forecast's combined need for speed and accuracy taxes even the most modern computers.

The future could take a radically different approach. A collaboration between the University of Washington and Microsoft Research shows how artificial intelligence can analyze past weather patterns to predict future events, much more efficiently and potentially someday more accurately than today's technology.

The newly developed global weather model bases its predictions on the past 40 years of weather data, rather than on detailed physics calculations. The simple, data-based A.I. model can simulate a year's weather around the globe much more quickly and almost as well as traditional weather models, by taking similar repeated steps from one forecast to the next, according to a paper published this summer in the Journal of Advances in Modeling Earth Systems.

"Machine learning is essentially doing a glorified version of pattern recognition," said lead author Jonathan Weyn, who did the research as part of his UW doctorate in atmospheric sciences. "It sees a typical pattern, recognizes how it usually evolves and decides what to do based on the examples it has seen in the past 40 years of data."

Although the new model is, unsurprisingly, less accurate than today's top traditional forecasting models, the current A.I. design uses about 7,000 times less computing power to create forecasts for the same number of points on the globe. Less computational work means faster results.

That speedup would allow forecasting centers to quickly run many models with slightly different starting conditions, a technique called "ensemble forecasting" that lets weather predictions cover the range of possible outcomes for a weather event - for instance, where a hurricane might strike.
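In outline, ensemble forecasting reruns the same step-ahead model from many slightly perturbed initial states and reads forecast uncertainty off the spread. The toy "model" below is a stand-in for the trained network; this is a sketch of the idea, not the paper's architecture.

    import numpy as np

    rng = np.random.default_rng(42)

    def step(state):
        """Toy stand-in for the trained 12-hour step-ahead model."""
        return 0.95 * state + 0.1 * np.sin(state)

    n_members, n_steps = 20, 60        # 20 members x 60 steps = 30 days
    analysis = rng.normal(size=128)    # best estimate of today's state

    # Slightly perturb the initial condition for each member, then step
    # every member forward with the same model.
    ensemble = analysis + 0.01 * rng.normal(size=(n_members, analysis.size))
    for _ in range(n_steps):
        ensemble = step(ensemble)

    # The ensemble mean is the forecast; the spread measures uncertainty.
    print("forecast mean:", ensemble.mean(axis=0)[:3])
    print("spread:", ensemble.std(axis=0)[:3])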

"There's so much more efficiency in this approach; that's what's so important about it," said author Dale Durran, a UW professor of atmospheric sciences. "The promise is that it could allow us to deal with predictability issues by having a model that's fast enough to run very large ensembles."

Co-author Rich Caruana at Microsoft Research had initially approached the UW group to propose a project using artificial intelligence to make weather predictions based on historical data without relying on physical laws. Weyn was taking a UW computer science course in machine learning and decided to tackle the project.

"After training on past weather data, the A.I. algorithm is capable of coming up with relationships between different variables that physics equations just can't do," Weyn said. "We can afford to use a lot fewer variables and therefore make a model that's much faster."

To merge successful A.I. techniques with weather forecasting, the team mapped six faces of a cube onto planet Earth, then flattened out the cube's six faces, as in an architectural paper model. As one way to improve the forecast's accuracy, the authors treated the polar faces differently because of their unique role in the weather.

The authors then tested their model by predicting the global height of the 500-hectopascal pressure surface, a standard variable in weather forecasting, every 12 hours for a full year. A recent paper, which included Weyn as a co-author, introduced WeatherBench as a benchmark test for data-driven weather forecasts. On that test, developed for three-day forecasts, the new model is one of the top performers.

The data-driven model would need more detail before it could begin to compete with existing operational forecasts, the authors say, but the idea shows promise as an alternative approach to generating weather forecasts, especially with a growing amount of previous forecasts and weather observations.

Credit: 
University of Washington

Scientists discover a new complex europium hydride

image: The novel strongly correlated europium superhydride

Image: 
Dmitrii V. Semenok et al/The Journal of Physical Chemistry Letters

A team of researchers from Russia, the United States, and China led by Skoltech Professor Artem R. Oganov has discovered an unexpected very complex europium hydride, Eu8H46. The paper detailing the discovery has been published in The Journal of Physical Chemistry Letters.

Superhydrides of rare-earth metals are interesting compounds that form under pressure: some exhibit high-temperature superconductivity that scientists have been chasing for over 100 years, and some possess magnetic properties. Although devoid of superconductivity, europium hydrides are very interesting in view of chemical anomalies that make europium different from other rare-earth atoms.

Armed with the efficient and reliable USPEX crystal structure prediction tool developed by Oganov and his students, the team predicted the structure of the remarkably complex compound, Eu8H46, which helped understand and explain experimental data.

"I am pleasantly surprised that USPEX has easily predicted a highly complex structure of 54 atoms, which is quite a lot. Curiously enough, our colleagues obtained this hydride in an experiment earlier but got the structure and composition wrong, assuming it was EuH5. Now we know that the compound is much trickier," Oganov comments.

"Such unusual compounds can be predicted in theory and proved by experiment, but there is no simple rule for identifying probable chemical compositions of stable compounds without performing arduous calculations," says Dmitrii Semenok, the first author of the paper and a PhD student at Skoltech.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Bermudagrass versus the armyworm

image: Fall Armyworm feeding on bermudagrass clippings.

Image: 
Gurjit Singh

Tifton, Georgia: A study out of the University of Georgia sought to determine the level of host plant resistance that several promising experimental bermudagrass genotypes can provide against damage caused by the fall armyworm.

Researchers Gurjit Singh, Shimat Joseph, and Brian Schwartz evaluated 14 different varieties of new bermudagrasses to determine their comparative levels of host resistance in the laboratory, and they published their findings in the article "Screening Newly Developed Bermudagrasses for Host Plant Resistance against Fall Armyworm (Lepidoptera: Noctuidae)" in HortScience.

The fall armyworm (Spodoptera frugiperda) is particularly destructive to warm-season turfgrass species, including bermudagrass, a widely popular turfgrass predominantly used on golf courses, athletic grounds, and ornamental landscapes across the country and throughout the world. Spodoptera frugiperda infestation is often sporadic; however, when it does occur, damage can be severe.

Schwartz notes, "Searching for genetic resistance to the fall armyworm in bermudagrass has been ongoing for over a half century in Tifton. It will certainly take a concentrated and collaborative effort for many more years if we are to make an impact for turf managers in the future."

In Georgia alone, the turfgrass industry is worth $7.8 billion. From July to late November, landscape maintenance companies and homeowners often apply environmentally unsound insecticides to protect residential and public lawns in urban and suburban areas. Golf courses and sod farms typically use an abundance of insecticides such as bifenthrin to maintain vast stretches of turfgrass for fall armyworm protection.

Host plant resistance against Spodoptera frugiperda could be a valuable tool for reducing or preventing the use of such insecticides.

Singh adds, "The bermudagrass fall armyworm is a fun system to work on but requires careful handling of early 1st and 2nd instar larvae while feeding and taking measurements for larval survival and development. Extra attention was paid to avoiding larval mortality due to manual handling, which ultimately helped in getting a better picture of the results."

The early larval stages of the fall armyworm generally go undetected because they remain hidden within the turfgrass canopy during the daytime until the larvae reach the fourth or fifth instar. The young larvae feed on the grass blades, whereas the late instar larvae consume both the stems and the grass blades. Severely affected turfgrass appears brown because most of the grass blades are consumed. Compared with young instars, late instar larvae are more tolerant to insecticides.

Turfgrass breeding programs have always emphasized the improvement of aesthetic characteristics and tolerance to abiotic factors, such as drought and foot traffic. Because insecticide resistance and nontarget effects of insecticide applications pose serious concerns to the turfgrass industry, alternative control options have recently been emphasized.

As there are no known Spodoptera frugiperda-resistant bermudagrass cultivars available to the turfgrass industry, the researchers tested 14 promising experimental bermudagrass genotypes for resistance to damage caused by the fall armyworm and compared their performance to that of the emerging standard bermudagrass 'TifTuf'. These experimental genotypes are considered "elite" because of their superior turfgrass quality, drought tolerance, shade persistence, rapid growth, and resistance to foot traffic during multiple years of field testing.

For the study, all the turfgrass genotypes were maintained in a greenhouse at the University of Georgia, Griffin Campus. The bermudagrass cultivar 'TifTuf' was used as a susceptibility control, and 'Zeon' bermudagrass was used as a control for its resistance to the fall armyworm.

The experiment was conducted by introducing fall armyworm larvae to each isolated bermudagrass genotype. Larval survival and development were recorded at 2-day intervals. To document larval development, larval length from the head to the tip of the abdomen, head capsule width, and larval weight were recorded.

To determine the performance of the bermudagrasses relative to the controls, survival, development, and overall susceptibility indices were developed. A criterion was established to compare and contrast the performance of the bermudagrasses with the commercial standard, 'TifTuf'. Fulfillment of the criterion was "high" if the genotype proved more resistant than the commercial standard, "comparable" if it proved similar, and "low" if it failed by comparison.
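The article does not spell out the formulas behind these indices, so the sketch below only captures the classification logic: each genotype's susceptibility index is compared against that of 'TifTuf', with a tolerance band deciding what counts as "comparable". The index values and the 10% band are hypothetical.

```python
# Hypothetical sketch of the resistance rating described above. The index
# values and the 10% tolerance band are invented for illustration; the paper
# defines its own survival, development, and susceptibility indices.

# Lower susceptibility index = more resistant (assumption for this sketch).
TIFTUF_INDEX = 1.00      # commercial standard, normalized to 1.0
TOLERANCE = 0.10         # band within which genotypes count as comparable

genotypes = {            # genotype -> hypothetical susceptibility index
    "Experimental-A": 0.78,
    "Experimental-B": 0.97,
    "Experimental-C": 1.24,
}

def rate(index, standard=TIFTUF_INDEX, tol=TOLERANCE):
    """Rate resistance relative to the standard: high / comparable / low."""
    if index < standard * (1 - tol):
        return "high"        # clearly more resistant than 'TifTuf'
    if index > standard * (1 + tol):
        return "low"         # clearly more susceptible than 'TifTuf'
    return "comparable"

for name, idx in genotypes.items():
    print(f"{name}: index {idx:.2f} -> {rate(idx)} resistance vs 'TifTuf'")
```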

The researchers identified a few of the new bermudagrasses that were comparable to the industry standard 'TifTuf' in terms of susceptibility to armyworms. Some genotypes were less susceptible to neonates, whereas other bermudagrasses showed reduced development rates, potentially exposing the larvae to severe weather and predation. The results also illustrated that resistance or susceptibility screening can be achieved by evaluating pupal parameters, which will be especially useful when breeders screen for armyworm resistance or susceptibility using several genotypes at one time.

Although this study identifies a few promising experimental genotypes, more studies are warranted to understand the consistency of their performance in field conditions because host plant resistance continues to be a desirable goal in the management of fall armyworms in turfgrass.

Joseph observes, "Sod producers, golf course superintendents, and homeowners spend a substantial amount of money to manage the fall armyworm." He goes on to state, "Toward developing fall armyworm-resistant bermudagrass, this study is an important first step."

Credit: 
American Society for Horticultural Science

Evapotranspiration in an arid environment

LAS VEGAS, NEVADA: Evapotranspiration is the process by which water is transferred from the land to the atmosphere by evaporation from the soil and other surfaces and by transpiration from plants. It is an important process in the water cycle, responsible for 15% of the atmosphere's water vapor; without that input, clouds could not form and precipitation would never fall.

Now, in an era when impending water scarcity has become a legitimate concern, irrigating to meet evapotranspiration while avoiding overirrigation with precious available water will take informed judgment.

Researchers Tamara Wynne and Dale Devitt of the University of Nevada, Las Vegas, conducted a study designed to quantify the water use of landscape plants while irrigating to meet evapotranspiration and avoid a drainage component.

Their findings are in the article "Evapotranspiration of Urban Landscape Trees and Turfgrass in an Arid Environment: Potential Trade-offs in the Landscape" published in HortScience.

As Wynne and Devitt point out, irrigation in arid urban landscapes can use significant amounts of water. Water conservation must be based on plant species and the ability to meet plant water requirements while minimizing overirrigation. However, actual evapotranspiration estimates for landscape trees and turfgrass in arid environments such as the Mojave Desert are poorly documented.

Continued population growth in the arid southwestern United States is placing greater demand on available water resources. Much of this growth is in sprawling metropolises where water is used outdoors to support urban landscapes. The overall driving force of evapotranspiration of landscape vegetation in arid environments is mostly contingent on the amount of water made available to plants.

One of the objectives of this study was to quantify the evapotranspiration of 10 landscape trees and two turfgrass species using a soil-water balance approach to determine tree-to-grass water use ratios and what this might mean in terms of water use trade-offs in the landscape.

The trees were grown in a plot with a high-density planting. A complete morphological assessment was made of each tree, and plant water status was monitored weekly. A water balance was maintained for each tree by quantifying irrigation input, drainage output, and change in soil water storage.

In addition, the researchers quantified transpiration using sap-flow sensors, allowing them to indirectly estimate evaporation. The research was conducted at the University of Nevada Las Vegas Center for Urban Water Conservation in North Las Vegas.
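In a soil-water balance, evapotranspiration is the residual once drainage and the change in soil water storage are accounted for: ET = irrigation - drainage - change in storage. With transpiration measured by sap flow, evaporation then follows by subtraction. A minimal sketch with made-up weekly volumes (the paper's actual measurements and units may differ):

```python
# Sketch of the soil-water balance described above, with invented weekly
# volumes (liters). The accounting is generic; the study's measurements
# and units may differ.

irrigation = 120.0       # water applied to the tree's root zone
drainage = 8.0           # water collected below the root zone
delta_storage = -5.0     # change in soil water storage (negative = drier)

# Evapotranspiration is the residual of the water balance:
et = irrigation - drainage - delta_storage   # 120 - 8 + 5 = 117 L

transpiration = 95.0     # measured with sap-flow sensors
evaporation = et - transpiration             # indirect estimate: 22 L

print(f"ET = {et:.0f} L, transpiration = {transpiration:.0f} L, "
      f"evaporation = {evaporation:.0f} L")
```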

Wynne and Devitt reported that, as trees grow, their total water requirements increase, but their water use per unit of basal canopy area may actually decrease. This means that the greatest water savings in urban landscapes with mature trees would come from removing the turfgrass, especially cool-season grasses, not the trees.
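The per-area argument is plain arithmetic: a maturing tree's total water use rises, but if its canopy area grows faster, its water use per square meter of canopy can fall below that of turfgrass. A toy comparison with invented numbers, consistent with the pattern Wynne describes below:

```python
# Toy arithmetic behind the trade-off above; all numbers are invented.

# (total annual water use in liters, basal canopy area in m^2)
plants = [
    ("young tree",  (2_400.0,  2.0)),   # small canopy, high use per area
    ("mature tree", (3_000.0, 12.0)),   # more total water, far larger canopy
    ("turfgrass",   (  900.0,  1.0)),   # per m^2 of cool-season turf
]

for name, (liters, area) in plants:
    print(f"{name}: {liters / area:,.0f} L per m^2 per year")
# Total use rises with tree size, yet use per unit canopy area drops
# below that of the turfgrass.
```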

The researchers submit their findings as a benchmark for further study, underscoring the importance of honing irrigation practices for landscape plants in arid environments and of demonstrating good stewardship of water resources.

Wynne notes, "Previous research by Dr. Devitt revealed young trees used more water than turfgrass, and it was interesting to see mature landscape trees become more water efficient over time and use less water per area than turfgrass."

Credit: 
American Society for Horticultural Science