Tech

New AI model learns from thousands of possibilities to suggest medical diagnoses & tests

AI has, for some time, been applied to diagnose medical conditions in specific fields. It can build on knowledge of particular disciplines to home in on details such as the shape of a tumor that suggests breast cancer or abnormal cells that indicate cervical cancer. While AI is very good when trained on years of human data in specific domains, it has not been able to deal with the huge number of diagnostic tests (about 5,000) and disorders (about 14,000) of modern clinical practice. Now, a new algorithm developed by engineers at the USC Viterbi School of Engineering can think and learn just like a doctor but with essentially infinite experience.

The work comes out of the lab of Gerald Loeb, a professor of biomedical engineering, pharmacy and neurology at the USC Viterbi School of Engineering and a trained physician. Loeb spent years applying AI algorithms to haptics and building robots to sense and identify materials and objects, and his previous research surpassed the state of the art. While the best AI for haptics could identify about 10 objects with about 80 percent accuracy, Loeb and Jeremy Fishel, his graduate student at the time, were able to identify 117 objects with 95 percent accuracy. When they extended the approach to 500 objects and 15 different possible tests, their algorithm got even faster and more accurate. That, Loeb says, is when he started thinking about adapting it for medical diagnosis.

Loeb's new form of AI suggests the best diagnostic strategies by mining electronic healthcare records in databases. This could lead to faster, better and more efficient diagnoses and treatments. The work was published in the Journal of Biomedical Informatics.

The algorithm works just like a doctor, "thinking about what to do next at each stage of the medical work-up," said Loeb, a pioneer in the field of neural prosthetics and one of the original developers of the cochlear implant, now widely used to treat hearing loss. "The difference is that it has the benefit of all the experiences in the collective healthcare records."

How it Works

Conventional AI has long used a specific algorithm, called Bayesian Inference, to suggest to physicians which diagnoses are most likely given whatever observations are currently available.

Loeb's algorithm reverses this process and instead seeks those tests that would most likely identify the correct illness or condition, no matter how obscure. He calls this Bayesian Exploration. The algorithm can also take into account the costs and delays associated with various diagnostic tests.
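
To make the idea concrete, here is a minimal sketch of an exploration-style test selector; the diseases, test characteristics and costs are invented for illustration, and this is not Loeb's published algorithm. It scores each candidate test by the expected reduction in diagnostic uncertainty per unit cost and suggests the highest-scoring one.

    import math

    # Toy illustration of an exploration-style test selector: pick the test
    # expected to reduce diagnostic uncertainty the most per unit cost.
    # All diseases, probabilities and costs below are invented.
    priors = {"flu": 0.6, "strep": 0.3, "mono": 0.1}   # current disease probabilities
    likelihoods = {                                    # P(test positive | disease)
        "rapid_strep": {"flu": 0.05, "strep": 0.90, "mono": 0.05},
        "monospot":    {"flu": 0.05, "strep": 0.10, "mono": 0.85},
    }
    costs = {"rapid_strep": 30.0, "monospot": 60.0}    # cost (or delay) of each test

    def entropy(p):
        return -sum(v * math.log2(v) for v in p.values() if v > 0)

    def posterior(prior, like, positive):
        unnorm = {d: prior[d] * (like[d] if positive else 1 - like[d]) for d in prior}
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}

    def expected_info_gain(prior, like):
        h0, gain = entropy(prior), 0.0
        for positive in (True, False):
            p_outcome = sum(prior[d] * (like[d] if positive else 1 - like[d]) for d in prior)
            gain += p_outcome * (h0 - entropy(posterior(prior, like, positive)))
        return gain

    best = max(likelihoods, key=lambda t: expected_info_gain(priors, likelihoods[t]) / costs[t])
    print("suggested next test:", best)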

"This hasn't been done before," he said. "This is new."

Loeb said his new algorithm has several benefits. First, it could help doctors make better diagnostic and testing decisions by suggesting several good options, including some a practitioner might not have otherwise considered. Next, the diagnostic software would automatically update and improve as myriad physicians input additional data into electronic medical records.

In addition, Loeb believes doctors would more easily generate complete and accurate medical records. Instead of having to hunt for codes or work their way through many drop-down menus, they could simply select a particular illness or diagnostic procedure suggested by the AI, which would automatically input the correct information into the electronic records.

Loeb emphasizes that physicians could, of course, override the AI and go with their own judgment.

"The algorithm isn't meant to make decisions for doctors or replace them," Loeb said. "It's meant to complement and support them."

Looking to the future

Loeb believes this algorithm could revolutionize medical diagnostics and testing. But the USC Viterbi and Keck School of Medicine professor acknowledges the huge financial and technological challenges of applying AI to electronic health records. He believes the United States' balkanized medical system and spotty use of electronic medical records make it an inhospitable environment for his technology to take root.

Loeb says his system would be much easier to introduce in other countries, for example in Scandinavia or South Korea, places with nationalized healthcare and widespread use of electronic medical records. However, its implementation would face major challenges even there, including the large expense and brainpower needed to develop and deploy the massive database and user interfaces required for widespread adoption and integration of his algorithm.

Instead, Loeb puts his faith in tech. He believes that Amazon, Microsoft and Google have the resources and know-how to disrupt American healthcare the way Uber and Lyft upended the taxicab industry.

"If the promise of success is great enough, then people are going to be motivated to do it," Loeb said. "And that's what we think this algorithm provides: the possibility, the promise of offering a solution to a huge problem that wastes a lot of resources, trillions of dollars' worth."

Credit: 
University of Southern California

How to prevent short-circuiting in next-gen lithium batteries

As researchers push the boundaries of battery design, seeking to pack ever greater amounts of power and energy into a given amount of space or weight, one of the more promising technologies being studied is lithium-ion batteries that use a solid electrolyte material between the two electrodes, rather than the typical liquid.

But such batteries have been plagued by a tendency for branch-like projections of metal called dendrites to form on one of the electrodes, eventually bridging the electrolyte and shorting out the battery cell. Now, researchers at MIT and elsewhere have found a way to prevent such dendrite formation, potentially unleashing the potential of this new type of high-powered battery.

The findings are described in the journal Nature Energy, in a paper by MIT graduate student Richard Park, professors Yet-Ming Chiang and Craig Carter, and seven others at MIT, Texas A&M University, Brown University, and Carnegie Mellon University.

Solid-state batteries, Chiang explains, have been a long-sought technology for two reasons: safety and energy density. But, he says, "the only way you can reach the energy densities that are interesting is if you use a metal electrode." And while it's possible to couple that metal electrode with a liquid electrolyte and still get good energy density, that does not provide the same safety advantage as a solid electrolyte does, he says.

Solid-state batteries only make sense with metal electrodes, he says, but attempts to develop such batteries have been hampered by the growth of dendrites, which eventually bridge the gap between the two electrode plates and short out the circuit, weakening or inactivating that cell in a battery.

It's been known that dendrites form more rapidly when the current flow is higher -- which is generally desirable in order to allow rapid charging. So far, the current densities that have been achieved in experimental solid-state batteries have fallen far short of what would be needed for a practical commercial rechargeable battery. But the promise is worth pursuing, Chiang says, because the amount of energy that can be stored in experimental versions of such cells is already nearly double that of conventional lithium-ion batteries.

The team solved the dendrite problem by adopting a compromise between solid and liquid states. They made a semisolid electrode, in contact with a solid electrolyte material. The semisolid electrode provided a kind of self-healing surface at the interface, rather than the brittle surface of a solid that could lead to tiny cracks that provide the initial seeds for dendrite formation.

The idea was inspired by experimental high-temperature batteries, in which one or both electrodes consist of molten metal. According to Park, the first author of the paper, the hundreds-of-degrees temperatures of molten-metal batteries would never be practical for a portable device, but the work did demonstrate that a liquid interface can enable high current densities with no dendrite formation. "The motivation here was to develop electrodes that are based on carefully selected alloys in order to introduce a liquid phase that can serve as a self-healing component of the metal electrode," Park says.

The material is more solid than liquid, he explains, but resembles the amalgam dentists use to fill a cavity -- solid metal, but still able to flow and be shaped. At the ordinary temperatures that the battery operates in, "it stays in a regime where you have both a solid phase and a liquid phase," in this case made of a mixture of sodium and potassium. The team demonstrated that it was possible to run the system at 20 times greater current than using solid lithium, without forming any dendrites, Chiang says. The next step was to replicate that performance with an actual lithium-containing electrode.

In a second version of their solid battery, the team introduced a very thin layer of liquid sodium-potassium alloy between a solid lithium electrode and a solid electrolyte. They showed that this approach could also overcome the dendrite problem, providing an alternative approach for further research.

The new approaches, Chiang says, could easily be adapted to many different versions of solid-state lithium batteries that are being investigated by researchers around the world. He says the team's next step will be to demonstrate this system's applicability to a variety of battery architectures. Co-author Viswanathan, professor of mechanical engineering at Carnegie Mellon University, says, "We think we can translate this approach to really any solid-state lithium-ion battery. We think it could be used immediately in cell development for a wide range of applications, from handheld devices to electric vehicles to electric aviation."

Credit: 
Massachusetts Institute of Technology

Patient wait times reduced thanks to new study by Dartmouth engineers

The first known study to explore optimal outpatient exam scheduling given the flexibility of inpatient exams has resulted in shorter wait times for magnetic resonance imaging (MRI) patients at Lahey Hospital & Medical Center in Burlington, Mass. A team of researchers from Dartmouth Engineering and Philips worked to identify sources of delays for MRI procedures at Lahey Hospital in order to optimize scheduling and reduce overall costs for the hospital by 23 percent.

The Dartmouth-led study, "Stochastic programming for outpatient scheduling with flexible inpatient exam accommodation," was sponsored by Philips and recently published by Health Care Management Science in collaboration with Lahey Hospital.

"Excellence in service and positive patient experiences are a primary focus for the hospital. We continuously monitor various aspects of patient experiences and one key indicator is patient wait times," said Christoph Wald, chair of the department of radiology at Lahey Hospital and professor of radiology at Tufts University Medical School. "With a goal of wanting to improve patient wait times, we worked with data science researchers at Philips and Dartmouth to help identify levers for improvement that might be achieved without impeding access."

Prior to working with the researchers, on an average weekday, outpatients at Lahey Hospital waited about 54 minutes from their arrival until the beginning of their exam. Researchers determined that one of the reasons for the routine delays was a complex scheduling system, which must cater to emergency room patients, inpatients, and outpatients; while exams for inpatients are usually flexible and can be delayed if necessary, other appointments cannot.

"Mathematical models and algorithms are crucial to improve the efficiency of healthcare systems, especially in the current crisis we are going through. By analyzing the patient data, we found that delays were prominent because the schedule was not optimal," said first author Yifei Sun, a Dartmouth Engineering PhD candidate. "This research uses optimization and simulation tools to help the MRI centers of Lahey Hospital better plan their schedule to reduce overall cost, which includes patient waiting time."

First, the researchers reviewed data to analyze and identify sources of delays. They then worked on developing a mathematical model to optimize the length of each exam slot and the placement of inpatient exams within the overall schedule. Finally, the researchers developed an algorithm to minimize the wait time and cost associated with exam delays for outpatients, the idle time of equipment, employee overtime, and cancelled inpatient exams.
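
The press release does not reproduce the model itself, but the cost trade-off it describes can be sketched in a few lines. The toy simulation below is an illustration only, with invented slot times, duration statistics and cost weights rather than the authors' stochastic program: it estimates the expected cost of one candidate schedule by simulating random exam durations and adding up outpatient waiting, scanner idle time and overtime; an optimizer would then search over slot lengths and inpatient placements to minimize this number.

    import random

    # Toy evaluation of one MRI schedule: simulate random exam durations and
    # tally outpatient waiting, scanner idle time and staff overtime.
    # Slot times, duration statistics and cost weights are invented.
    SLOT_STARTS = [0, 45, 90, 135, 180]          # scheduled start times (minutes)
    MEAN_DURATION, SD_DURATION = 40, 12          # actual exam durations are random
    W_WAIT, W_IDLE, W_OVERTIME = 1.0, 0.5, 2.0   # relative cost weights
    DAY_END = 240                                # scheduled end of the session

    def simulate_day(rng):
        cost, scanner_free = 0.0, 0.0
        for start in SLOT_STARTS:
            begin = max(start, scanner_free)                 # patient may have to wait
            cost += W_WAIT * (begin - start)                 # outpatient waiting time
            cost += W_IDLE * max(0.0, start - scanner_free)  # scanner sat idle
            scanner_free = begin + max(5.0, rng.gauss(MEAN_DURATION, SD_DURATION))
        return cost + W_OVERTIME * max(0.0, scanner_free - DAY_END)

    rng = random.Random(0)
    expected_cost = sum(simulate_day(rng) for _ in range(10_000)) / 10_000
    print(f"expected daily cost: {expected_cost:.1f}")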

"This iterative improvement process did result in measurable improvements of patient wait times," said Wald. "The construction and use of a simulation model have been instrumental in educating the Lahey team about the benefits of dissecting workflow components to arrive at an optimized process outcome. We have extended this approach to identify bottlenecks in our interventional radiology workflow and to add additional capacity under the constraints of staffing schedules."

The researchers believe their solutions are broadly applicable, as the issue is common to many mid-sized hospitals throughout the country.

"We also provided suggestions for hospitals that don't have optimization tools or have different priorities, such as patient waiting times or idle machine times," said Sun, who worked on the paper with her advisor Vikrant Vaze, the Stata Family Career Development Associate Professor of Engineering at Dartmouth.

Credit: 
Thayer School of Engineering at Dartmouth

FSU researchers discover how 'cryptic species' respond differently to coral bleaching

image: The coral reef in Moorea before bleaching killed the larger corals in 2019.

Image: 
FSU Coastal and Marine Laboratory/Scott Burgess

Certain brightly colored coral species dotting the seafloor may appear indistinguishable to many divers and snorkelers, but Florida State University researchers have found that these genetically diverse marine invertebrates vary in their response to ocean warming, a finding that has implications for the long-term health of coral reefs.

The researchers used molecular genetics to differentiate among corals that look nearly identical and to understand which species best coped with thermal stress. Their research was published in the journal Ecology.

"Being able to recognize the differences among these coral species that cannot be identified in the field -- which are known as 'cryptic species' -- will help us understand new ways for how coral reefs maintain resilience in the face of disturbance," said Associate Professor of Biological Science Scott Burgess, the paper's lead author.

The researchers were studying the coral ecosystem at the island of Moorea in French Polynesia when a coral bleaching event struck in 2019.

Corals get their color from algae that live in their tissues and with which they have a symbiotic relationship. But when corals are stressed -- by high water temperature, for example -- algae leave the coral, which turns white, hence the term "bleaching." Bleached corals are not dead, but they are more vulnerable and more likely to die.

Most of the coral at Moorea belong to the genus Pocillopora. During the event, the researchers saw that about 72 percent of the coral colonies from this genus bleached, and up to 42 percent died afterward.

At first, it seemed that the largest colonies were more likely to bleach, but when the scientists examined tissue samples from the coral, they found that membership in a particular genetic lineage, not colony size, was most important in determining the fate of the corals.

"Because Pocillopora species look so similar, they cannot be reliably identified in the field, which, in the past, has forced researchers to study them as a single group," said Erika Johnston, a postdoctoral researcher in the Department of Biological Science and a co-author of the paper. "Molecular genetics allows us to reconstruct their evolutionary ancestry and are an essential step to species identification in this case."

About 86 percent of the Pocillopora corals that died belonged to a group that shares a set of DNA variations, which is known as a haplotype and reflects their common evolutionary ancestry.

"The good news is that not all of the corals died from bleaching, and many species survived," Burgess said. "The bad news is that the species that died is, as far as we are aware at the moment, endemic to that specific region. So on the one hand, we're worried about losing an endemic species, but on the other hand, our results show how co-occurring cryptic species can contribute to coral resilience."

It's an ecological analogy to having a diverse financial portfolio, where a variety of investments decreases the likelihood of a complete loss.

"Having multiple species that perform a similar function for the reef ecosystem but differ in how they respond to disturbances should increase the chance that Pocillopora corals continue to perform their role in the system, even though the exact species may be shuffled around," Burgess said.

Maintaining healthy ecological portfolios may be a better management option than attempting to restore a specific species.

"If we maintain the right type of diversity, nature in a way can pick the winners and losers," Burgess said. "However, the worry for us scientists is that unless the leaders of governments and corporations take action to reduce CO2 emissions, ecological portfolios that can maintain coral reef resilience will be increasingly eroded under current and ongoing climate change. This is concerning because coral reef ecosystems provide economic, health, cultural and ecological goods and services that humans rely on."

Future research will look into the composition of the algae that live inside the coral, the depth distributions of each cryptic coral species and the evolutionary relationships among the cryptic species.

Credit: 
Florida State University

Oncotarget: Folinic acid in colorectal cancer: Esquire or fellow knight?

image: No difference was found in univariate OS analysis for folinic salt use (37.7 vs. 33.4 months; p = 0.151), possibly due to the therapy switch-over of a large number of patients who underwent NaLF-based therapy in second line after first-line CaLF-based therapy, partially disguising the overall survival difference between the two groups.

Image: 
Correspondence to - Francesco Jacopo Romano - francesco_jacopo@libero.it

Oncotarget published "Folinic acid in colorectal cancer: esquire or fellow knight? Real-world results from a mono institutional, retrospective study" which reported that the stock of therapeutic weapons available in metastatic colorectal cancer has been progressively grown over the years, with improving both survival and patients' clinical outcome: notwithstanding advances in the knowledge of mCRC biology, as well as advances in treatment, fluoropyrimidine antimetabolite drugs have been for 30 years the mainstay of chemotherapy protocols for this malignancy.

5-Fluorouracil seems to act differently depending on the administration method: elastomer-mediated continuous infusion better inhibits Thymidylate Synthase (TS), an enzyme playing a pivotal role in the DNA synthesis pathway.

TS overexpression is an acknowledged predictor of poor prognosis.

The simultaneous combination of 5FU and folinate salt synergistically strengthens fluorouracil's cytotoxic effect.

In their experience, levofolinate and 5FU given together in continuous infusion prolong the progression-free survival of patients suffering from mCRC, moreover decreasing the risk of death and showing a clear clinical benefit for patients, irrespective of RAS mutational status, primary tumor side and metastases surgery.

Dr. Francesco Jacopo Romano from the Oncology Unit of the Antonio Cardarelli Hospital in Naples, Italy, said, "Notwithstanding advances in the knowledge of metastatic colorectal cancer (mCRC) biology, as well as advances in treatment, fluoropyrimidine antimetabolite drugs are currently the mainstay of chemotherapy protocols for this malignancy."

5FU seems to act differently depending on the administration method: a quick bolus mainly increases the incorporation of 5FU into RNA, while also yielding more severe hematological and gastrointestinal toxicity than continuous infusion, whereas elastomer-mediated continuous infusion provides prolonged inhibition of Thymidylate Synthase.

Modulation of 5FU activity has been studied for several years, with the aim to enhance antineoplastic effect by combining bolus and continuous infusion administration to maximize 5FU antitumor efficacy.

Therefore, therapeutic strategies combining bolus and continuous infusion have been devised to better exploit both the genotoxic effect of fluoropyrimidine incorporation and TS inhibition, extending the infusion time to 48 hours and adding folinic acid.

Usually, the 5FU bolus is administered in the middle of a 2-hour folinic acid infusion.

This retrospective, single-center observational study is the first with the aim of evaluating differences between these administration modalities: in particular, the authors wondered if co-administration of 5FU and folinic acid in continuous infusion was as effective as the classic sequential administration, or even more effective in terms of progression free- and overall survival, especially considering the aforementioned preclinical data.

The Romano Research Team concluded in their Oncotarget Research Paper that the increased survival of patients undergoing NaLF-based therapy could be the consequence of a greater and more effective TS inhibition.

A plausible reason for the lack of significance of ALL RAS status in the multivariate OS analysis, compared with the univariate analysis, can be found in the extremely wide expression of TS in colon cancer cells.

Indeed, on one hand, ALL RAS wild-type cells show an oncogene addiction toward EGFR pathways; on the other hand, TS is essential for non-oncogenic pathways of cancer cells regardless of EGFR or RAS activation.

Of note, there were coincidentally more RAS-mutated patients in the NaLF group than in the CaLF group.

Finally, they highlight the therapy switch-over of a large number of patients who underwent NaLF-based therapy in the second line after first-line CaLF-based therapy, partially disguising the overall survival difference between the two groups.

Credit: 
Impact Journals LLC

Oncotarget: MicroRNA-4287 is controlling epithelial-to-mesenchymal transition in prostate cancer

image: miR-4287 overexpression regulates EMT in prostate cancer cell lines.

Image: 
Correspondence to - Sharanjot Saini - ssaini@augusta.edu

The cover for issue 51 of Oncotarget features Figure 5, "miR-4287 overexpression regulates EMT in prostate cancer cell lines," published in "MicroRNA-4287 is a novel tumor suppressor microRNA controlling epithelial-to-mesenchymal transition in prostate cancer" by Bhagirath et al., in which the authors analyzed the role of miR-4287 in PCa using clinical tissues and cell lines.

Receiver operating characteristic (ROC) curve analysis showed that miR-4287 distinguishes prostate cancer from normal tissue with a specificity of 88.24% and an area under the curve (AUC) of 0.66. Further, these authors found that miR-4287 levels correlate inversely with patients' serum prostate-specific antigen levels.
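
For readers unfamiliar with these metrics, the sketch below shows how such a specificity and AUC are typically computed with scikit-learn; the expression values are made up for illustration and are not the study's data (the miRNA is assumed to be lower in tumors, as reported above).

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Made-up miRNA expression values for tumor (1) and normal (0) samples;
    # the miRNA is assumed to be expressed at lower levels in tumors.
    rng = np.random.default_rng(0)
    labels = np.r_[np.ones(40), np.zeros(34)]
    scores = np.r_[rng.normal(-1.0, 1.2, 40), rng.normal(0.0, 1.0, 34)]

    # Negate the expression so that higher values indicate "tumor".
    auc = roc_auc_score(labels, -scores)
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    specificity = 1 - fpr                    # specificity at each candidate threshold
    print(f"AUC = {auc:.2f}; specificity at the first threshold "
          f"reaching 50% sensitivity = {specificity[tpr >= 0.5][0]:.2f}")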

Ectopic over-expression of miR-4287 in PCa cell lines showed that miR-4287 plays a tumor suppressor role.

miR-4287 led to an increase in the G2/M phase of the cell cycle in PCa cell lines.

Further, ectopic miR-4287 inhibited PCa epithelial-to-mesenchymal transition by directly repressing SLUG and stem cell marker CD44. Since miR-4287 specifically targets metastasis pathway mediators, miR-4287 has potential diagnostic and therapeutic significance in preventing advanced, metastatic disease.

Dr. Sharanjot Saini from Augusta University said, "Prostate cancer (PCa) is the second leading cause of cancer-related deaths among men in the United States."

Prostate Specific Antigen, a glycoprotein that is synthesized and released by normal and tumor cells, is often used for early detection and diagnosis of prostate cancer.

Previous research from the researchers' laboratory has shown an important tumor suppressor role for several miRNAs, including miR-3622a, miR-3622b, miR-383 and miR-4288, whereby these miRNAs are down-regulated in prostate tumors, mediate an anti-proliferative effect on tumor cells and are involved in inhibiting the metastasis and progression of the disease.

In the present study, they examined the function of a novel miRNA, miR-4287, which falls on chromosome 8p21.1 within an intron of the gene encoding scavenger receptor class A member 5, in prostate cancer cell lines.

They observed a similar tumor suppressor role of miR-4287 in prostate cancer as it was found to be downregulated in PCa clinical samples.

The chr8p region has traditionally been associated with PCa initiation, though a significantly higher deletion frequency has been reported in advanced PCa, suggesting a role in PCa progression as well.

The Saini Research Team concluded in their Oncotarget Research Paper, "in the present study we define a tumor-suppressor role of a novel miRNA- miR-4287- in prostate cancer via its regulation of prostate cancer EMT and stemness. This role of miR-4287 is in line with our earlier defined tumor suppressive role of other miRNAs located within this frequently deleted region on chromosome 8p [14–17, 23], implicating an important mechanistic role of chr8p in driving prostate cancer progression, metastasis and tumor recurrence. Given that these miRNAs play essential roles in PCa progression and are lost in advanced prostate cancer, it will be important to devise strategies to re-instate their expression in tumors via therapeutic interventions to successfully treat aggressive prostate cancer."

Credit: 
Impact Journals LLC

Voltage from the parquet

image: Scanning electron microscopy (SEM) images of balsa wood (left) and delignified wood illustrate the structural changes.

Image: 
ACS Nano / Empa

Ingo Burgert and his team at Empa and ETH Zurich have proven it time and again: Wood is so much more than "just" a building material. Their research aims at extending the existing characteristics of wood in such a way that it is suitable for completely new ranges of application. For instance, they have already developed high-strength, water-repellent and magnetizable wood. Now, together with the Empa research group of Francis Schwarze and Javier Ribera, the team has developed a simple, environmentally friendly process for generating electricity from a type of wood sponge, as they reported last week in the journal Science Advances.

Voltage through deformation

If you want to generate electricity from wood, the so-called piezoelectric effect comes into play. Piezoelectricity means that an electric voltage is created by the elastic deformation of solids. This phenomenon is mainly exploited in metrology, which uses sensors that generate a charge signal, say, when a mechanical load is applied. However, such sensors often use materials that are unsuitable for biomedical applications, such as lead zirconate titanate (PZT), which cannot be used on human skin due to the lead it contains. The lead also makes the ecological disposal of PZT and related materials rather tricky. Being able to use the natural piezoelectric effect of wood thus offers a number of advantages. Taken a step further, the effect could also be used for sustainable energy production. But first of all, wood must be given the appropriate properties: without special treatment, wood is not flexible enough, and when subjected to mechanical stress it generates only a very low electrical voltage in the deformation process.
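
For orientation, the direct piezoelectric effect has a simple textbook form (a generic relation, not a result from the Empa study): a force F applied along a sample's sensitive axis generates a proportional charge Q, and the open-circuit voltage V follows from the sample's capacitance C,

    Q = d_{33} \, F, \qquad V = \frac{Q}{C} = \frac{d_{33} \, F}{C},

where d_{33} is the material's piezoelectric charge coefficient, F the applied force and C the electrode capacitance.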

From block to sponge

Jianguo Sun, a PhD student in Burgert's team, used a chemical process that is the basis for various "refinements" of wood the team has undertaken in recent years: delignification. Wood cell walls consist of three basic materials: lignin, hemicelluloses and cellulose. "Lignin is what a tree needs primarily in order to grow to great heights. This would not be possible without lignin as a stabilizing substance that connects the cells and prevents the rigid cellulose fibrils from buckling," explains Burgert. In order to transform wood into a material that can easily be deformed, lignin must at least partially be "extracted". This is achieved by placing wood in a mixture of hydrogen peroxide and acetic acid. The lignin is dissolved in this acid bath, leaving a framework of cellulose layers. "We take advantage of the hierarchical structure of wood without first dissolving it, as is the case in paper production, for example, and then having to reconnect the fibers", says Burgert. The resulting white wood sponge consists of superimposed thin layers of cellulose that can easily be squeezed together and then expand back into their original form - wood has become elastic.

Electricity from wooden floors

Burgert's team subjected a test cube with a side length of about 1.5 cm to about 600 load cycles. The material showed amazing stability. At each compression, the researchers measured a voltage of around 0.63 V - enough for an application as a sensor. In further experiments, the team tried to scale up their wooden nanogenerators. For example, they were able to show that 30 such wooden blocks, when loaded in parallel with the body weight of an adult, can light up a simple LCD display. It would therefore be conceivable to develop a wooden floor that is capable of converting the energy of people walking on it into electricity. The researchers also tested the material's suitability as a pressure sensor on human skin and showed that it could be used in biomedical applications.

Application in preparation

The work described in the Empa-ETH team's latest publication, however, goes one step further: The aim was to modify the process in such a way that it no longer requires the use of aggressive chemicals. The researchers found a suitable candidate that could carry out the delignification as a biological process in nature: the fungus Ganoderma applanatum, which causes white rot in wood. "The fungus breaks down lignin and hemicellulose in the wood particularly gently," says Empa researcher Javier Ribera, explaining the environmentally friendly process. What's more, the process can be easily controlled in the lab.

There are still a few steps to be taken before the "piezo" wood can be used as a sensor or as an electricity-generating wooden floor. But the advantages of such a simple and at the same time renewable and biodegradable piezoelectric system are obvious - and are now being investigated by Burgert and his colleagues in a follow-up project. And in order to adapt the technology for industrial applications, the researchers are already in talks with potential cooperation partners.

Credit: 
Swiss Federal Laboratories for Materials Science and Technology (EMPA)

Women veterinarians earn $100K less than men annually

ITHACA, N.Y. - Women veterinarians make less than their male counterparts, new research from Cornell University's College of Veterinary Medicine has found - with an annual difference of around $100,000 among the top quarter of earners.

The disparity predominantly affects recent graduates and the top half of earners, according to the research, the first overarching study of the wage gap in the veterinary industry.

"Veterinarians can take many paths in their careers, all of which affect earning potential," said the paper's senior author, Dr. Clinton Neill, assistant professor in the Department of Population Medicine and Diagnostic Sciences.

"Similar to what's been found in the human medicine world, we found the wage gap was more prominent in the beginning of their careers but dissipates after about 25 years. This has large implications for lifetime wealth and earnings, as men will consequently have a larger sum of wealth at the end of their careers because of this."

Neill and his collaborators examined practice ownership income, experience and specialty certification.

The reasons for the earning inequality are challenging to identify. The researchers cite unconscious bias, size of practices, less external financing and societal expectations as potential factors.

The industrywide effects of this bias can be linked to some common misconceptions, Neill said. While they did find an ownership disparity, this didn't account for the wage gap as a whole.

Their analysis showed that type of ownership also plays a role. Partnerships, for example, are more beneficial for women's income earning potential than sole proprietorships, while any form of ownership benefits men's incomes. When it comes to the number of years worked, the study found that men move into higher income brackets at lower levels of experience than women.

While the paper aimed to lay the groundwork for more solution-oriented studies, the researchers suggested that measures such as industrywide income transparency could help close the gap.

Credit: 
Cornell University

When English and French mix in literature

Do children learning French as a second language see benefits from reading bilingual French-English children's books?

A study recently published in the journal Language and Literacy found that bilingual books, which are not often used in French immersion classrooms, are seen by students as an effective tool for second language learning.

To find out more on this topic, we spoke with the co-author of the paper, Joël Thibeault, Assistant Professor of French education at uOttawa's Faculty of Education.

What is the topic of your research?

"My research focuses on the educational value of bilingual children's books in the teaching of French as a second language. To highlight this value, I zeroed in on elementary students in French immersion and asked them whether they perceived the utility and inutility of this medium. In other words, I asked them how the interaction of French and English within the same book could help them learn how to read."

What was your research method?

"Ian A. Matheson, Professor at Queen's University and co-author on this paper, and I wanted the students taking part in the study to have the opportunity to interact with and be exposed to bilingual books before sharing their perceptions. This is why we asked them to read aloud passages from two different bilingual books before we interviewed them.

"In the first one, the same text appears in French and English. In the second one, passages in French are not identical to those in English. This meant the reader had to have some knowledge of both languages to completely understand the book's content.

"After each read-aloud session, we conducted one-on-one interviews that allowed us to describe how our participants perceived the utility of each book.

"Data collection took place in the spring of 2019. We interviewed French immersion students in Grades 3 and 4, in Saskatchewan, which is an interesting setting because it is Anglo-dominant. This means that school is often the only space where students are exposed to French."

What are your study's main conclusions?

"Our participants were able to identify more advantages than disadvantages when it comes to reading bilingual books. They notably pointed out that bilingual books could help them learn French vocabulary.

"Some students also noted that having French in the book could help them develop knowledge related to English vocabulary.

"Moreover, the students were able to suggest ways to integrate this bilingual literature in immersion classes. For example, some mentioned that it could be used for collaborative reading tasks, when two students have different proficiency levels in French and English. They felt this process could enable a co-construction of meaning where everyone could make use of their knowledge of the languages interacting in the book."

Could your results help change teaching practices?

"Yes, because in immersion, we often opt for monolingual teaching practices; French is the only language allowed. However, we increasingly acknowledge that second language learning relies on the languages that students already know. With that in mind, it could be quite useful to get learners to use their full linguistic repertoire when they learn a second language.

"This study is coherent with this perspective as it explores the positive perceptions that students have regarding bilingual children's books. Thus, it helps to highlight the utility of this novel teaching tool and recognizes the value of the diversified linguistic repertoire that students in immersion have."

Is there anything else you'd like to share?

"One might think that when reading a bilingual book, students would tend to only read the passages in the language they know best. However, our data does not support this idea. While they were reading, students equally focused on French and English. One-on-one interviews also confirmed students fully recognize the advantages related to having two languages in interaction within the same book.

"Of course, it is possible that participants did not want to admit to the researcher that they would only focus on the language they know best if they had read the book by themselves -- this is what we call in research 'social desirability.' However, for now, there is nothing in our study, as well as in those carried out by other researchers before us, that would allow us to state that young bilingual readers would tend to mainly read the passages in the language they know best.

"It would be interesting to conduct further research in different reading environments (in class, at the library, at home, etc.). This could add to the work we have begun by documenting with greater precision how students engage with bilingual books."

Credit: 
University of Ottawa

Migration routes of one of Britain's largest ducks revealed for the first time

image: ©Philip Croft/BTO

Image: 
©Philip Croft/BTO

New research, just published in the journal Ringing & Migration, has used state-of-the-art tracking technology to investigate how one of Britain's largest ducks, the Shelduck, interacts with offshore wind turbines during its migration across the North Sea.

Their findings reveal - for the first time - the length, speed and flight heights of this journey.

Offshore wind farms are a key part of many governments' strategies to reduce carbon emissions and mitigate climate change impacts. However, it is important to understand how they might affect wildlife.

The risk of colliding with wind turbines is a particular concern for migratory species travelling across the sea, and there is also a potential increased energetic cost if wind farms act as a barrier that migrating birds must fly around.

The majority of British and Irish Shelduck undergo a 'moult migration' to the Wadden Sea, which runs along the coasts of the Netherlands, Germany and Denmark. They make this journey every year in late summer, after they have finished breeding.

Once there, they replace their old and worn-out feathers and become flightless in the relative safety that the Wadden Sea offers, before returning to Britain when their moult is complete. However, in journeying to and from the Wadden Sea, Shelduck must cross the North Sea and navigate its growing number of wind farms en route.

Scientists from the British Trust for Ornithology (BTO) used state-of-the-art tags to track four Shelduck from the Alde-Ore Estuary Special Protection Area on the Suffolk coast to the Wadden Sea. Each bird took a separate route across the North Sea, and used previously unreported stopover sites in the Dutch Wadden Sea, before continuing on to moult sites in the Helgoland Bight off the coast of Germany. Incredibly, one bird travelled back and forth between the Dutch and German Wadden Seas four times, adding an extra 1,000 km to its migratory journey.

The reasons why remain a mystery.

Ros Green, Research Ecologist at BTO and lead author on the paper, said, "Having a working knowledge of species' migratory movements is an essential first step in understanding the risks that offshore wind farms may pose to populations of Shelduck and other species. Further, our tags provided data on Shelduck flight speeds and height, giving additional vital information on the magnitude of the risks posed by developments."

She added, "It is well known that British and Irish Shelduck populations move back and forth across the North Sea each year, but this is the first published data on the specific routes taken, how long the migration takes to complete, and how fast and high Shelduck fly."

The four Shelduck were fitted with solar powered GPS-GSM tags, allowing BTO scientists to follow their migratory movements in great detail and in almost real time, as the GPS data are downloaded over mobile phone networks.

Incredibly, although all four birds took very different routes across the North Sea, they all ended their migration in almost exactly the same place in the Dutch Wadden Sea. During the crossing, the birds flew at speeds of up to 55 knots, and up to 354 m above the sea's surface.

The movements recorded indicated apparent interactions with several wind farm sites, though most of these are currently only at the planning phase.

Only one data fix was recorded inside an operational wind farm, when a bird flew through the Egmond aan Zee wind farm.

This Shelduck was flying at a height of 85 m, which would place it at potential risk of collision with the wind farm's spinning turbine blades, which sweep an area between 25 and 139 m above sea level.

Indeed, the majority of the four Shelducks' flight occurred below 150 m above sea level, which would place them in the 'collision risk zone' of many of the offshore wind farms they may pass through.
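
As a simple illustration of how such a check works, the snippet below flags recorded flight heights that fall inside the rotor-swept band quoted above (25 to 139 m); apart from the 85 m fix mentioned earlier, the heights are hypothetical.

    # Flag GPS fixes whose altitude falls inside a wind farm's rotor-swept band.
    # The 25-139 m band and the 85 m fix come from the article; the other
    # heights are hypothetical examples.
    ROTOR_MIN_M, ROTOR_MAX_M = 25, 139
    flight_heights_m = [12, 85, 150, 60, 354, 110]

    at_risk = [h for h in flight_heights_m if ROTOR_MIN_M <= h <= ROTOR_MAX_M]
    print(f"{len(at_risk)} of {len(flight_heights_m)} fixes lie in the "
          f"{ROTOR_MIN_M}-{ROTOR_MAX_M} m collision risk zone: {at_risk}")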

The BTO team plans to extend the tracking project and collect more data to investigate whether Shelduck are actually at risk of collision, or whether the population can adapt to this essential renewable energy infrastructure.

"Further work", the research team add, "is also needed on tagging approaches in order to extend the deployment period beyond the main moult, and capture data on the return migration. A larger sample size of tracked birds is needed before firm conclusions on Shelduck migration can be drawn. Ideally this would include birds from a wider geographical range of British breeding sites, as well as Shelduck that breed on the continent but migrate to Britain for the winter."

Credit: 
Taylor & Francis Group

Discovery of 'knock-on chemistry' opens new frontier in reaction dynamics

image: An artist's interpretation of the energy barrier that a reagent fluorine atom must cross upon colliding with a fluoromethyl molecule on its way to forming a product as a result of a chemical reaction. Researchers at the University of Toronto observed the 'knock-on' collinear ejection of the reaction product (encircled in blue) in the continuation of the direction of the incoming reagent molecule (encircled in red).

Image: 
Illustration: Lydie Leung

TORONTO, ON - Research by a team of chemists at the University of Toronto, led by Nobel Prize-winning researcher John Polanyi, is shedding new light on the behaviour of molecules as they collide and exchange atoms during chemical reactions. The discovery casts doubt on a 90-year-old theoretical model of the behaviour of the "transition state", intermediate between reagents and products in chemical reactions, opening a new area of research.

The researchers studied collisions obtained by launching a fluorine atom at the centre of a fluoromethyl molecule - made up of one carbon atom and three fluorine atoms - and observed the resulting reaction using Scanning Tunneling Microscopy. What they saw following each collision was the ejection of a new fluorine atom moving collinearly along the continuation of the direction-of-approach of the incoming fluorine atom.

"Chemists toss molecules at other molecules all the time to see what happens or in hopes of making something new," says Polanyi, University Professor in the Department of Chemistry in the Faculty of Arts & Science at U of T and senior author of a study published this month in Communications Chemistry. "We found that aiming a reagent molecule at the centre of a target molecule, restricts the motion of the emerging product to a single line, as if the product had been directly 'knocked-on'. The surprising observation that the reaction product emerges in a straight line, moving in the same direction as the incoming reagent atom, suggests that the motions that lead to reaction resemble simple onward transfer of momentum.

"The conservation of linear momentum we observe here suggests a short-lived "transition state", rather than the previous view that there is sufficient time for randomization of motion. Newton would, I think, have been pleased that nature permits a simple knock-on event to describe something as complex as a chemical reaction," says Polanyi.

The team, which included senior research associate Lydie Leung, graduate student Matthew Timm and PhD graduate Kelvin Anggara, had previously established the means to control whether a molecule launched towards another collides head-on with its target or misses by a chosen amount - a quantity known as the impact parameter. The higher the impact parameter, the greater the distance by which the incoming molecule misses the target molecule. For the new work, the researchers employed an impact parameter of zero to give head-on collisions.

"We call this new type of one-dimensional chemical reaction 'knock-on', since we find that the product is knocked-on along the continuation of the direction of reagent approach," says Polanyi. "The motions resemble the knock-on of the steel balls of a Newton's cradle. The steel balls of the cradle don't pass through one another, but efficiently transfer momentum along a single line.

"Similarly, our knock-on reactions transfer energy along rows of molecules, thereby favouring a chain-reaction. This conservation of reaction energy in knock-on chemistry could be useful as the world moves toward energy conservation. For now, it serves as an example of chemical reaction at its simplest."

It has been known for well over a century that there is an energy barrier that chemical reagents must cross on their way to forming reaction products. An energized transition state exists briefly at the top of the barrier in a critical configuration - no transition state, no reaction.

Polanyi says the observation of collinear 'knock-on' provides insight into the reactive collision-complex, which lasts for approximately a million-millionth of a second. "Our results clearly tell us that the transition state at the top of the energy barrier lasts for so little time that it cannot fully scramble its momenta. Instead, it remembers the direction from which the attacking fluorine atom came."

In the 1930s, chemists began calculating the likelihood of forming a transition state on the assumption that it scrambles its energy, like a hot molecule. Although it was an assumption, it appeared well-established and gave rise to the statistical "transition state theory" of reaction rates. This is still the favored method for calculating reaction rates.

"Now, with the ability to observe the reagents and the products at the molecular level, one can see precisely how the reagents approach and subsequently how the products separate," Polanyi says. "But this runs contrary to the classic 90-year old statistical model. If the energy and momentum were randomized in the hot transition state, the products would not exhibit a clear memory of the direction of approach of the reagents. Energy-randomization would work to erase that memory."

The researchers say the observed directional motion of the reaction products favours a deterministic model of the transition state to replace the long-standing statistical model. Additionally, the observed reaction dynamics allow the reagent energy to be passed on in successive collinear collisions.

Credit: 
University of Toronto

University of Utah scientists plumb the depths of the world's tallest geyser

image: The outline of the Steamboat and Cistern plumbing systems with two viewing angles. The structure, color-coded by the depth, delineates the observed seismically active area during the eruption cycles. The solid star, solid square, and open triangles denote Steamboat, Cistern, and station locations on the surface, respectively.

Image: 
Courtesy of Sin-Mei Wu/University of Utah

When Steamboat Geyser, the world's tallest, started erupting again in 2018 in Yellowstone National Park after decades of relative silence, it raised a few tantalizing scientific questions. Why is it so tall? Why is it erupting again now? And what can we learn about it before it goes quiet again?

The University of Utah has been studying the geology and seismology of Yellowstone and its unique features for decades, so U scientists were ready to jump at the opportunity to get an unprecedented look at the workings of Steamboat Geyser. Their findings provide a picture of the depth of the geyser as well as a redefinition of a long-assumed relationship between the geyser and a nearby spring. The findings are published in the Journal of Geophysical Research: Solid Earth.

"We scientists don't really know what controls a geyser from erupting regularly, like Old Faithful, versus irregularly, like Steamboat," says Fan-Chi Lin, an associate professor with the Department of Geology and Geophysics. "The subsurface plumbing structure likely controls the eruption characteristics for a geyser. This is the first time we were able to image a geyser's plumbing structure down to more than 325 feet (100 m) deep."

Meet Steamboat Geyser

If you're asked to name a Yellowstone geyser and "Old Faithful" is the only one that comes to mind, then you're past due for an introduction to Steamboat. Recorded eruption heights reach up to 360 feet (110 m), tall enough to splash the top of the Statue of Liberty.

"Watching a major eruption of Steamboat Geyser is quite amazing," says Jamie Farrell, a research assistant professor with the University of Utah Seismograph Stations. "The thing that I remember most is the sound. You can feel the rumble and it sounds like a jet engine. I already knew that Steamboat was the tallest active geyser in the world, but seeing it in major eruption blew me away."

Unlike its famous cousin, Steamboat Geyser is anything but faithful. It's only had three periods of sustained activity in recorded history -- one in the 1960s, one in the 1980s and one that began in 2018 and continues today. But the current phase of geyser activity has already seen more eruptions than either of the previous phases.

Near Steamboat Geyser is a pool called Cistern Spring. Because Cistern Spring drains when Steamboat erupts, it's been assumed that the two features are directly connected.

"With our ability to quickly deploy seismic instruments in a nonintrusive way, this current period is providing the opportunity to better understand the dynamics of Steamboat Geyser and Cistern Spring which goes a long way to help us understand eruptive behavior," says Farrell.

Giving the geyser a CT scan

For several years now, U scientists have been studying the features of Yellowstone National Park, including Old Faithful, using small, portable seismometers. The football-sized instruments can be deployed by the dozens wherever the researchers need for up to one month per deployment in order to get a picture of what's happening under the ground. Each slight movement of the ground, even the periodic swells of crowds on Yellowstone's boardwalks, is felt and recorded.

And just as doctors can use multiple X-rays to create a CT scan of the interior of a human body, seismologists can use multiple seismometers recording multiple seismic events (in this case, bubbling within the geyser's superheated water column) to build a sort of image of the subsurface.
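
The back-projection idea behind this kind of imaging can be sketched briefly. The code below is a generic illustration rather than the team's actual processing: for each candidate source depth it sums every station's trace amplitude at the predicted arrival time and keeps the depth whose stacked amplitude is largest; the station geometry, velocity and synthetic traces are all invented.

    import numpy as np

    # Generic travel-time back-projection sketch (not the team's processing):
    # for each candidate source depth, sum every station's trace amplitude at
    # the predicted arrival time; the depth with the largest stack wins.
    # Geometry, velocity and the synthetic traces are invented.
    FS = 200.0                                           # samples per second
    V = 500.0                                            # assumed seismic velocity, m/s
    stations_x = np.array([-60.0, -20.0, 20.0, 60.0])    # offsets from the vent, m
    true_depth, t0 = 80.0, 0.5                           # synthetic source depth and origin time

    t = np.arange(0.0, 2.0, 1.0 / FS)

    def trace(x):
        # narrow Gaussian pulse arriving at the predicted travel time
        arrival = t0 + np.hypot(x, true_depth) / V
        return np.exp(-((t - arrival) ** 2) / (2 * 0.01 ** 2))

    traces = [trace(x) for x in stations_x]

    def stack_power(depth):
        # search over origin times too, since the tremor onset is unknown
        best = 0.0
        for origin in np.arange(0.0, 1.0, 1.0 / FS):
            arrivals = origin + np.hypot(stations_x, depth) / V
            idx = np.round(arrivals * FS).astype(int)
            amp = sum(tr[i] for tr, i in zip(traces, idx) if i < t.size)
            best = max(best, amp)
        return best

    depths = np.arange(10.0, 200.0, 5.0)
    best_depth = depths[int(np.argmax([stack_power(d) for d in depths]))]
    print("best-fitting depth:", best_depth, "m")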

In the summers of 2018 and 2019, Farrell and colleagues collaborated with the National Park Service and placed 50 portable seismometers in an array around Steamboat Geyser. The 2019 deployment recorded seven major eruptions, with inter-eruption periods ranging from three to eight days, each eruption providing a wealth of data.

Plumbing the depths

The results showed that the underground channels and fissures that comprise Steamboat Geyser extend down at least 450 feet (140 m). That's much deeper than the plumbing of Old Faithful, which is around 260 feet (80 m).

The results didn't show a direct connection between Steamboat Geyser and Cistern Spring, however.

"This finding rules out the assumption that the two features are connected with something like an open pipe, at least in the upper 140 meters," says Sin-Mei Wu, a recently graduated doctoral student working with Lin and Farrell. That's not to say that the two features are totally separate, though. The fact that the pool drains when Steamboat erupts suggests that they are still connected somehow, but probably through small fractures or pores in the rock that aren't detectable using the seismic signals the researchers recorded. "Understanding the exact relationship between Steamboat and Cistern will help us to model how Cistern might affect Steamboat eruption cycles," added Wu.

Will scientists eventually be able to predict when the geyser will erupt? Maybe, Wu says, with a better understanding of hydrothermal tremor and a long-term monitoring system. But, in the meantime, Wu says, this study is really just the beginning of understanding how Steamboat Geyser works.

"We now have a baseline of what eruptive activity looks like for Steamboat," Lin pointed out. "When it becomes less active in the future, we can re-deploy our seismic sensors and get a baseline of what non-active periods look like. We then can continuously monitor data coming from real-time seismic stations by Steamboat and assess whether it looks like one or the other and get a more real-time analysis of when it looks like it is switching to a more active phase."

Credit: 
University of Utah

Lessons learned in Burkina Faso can contribute to a new decade of forest restoration

image: Basin for water capture, stone bunds, zai pits.

Image: 
Alliance of Bioversity and CIAT/B.Vinceti

Forest landscape restoration is attaining new global momentum this year under the Decade of Ecosystem Restoration (2021-2030), an initiative launched by the United Nations. Burkina Faso, in West Africa, is one country that already has a head start in forest landscape restoration, and offers valuable lessons. An assessment of achievements there and in other countries with a history of landscape restoration is critical to informing a new wave of projects aiming for more ambitious targets that are being developed thanks to renewed global interest and political will to improve the environment.

Burkina Faso has been fighting desertification and climate change, and has seen a progressive degradation of its forested landscapes due to the expansion of agriculture. In 2018, the country planned to restore 5 million hectares of degraded land by 2030, as part of the pan-African initiative AFR100. However, the country is facing many challenges amid growing pressures on natural resources, extreme degradation processes, and changing climatic conditions. So far, restoration initiatives have only partly succeeded due to various constraints and have mainly targeted small areas when compared to the scale of landscape degradation that has occurred.

In 2019, researchers at the Alliance of Bioversity International and CIAT interviewed managers of 39 active restoration initiatives in Burkina Faso to understand bottlenecks and opportunities for scaling up ongoing efforts. The initiatives examined were concentrated in the Sahelian and northern part of the Sudanian region, where most degraded lands are located. The majority of these initiatives were less than 3 years old, and all aimed at bringing back tree cover in the landscape, among other objectives. The researchers reported their findings in the journal Sustainability in December.

The initiatives combined objectives that ranged from recovering the ecological functions of ecosystems to increasing the resilience of local communities to climate change and enhancing productivity in agro-sylvo-pastoral systems, in alignment with national policies that promote both livelihood improvement and ecosystem conservation. Most restoration initiatives had a strong involvement of local NGOs and associations, directly engaged in managing activities on the ground, while the funding was primarily provided through multilateral or bilateral international cooperation projects.

Assisted natural regeneration, an approach well suited to landscapes where old tree stumps are still sufficiently present and the soil seed bank is not fully depleted, was found to be the most common way of fostering tree development. It favors regrowth from existing tree stumps by managing them and protecting them from disturbance. It is the most cost-efficient approach and has proved successful in restoring vast areas in other West African countries.

Other practices, although very labor-intensive, are also commonly adopted, as they are indispensable for cultivation in extreme environments where water is scarce and soil fertility is limited. These include the construction of stone bunds, half-moons, Vallerani trenches and zaï, pits filled with seed and manure. Shrubs and grasses were often planted along with trees, as they help conserve soils, create favorable microclimates, stabilize moisture levels, and provide forage for animals, delivering benefits from the earliest stages of restoration.

Tree planting was implemented by the majority of the restoration initiatives, as natural regeneration alone is not sufficient to sustain the recovery of tree cover in most contexts. Half of the restoration initiatives sourced at least part of their planting material from the National Tree Seed Center, a government-run seed conservation and production research center. The center offers seeds of a wide range of native species and ensures that collection practices follow best standards, guided by genetic considerations about the origin of the planting material. However, a significant number of initiatives relied exclusively on self-collected, locally procured seed, harvested from potentially depauperate sources and from a limited number of available individuals, raising concerns about the quality of the planting material, its growth performance, and its capacity to survive under changing climatic conditions.

Participatory approaches to involve local communities were generally adopted across the initiatives examined, and capacity-building activities were a common denominator; however, the role of local communities in decision-making still seemed limited. Women especially tend to be excluded and have very limited land-access rights.

Despite the critical aspects identified, the increasing number of restoration initiatives, the diversity of approaches used by local actors to overcome constraints, and the support from the government are all encouraging. The renewed interest of international donors in supporting the Great Green Wall for the Sahel and Sahara Initiative (GGW), an African-led initiative involving 11 countries to fight land degradation, desertification and drought, will provide an ideal framework for achieving multiple objectives: scaling up efforts to restore degraded land, creating job opportunities, and strengthening the resilience of rural communities.

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Biosensing with whispering-gallery mode lasers

image: a, Single-cell monitoring with an intracellular microlaser. b, 3D arrangement of myofibrils around microbeads in neonatal cardiomyocytes (CMs). Cell nucleus (magenta) and microlaser (green). c, WGM spectrum of a microlaser and its shifting. d, Microlaser attached to the atrium of a zebrafish heart. e, Refractive index change between the resting phase, diastole, and peak contraction, systole, for 12 individual cells. f, Extracellular microlaser on top of an adult CM. Scale bar 30 μm. g, Trace of a spontaneously beating neonatal CM during administration of 500 nM nifedipine. Adapted with permission from Schubert M. et al. Monitoring contractility in cardiac tissue with cellular resolution using biointegrated microlasers. Nature Photonics 14, 452-458, (2020).

Image: 
by Nikita Toropov, Gema Cabello, Mariana P. Serrano, Rithvik R. Gutha, Matías Rafti, Frank Vollmer

Label-free optical sensors based on whispering-gallery-mode (WGM) microresonators exhibit extraordinary sensitivity for detecting physical, chemical, and biological entities, down to single molecules. This advance in label-free optical detection is made possible by using an optical microresonator, e.g. a glass microsphere roughly 100 μm in size, as an optical cavity that enhances the detection signal. Akin to a spherical micromirror, the WGM cavity confines light by near-total internal reflection, creating multiple cavity passes that enhance the optical detection of analyte molecules interacting with the evanescent field.
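
As a rough guide to why such a cavity is so sensitive (a standard textbook relation, not a formula taken from this review), a WGM resonance occurs when an integer number m of wavelengths fits around the microsphere's circumference, so small changes in the effective radius R or refractive index n_eff shift the resonance wavelength proportionally:

    m \lambda_m \approx 2\pi R \, n_{\mathrm{eff}}, \qquad
    \frac{\Delta\lambda}{\lambda} \approx \frac{\Delta R}{R} + \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}}

A molecule binding within the evanescent field perturbs n_eff, so tracking the resonance (or lasing) wavelength tracks the analyte.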

In contrast to these 'cold' WGM microresonators, emerging active WGM microlasers have the potential to significantly expand the applications of this class of sensors in biological and chemical sensing, especially in vivo. WGM microlasers can sense from within tissues, organisms, and single cells, and they can be used to improve upon the already impressive single-molecule detection limits of 'cold'-cavity optoplasmonic WGM sensors.

Here, we review the most recent advances in WGM microlasers for biosensing. In contrast to the 'cold'-cavity WGM sensors, active WGM microresonators make use of gain media such as dye molecules and quantum dots to compensate for optical loss and to achieve lasing of the WGM modes. As in conventional lasers, lasing appears as narrow spectral lines in the WGM emission spectra.
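
As a minimal illustration of how such narrow lasing lines are turned into a sensing signal (a generic peak-tracking sketch in Python, not the authors' analysis pipeline; the wavelength grid and spectra are hypothetical inputs), one can locate the lasing peak in each recorded spectrum by quadratic interpolation and follow its wavelength over time:

    import numpy as np

    def lasing_peak_nm(wavelengths_nm, counts):
        """Estimate the lasing-line wavelength by quadratic interpolation around the maximum."""
        i = int(np.argmax(counts))
        if i == 0 or i == len(counts) - 1:
            return wavelengths_nm[i]
        y0, y1, y2 = counts[i - 1], counts[i], counts[i + 1]
        # vertex of the parabola through three equally spaced samples, in units of the spacing
        denom = y0 - 2 * y1 + y2
        delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
        step = wavelengths_nm[i + 1] - wavelengths_nm[i]
        return wavelengths_nm[i] + delta * step

    # hypothetical series of spectra: the peak shift relative to the first spectrum is the signal
    # shifts_nm = [lasing_peak_nm(wl, spec) - lasing_peak_nm(wl, spectra[0]) for spec in spectra]
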

We review the main building blocks of WGM microlasers, recently demonstrated sensing mechanisms, the methods for integrating gain media in WGM sensors, and the prospects for active WGM sensors to become a useful technology in real-world applications. The coverage ranges from WGM microlaser sensing experiments at the molecular level, where lasing spectra are analyzed to study the binding of molecules, to sensing at the cellular level, where microlasers are embedded into or integrated with single cells to enable novel in vivo sensing and single-cell tracking applications (see figure).

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Demonstration of the universal quantum error correcting code with superconducting qubits

image: (a) Encoding quantum circuit of the five-qubit code. (b) Expectation values of 31 stabilizers for the encoded logical state |T>_L. (c) Expectation values of logical Pauli operators and state fidelity of the encoded magic state.

Image: 
©Science China Press

Universal fault-tolerant quantum computing relies on the implementation of quantum error correction. An essential milestone is the achievement of error-corrected logical qubits that genuinely benefit from error correction, outperforming simple physical qubits. Although tremendous efforts have been devoted to demonstrating quantum error correcting codes on different quantum hardware, previous realizations have been limited to protecting against certain types of errors or to preparing special logical states. Realizing a universal quantum error correcting code has remained one of the greatest, and notoriously difficult, challenges for more than a decade.

In a new research article published in the Beijing-based journal National Science Review, scientists at the University of Science and Technology of China, Tsinghua University, and the University of Oxford present their latest work on the experimental exploration of a five-qubit quantum error correcting code with superconducting qubits. The authors realized the [[5,1,3]] code on a superconducting quantum processor, verifying the viability of realizing quantum error correcting codes with superconducting qubits.

The scientists completed an important step towards the implementation of quantum error correction. This was achieved first through dedicated experimental optimization of the superconducting qubits, enabling the realization of more than a hundred quantum gates. Focusing on the five-qubit quantum error correcting code, the so-called 'perfect code' that corrects arbitrary single-qubit errors, they theoretically compiled and optimized its encoding process down to the minimum possible number (eight) of nearest-neighbor controlled-phase gates. These experimental and theoretical advances finally enabled the realization of the basic ingredients of a fully functional five-qubit error correcting code: encoding a general logical qubit into the code, followed by verification of all key features, including the identification of an arbitrary single-qubit physical error, the ability to manipulate the logical state transversally, and state decoding.

"The device for the implementation of the five-qubit error correcting code is a 12-qubit superconducting quantum processor. Among these 12 qubits, we chose five adjacent qubits to perform the experiment. The qubits are capacitively coupled to their nearest neighbours. The capacitively coupled XY control lines enable the application of single-qubit rotation gates by applying microwave pulses, and the inductively coupled Z control lines enable the double-qubit controlled-phase gates by adiabatically tune the two-qubit state |11> close to the avoid level crossing of |11> and |02>. After careful calibrations and gate optimizations, we have the average gate fidelities as high as 0.9993 for single-qubit gates and 0.986 for two-qubit gates. With the implementation of only single-qubit rotation gates and double-qubit controlled-phase gates, we realized the circuit for encoding and decoding of the logical state." they state in an article titled "Experimental exploration of five-qubit quantum error correcting code with superconducting qubits."

"On a superconducting quantum processor, we experimentally realised the logical states |0>_L, |1>_L, |±>_L, and |±i>_L that are eigenstates of the logical Pauli operators X_L, Y_L, and Z_L, and the magic state |T>_L= (|0>_L+e^{i\pi/4}|1>_L)/\sqrt{2} that cannot be realized by applying Clifford operations on any eigenstate of the logical Pauli operators," they add. "Finally, the state fidelity of |T>_L reaches 54.5(4)%."

"The quality of the prepared logical states can also be divided into its overlap with the logical code space and its agreement with the target logical state after projecting it into the code space," they stated. After projecting to the code space, the average value is as high as 98.6(1)%. "Since projecting to the code space is equivalent to post-selecting all +1 stabilizer measurements, our result also indicates the possibility of high fidelity logical state preparation with future non-destructive stabilizer measurements."

After realising the logical states, the scientists proceeded to verify the error correction/detection ability of the five-qubit code. "As shown in Fig. 2(a), we do indeed find, for each case, the corresponding syndrome pattern that identifies the location of the single-qubit error," they added.
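
To make concrete what a "syndrome pattern that identifies the location of the single-qubit error" means, the short Python check below uses the five-qubit code's standard cyclic stabilizer generators (a common textbook presentation, not necessarily the generator ordering of Fig. 2) to confirm that all 15 single-qubit Pauli errors produce distinct, non-trivial syndromes:

    # cyclic stabilizer generators of the five-qubit 'perfect' code
    STABILIZERS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

    def anticommutes(p, q):
        # two single-qubit Paulis anticommute iff both are non-identity and different
        return p != "I" and q != "I" and p != q

    def syndrome(error):
        # error is a 5-character string such as "IXIII" (an X error on qubit 1);
        # each bit records whether the error anticommutes with that stabilizer
        return tuple(sum(anticommutes(e, g) for e, g in zip(error, s)) % 2
                     for s in STABILIZERS)

    errors = ["I" * i + p + "I" * (4 - i) for i in range(5) for p in "XYZ"]
    syndromes = {e: syndrome(e) for e in errors}
    assert len(set(syndromes.values())) == 15        # each error gives a unique syndrome
    assert (0, 0, 0, 0) not in syndromes.values()    # and none is mistaken for 'no error'
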

The scientists then implemented and verified the transversal logical operations, performing quantum process tomography within the code space to characterize them. "We determine gate fidelities of the logical X_L, Y_L, and Z_L operations to be 97.2(2)%, 97.8(2)%, and 97.3(2)%, respectively," they stated.

"Finally, after encoding the single-qubit input state into the logical state, we apply the decoding circuit, see Fig. 4(a), to map it back to the input state," they added. "After quantum process tomography from the four output states, the process fidelity is determined as 74.5(6)% as shown in Fig. 4(b)."

"An essential milestone on the road to fault-tolerant quantum computing is the achievement of error-corrected logical qubits that genuinely benefit from error correction, outperforming simple physical qubits," they add. "Direction for future works include the realization of non-destructive error detection and error correction, and the implementation of logical operations on multiple logical qubits for the five-qubit code. Our work also has applications in error mitigation for near-term quantum computing."

Credit: 
Science China Press