
New clinical trial examines a potential noninvasive solution for overactive bladders

image: Keck Medicine urologist David Ginsberg, MD, a professor of clinical urology at the Keck School of Medicine of USC

Image: 
Keck School of Medicine

LOS ANGELES - Keck Medicine of USC urologists are launching a clinical trial to evaluate the effectiveness of spinal cord stimulation in patients with an overactive bladder due either to neurological conditions, such as a spinal cord injury or stroke, or to idiopathic (unknown) causes.

Researchers will use a technique known as Transcutaneous Electrical Spinal Cord Neuromodulation (TESCoN), a noninvasive therapy that delivers low-intensity electric impulses to the spinal cord.

The trial follows a recent study published in Frontiers in Systems Neuroscience led by Evgeniy Kreydin, MD, a Keck Medicine urologist and assistant professor of clinical urology at the Keck School of Medicine of USC. Kreydin and colleagues treated 14 patients with bladder dysfunction due to spinal cord injury, stroke, multiple sclerosis or an idiopathic cause with spinal cord stimulation sessions three times a week for eight weeks. All patients reported improved bladder sensation as well as a reduced number of incontinence episodes and night-time bladder voiding.

An overactive bladder causes several urological problems, such as frequent urination and incontinence. Existing treatments for the condition can cause adverse side-effects or require invasive, highly specialized procedures.

"TESCoN is well-tolerated by patients and easy for doctors to administer," says Kreydin, who will be co-investigating the trial along with Keck Medicine urologist David Ginsberg, MD, a professor of clinical urology at the Keck School.

In this double-blinded, sham-controlled trial, half the participants will receive two one-hour sessions of TESCoN per week over 12 weeks. The stimulation will be applied via electrodes attached to a device that emits low-intensity impulses through adhesive pads placed on the patient's back. The other half of the participants will receive a placebo, or sham, stimulation over the course of the trial.

The trial will track participants' number of daily urination and incontinence episodes over 72-hour periods. Researchers will then compare the data between the beginning and the conclusion of the trial, and between those receiving the real or sham stimulation. In addition, trial participants will complete bladder symptom questionnaires before and after receiving the treatments to further track improvement.

TESCoN was developed by the medical device company spineX. The company's founders will be initiating discussions with the U.S. Food and Drug Administration to gain regulatory approval for the procedure.

The exact mechanism of how TESCoN improves bladder function is not known. Kreydin speculates that the stimulation retrains the spinal neural networks to properly store and void urine and to regain bladder sensation. For patients with a spinal cord injury or a neurological disorder, the nerves controlling the bladder are either cut off or disrupted.

"The ultimate goal of the trial is to improve peoples' sense of well-being," says Kreydin. "An overactive bladder can cause discomfort, inconvenience and embarrassment. The more control patients have over their bladders, the more control they have over their lives."

Credit: 
University of Southern California - Health Sciences

Biomass fuels can significantly mitigate global warming

image: Photo of the Biofuels Cropping System Experiment in Michigan, taken by an unmanned aerial vehicle (UAV). Research conducted by Ben-Gurion University and Michigan State University

Image: 
Photo Ryan Mater

BEER-SHEVA, ISRAEL...March 10, 2020 - Biomass fuels derived from various grasses could significantly mitigate global warming by reducing carbon emissions, according to a long-term field study by researchers at Ben-Gurion University of the Negev (BGU) and Michigan State University (MSU).

In a new paper published in Environmental Science and Technology, the researchers examined a number of different cellulosic biofuel crops to test their potential as a petroleum alternative for ethanol-fueled and electric light-duty vehicles, a category that includes passenger cars and small trucks.

Climate change mitigation scenarios limiting global temperature increases to 1.5 °C rely on decarbonizing vehicle fuel with bioenergy production together with carbon capture and storage (BECCS). Carbon capture and storage (CCS) is a technology that can capture up to 90% of the carbon dioxide (CO2) emitted during electricity generation and industrial processes, preventing an increase in atmospheric CO2 concentration. Using both CCS and renewable biomass is one of the few carbon abatement technologies resulting in a 'carbon-negative' mode - actually removing carbon dioxide from the atmosphere.

This research is the first to evaluate these bioenergy feedstocks grown side by side. The seven cropping systems included switchgrass, giant miscanthus, poplar trees, maize residue, restored native prairie, and mixtures of grasses and vegetation that grow spontaneously following field abandonment.

"Every crop we tested had a very significant mitigation capacity despite being grown on very different soils and under natural climate variability," says Dr. Ilya Gelfand, of the BGU French Associates Institute for Agriculture and Biotechnology of Drylands, The Jacob Blaustein Institutes for Desert Research. "These crops could provide a very significant portion of the decarbonization of U.S. light-duty vehicle transport to curb CO2 emissions and slow global warming." Decarbonization of transportation is critical to limit rising temperatures."

In the study, when compared with petroleum-only emissions, ethanol was 78-290% better at reducing carbon emissions; ethanol combined with CCS was 204-416% better; electric vehicles powered by biomass were 74-303% better; and biomass-powered electric vehicles combined with CCS were 329-558% better. Figures above 100% indicate a carbon-negative pathway, one that removes more CO2 from the atmosphere than it emits.

The study was conducted at Michigan State University's Kellogg Biological Station and the University of Wisconsin's Arlington Research Station, as part of the U.S. Department of Energy's Great Lakes Bioenergy Research Center. Interestingly, the crops grown at MSU did as well as those grown at the more fertile Wisconsin site.

"This is significant because it means that we're likely to be able to produce these crops on marginal lands and still get high productivity," says Prof. Phil Robertson of MSU, senior author of the study. "Long-term field experiments that include weather extremes such as drought, and actual rather than estimated greenhouse gas emissions, are crucial for stress-testing models assumptions."

The next phase of research is to assess other environmental and economic aspects of bioenergy crops. The best biofuel crops need to be economically attractive to farmers, avoid adding more nitrogen or pesticides to the environment, and be conservation friendly.

Credit: 
American Associates, Ben-Gurion University of the Negev

Making more MXene

image: Drexel researchers can now produce MXene materials in batches as large as 50 grams, with a new, scalable production system.

Image: 
Drexel University

For more than a decade, two-dimensional nanomaterials, such as graphene, have been touted as the key to making better microchips, batteries, antennas and many other devices. But a significant challenge of using these atom-thin building materials for the technology of the future is ensuring that they can be produced in bulk quantities without losing their quality. For one of the most promising new types of 2D nanomaterials, MXenes, that's no longer a problem. Researchers at Drexel University and the Materials Research Center in Ukraine have designed a system that can be used to make large quantities of the material while preserving its unique properties.

The team recently reported in the journal Advanced Engineering Materials that a lab-scale reactor system developed at the Materials Research Center in Kyiv can convert a ceramic precursor material into a pile of the powdery black MXene titanium carbide, in quantities as large as 50 grams per batch.

Proving that large batches of material can be refined and produced with consistency is a critical step toward achieving viability for manufacturing. For MXene materials, which have already proven their mettle in prototype devices for storing energy, computing, communication and health care, reaching manufacturing standards is the home stretch on the way to mainstream use.

"Proving a material has certain properties is one thing, but proving that it can overcome the practical challenges of manufacturing is an entirely different hurdle - this study reports on an important step in this direction," said Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel's College of Engineering, who has pioneered the research and development of MXene and is a lead author of the paper. "This means that MXene can be considered for widespread use in electronics and energy storage devices."

Researchers at Drexel have been making MXene in small quantities - typically one gram or less - since they first synthesized the material in 2011. The layered nanomaterial, which looks like a powder in its dry form, starts as a piece of ceramic called a MAX phase. When a mixture of hydrofluoric and hydrochloric acid interacts with the MAX phase it etches away certain parts of the material, creating the nanometer-thin flakes characteristic of MXenes.

In the lab, this process previously took place in a 60 ml container, with the ingredients added and mixed by hand. To more carefully control the process at a larger scale, the group uses a one-liter reactor chamber and a screw feeder device to precisely add MAX phase. One inlet feeds the reactants uniformly into the reactor and another allows for gas pressure relief during the reaction. A specifically designed mixing blade ensures thorough and uniform mixing. And a cooling jacket around the reactor lets the team adjust the temperature of the reaction. The entire process is computerized and controlled by a software program created by the Materials Research Center team.
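The paper's control software itself is not detailed here, but a minimal sketch of what such a computerized reactor loop might involve is shown below. The reactor interface, setpoints and feed increments are all illustrative assumptions, not the Materials Research Center's actual code.

    import time

    # Hypothetical interface: 'reactor' exposes the hardware described above
    # (screw feeder, cooling jacket, mixing blade, temperature sensor).
    TARGET_TEMP_C = 35.0     # assumed setpoint; the etching reaction is exothermic
    FEED_INCREMENT_G = 0.5   # assumed dose size for the screw feeder

    def run_batch(reactor, max_phase_g, duration_h):
        fed = 0.0
        deadline = time.time() + duration_h * 3600
        reactor.stirrer.set_rpm(300)          # assumed mixing speed
        while time.time() < deadline:
            # Feed MAX phase gradually so the exothermic reaction stays controlled
            if fed < max_phase_g and reactor.temperature_c() < TARGET_TEMP_C:
                reactor.screw_feeder.dispense(FEED_INCREMENT_G)
                fed += FEED_INCREMENT_G
            # The cooling jacket holds the reaction at the target temperature
            reactor.cooling_jacket.set_target(TARGET_TEMP_C)
            time.sleep(10)                    # poll sensors every 10 seconds

The point of such a loop is simply that dosing, mixing and cooling decisions are made continuously against sensor readings rather than by hand, which is what makes 50-gram batches reproducible.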

The group reported successfully using the reactor to make just under 50 grams of MXene powder from 50 grams of MAX phase precursor material in about two days (including time required for washing and drying the product). And a battery of tests conducted by students at Drexel's Materials Science and Engineering Department showed that the reactor-produced MXene retains the morphology, electrochemical and physical properties of the original lab-made substance.

This development puts MXenes in a group with just a handful of 2D materials that have proven they can be produced in industrial-size quantities. But because MXene-making is a subtractive manufacturing process - etching away bits of a raw material, like planing down lumber - it stands apart from the additive processes used to make many other 2D nanomaterials.

"Most 2D materials are made using a bottom-up approach," said Christopher Shuck, PhD, a post-doctoral researcher in the A.J. Drexel Nanomaterials Institute. "This is where the atoms are added individually, one by one. These materials can be grown on specific surfaces or by depositing atoms using very expensive equipment. But even with these expensive machines and catalysts used, the production batches are time-consuming, small and still prohibitively expensive for widespread use beyond small electronic devices."

MXenes also benefit from a set of physical properties that ease their path from processed material to final product - a hurdle that has tripped up even today's widely used advanced materials.

"It typically takes quite a while to build out the technology and processing to get nanomaterials in an industrially usable form," Gogotsi said. "It's not just a matter of producing them in large quantities, it often requires inventing completely new machinery and processes to get them in a form that can be inserted into the manufacturing process - of a microchip or cell phone component, for example."

But for MXenes, integrating into the manufacturing line is a fairly easy step, according to Gogotsi.

"One huge benefit to MXenes is that they be used as a powder right after synthesis or they can be dispersed in water forming stable colloidal solutions," he said. "Water is the least expensive and the safest solvent. And with the process that we've developed, we can stamp or print tens of thousands of small and thin devices, such as supercapacitors or RFID tags, from material made in one batch."

This means the material can be applied using any of a variety of standard additive manufacturing systems - extrusion, printing, dip coating, spraying - after a single step of processing.

Several companies are exploring applications of MXene materials, including Murata Manufacturing Co., Ltd., an electronics component company based in Kyoto, Japan, which is developing MXene technology for use in several high-tech applications.

"The most exciting part about this process is that there is fundamentally no limiting factor to an industrial scale-up," Gogotsi said. "There are more and more companies producing MAX phases in large batches, and a number of those are made using abundant precursor materials. And MXenes are among very few 2D materials that can be produced by wet chemical synthesis at large scale using conventional reaction engineering equipment and designs."

Credit: 
Drexel University

Modern virtual and augmented reality devices can help simulate sight loss

image: Researcher wearing the 'HTC Vive' head mounted display used in the study.

Image: 
Dr Peter Jones, City, University of London

Published today, during World Glaucoma Week 2020, a new study demonstrates how commercially available head mounted displays (HMDs) can be used to simulate the day-to-day challenges faced by people with glaucoma.

Glaucoma is an umbrella term for a group of degenerative eye diseases that affect the optic nerve at the back of the eye. It is the leading cause of irreversible blindness worldwide, and is estimated to account for 11% of cases of serious sight impairment in the UK.

The study, from the Crabb Lab, at City, University of London, suggests potential applications of the technology could include helping policymakers better assess the impact of visual impairment on patients, and helping architects to design more accessible buildings.

Twenty-two volunteers who did not have glaucoma took part in the study. Participants wore an HMD while performing various tasks in either virtual or augmented reality.

In the virtual reality task, participants were placed in a simulation of a typical, 'cluttered' house. Moving their eyes and head allowed them to look around it in order to find a mobile phone hidden somewhere in the house.

In the augmented reality task, participants navigated a real-life, human-sized 'mouse maze', which they viewed through cameras in the front of the HMD.

Sensors in the HMD tracked the position of the participant's eyes, allowing the software to generate a blurred area of vision, known as a 'scotoma', that obstructed the same portion of their visual field, wherever they looked.

The scotoma was created using medical data from a real glaucoma patient, and either restricted vision in the upper part of the participant's visual field, or in the bottom part. In 'control' trials the scotoma was absent.
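The authors' own software (OpenVisSim, described below) implements this gaze-contingent rendering; purely as an illustration of the idea, a per-frame masking step might look like the following sketch. The eye-tracker call and parameters are assumptions; OpenVisSim itself is a more sophisticated tool and differs in detail.

    import numpy as np
    from scipy.ndimage import gaussian_filter, shift

    def render_frame(frame, vf_mask, gaze_xy):
        # frame: HxWx3 image from the HMD cameras or the virtual scene
        # vf_mask: HxW array in [0, 1] derived from a patient's visual-field
        #          data, where 1 marks the scotoma
        # gaze_xy: (x, y) pixel position reported by the HMD's eye tracker
        h, w = vf_mask.shape
        # Re-centre the mask on the gaze point so the impairment covers the
        # same part of the visual field wherever the participant looks
        dy, dx = gaze_xy[1] - h / 2, gaze_xy[0] - w / 2
        mask = shift(vf_mask, (dy, dx), order=1)[..., None]
        blurred = gaussian_filter(frame.astype(float), sigma=(6, 6, 0))
        return (frame * (1 - mask) + blurred * mask).astype(frame.dtype)

Running this on every frame, driven by live gaze data, is what makes the simulated scotoma follow the eye rather than staying fixed on the screen.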

Similar to real glaucoma patients, participants were slower to perform the tasks when the simulated impairment was present, and made more head and eye movements too. They likewise found the tasks particularly difficult when the vision loss obstructed the bottom part of their visual field. The results also showed how some people were better able to cope than others with an identical impairment.

The software the authors created to simulate the visual impairment (OpenVisSim) has been shared online for others to freely use and develop. It is compatible with most commercially available HMDs and smartphones, and supports a range of visual effects, designed to simulate the different symptoms associated with a range of eye diseases.

First author of the study, Dr Peter Jones, Lecturer at the Crabb Lab, City, University of London, said:

"While it's impossible to recreate exactly what it's like to have glaucoma, our findings suggest that digital simulators can at least allow people to experience some of the challenges that people with glaucoma face every day. We are now working with architects to explore whether sight-loss simulators can be used design more accessible buildings and transport systems."

The study is published in the online journal, npj Digital Medicine.

Credit: 
City St George’s, University of London

Palm oil must be made more sustainable while replacements are made scalable, Bath engineers warn

Efforts to create synthetic replacements for palm oil are still likely to take several years, so immediate attention should be focused on making the existing production process more sustainable, researchers at the University of Bath's Centre for Integrated Bioprocessing Research (CIBR) and Centre for Sustainable Circular Technologies (CSCT) have found.

Palm oil production has long been criticised for its environmental impact through deforestation, and despite a strong environmental case for curtailing the industry, at present none of the existing alternative products would be economically or environmentally viable at scale, state the authors Dr Sophie Parsons, Prof Chris Chuck and Dr Sofia Raikova.

Their research paper, The viability and desirability of replacing palm oil, published in Nature Sustainability, finds that despite the strong case to reduce farming of oil palm, in the short term efforts must focus on making the process more sustainable, rather than replacing it.

Efforts to synthesise an alternative are underway and have received significant investment, but these are likely to take several years to bear fruit. Because of this, the team calls on governments and industry players to work together to make current production more sustainable while synthetic alternatives are developed.

Palm oil a threat to climate

Prof Chris Chuck said: "Palm oil is the most widely used land-grown oil crop, and expansion in the market over the past few decades has led to increases in greenhouse gas emissions and the loss of biodiverse tropical forest areas to farming. Whilst action is being taken to improve the sustainability of palm oil cultivation it is not happening as effectively or quickly as it needs to."

The team reviewed existing alternatives to palm oil from a technical, environmental and economic perspective, and grouped them into three distinct types of alternative technology: existing crop oils, alternative tropical oils and microbial single-cell oils.

Recommendations to reduce impact

Dr Sophie Parsons added: "Palm oil is challenging to replace as a product because it is very versatile - it is used in a wide range of cooking, food and other consumer goods products, as well as fuels - but it's also cheap to produce compared to the alternatives.

"While they may be able to play a role in replacing palm oil, large scale replacement with alternative crop oils such as sunflower, rapeseed, or exotic oils like coconut oil and shea butter presents significant sustainability and technical challenges. The only viable large-scale direct replacements are single cell oils from algae or yeast, but these require significant further development before being economically viable.

"Governments in producing countries and industry should be working together closely to reduce the impact of the industry while synthetic alternatives are developed for the sake of our climate."

The suite of measures they recommend to reduce the impact of production includes empowering the existing Roundtable on Sustainable Palm Oil (RSPO) scheme to take effective enforcement action where needed; supporting, through policy, increased demand for Certified Sustainable Palm Oil (CSPO), which at present accounts for only 19% of global production; and ensuring local rules prevent further expansion of farming into ecologically valuable land. Other measures include implementing certification of plantations and mills, and better managing wastage in the production process.

The team is currently working to understand the lowest theoretical cost of a microbial oil, and the further technological development that would be needed to produce a competitive alternative to palm oil. They aim to publish this research later in 2020.

Credit: 
University of Bath

Inherited arrhythmia in young Finnish Leonbergers under investigation

A new study in Finland has revealed that inherited malignant ventricular arrhythmia is fairly common among Finnish Leonbergers under three years of age. At its worst, such arrhythmia can result in the dog's sudden death.

Arrhythmia and sudden death in Leonbergers have been a subject of research coordinated by Professor Hannes Lohi since 2016 at the Faculty of Veterinary Medicine, University of Helsinki, the University's Veterinary Teaching Hospital and the Finnish Food Authority.

A total of 46 Leonbergers were enrolled for comprehensive cardiac examinations, of whom 15 per cent were diagnosed with severe arrhythmia and another 15 per cent with milder cardiac changes. In addition, the project involved 21 Leonbergers that had died suddenly before turning three, and who had had a postmortem evaluation performed on them.

"No changes indicative of any other causes of death were identified in the evaluations, which makes cardiac arrhythmia the most likely cause of the sudden deaths," says Maria Wiberg, docent of small animal internal medicine at the Veterinary Teaching Hospital, who coordinated the clinical examinations.

Arrhythmia in dogs comes in varying degrees of severity. Diagnosing ventricular arrhythmia does not necessarily mean that the dog will perish, although the risk of sudden death does increase. For example, in a study previously carried out in the United States on German Shepherd dogs, it was found that arrhythmia becomes less frequent as the dog grows older. The severity of the disorder also varies from day to day.

In the Finnish study, the model of inheritance for arrhythmias was assessed on the basis of family connections between the dogs that had died suddenly and those suffering from arrhythmia.

Arrhythmia is common in Leonbergers, and the disorder is typically litter-specific, making it probable that the factors underlying it are hereditary. As it has not been possible to perform cardiac examinations on the afflicted dogs' parents when they were under the age of three, the precise model of inheritance is yet to be determined. For dogs whose heart has been examined after turning three, the findings do not necessarily reveal arrhythmias suffered when young.

"We are in the process of carrying out a range of DNA analyses to identify the arrhythmia gene, a finding that would facilitate disease diagnostics. Furthermore, it would help compare findings to arrhythmia in humans, potentially increasing understanding of the biological causes of arrhythmias. This would boost early diagnostics, breeding programmes and, potentially, the development of drug therapies. Ventricular tachycardia is also a significant and, to a considerable degree, unsolved problem in human medicine," says Professor Hannes Lohi.

The canine biobank of the University of Helsinki holds DNA samples from roughly 600 Leonbergers.

Credit: 
University of Helsinki

HKU paleontologists discover solid evidence of formerly elusive abrupt sea-level jump

image: Scanning Electron Microscopy image of typical deep-sea (bathyal) ostracod species from the study sites.

Image: 
The University of Hong Kong

Meltwater pulses (MWPs), episodes of abrupt sea-level rise caused by the injection of meltwater into the ocean, are of particular interest to scientists investigating the interactions between climatic, oceanic and glacial systems. Eustatic sea-level rise will inevitably affect cities, especially those on low-elevation coastal plains like Hong Kong. A recent study published in Quaternary Science Reviews presented evidence of abrupt sea-level change between 11,300 and 11,000 years ago in the Arctic Ocean. The study was conducted by Ms Skye Yunshu Tian, PhD student of the School of Biological Sciences and Swire Institute of Marine Science, the University of Hong Kong (HKU), during her undergraduate final-year project in the Ecology & Biodiversity major, helping to solve the puzzle of the second-largest meltwater pulse (labelled "MWP-1B", after the largest and already well-understood MWP-1A).

During the last deglaciation, melting of large ice sheets in the Northern Hemisphere contributed to profound global sea-level changes. However, even the second-largest pulse, MWP-1B, is not well understood. Its timing and magnitude remain actively debated due to the lack of clear evidence, not only from tropical areas recording near-eustatic sea-level change, but also from the high-latitude areas where the ice sheets melted.

The research study, led by Ms Tian under the supervision of Dr Moriaki Yasuhara, Associate Professor of the School of Biological Sciences, HKU, and Dr Yuanyuan Hong, Postdoctoral Fellow of the School of Biological Sciences, HKU, and in collaboration with scientists at HKU and UiT The Arctic University of Norway, discovered robust evidence of a formerly elusive abrupt sea-level jump during the climatic warming from the last ice age to the current climate state. The study presented evidence of an abrupt sea-level change of 40-80 m between 11,300 and 11,000 years ago in Svalbard, in the Arctic Ocean. High-time-resolution fossil records indicate a sudden temperature rise due to the incursion of warm Atlantic waters and the consequent melting of the overlying ice sheets. Because of the rebound of land formerly suppressed beneath the great ice load, the sedimentary environment at the study sites changed from a bathyal setting (with deep-sea species, shown in Image 1) to an upper-middle neritic setting (with shallow-marine species, shown in Image 2). This is the first solid evidence of the relative sea-level change of MWP-1B discovered in ice-proximal areas.

The research group used fossil Ostracoda preserved in two marine sediment cores as a model organism to quantitatively reconstruct the water depth changes in Svalbard over the past 14,000 years, as this small crustacean (usually less than 1 mm in size) is highly sensitive to changes in water depth and environment.
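The paper's specific statistical method is not reproduced here; as a generic illustration of how such a quantitative reconstruction can work, a simple weighted-averaging estimate assigns each species a depth preference and averages these, weighted by counts in a core sample. All species names and numbers below are invented for illustration.

    # Invented example: estimate paleo water depth from an ostracod
    # assemblage by weighted averaging of (hypothetical) species depth optima.
    depth_optima_m = {"bathyal_sp_A": 800, "bathyal_sp_B": 600, "neritic_sp_C": 50}
    counts = {"bathyal_sp_A": 12, "bathyal_sp_B": 5, "neritic_sp_C": 83}

    total = sum(counts.values())
    depth = sum(depth_optima_m[s] * n for s, n in counts.items()) / total
    print(f"Estimated water depth: {depth:.0f} m")  # ~168 m for these counts

A downcore shift in the assemblage from deep-water to shallow-water species is what signals the bathyal-to-neritic transition reported in the study.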

Understanding abrupt sea-level events caused by ice-sheet melting is crucial to understanding how the Earth's climate system influences, and is influenced by, glacial conditions. "Future eustatic sea-level rise may be discontinuous and abrupt, different from the smooth and continuous rise projected for global warming, known as the 'hockey stick' curve. This has serious implications for our society, especially for cities on low-elevation coastal plains, like our Greater Bay Area on the Pearl River Delta. Even small sea-level rise can substantially increase damage from typhoons, for example," Dr Yasuhara said.

Credit: 
The University of Hong Kong

Education the key to equal parenting rights for same-sex couples

Same-sex marriage may have been given the green (or rainbow) light in many countries around the world, but it appears there are still some entrenched attitudes in society when it comes to same-sex parenting.

Misconceptions about the impact on children raised by same-sex parents are harmful both in a social and legal sense, says University of South Australia psychologist Dr Stephanie Webb.

Same-sex couples are still struggling to gain rights equal to those of biological parents - particularly in the event of separation - and on a social level researchers want to address the fallacies about the impact on children of growing up with parents of the same gender.

"The most common myths are that children will be confused about their own sexuality, be less resilient, experience conflict, and suffer other issues as a result of growing up in a same-sex family," Dr Webb says.

"The reality is, children raised in a same-sex family environment are no different to children raised by heterosexual couples. In some cases, they are far more resilient, tolerant and open-minded because they have seen their parents' own struggle for acceptance and equality."

To counter the misconceptions, Dr Webb and colleagues from the University of Canberra and Boise State University in the United States carried out an online survey to assess the impact of an educational campaign on people's attitudes.

A total of 629 people - including 74 per cent who identified as heterosexual and 23 per cent as bisexual or homosexual - were split into two groups and presented with fact sheets about either smoking (the control group) or same-sex parenting.

Before completing the survey, they were asked about their attitudes to same-sex marriage and same-sex parenting.

The fact sheets dispelled many of the concerns that people had over the perceived negative developmental impacts on children with same-sex parents.

"Our study showed a significant reduction in prejudices held after reading the fact sheets," Dr Webb says.

However, the sticking point is that many people believe the central purpose of marriage is to procreate. Since biological children cannot be produced by a same-sex couple, the role of marital equality is not seen as important by some.

This creates legal issues for same-sex couples in the event of separation involving children, where a third party (a biological parent) has legal rights that supersede those of the parent whose genes are not involved.

"Legal rights for same-sex parents are ignored by policymakers and the public alike," Dr Webb says. "By making marriage policies inclusive, regardless of sexuality, it would validate same-sex families and protect them against discrimination."

Dr Webb says education is a crucial step towards achieving legal equality for same-sex families.

Her findings have recently been published in the Australian Journal of Psychology. The survey is a follow-up to a 2018 paper which examined the connection between gender role beliefs and support for same-gender family rights.

Credit: 
University of South Australia

Novel error-correction scheme developed for quantum computers

image: Dr Arne Grimsmo is an ARC DECRA Fellow at the University of Sydney Nano Institute and School of Physics at the University of Sydney.

Image: 
Stephanie Zingsheim/University of Sydney

Scientists in Australia have developed a new approach to reducing the errors that plague experimental quantum computers; a step that could remove a critical roadblock preventing them from scaling up to full working machines.

By taking advantage of the infinite geometric space of a particular quantum system made up of bosons, the researchers, led by Dr Arne Grimsmo from the University of Sydney, have developed quantum error correction codes that should reduce the number of physical quantum switches, or qubits, required to scale up these machines to a useful size.

"The beauty of these codes is they are 'platform agnostic' and can be developed to work with a wide range of quantum hardware systems," Dr Grimsmo said.

"Many different types of bosonic error correction codes have been demonstrated experimentally, such as 'cat codes' and 'binomial codes'," he said. "What we have done in our paper is unify these and other codes into a common framework."

The research, published this week in Physical Review X, was jointly written with Dr Joshua Combes from the University of Queensland and Dr Ben Baragiola from RMIT University. The collaboration is across two leading quantum research centres in Australia, the ARC Centre of Excellence for Engineered Quantum Machines and the ARC Centre of Excellence for Quantum Computation and Communication Technology.

Robust qubits

"Our hope is that the robustness offered by 'spacing things out' in an infinite Hilbert space gives you a qubit that is very robust, because it can tolerate common errors like photon loss," said Dr Grimsmo from the University of Sydney Nano Institute and School of Physics.

Scientists in universities and at tech companies across the planet are working towards building a universal, fault-tolerant quantum computer. The great promise of these devices is that they could be used to solve problems beyond the reach of classical supercomputers in fields as varied as materials science, drug discovery, security and cryptography.

With Google last year declaring it has a machine that has achieved 'quantum supremacy' - performing an arguably useless task but beyond the scope of a classical computer - interest in the field of quantum computing and engineering continues to rise.

But to build a quantum machine that can do anything useful will require thousands, if not millions, of quantum bits operating without being overwhelmed by errors.

And qubits are, by their very nature, error prone. The 'quantumness' that allows them to perform a completely different type of computing operation means they are highly fragile and susceptible to electromagnetic and other interference.

Identifying, removing and reducing errors in quantum computation is one of the central tasks facing physicists working in this field.

Fragile superpositions

Quantum computers perform their tasks by encoding information utilising quantum superposition - a fundamental facet of nature where a final outcome of a physical system is unresolved until it is measured. Until that point, the information exists in a state of multiple possible outcomes.

Dr Grimsmo said: "One of the most fundamental challenges for realising quantum computers is the fragile nature of quantum superpositions. Fortunately, it is possible to overcome this issue using quantum error correction."

This is done by encoding information redundantly, allowing the correction of errors as they happen during a quantum computation. The standard approach to achieve this is to use a large number of distinguishable particles as information carriers. Common examples are arrays of electrons, trapped ions or quantum electrical circuits.
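As a textbook illustration of this redundancy (not the construction used in the paper), the simplest quantum error-correcting code is the three-qubit repetition code, which spreads one logical qubit across three physical qubits:

\[ |0_L\rangle = |000\rangle, \qquad |1_L\rangle = |111\rangle, \qquad \alpha|0\rangle + \beta|1\rangle \mapsto \alpha|000\rangle + \beta|111\rangle \]

A bit-flip on any single physical qubit can then be detected by comparing neighbouring qubits and corrected by majority vote, without ever measuring, and hence destroying, the encoded superposition.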

However, this creates a large network of 'physical qubits' in order to operate a single, logical qubit that does the processing work you require.

This need to create a large network of physical qubits to support the work of a single operating qubit is a non-trivial barrier towards constructing large-scale quantum machines.

Indistinguishable bosons

Dr Grimsmo said: "In this work, we consider an alternative approach based on encoding quantum information in collections of bosons." The most common type of boson is the photon, a packet of electromagnetic energy and massless 'light particle'.

By trapping bosons in a particular microwave or optical cavity, they become indistinguishable from one another, unlike, say, an array of trapped ions, which are identifiable by their location.

"The advantage of this approach is that large numbers of bosons can be trapped in a single quantum system such as photons trapped in a high-quality optical or microwave cavity," Dr Grimsmo said. "This could drastically reduce the number of physical systems required to build a quantum computer."

The researchers hope their foundational work will help build a roadmap towards fault tolerance in quantum computing.

Credit: 
University of Sydney

Further evidence shows clinical viability of natural tooth repair method

Over the last five years scientists at King's College London have been investigating a method of stimulating natural tooth repair by activating cells in the tooth to make new dentine. In a paper published today in the Journal of Dental Research, they have found further positive evidence that the method has the potential to be translated into a direct clinical approach.

When teeth suffer damage either by decay or trauma, there are three layers that may be affected:

1. the outer enamel,

2. dentine, the middle part that shields the vital part of the tooth, and

3. the inner part of the tooth; the soft dental pulp.

Previous research has found that the drug Tideglusib could help protect the inner layer by stimulating the production of the middle layer (dentine), allowing the tooth to repair itself.

To continue testing the viability of this approach for use in patients, the research team have now looked at whether the volume of reparative dentine produced is sufficient to repair cavities found in human teeth. They also investigated the range (and hence safety) of the drug used, and whether the mineral composition of the reparative dentine is sufficiently similar to that of normal dentine to maintain the strength of the tooth.

Led by Professor Paul Sharpe, Dickinson Professor of Craniofacial Biology, the results of this study show further evidence that the method could be successfully translated into clinical practice. The researchers discovered that the repair is highly restricted to pulp cells in the immediate location of the damage and that the root pulp is not affected. They also found that the mineral composition of the repaired area was significantly different from that of bone, and more similar to normal dentine.

In addition, they found the drug can activate repair of an area of dentine damage up to ten times larger than in their previous work, mimicking the size of small lesions in humans.

"In the last few years we showed that we can stimulate natural tooth repair by activating resident tooth stem cells. This approach is simple and cost effective. The latest results show further evidence of clinical viability and brings us another step closer to natural tooth repair," said Professor Sharpe Paul Sharpe, Director of the Centre of Craniofacial & Regenerative Biology at King's College London.

Human trials or treatments arising from this research are not anticipated to begin at this stage. We are not currently seeking volunteers and are unable to offer this treatment at present. For questions regarding dental work, we suggest you contact your local dentist.

Credit: 
King's College London

Experts call for more support for parents of children with genetic learning disabilities

Parents of children with genetic conditions that cause learning disabilities are at risk of mental health problems, suggests new research published today in the British Journal of Psychiatry. The teams behind the study have called for greater support for parents whose child receives a genetic diagnosis for their learning disability.

As many as one in 20 families worldwide is thought to include a child with a learning disability, but little is known about how this affects the parents' mental health and wellbeing. Although some parents experience depression and anxiety, it is not clear why some are at greater risk than others.

Professor Claire Hughes from the University of Cambridge Centre for Family Research, said: "It's important that we understand why some parents are at greater risk of mental health problems than others. If a parent experiences long-term mental health problems, this could have a knock-on effect on the whole family, affecting partner relationships, the wellbeing of their child with disability, and the experiences of siblings. That's why interventions are often more successful when they are designed to help parents in order to help children."

To address this question, Professor Hughes assembled an interdisciplinary team of researchers from the Universities of Cambridge and Birmingham to analyse information from 888 families taking part in the IMAGINE-ID study - a UK-wide project examining the links between genetic diagnoses, learning disabilities and mental health. Parents were asked to rate their everyday feelings and the nature and impact of their child's difficulties, as well as to provide information about their family's social circumstances.

One parent who participated in IMAGINE-ID said that professionals tended to focus on the child's needs and did not consider the wider needs of families: "It's very much about getting support for your child. At no point were we ever offered any mental health support, even though we have such a massive role to play in bringing up our children. We need support as well."

The study data shows that rates of negative symptoms such as worry, anxiety and stress were much higher in the IMAGINE-ID group of parents than in the general population of parents. Mothers in the IMAGINE-ID study - who were more likely to be the main caregiver - were particularly affected. Contrary to evidence from previous studies, social factors did not predict a parent's risk of low mood and stress: more important were the type of genetic disorder that affected their child, their child's physical and medical needs, and their child's behaviour.

For the first time, the researchers were able to demonstrate that the cause of a child's disabilities is one factor that predicts the emotional wellbeing of parents. A subgroup of genetic disorders is caused by short missing or duplicated sections of DNA (known as 'copy number variants'). Parents within this subgroup reported that their child's difficulties had a high level of impact on family life as well as restricting their child's activities and friendships, and these impacts were the source of their own distress.

The researchers say there could be a number of explanations for these findings, varying from the complex effects of chromosomal differences on children's development through to the availability of support for these families. They have called for more multi-disciplinary, family-focused research to determine how genetic diagnoses are linked to parents' mental health, so that support for families can be improved in future.

Dr Kate Baker, lead author of the research paper, based at the MRC Cognition and Brain Sciences Unit, University of Cambridge, said: "These results suggest that we need to start looking at genetic diagnoses as useful not just for predicting a child's needs and informing the support that they might receive, but also for predicting the broader impact that the diagnosis will have on their family."

Francesca Wicks, former research coordinator for IMAGINE-ID and now Family Support and Information Officer for Unique, the rare chromosome and single gene disorder support charity, said: "It's clear that not enough care and support is being offered to parents before, during and after their child's diagnosis. The help and support offered by organisations such as Unique is incredibly valuable, but much more needs to be done within health and statutory services. Many of the families I have met have expressed feelings of anxiety and depression over the years, which is why we have produced our Carers Wellbeing guide."

Credit: 
University of Cambridge

Leaving your baby to 'cry it out' has no adverse effects on child development

image: This is Professor Dieter Wolke, from the Department of Psychology at the University of Warwick.

Image: 
University of Warwick

A baby's development at 18 months old is not adversely affected by being left to 'cry it out' a few times or often in infancy, researchers at the University of Warwick have found.

Letting a baby cry it out was rare at term but was used increasingly by parents over the first 18 months of life in this UK sample. A third of parents never let their baby cry it out in infancy.

Mothers who let babies cry it out a few times or often were no less sensitive in their parenting, as shown in direct observations of mother-baby interaction.

Letting a baby cry for a while to see whether it can calm her/himself may help babies to learn to self-regulate and provide a first sense of self.

Leaving an infant to 'cry it out' from birth up to 18 months does not adversely affect their behavioural development or attachment, the researchers found. They also discovered that those left to cry cried less, and for a shorter duration, at 18 months of age.

An infant's development and attachment to their parents is not affected by being left to 'cry it out', which can actually decrease the amount and duration of crying.

Researchers from the University of Warwick today, 11 March, had the paper 'Parental use of 'cry it out' in infants: No adverse effects on attachment and behavioural development at 18 months' published in the Journal of Child Psychology and Psychiatry.

In the paper they deal with an issue that has been discussed for decades on parenting websites and among parents, without much scientific evidence: should you always immediately intervene when your baby cries?

Researchers followed 178 infants and their mums over 18 months and repeatedly assessed whether parents intervened immediately when their baby cried or let the baby cry it out a few times or often. They found that it made little difference to the baby's development by 18 months.

In fact, they found leaving babies to cry it out a few times at term and often at 3 months was associated with shorter crying duration at 18 months.

Parents' use of leaving their baby to 'cry it out' was assessed via maternal report at term and at 3, 6 and 18 months, and cry duration was assessed at term, 3 and 18 months. Duration and frequency of fussing and crying were assessed at the same ages with the Crying Pattern Questionnaire.

How sensitive mothers were in interaction with their babies was video-recorded and rated at 3 and 18 months of age.

Attachment was assessed at 18 months using a gold standard experimental procedure, the strange situation test, which assesses how securely an infant is attached to the major caregiver during separation and reunion episodes.

Behavioural development was assessed by direct observation in play with the mother and during assessment by a psychologist and a parent-report questionnaire at 18 months.

Researchers found that whether contemporary parents respond immediately or leave their infant to cry it out a few times or often makes no difference to the short- or longer-term relationship with the mother or to the infant's behaviour.

This study shows that two-thirds of mums parent intuitively and learn from their infant: they intervene immediately when the baby is newborn, but as the baby gets older they wait a bit to see whether it can calm itself, so that babies learn self-regulation.

This "differential responding" allows a baby to learn over time to self-regulate during the day and also during the night.

Dr Ayten Bilgin from the Department of Psychology at the University of Warwick comments:

"Only two previous studies nearly 50 or 20 years ago had investigated whether letting babies 'cry it out' affects babies' development. Our study documents contemporary parenting in the UK and the different approaches to crying used".

Professor Dieter Wolke, who led the study, comments:

"We have to give more credit to parents and babies. Most parents intuitively adapt over time and are attuned to their baby's needs, wait a bit before intervening when crying and allow their babies the opportunity to learn to self-regulate. Most babies develop well despite their parents intervening immediately or not to crying."

Credit: 
University of Warwick

How heartbreak and hardship shape growing old

From being raised by an emotionally cold mother to experiencing violence, war and bereavement, difficult life events have a profound effect on our physical and mental wellbeing in later life - according to new research from the University of East Anglia.

A new study published today shows how a range of life inequalities and hardships are linked to physical and mental health inequalities in later life.

These stressful and often heart-breaking life inequalities included having emotionally cold parents, poor educational opportunities, losing an unborn child, financial hardship, involvement in conflict, violence and experiencing a natural disaster.

The research team found that people who experienced the greatest levels of hardship, stress and personal loss were five times more likely to experience a lower quality of life, with significantly more health and physical difficulties in later life.

Those brought up by an emotionally cold mother were also significantly less likely to experience a good quality of life and more likely to experience problems in later life such as anxiety, psychiatric problems and social detachment.

The researchers say that policies aimed at reducing inequalities in older age should consider events across the life course.

Dr Nick Steel, from UEA's Norwich Medical School, said: "Everybody lives a unique life that is shaped by events, experiences and their environment.

"We know that inequalities in exposure to different events over a lifetime are associated with inequalities in health trajectories, particularly when it comes to events in childhood such as poverty, bereavement or exposure to violence.

"While the impact of adverse childhood events is well recognised for children and young people, the negative events that shape our entire life courses are rarely discussed for older people.

"We wanted to better understand the effects of events over a life course - to find out how adverse events over a person's lifetime affect their physical, mental and social health in later life. As well as looking at single life events, we also identified groups or patterns of events."

The research team studied data taken from the English Longitudinal Study of Ageing (ELSA) - a longitudinal study of adults over 50 living in England.

Participants were invited to answer a life history questionnaire. The research team took into account responses from 7,555 participants to questions that represented broad topics in life history.

Some of these questions were around their upbringing - such as whether a parent had been emotionally cold and the estimated number of books in their home at 10 years old.

Other questions focused on events in adult life - such as whether they had fought in a war or lost an unborn child.

The researchers analysed the responses to identify patterns of life events, and also took into account factors such as age, ethnicity, sex and socioeconomic status.

Lead researcher Oby Enwo, from UEA's Norwich Medical School, said: "We looked at the life history of each participant and compared it to their quality of life and how well they can perform activities like dressing themselves, bathing, preparing hot meals, doing gardening and money management.

"We also studied whether the participants had a long standing illness, or suffered from anxiety or depression or other psychiatric problems like schizophrenia and psychosis.

"Participants were also asked about their social networks, friendships, and general health," she added.

"We started to see some really strong patterns and associations emerging between exposure to life events that affect physical and mental well-being in later life."

The researchers grouped the participants into four main groups - those who reported few life events, those with an emotionally cold mother, those who had experienced violence in combat and those who had experienced a number of difficult life events.

"We found that people who had suffered many difficult life events were significantly less likely to experience a good quality of life than those who had lived easier lives.

"They were three-times more likely to suffer psychiatric problems, twice as likely to be detached from social networks, and twice as likely to have long-standing illness.

"People raised by an emotionally cold mother were also significantly less likely to experience a good quality of life, and were more likely to report psychiatric problems and be detached from social networks, compared to people who had experienced few difficult life events."

The researchers now hope that clinicians working with older people will start to consider the impact of life course events on health and wellbeing - as part of a patient-centred approach.

They say that policy makers too should take a long-term perspective and target life events which could be changed - for example teaching and improving parenting skills to avoid emotionally negative experiences, and targeting gun and knife crime to limit people's exposure to violence.

Credit: 
University of East Anglia

Crosstalk captured between muscles, neural networks in biohybrid machines

image: Illustration and microscopy images of coculture platform where a neurosphere is cultured with four target tissues. Within three days, neurons extend toward and make connections with targets. Fluorescence microscopy image shows muscles in magenta, neurons in green, and cell nuclei in blue. All scale bars: 500 micrometers

Image: 
Image courtesy of the authors. Fluorescence microscopy image (bottom left) taken at the Core Facilities in Carl R. Woese Institute of Genomic Biology at University of Illinois at Urbana-Champaign.

WASHINGTON, March 10, 2020 -- Scientists watched the formation of a self-emergent machine as stem cell-derived neurons grew toward muscle cells in a biohybrid machine, with neural networks firing in synchronous bursting patterns. The awe-inspiring experiment left them with big questions about the mechanisms behind this growth, as well as a proven method of capturing data for continued study of bioactuators.

In a paper published in APL Bioengineering, from AIP Publishing, the authors were able to capture many of the mechanisms at work where neurons and muscles are cocultured. Using a platform they designed, which holds a suspended neurosphere and several types of muscle cells in different compartments, their work is the first to report a 3D neuromuscular junction in an open platform with multiple muscles.

"The most impactful result is the emergence of a machine where actuators (muscles) emerge from a droplet of a mixture of cell-extracellular matrix, where neurons form a network all by themselves," author Taher Saif said. "It is where neurons reach out to the muscles to form neuromuscular junctions, resulting in a machine that we can operate by shining light, and yet we do not understand with certainty how all of this happened."

Neuromuscular junctions are the source of motor activity, with motor neurons firing to cause muscles to contract. In tiny biorobots using muscle cells as actuators, the ability to tune parameters would allow more precise designs with desirable characteristics and predictable behaviors. Yet, the emerging field of biohybrid robots, including intelligent drug delivery, environment sensing and biohybrid blood circulation pumps, needs proven experimental methods.

"This stage can be compared to the time of Wright brothers trying to fly when potential applications were far and away," Saif said. "The field of biohybrid robots is trying to explore whether machines can at all be made with living cells and scaffolds, what are the scaling laws, and what are the minimum conditions for their emergence."

The authors closely examined the morphology of the neuromuscular units that formed, applied optical stimulation to quantify muscle dynamics, recorded electrical activity of neurospheres and identified mechanisms for modulating bioactuator behavior.

"This is a new design paradigm for biological machines, such as biohybrid robots," Saif said. "Here, the bidirectional interactions emerge and take their own course. If we can understand these interactions, we will be able to guide and modulate them to optimize outcomes, such as high muscle force or synchrony in neuron firing."

Credit: 
American Institute of Physics

Older women with breast cancer may benefit from genetic testing, study suggests

About 1 in 40 postmenopausal women diagnosed with breast cancer before age 65 have cancer-associated mutations in their BRCA1 or BRCA2 genes, according to a Stanford-led study of more than 4,500 participants in the long-running Women's Health Initiative.

The prevalence of the mutations in this group is similar to that of Ashkenazi Jewish women, whom the U.S. Preventive Services Task Force suggests should discuss their cancer risk with their physicians to determine if genetic testing is warranted. Currently, most guidelines don't address testing postmenopausal women with breast cancer in the absence of other risk factors.

The finding is the first to suggest that postmenopausal women who have been newly diagnosed with breast cancer but who don't have any hereditary risk factors, such as close family members diagnosed with breast cancer before age 50, may still benefit from genetic testing for inherited cancer-associated mutations.

Identifying women with inherited cancer-associated mutations, particularly in the BRCA1 and BRCA2 genes, is important because some of the mutations also substantially increase the risk of other cancers, including ovarian cancer. Because these mutations are passed through families, knowing that a woman carries one of these mutations may encourage her healthy relatives to discuss their own risk factors with their doctors.

"There's been a lot of controversy in the field as to whether every woman with breast cancer should receive genetic testing," said Allison Kurian, MD, MSc, associate professor of medicine and of epidemiology and population health at Stanford, "in part because we didn't know how prevalent cancer-associated mutations are in this largest subgroup of newly diagnosed people -- that is, women who develop breast cancer after menopause without the presence of any known hereditary risk factors."

Kurian is the lead author of the study, which will be published March 10 in JAMA. Marcia Stefanick, PhD, professor of medicine and of obstetrics and gynecology at Stanford, is the senior author of the study.

Cancer-associated variants

Unlike mutations that accumulate over time, specifically in cancer cells, germline mutations are inherited and are found in every cell of the body.

Physicians primarily consider a woman's age at diagnosis and her family's cancer history when determining whether to recommend genetic testing. A woman diagnosed with breast cancer before age 50, for example, or a healthy woman with several close family members who have had breast or ovarian cancer, is more likely to be referred for genetic testing than a postmenopausal woman with breast cancer and no other risk factors.

For the study, Kurian and Stefanick and their colleagues set out to compare the prevalence of cancer-associated mutations in 10 breast-cancer risk genes, including BRCA1 and BRCA2. They compared 2,195 women who were diagnosed with breast cancer at an average age of 73 with 2,322 women without breast cancer.

The data for the study came from the Women's Health Initiative, which enrolled more than 160,000 women ages 50 to 79 throughout the United States between 1993 and 1998 to conduct the largest study of postmenopausal health in the country. Stefanick served as chair of the initiative's steering committee for most of the project.

The researchers found that about 3.5% of the women with breast cancer had a cancer-associated mutation in at least one of the 10 genes, compared with about 1.3% of women without cancer. When they narrowed their focus to just the BRCA1 and BRCA2 genes in women diagnosed before age 65, they found that about 2.2% of women with breast cancers had cancer-associated mutations, versus about 1.1% of those without breast cancer.

Among the women with BRCA1 or BRCA2 mutations, only about 31% of those with cancer and 20% of those without cancer would likely have been recommended for testing under the current guidelines of the National Comprehensive Cancer Network.

"Now we know that the prevalence of cancer-associated BRCA1 and BRCA2 mutations in women diagnosed with breast cancer after menopause rivals that in women of Ashkenazi Jewish descent -- a population that is currently encouraged to discuss genetic testing with their doctors," Kurian said. "We finally have a read on the likely benefit of testing this most common subgroup of breast cancer patients."

Credit: 
Stanford Medicine