
Research team tackles superbug infections with novel therapy

image: This is Daniel Hassett, PhD, shown in a University of Cincinnati College of Medicine laboratory.

Image: 
Colleen Kelley/University of Cincinnati

There may be a solution on the horizon to combating superbug infections resistant to antibiotics. The tenacious bacteria and fungi sicken more than 2.8 million people and lead to more than 35,000 deaths in the United States each year.

An international team of researchers has found that AB569, a combination of acidified nitrite and ethylenediaminetetraacetic acid (EDTA) developed by a University of Cincinnati scientist, kills the bacterium Pseudomonas aeruginosa, one of the most serious multidrug-resistant and virulent pathogens.

Their findings are available online in the journal Proceedings of the National Academy of Sciences of the United States of America.

"AB569 kills these pathogenic bacteria by targeting their DNA, RNA and protein biosynthesis as well as energy and iron metabolism at concentrations that do not harm human cells," explains Daniel Hassett, a professor in the UC Department of Molecular Genetics, Biochemistry and Microbiology. "These were tested in laboratory mice with humanized cells. Our data indicate that AB569 is a safe and effective means that could be applied to eradicate these superbugs."

In the study, Pseudomonas aeruginosa was applied to the lungs of laboratory mice for five days. In humans, this pathogen causes pulmonary infections in patients with cystic fibrosis and chronic obstructive pulmonary disease, as well as many other opportunistic infections. Pseudomonas aeruginosa is one of the six ESKAPE pathogens, a group named by acronym and considered among the most drug-resistant and deadly to humans.

The ESKAPE pathogens include Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp. These pathogens typically cause hospital-acquired infections, leading to illnesses such as pneumonia and MRSA infections. Antibiotic-resistant urinary tract infections are also among the illnesses caused by these organisms.

"These superbugs have an ingenious mechanism of being able to resist traditional antibiotic therapies by a vast number of acquired strategies," explains Hassett, also the paper's senior author. "Antibiotics affect specific processes in the bacteria, but not all of them. AB569 affects multiple processes at once leaving the exposed bacteria simply overwhelmed."

AB569 was patented in March 2018 in the United States by Hassett and initially was seen as a potential treatment for many antibiotic-resistant organisms that cause pulmonary infections in patients with cystic fibrosis and chronic obstructive pulmonary disease and many other opportunistic infections.

In addition to tackling chronic obstructive pulmonary disease and cystic fibrosis, AB569 may also be effective in addressing infections related to severe burns, urinary tract disorders, endocarditis and diabetes, said Hassett.

"Multidrug resistance, a trait of superbug bacteria, is one of the greatest threats to global public health," says Hassett. "It is usually caused by excessive drug usage or prescription, inappropriate and often compliance-related use of antimicrobials, overuse in the chicken, beef and pork industries and/or substandard pharmaceuticals."

Hassett says the Centers for Disease Control and Prevention considers antibiotic resistance to be among the most serious threats to human health because pathogens rapidly evolve new means to combat drug therapy leaving those who are susceptible at risk.

"Superbugs typically are found in areas of the world where there is a high population density, thereby facilitating rapid spread of such organisms. People who have traveled to areas of the world with high rates of antibiotic resistant bacteria such as South Asia and the Middle East are more likely to carry superbugs," said Hassett. "This is why patients are often quarantined if they test positive for such organisms while epidemiologists are on their toes tracking a pathogen's spread."

He said these superbugs may be naturally occurring, but individuals can take daily steps to lessen exposure: avoiding travel to destinations that report a high incidence of infection, washing hands religiously and, if feeling ill after visiting a place with an outbreak, notifying a physician or appropriate health authority right away.

Credit: 
University of Cincinnati

Researchers show what drives a novel, ordered assembly of alternating peptides

image: It's well established that peptides can self-assemble into nanofibers composed of beta-sheets. However, that self-assembly has previously involved identical copies of the same molecule -- molecule A connects to another molecule A. New work proves not only that alternating peptides can create these beta sheets -- in an ABAB pattern -- but why it happens.

This image is an adaptation of computer simulation of the CATCH(+) and CATCH(-) mixture of peptides.

Image: 
Greg Hudalla

A team of researchers has verified that it is possible to engineer two-layered nanofibers consisting of an ordered row of alternating peptides, and has also determined what makes these peptides automatically assemble into this pattern. The fundamental discovery raises the possibility of creating tailored "ABAB" peptide nanofibers with a variety of biomedical applications.

Peptides are small proteins, made up of short strands of amino acids. It's well established that peptides can self-assemble into nanofibers composed of beta-sheets. However, that self-assembly normally involves identical copies of the same molecule - molecule A connects to another molecule A.

The new work proves not only that alternating peptides can create these beta sheets - in an ABAB pattern - but why it happens.

"Our team drew on computational simulations, nuclear magnetic resonance (NMR) observations and experimental approaches for this work, and we now know what drives the creation of these alternating peptide structures," says Carol Hall, corresponding author of a paper on the work and Camille Dreyfus Distinguished University Professor of Chemical and Biomolecular Engineering at North Carolina State University.

"This is important because once you understand why peptides in these ABAB structures are behaving in this way, you can develop more of them," Hall says.

For this study, researchers worked with a pair of peptides called CATCH(+) and CATCH(-). When introduced into a solution, the peptides array themselves in a row, alternating the two peptides. The peptides also assemble in two beta-sheet layers per nanofiber.

The study itself involved three components. Greg Hudalla's lab at the University of Florida created the peptides, facilitated the co-assembly of the peptide beta sheets and performed experimental work that provided an overview of the system and its behavior. Hudalla co-authored the paper and is an associate professor in UF's J. Crayton Pruitt Family Department of Biomedical Engineering.

Meanwhile, Anant Paravastu's team at Georgia Tech used solid-state NMR to measure the precise relative positions of atoms and molecules in the ABAB peptide beta-sheets. Paravastu co-authored the paper and is an associate professor in Georgia Tech's School of Chemical and Biomolecular Engineering.

Lastly, Hall's team at NC State conducted computational simulations to determine what was driving the behavior seen by the researchers at UF and Georgia Tech.

There appear to be multiple forces at play in guiding the assembly of the alternating peptide structures. One of the two types of peptide is negatively charged, while the other is positively charged. Because opposite charges attract while peptides of the same charge repel each other, the peptides fall into an alternating order along the strand.

Another aspect of the system's organization, the stacking, is driven by the types of amino acids in each peptide. Specifically, some of the amino acids in each peptide are hydrophobic, while others are hydrophilic. The hydrophobic amino acids, in effect, want to stick to each other, which results in the two-layer "stacking" effect seen in the beta-sheets.

"It is important that different forces balance to produce the target structure," Hall says. "If any one of the molecular forces is too strong or too weak, the molecules may never dissolve in water or may fail to recognize their intended partners. Rather than an ordered nanostructure, the molecules could form a disorganized mess, or no structure at all."

"We're interested in this because it gives us a glimpse into the fundamental nature of how these systems can work," Hudalla says. "We're not aware of any similar co-assembling systems in nature that resemble the system we've made here.

"Co-assembling peptide systems hold promise for biomedical applications because we can attach proteins to the A or B peptides that have some specific utility. For example, we could create a peptide scaffold that holds a regular array of enzymes, and those enzymes could serve as catalysts for influencing body chemistry in localized areas."

"The structures we're making here are impressive, but they are still not as precise and complex as biological structures that we see in nature," Paravastu says. "By the same token, we're not aware of natural structures that contain this alternating peptide structure. This is a good start. We are excited to see where it goes."

"This work would not have been possible without drawing on the diverse areas of expertise in this research group," Hall says.

Credit: 
North Carolina State University

New artificial intelligence algorithm better predicts corn yield

image: New research from the University of Illinois demonstrates the promise of convolutional neural network algorithm for crop yield prediction.

Image: 
L. Brian Stauffer, University of Illinois

URBANA, Ill. - With some reports predicting the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need to develop sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to efficiently and accurately process precision ag data.

"We're trying to change how people run agronomic research. Instead of establishing a small field plot, running statistics, and publishing the means, what we're trying to do involves the farmer far more directly. We are running experiments with farmers' machinery in their own fields. We can detect site-specific responses to different inputs. And we can see whether there's a response in different parts of the field," says Nicolas Martin, assistant professor in the Department of Crop Sciences at Illinois and co-author of the study.

He adds, "We developed methodology using deep learning to generate yield predictions. It incorporates information from different topographic variables, soil electroconductivity, as well as nitrogen and seed rate treatments we applied throughout nine Midwestern corn fields."

Martin and his team worked with 2017 and 2018 data from the Data Intensive Farm Management project, in which seeds and nitrogen fertilizer were applied at varying rates across 226 fields in the Midwest, Brazil, Argentina, and South Africa. On-ground measurements were paired with high-resolution satellite images from PlanetLab to predict yield.

Fields were digitally broken down into 5-meter (approximately 16-foot) squares. Data on soil, elevation, nitrogen application rate, and seed rate were fed into the computer for each square, with the goal of learning how the factors interact to predict yield in that square.

The researchers approached their analysis with a type of machine learning or artificial intelligence known as a convolutional neural network (CNN). Some types of machine learning start with patterns and ask the computer to fit new bits of data into those existing patterns. Convolutional neural networks are blind to existing patterns. Instead, they take bits of data and learn the patterns that organize them, similar to the way humans organize new information through neural networks in the brain. The CNN process, which predicted yield with high accuracy, was also compared to other machine learning algorithms and traditional statistical techniques.
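The gridded setup described above lends itself to a compact illustration. The sketch below is a hypothetical example, not the study's actual model: it shows the core convolution operation a CNN layer applies to per-square field data, with all field values invented for demonstration.

```python
# Illustrative sketch (not the study's model): one convolutional filter
# applied to a grid of per-square field data, the basic CNN operation.

def conv2d(grid, kernel):
    """Valid 2D convolution over a list-of-lists grid: each output cell
    aggregates a neighborhood of input cells, which is how a CNN layer
    captures interactions between nearby 5-meter squares."""
    kh, kw = len(kernel), len(kernel[0])
    gh, gw = len(grid), len(grid[0])
    return [[sum(grid[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(gw - kw + 1)]
            for i in range(gh - kh + 1)]

# Hypothetical nitrogen application rates (kg/ha) on a tiny 4x4 patch
nitrogen = [
    [120.0, 120.0, 150.0, 150.0],
    [120.0, 120.0, 150.0, 150.0],
    [180.0, 180.0, 150.0, 150.0],
    [180.0, 180.0, 150.0, 150.0],
]

# A 3x3 averaging filter stands in for one learned "pattern detector";
# training would adjust these weights to predict yield instead
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
feature_map = conv2d(nitrogen, kernel)
print(feature_map)  # a 2x2 map of smoothed neighborhood values
```

In a real CNN, many such filters are stacked and their weights learned from data, so the network discovers for itself which spatial combinations of soil, elevation, and input rates predict yield.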

"We don't really know what is causing differences in yield responses to inputs across a field. Sometimes people have an idea that a certain spot should respond really strongly to nitrogen and it doesn't, or vice versa. The CNN can pick up on hidden patterns that may be causing a response," Martin says. "And when we compared several methods, we found out that the CNN was working very well to explain yield variation."

Using artificial intelligence to untangle data from precision agriculture is still relatively new, but Martin says his experiment merely grazes the tip of the iceberg in terms of CNN's potential applications. "Eventually, we could use it to come up with optimum recommendations for a given combination of inputs and site constraints."

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Finding new clues to brain cancer treatment

image: An example of the MRI scans showing long-term and short-term survival indications.

Image: 
Case Western Reserve University

CLEVELAND--Glioblastoma is an aggressive, killer disease. While this fast-moving brain tumor accounts for only about 15% of all brain cancers, its victims rarely survive more than a few years after diagnosis.

But research scientists and doctors from the Case Western Reserve University School of Medicine, Case School of Engineering and Cleveland Clinic have blended two very different types of analysis to better understand and combat the brain cancer.

The researchers used the tools of Artificial Intelligence (AI)--in this case, computer image analysis of the initial MRI scans taken of brain cancer patients--and compared that image analysis with genomic research to analyze the cancer.

The result: A new and more accurate way not only to determine the relative life expectancy of glioblastoma victims, but also to identify who could be candidates for experimental clinical drug trials, said Pallavi Tiwari, an assistant professor of biomedical engineering at Case Western Reserve with dual appointments in the School of Medicine and Case School of Engineering.

The study was led by Tiwari, along with Niha Beig, a PhD student in Tiwari's lab. Their research was published this month in Clinical Cancer Research, a journal of the American Association for Cancer Research.

Unique study of MRI images, gene expression

The AI model used by the researchers leveraged features from the region adjacent to the tumor, as well as inside the tumor, to identify which patients had a poor prognosis, Tiwari said. Then, they used gene-expression information to shed light on which biological pathways were associated with those images.

"Our results demonstrated that image features associated with poor prognosis were also linked with pathways that contribute to chemo-resistance in glioblastoma. This could have huge implications in designing personalized treatment decisions in glioblastoma patients, down the road," she said.

"While we're just at the beginning, this is a big step, and someday it could mean that if you have glioblastoma, you could know whether you'll respond to chemotherapy well or to immunotherapy, based on a patient's image and gene profiles," said Manmeet Ahluwalia, MD, Miller Family Endowed Chair of NeuroOncology at the Burkhardt Brain Tumor and Neuro-Oncology Center at Cleveland Clinic, and a co-author of the study.

Beig said the researchers were able to compare the MRI scans of patients' tumors with the corresponding genomic information about that same patient, drawn from a National Institutes of Health database.

"That's why this study is unique," she said. "Most researchers look at one or the other, but we looked at both the MRI features and the gene expression in conjunction."

"We can tell you who is at a better risk of survival," Beig said. "What clinicians want to do is give their patient an idea of quality of life, and since roughly 10% of these patients go on to live more than three years, that's important information."

Anant Madabhushi, the F. Alex Nason Professor II of Biomedical Engineering at Case Western Reserve and a co-author on this study, said the research is also important because it "connects the macro features of the tumor to the molecular." 

Madabhushi said a common criticism of radiomics--drawing conclusions about tumors from the computer analysis of the images alone--is that the process is opaque and not easily interpretable.

"This is the corroborating evidence," he said. "This shows that molecular changes in the tumor are manifesting as unique representations on the scan."

Credit: 
Case Western Reserve University

10,000 times faster calculations of many-body quantum dynamics possible

image: Computing time required for the new G1-G2 method (solid line) as a function of the process duration, compared to the traditional method (logarithmic scale).

Image: 
Niclas Schlünzen, AG Bonitz

How an electron behaves in an atom, or how it moves in a solid, can be predicted precisely with the equations of quantum mechanics. These theoretical calculations agree fully with the results obtained from experiments. But complex quantum systems, which contain many electrons or elementary particles - such as molecules, solids or atomic nuclei - can currently not be described exactly, even with the most powerful computers available today. The underlying mathematical equations are too complex, and the computational requirements are too large. A team led by Professor Michael Bonitz from the Institute of Theoretical Physics and Astrophysics at Kiel University (CAU) has now succeeded in developing a simulation method, which enables quantum mechanical calculations up to around 10,000 times faster than previously possible. They have published their findings in the current issue of the renowned scientific journal Physical Review Letters.

Even with extremely powerful computers, quantum simulations take too long

The new procedure of the Kiel researchers is based on one of the currently most powerful and versatile simulation techniques for quantum mechanical many-body systems. It uses the method of so-called nonequilibrium Green functions: this allows movements and complex interactions of electrons to be described with very high accuracy, even for an extended period. However, to date this method is very computer-intensive: in order to predict the development of the quantum system over a ten times longer period, a computer requires a thousand times more processing time.

With the mathematical trick of introducing an additional auxiliary variable, the physicists at the CAU have now succeeded in reformulating the primary equations of nonequilibrium Green functions such that the calculation time only increases linearly with the process duration. Thus, a ten times longer prediction period only requires ten times more computing time. In comparison with the previously-used methods, the physicists achieved an acceleration factor of approximately 10,000. This factor increases further for longer processes. Since the new approach combines two Green functions for the first time, it is called "G1-G2 method".
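The scaling argument above can be made concrete with a back-of-envelope sketch. The constants below are arbitrary placeholders; only the growth rates (cubic for the conventional method, linear for G1-G2) come from the article.

```python
# Back-of-envelope comparison of the two scaling behaviours described above
# (prefactor c is an arbitrary placeholder, not a measured constant).

def traditional_cost(n_steps, c=1.0):
    # Conventional nonequilibrium Green functions: cost grows cubically,
    # so a 10x longer simulated period costs ~1000x more computing time
    return c * n_steps ** 3

def g1g2_cost(n_steps, c=1.0):
    # G1-G2 method: cost grows only linearly with the simulated period
    return c * n_steps

for n in (10, 100, 1000):
    speedup = traditional_cost(n) / g1g2_cost(n)
    print(f"{n:5d} steps -> speedup factor {speedup:,.0f}")
```

The speedup grows as the square of the number of time steps, consistent with the article's point that the acceleration factor (around 10,000 in the reported runs) increases further for longer processes.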

Temporal development of material properties predictable for the first time

The new calculation model of the Kiel research team not only saves expensive computing time, but also allows for simulations, which have previously been completely impossible. "We were surprised ourselves that this dramatic acceleration can also be demonstrated in practical applications," explained Bonitz. For example, it is now possible to predict how certain properties and effects in materials such as semiconductors develop over an extended period of time. Bonitz is convinced: "The new simulation method is applicable in numerous areas of quantum many-body theory, and will enable qualitatively new predictions, such as about the behaviour of atoms, molecules, dense plasmas and solids after excitation by intense laser radiation."

Credit: 
Kiel University

Cryptographic 'tag of everything' could protect the supply chain

image: MIT researchers' millimeter-sized ID chip integrates a cryptographic processor, an antenna array that transmits data in the high terahertz range, and photovoltaic diodes for power.

Image: 
Courtesy of Ruonan Han, et al., edited by MIT News

To combat supply chain counterfeiting, which can cost companies billions of dollars annually, MIT researchers have invented a cryptographic ID tag that's small enough to fit on virtually any product and verify its authenticity.

A 2018 report from the Organization for Economic Co-operation and Development estimates about $2 trillion worth of counterfeit goods will be sold worldwide in 2020. That's bad news for consumers and companies that order parts from different sources worldwide to build products.

Counterfeiters tend to use complex routes that include many checkpoints, making it challenging to verify their origins and authenticity. Consequently, companies can end up with imitation parts. Wireless ID tags are becoming increasingly popular for authenticating assets as they change hands at each checkpoint. But these tags come with various size, cost, energy, and security tradeoffs that limit their potential.

Popular radio-frequency identification (RFID) tags, for instance, are too large to fit on tiny objects such as medical and industrial components, automotive parts, or silicon chips. RFID tags also lack robust security measures. Some tags are built with encryption schemes to protect against cloning and ward off hackers, but they're large and power hungry. Shrinking the tags means giving up both the antenna package -- which enables radio-frequency communication -- and the ability to run strong encryption.

In a paper presented yesterday at the IEEE International Solid-State Circuits Conference (ISSCC), the researchers describe an ID chip that navigates all those tradeoffs. It's millimeter-sized and runs on relatively low levels of power supplied by photovoltaic diodes. It also transmits data at far ranges, using a power-free "backscatter" technique that operates at a frequency hundreds of times higher than RFIDs. Algorithm optimization techniques also enable the chip to run a popular cryptography scheme that guarantees secure communications using extremely low energy.

"We call it the 'tag of everything.' And everything should mean everything," says co-author Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science and head of the Terahertz Integrated Electronics Group in the Microsystems Technology Laboratories (MTL). "If I want to track the logistics of, say, a single bolt or tooth implant or silicon chip, current RFID tags don't enable that. We built a low-cost, tiny chip without packaging, batteries, or other external components, that stores and transmits sensitive data."

Joining Han on the paper are: graduate students Mohamed I. Ibrahim, Muhammad Ibrahim Wasiq Khan, and Chiraag S. Juvekar; former postdoc associate Wanyeong Jung; former postdoc Rabia Tugce Yazicigil; and Anantha P. Chandrakasan, who is the dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

System integration

The work began as a means of creating better RFID tags. The team wanted to do away with packaging, which makes the tags bulky and increases manufacturing cost. They also wanted communication in the high terahertz band between microwave and infrared radiation -- roughly 100 gigahertz to 10 terahertz -- which enables chip integration of an antenna array and wireless communications at greater reader distances. Finally, they wanted cryptographic protocols because RFID tags can be scanned by essentially any reader and transmit their data indiscriminately.

But including all those functions would normally require building a fairly large chip. Instead, the researchers came up with "a pretty big system integration," Ibrahim says, that enabled putting everything on a monolithic -- meaning, not layered -- silicon chip that was only about 1.6 square millimeters.

One innovation is an array of small antennas that transmit data back and forth via backscattering between the tag and reader. Backscatter, used commonly in RFID technologies, happens when a tag reflects an input signal back to a reader with slight modulations that correspond to data transmitted. In the researchers' system, the antennas use some signal splitting and mixing techniques to backscatter signals in the terahertz range. Those signals first connect with the reader and then send data for encryption.

Implemented into the antenna array is a "beam steering" function, where the antennas focus signals toward a reader, making them more efficient, increasing signal strength and range, and reducing interference. This is the first demonstration of beam steering by a backscattering tag, according to the researchers.

Tiny holes in the antennas allow light from the reader to pass through to photodiodes underneath that convert the light into about 1 volt of electricity. That powers up the chip's processor, which runs the chip's "elliptic-curve-cryptography" (ECC) scheme. ECC uses a combination of private keys (known only to a user) and public keys (disseminated widely) to keep communications private. In the researchers' system, the tag uses a private key and a reader's public key to identify itself only to valid readers. That means any eavesdropper who doesn't possess the reader's private key should not be able to identify which tag is part of the protocol by monitoring just the wireless link.
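The private/public-key exchange described here can be sketched with a toy elliptic curve. This is purely illustrative: the curve, prime, and keys below are tiny textbook-style values chosen for readability, not the parameters or exact protocol used on the MIT chip; real deployments use standardized curves with roughly 256-bit primes.

```python
# Toy elliptic-curve key agreement on y^2 = x^3 + 2x + 2 over GF(17).
# All parameters are illustrative only -- far too small for real security.
P = 17          # field prime (toy-sized)
A = 2           # curve coefficient a
G = (5, 1)      # generator point on the curve

def point_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # inverse points
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Compute k * point by double-and-add."""
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

# Each party keeps a private scalar and publishes its public point
tag_priv, reader_priv = 3, 7                  # hypothetical private keys
tag_pub = scalar_mult(tag_priv, G)
reader_pub = scalar_mult(reader_priv, G)

# Both sides derive the same shared point without revealing private keys,
# so only a reader holding the right private key can engage the tag
shared_tag = scalar_mult(tag_priv, reader_pub)
shared_reader = scalar_mult(reader_priv, tag_pub)
assert shared_tag == shared_reader
```

The design point is the one Yazicigil describes: the expensive operation is `scalar_mult`, and it is the cost of doing this kind of arithmetic on full-size curves within a millimeter-scale power budget that the team's optimizations address.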

Optimizing the cryptographic code and hardware lets the scheme run on an energy-efficient and small processor, Yazicigil says. "It's always a tradeoff," she says. "If you tolerate a higher-power budget and larger size, you can include cryptography. But the challenge is having security in such a small tag with a low-power budget."

Pushing the limits

Currently, the signal range sits around 5 centimeters, which is considered a far-field range -- and allows for convenient use of a portable tag scanner. Next, the researchers hope to "push the limits" of the range even further, Ibrahim says. Eventually, they'd like many of the tags to ping one reader positioned somewhere far away in, say, a receiving room at a supply chain checkpoint. Many assets could then be verified rapidly.

"We think we can have a reader as a central hub that doesn't have to come close to the tag, and all these chips can beam steer their signals to talk to that one reader," Ibrahim says.

The researchers also hope to fully power the chip through the terahertz signals themselves, eliminating any need for photodiodes.

The chips are so small, easy to make, and inexpensive that they can also be embedded into larger silicon computer chips, which are especially popular targets for counterfeiting.

"The U.S. semiconductor industry suffered $7 billion to $10 billion in losses annually because of counterfeit chips," Wasiq Khan says. "Our chip can be seamlessly integrated into other electronic chips for security purposes, so it could have huge impact on industry. Our chips cost a few cents each, but the technology is priceless," he quipped.

Credit: 
Massachusetts Institute of Technology

Watching TV helps birds make better food choices

By watching videos of each other eating, blue tits and great tits can learn to avoid foods that taste disgusting and are potentially toxic, a new study has found. Seeing the 'disgust response' in others helps them recognise distasteful prey by their conspicuous markings without having to taste them, and this can potentially increase both the birds' and their prey's survival rate.

The study, published in the Journal of Animal Ecology, showed that blue tits (Cyanistes caeruleus) learned best by watching their own species, whereas great tits (Parus major) learned just as well from great tits and blue tits. In addition to learning directly from trial and error, birds can decrease the likelihood of bad experiences - and potential poisoning - by watching others. Such social transmission of information about novel prey could have significant effects on prey evolution, and help explain why different bird species flock together.

"Blue tits and great tits forage together and have a similar diet, but they may differ in their hesitation to try novel food. By watching others, they can learn quickly and safely which prey are best to eat. This can reduce the time and energy they invest in trying different prey, and also help them avoid the ill effects of eating toxic prey," said Liisa Hämäläinen, formerly a PhD student in the University of Cambridge's Department of Zoology (now at Macquarie University, Sydney) and first author of the report.

This is the first study to show that blue tits are just as good as great tits at learning by observing others. Previously, scientists thought great tits were better learners, but earlier studies had only looked at learning about palatable foods. This new work shows that using social information to avoid bad outcomes is especially important in nature.

Many insect species, such as ladybirds, firebugs and tiger moths have developed conspicuous markings and bitter-tasting chemical defences to deter predators. But before birds learn to associate the markings with a disgusting taste, these species are at high risk of being eaten because they stand out.

"Conspicuous warning colours are an effective anti-predator defence for insects, but only after predators have learnt to associate the warning signal with a disgusting taste," said Hämäläinen. "Before that, these insects are an easy target for naive, uneducated predators."

Blue tits and great tits forage together in the wild, so have many opportunities to learn from each other. If prey avoidance behaviour spreads quickly through predator populations, this could benefit the ongoing survival of the prey species significantly, and help drive its evolution.

The researchers showed each bird a video of another bird's response as it ate a disgusting prey item. The TV bird's disgust response to unpalatable food - including vigorous beak wiping and head shaking - provided information for the watching bird. The use of video allowed complete control of the information each bird saw.

The 'prey' shown on TV consisted of small pieces of almond flakes glued inside a white paper packet. In some of the packets, the almond flakes had been soaked in a bitter-tasting solution. Two black symbols printed on the outsides of the packets indicated palatability: tasty 'prey' had a cross symbol that blended into the background, and disgusting 'prey' had a conspicuous square symbol.

The TV-watching birds were then presented with the novel 'prey' packets, either tasty or disgusting, to see if they had learned from the birds on the TV. Both blue tits and great tits ate fewer of the disgusting 'prey' packets after watching the bird on TV show a disgust response to those packets.

Birds, and all other predators, have to work out whether a potential food is worth eating by weighing benefits, such as nutrient content, against costs, such as the level of toxic defence chemicals. Watching others can influence their food preferences and help them learn to avoid unpalatable foods.

"In our previous work using great tits as a 'model predator', we found that if one bird sees another being repulsed by a new type of prey, then both birds learn to avoid it in the future. By extending the research we now see that different bird species can learn from each other too," said Dr Rose Thorogood, previously at the University of Cambridge's Department of Zoology and now at the University of Helsinki's HiLIFE Institute of Life Science in Finland, who led the research. "This increases the potential audience that can learn by watching others, and helps to drive the evolution of the prey species."

Credit: 
University of Cambridge

New research takes p*** out of incontinence

Millions of people might eventually be spared the embarrassment and extreme isolation caused by wetting themselves, thanks to new research.

One in every five people has a lower urinary tract disorder called overactive bladder which, for some, means not being able to hold in urine, needing to go to the toilet often, or waking in the night to empty their bladder.

Some wear sanitary towels or disposable underwear, while others worry that even with absorbent underwear, they'll smell of urine, so they choose instead to stay at home.

Now, scientists at the University of Portsmouth have identified chemicals in urine that are specific to overactive bladder. The next step is to develop a gadget similar to a pregnancy test, to see if these chemical markers are present. Such a device is 12-24 months from clinical trials, but the early signs are encouraging.

Dr John Young and Dr Sepinoud Firouzmand, both in the School of Pharmacy and Biomedical Sciences at Portsmouth, published their research in Nature's Scientific Reports.

Dr Young, who led the research, said: "The first step has been to identify chemicals in urine that are specific to overactive bladder. The next step is to develop a gadget for use in GP surgeries, pharmacies and nursing or care homes which is simple to use, accurate and doesn't need to be sent to a laboratory for processing.

"If successful, it would save millions of patients from painful procedures and long waits for a diagnosis."

It would also save healthcare providers, including the National Health Service (NHS), millions of pounds.

Dr Young said: "This is the first step in transforming the lives of millions of people who suffer in silence, too embarrassed to go out or even to speak about their condition.

"It is not too strong to say this could be a game changer."

If clinical trials bear out the development, it would allow treatment for the condition to begin much earlier.

Urinary disorders affect 20 percent of the population as a whole. By the age of 50, one in three people will have a urinary disorder.

Diagnosing an overactive bladder - when a patient needs to urinate very often, and sometimes wets their pants - is, at best, a cumbersome process. Clinicians need to first rule out a wide range of possible diseases and conditions with the same symptoms, including some cancers, Type 2 diabetes, cystitis, and a urinary tract infection. One of these tests is invasive and painful and costs £1,000 per person. The treatments for each possible disease vary greatly. Some of the tests aren't accurate at giving a clear result, prolonging clinicians' search for a diagnosis.

The conditions can be so complex to diagnose, that patients' health has often worsened by the time the results are finally in.

The dipstick test Dr Young and colleagues are proposing would cost about £10 and take a few minutes to give an accurate result.

Treatment could start immediately, long before the sometimes debilitating symptoms have forced a patient to wear sanitary products, or to stop going out altogether to avoid wetting themselves in public.

"It'd be as simple as a pregnancy test," Dr Young said. "Effective treatment is early treatment. When left untreated, the bladder can change. Additional nerves, blood vessels and cells grow, leaving it smaller than before.

"It isn't good enough that so many millions of people feel forced to isolate themselves in their homes, avoiding all social interaction, with a condition which, if caught early, has treatments which can help."

Credit: 
University of Portsmouth

Carrier-assisted differential detection

image: (a) Receiver scheme for CADD; (b) DSP for OFDM modulated signals using the CADD receiver. Inset (i) is the spectrum of signals fed to the CADD receiver, where S1 and S2 are lower and upper sideband signals, respectively. PD: photodiode; BPD: balanced photodiode; FFT: fast Fourier transform; IFFT: inverse fast Fourier transform.

Image: 
by William Shieh, Chuanbowen Sun, and Honglin Ji

In the past decade, various schemes for field recovery with direct detection have been investigated for short-reach optical communications. Since direct detection generally provides only intensity information, signals have until now been largely restricted to the single sideband (SSB) modulation format in the various proposed intensity-only detection schemes. For such detection schemes, signal-signal beating interference (SSBI) is the dominant limitation. Additionally, for short-reach applications, a high electrical spectral efficiency (SE) is a more pressing requirement than optical SE. The electrical SE is intrinsically limited for the SSB modulation format because one sideband is left empty, so half of the electrical SE is lost. Beyond the SE penalty, SSB signals also suffer from noise folding due to the square-law detection of the photodiode. Consequently, rather than SSB signals, it is highly desirable to achieve direct detection of complex-valued double sideband (DSB) signals with field recovery.
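The square-law constraint and the SSBI penalty can be illustrated with a short numerical sketch. This is a generic model of direct detection with a strong carrier, my own illustration rather than the paper's CADD receiver:

```python
import numpy as np

# Square-law detection: a photodiode measures |C + S|^2
#   = |C|^2 + 2*Re(conj(C) * S) + |S|^2,
# where the middle term carries the data linearly and |S|^2 is the
# unwanted signal-signal beating interference (SSBI).

rng = np.random.default_rng(0)
n = 4096
carrier = 4.0                                                   # strong optical carrier
signal = 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # complex data field

photocurrent = np.abs(carrier + signal) ** 2                    # intensity-only output

linear_term = 2 * np.real(np.conj(carrier) * signal)            # useful (linear) part
ssbi = np.abs(signal) ** 2                                      # 2nd-order distortion

# The decomposition is exact:
assert np.allclose(photocurrent, carrier ** 2 + linear_term + ssbi)

# A large carrier-to-signal power ratio suppresses SSBI relative to the
# linear term, which is why these schemes lean on a strong carrier.
print(f"SSBI-to-signal power ratio: {np.var(ssbi) / np.var(linear_term):.4f}")
```

The trade-off this sketch hints at - boosting the carrier buys SSBI suppression but wastes transmit power - is part of what field-recovery receivers such as CADD aim to sidestep.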

In a new paper published in Light: Science & Applications, engineers from the Department of Electrical and Electronic Engineering at The University of Melbourne, Australia, developed a novel receiver scheme for detecting complex-valued double sideband signals with field recovery, called carrier-assisted differential detection (CADD). Compared with conventional single-sideband (SSB) modulation, the electrical SE is doubled without sacrificing receiver sensitivity. In addition, no precise optical filters are needed for the CADD receiver, indicating the potential of utilizing low-cost uncooled lasers in the CADD receiver scheme.

The gist of the new scheme lies in adopting an optical interferometer and 90-degree optical hybrid in the receiver which is capable of detecting both inphase and quadrature components of the linear optical field. Furthermore, the higher-order nonlinear product is mitigated by a novel iterative cancellation algorithm (See Figure below). These engineers summarize the operational principle of their receiver:

"CADD possesses two advantages over conventional carrier-less differential detection (CDD) for field recovery: (i) CADD doubles the electrical SE compared to CDD, as CADD recovers the linear signal while CDD needs to recover the 2nd-order signal-signal beating term, and (ii) CADD is insensitive to chromatic dispersion, while CDD is not. This is because without a carrier, the field of CDD can reach zero, which makes differential detection impossible under large chromatic dispersion."

"The advantage of CADD over the Kramers-Kronig (KK) receiver in direct detection is analogous to that of homodyne over heterodyne receivers in coherent detection - although CADD requires a larger number of components, it reduces the optoelectronic bandwidth by half. By adopting photonic integration, either in the InP or silicon photonics (SiP) platform, the large component count in CADD will be much mitigated, while the reduced bandwidth of CADD will greatly reduce the overall implementation cost. Compared to coherent homodyne receivers, CADD does not require highly stable and low-linewidth lasers, leading to a more compact and cost-effective solution suitable for short-reach applications such as intra-data center interconnects and ultra-high-speed wireless fronthaul networks," they added.

"The receiver architecture opens a new class of direct detection schemes that are scalable to high baud rates and suitable for photonic integration. It would be very useful for short-reach applications such as intra-data center interconnects and ultra-high-speed wireless fronthaul networks," the engineers forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

MicroRNA exhibit unexpected function in driving cancer

PHILADELPHIA -- Researchers long thought that only one strand of a double-stranded microRNA can silence genes. Though recent evidence has challenged that dogma, it's unclear what the other strand does, and how the two may be involved in cancer. New research from Thomas Jefferson University has revealed that both strands of some microRNA coordinate to act on the same cancer pathways, across multiple cancers, to drive aggressiveness and growth - two hallmarks of poor prognosis for cancer patients.

"This coordination of activity is really surprising," says senior author Christine M. Eischen, PhD, professor and Vice Chair of the department of Cancer Biology at Jefferson and co-leader of the Molecular Biology and Genetics program at the Sidney Kimmel Cancer Center (SKCC) - Jefferson Health. "We know that the strands don't hit the same target sequences. But despite that fact, we see that they are working together."

Researchers have not paid much attention to both strands of microRNAs, in part because the reagents created to probe microRNAs were aimed at only one strand, "so as a field, we weren't looking at the whole picture," says Dr. Eischen.

The work was published in Nature Communications, February 20th, 2020.

First author Ramkrishna Mitra, PhD, a Research Instructor in Dr. Eischen's lab, started by using a computational approach that allowed him to search for both strands of the microRNA. "Our data showed that one strand of many of the pairs was not degraded as previously thought. We saw large numbers of both pairs in many cancers," says Dr. Mitra.

Looking at data from 5,200 cancer patient samples spanning 14 cancer types, the researchers found 26 microRNA pairs in which both strands were either more active and abundant, or less active and abundant, across multiple cancers.

"We then narrowed our search for the biggest effects," says Dr. Eischen. Dr. Mitra developed a new computational biology approach, in part through analysis of the genes essential for cancer cell survival and growth across 290 cancer cell lines, to identify the pathways that both strands of a microRNA pair impacted across multiple cancer types. The researchers also determined which microRNA pairs had a bigger impact on driving or suppressing cancer growth together than either strand alone.
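The first step of such a screen can be sketched in a few lines. This is a hypothetical illustration on simulated data, not Dr. Mitra's actual pipeline: pairs whose two strands are coordinately abundant across tumor samples can be flagged with a rank correlation.

```python
import numpy as np

# Hypothetical sketch: flag microRNA pairs whose 5p and 3p strand abundances
# rise and fall together across tumor samples (simulated data, illustrative only).

rng = np.random.default_rng(1)
n_samples = 200

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# A coordinated pair shares an upstream driver; an uncoordinated pair does not.
driver = rng.normal(size=n_samples)
coordinated_5p = driver + 0.3 * rng.normal(size=n_samples)
coordinated_3p = driver + 0.3 * rng.normal(size=n_samples)
independent_5p = rng.normal(size=n_samples)
independent_3p = rng.normal(size=n_samples)

print(f"coordinated pair rho: {spearman(coordinated_5p, coordinated_3p):.2f}")
print(f"independent pair rho: {spearman(independent_5p, independent_3p):.2f}")
```

A high rank correlation across thousands of samples is what would make a pair a candidate for the kind of joint-function analysis the study describes.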

They found two pairs, miR-30a and miR-145, that fit the bill. "Each pair has different target genes, but the targets hit the same cancer pathways," says Dr. Eischen. "These microRNAs help keep cancers in check - as seen both in patient data and in tumor cell lines. As a result, many cancers, such as kidney, lung, breast, become more aggressive when they lose these microRNAs and this impacts patient survival."

To validate the findings of their computational work, the researchers replicated what they found using an experimental approach. They forced expression of miR-145 and miR-30a in lung cancer cell lines, which reduced the cancer's aggressive traits, specifically its growth and migration.

"The SKCC has a longstanding history of discovery related to small RNA function in cancer, and Dr. Eischen's breakthroughs have significant potential for understanding both tumor development and tumor progression," says Karen Knudsen, PhD, Executive Vice President of Oncology Services at Jefferson Health, and the Enterprise Director of the Sidney Kimmel Cancer Center - Jefferson Health, one of only 71 NCI-designated cancer centers in the US.

Credit: 
Thomas Jefferson University

New graphene-based metasurface capable of independent amplitude and phase control of light

image: This is a schematic image of graphene plasmonic metamolecules capable of independent amplitude and phase control of light.

Image: 
KAIST

Researchers have described a new strategy for designing metamolecules that incorporates two independently controllable subwavelength meta-atoms. This two-parameter control of the metamolecule secures complete control of both the amplitude and phase of light.

A KAIST research team in collaboration with the University of Wisconsin-Madison theoretically suggested a graphene-based active metasurface capable of independent amplitude and phase control of mid-infrared light. This research gives a new insight into modulating the mid-infrared wavefront with high resolution by solving the problem of the independent control of light amplitude and phase, which has remained a long-standing challenge.

Light modulation technology is essential for developing future optical devices such as holography, high-resolution imaging, and optical communication systems. Liquid crystals and a microelectromechanical system (MEMS) have previously been utilized to modulate light. However, both methods suffer from significantly limited driving speeds and unit pixel sizes larger than the diffraction limit, which consequently prevent their integration into photonic systems.

The metasurface platform is considered a strong candidate for the next generation of light modulation technology. Metasurfaces have optical properties that natural materials cannot have, and can overcome the limitations of conventional optical systems, such as forming a high-resolution image beyond the diffraction limit. In particular, the active metasurface is regarded as a technology with a wide range of applications due to its tunable optical characteristics with an electrical signal.

However, previous active metasurfaces have suffered from an inevitable coupling between amplitude control and phase control. This problem stems from the modulation mechanism of conventional metasurfaces, which are designed so that each meta-atom has only one resonance condition; a single-resonance design inherently lacks the degrees of freedom needed to control the amplitude and phase of light independently.

The research team built a meta-unit by combining two independently controllable meta-atoms, dramatically improving the modulation range of active metasurfaces. The proposed metasurface can control the amplitude and phase of mid-infrared light independently, with a resolution beyond the diffraction limit, thus allowing complete control of the optical wavefront.
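Why two knobs beat one can be seen in a toy model. This is my own illustration assuming a simple Lorentzian response per meta-atom, not the paper's graphene plasmonic design: with a single tunable resonance, amplitude and phase are locked together; summing two independently detuned resonances decouples them.

```python
import numpy as np

# Toy model (illustrative assumption): each meta-atom is a damped resonance
# whose center frequency is the electrically tunable knob.

def lorentzian(omega, omega0, gamma=0.05):
    """Complex response of a damped resonance probed at frequency omega."""
    return gamma / (1j * (omega - omega0) + gamma)

omega = 1.0                              # fixed operating frequency
detunings = np.linspace(-0.3, 0.3, 41)   # tuning range of each knob

# One meta-atom: responses trace a 1D curve in the complex plane.
single = np.array([lorentzian(omega, omega + d) for d in detunings])

# Two meta-atoms combined: responses fill a 2D patch of the complex plane.
pair = np.array([0.5 * lorentzian(omega, omega + d1) +
                 0.5 * lorentzian(omega, omega + d2)
                 for d1 in detunings for d2 in detunings])

# At a fixed phase (here ~0 rad), one knob reaches essentially one amplitude,
# while the two-atom meta-unit reaches a wide range of amplitudes.
near_zero_phase = lambda r: np.abs(r[np.abs(np.angle(r)) < 0.05])
print(f"amplitude range, one knob:  {np.ptp(near_zero_phase(single)):.3f}")
print(f"amplitude range, two knobs: {np.ptp(near_zero_phase(pair)):.3f}")
```

The same logic holds at other target phases: the second independent resonance supplies the extra degree of freedom that single-resonance metasurfaces lack.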

The research team theoretically confirmed the performance of the proposed active metasurface and the possibility of wavefront shaping using this design method. Furthermore, they developed an analytical method that can approximate the optical properties of metasurfaces without complex electromagnetic simulations. This analytical platform proposes a more intuitive and comprehensively applicable metasurface design guideline.

The proposed technology is expected to enable accurate wavefront shaping with a much higher spatial resolution than existing wavefront shaping technologies, which will be applied to active optical systems such as mid-infrared holography, high-speed beam steering devices that can be applied for LiDAR, and variable focus infrared lenses.

Professor Min Seok Jang commented, "This study demonstrated independent control of the amplitude and phase of light, which has been a long-standing quest in light modulator technology. The development of optical devices using complex wavefront control is expected to become more active in the future."

PhD candidate Sangjun Han and Dr. Seyoon Kim of the University of Wisconsin-Madison are the co-first authors of the research, which was published and selected as the front cover of the January 28 edition of ACS Nano titled "Complete complex amplitude modulation with electronically tunable graphene plasmonic metamolecules".

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Newly discovered immune cell type may be key to improving pancreatic cancer immunotherapy

Immunotherapy is showing great promise for treating cancer. But so far, this approach has been effective in only about 20% of all cancers. To advance those results, researchers are looking for new ways to mobilize the immune system to destroy tumors.

Most immunotherapy drugs act on one type of immune cells called T cells. Drugs called checkpoint inhibitors "release the brakes" on these cells, spurring them to mount an attack against a tumor. Researchers have learned that checkpoint inhibitors seem to work best in people whose tumors have been invaded by T cells -- sensing some kind of threat from the cancer -- before the treatment is started.

The problem is that most tumors don't have many T cells in them. In order to design an immunotherapy that works on more people, researchers have been looking for additional immune cell types to rally against cancer.

Now, an MSK research team reports finding a promising candidate: a group of immune cells called innate lymphoid cells (ILCs). These cells are present in many different tissues and appear to have mild antitumor effects in their normal resting state. The researchers showed that activating ILCs with drugs mobilizes T cells to shrink pancreatic cancer tumors. This could be an important step, as pancreatic cancers have not responded to checkpoint inhibitor drugs.

"We think this is an important finding both for pancreatic cancer research and cancer immunotherapy overall," says Vinod Balachandran, a surgeon-scientist affiliated with the David M. Rubenstein Center for Pancreatic Cancer Research and a member of the Parker Institute for Cancer Immunotherapy. "We are learning there are multiple ways to use the immune system to fight cancer. We think this is a sign that new immunotherapies are on the horizon."

Dr. Balachandran made the discovery in collaboration with cancer immunologists Taha Merghoub and Jedd Wolchok of the Human Oncology and Pathogenesis Program. The finding is reported today in Nature.

Turning to Innate Immune Cells

ILCs are part of the body's innate immune system, whose cells are programmed to mount an initial defense against infections and other threats and to amplify the immune response by activating T cells. But ILCs were discovered only 10 years ago, so they have not been the focus of immunotherapy efforts. Now, innate immune cells are beginning to draw more interest from the cancer-research community. Dr. Balachandran and colleagues investigated whether -- and how -- these cells play a role in the body's response to cancer.

For the Nature study, the team looked in human pancreatic tumors to see if ILCs were present. They saw that a subtype of these cells called ILC2s were present in larger numbers in tumors compared with normal organs, suggesting they were responding to the tumors. The researchers also found that pancreatic cancer patients with more ILC2s in their tumors lived longer, suggesting ILC2s possibly had an anticancer function.

The team then tested if ILC2s could help control tumors in mice. Removing ILC2s caused pancreatic tumors to grow faster.

"We thought, if these cells have protective tendencies against cancer, maybe we can figure out ways to activate them," Dr. Balachandran says.

Boosting Checkpoint Inhibition

ILC2s have receptors on their surface that control whether they multiply. The researchers found that dosing the ILC2s with a protein called interleukin 33 (IL-33) activated them, and caused both them and T cells to expand, which in turn caused tumors to shrink. IL-33 did not shrink tumors in mice that didn't have ILC2s, proving the ILC2s were the key cells mediating the effects.

The research team then looked for ways to further amp up ILC2 antitumor activity. Checkpoint proteins on the surface of T cells act as brakes to prevent them from attacking the body's own tissues. But this also limits the T cells' antitumor activity. As ILC2s are related to T cells, Dr. Balachandran's team wondered whether checkpoint proteins also acted as brakes on ILC2s. They discovered that when activated by IL-33, ILC2s express an important checkpoint protein on their surface called PD-1. This has interesting immunotherapy implications: PD-1 is one of the most important brakes on T cells, yet PD-1-blocking checkpoint inhibitors have not worked well against pancreatic tumors. This suggested treating mice with IL-33 may make pancreatic tumors sensitive to PD-1-blocking checkpoint inhibitors.

When the researchers gave IL-33 plus a PD-1 inhibitor to the mice, the tumors shrank even more. Activating ILC2s by adding IL-33 appeared to be the key for PD-1 checkpoint inhibitors to work well against the mouse pancreatic tumors.

Dr. Balachandran and his team are currently working on developing a drug that can activate ILC2s in humans as the next step.

"This is a novel treatment that works together with one of the most successful immunotherapies we have today," Dr. Balachandran says. "This could be a way to sensitize cancers that typically would not respond to PD-1 checkpoint inhibitors."

Credit: 
Memorial Sloan Kettering Cancer Center

Black phosphorus tunnel field-effect transistor as an alternative ultra-low power switch?

image: Figure. A: Optical image and band diagram of the heterojunction formed by the thickness variation of black phosphorus 2D material.
B: Schematic of the tunnel field-effect transistor and the thickness-dependent bandgap.
C: Characteristic transfer curve showing steep subthreshold swing and high on-current.

Image: 
KAIST

Researchers have reported a black phosphorus transistor that can be used as an alternative ultra-low power switch. A research team led by Professor Sungjae Cho in the KAIST Department of Physics developed a thickness-controlled black phosphorus tunnel field-effect transistor (TFET) that shows 10 times lower switching power consumption as well as 10,000 times lower standby power consumption than conventional complementary metal-oxide-semiconductor (CMOS) transistors.

The research team said they developed fast and low-power transistors that can replace conventional CMOS transistors. In particular, they solved problems that have degraded TFET operation speed and performance, paving the way to extend Moore's Law.

In the study featured in Nature Nanotechnology last month, Professor Cho's team reported a natural heterojunction TFET with spatially varying layer thickness in black phosphorus, free of interface problems. They achieved record-low average subthreshold swing values over 4-5 decades of current and record-high on-state current, which allows the TFETs to operate as fast as conventional CMOS transistors but with much lower power consumption.

"We successfully developed the first transistor that achieved the essential criteria for fast, low-power switching. Our newly developed TFETs can replace CMOS transistors by solving a major issue regarding the performance degradation of TFETs," Professor Cho said.

The continuous down-scaling of transistors has been the key to the successful development of current information technology. However, with Moore's Law reaching its limits due to the increased power consumption, the development of new alternative transistor designs has emerged as an urgent need.

Reducing both switching and standby power consumption while further scaling transistors requires overcoming the thermionic limit of subthreshold swing, which is defined as the required voltage per ten-fold current increase in the subthreshold region. In order to reduce both the switching and standby power of CMOS circuits, it is critical to reduce the subthreshold swing of the transistors.

However, CMOS transistors face a fundamental subthreshold swing limit of 60 mV/dec, which originates from thermal carrier injection. The International Roadmap for Devices and Systems has already predicted that new device geometries with new materials beyond CMOS will be required to address transistor scaling challenges in the near future. In particular, TFETs have been suggested as a major alternative to CMOS transistors, since the subthreshold swing in TFETs can be reduced substantially below the thermionic limit of 60 mV/dec. TFETs operate via quantum tunneling, which, unlike the thermal injection in CMOS transistors, does not impose this limit on the subthreshold swing.
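The 60 mV/dec figure is not arbitrary; it follows directly from Boltzmann statistics, and the arithmetic (standard device physics, not specific to this paper) fits in a few lines:

```python
import math

# Thermionic limit on subthreshold swing: in a conventional MOSFET the
# subthreshold current rises as exp(qV/kT), so the gate voltage needed for
# a ten-fold current increase is SS = ln(10) * kT / q.

k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

ss = math.log(10) * k * T / q  # volts per decade
print(f"thermionic limit at {T:.0f} K: {ss * 1e3:.1f} mV/dec")
# Roughly 60 mV/dec at room temperature. TFETs can go below this because
# band-to-band tunneling filters out the hot Boltzmann tail of the carrier
# energy distribution rather than injecting carriers over a thermal barrier.
```

Since the limit scales with temperature, the only ways under it at room temperature are mechanisms, like tunneling, that bypass thermal injection altogether.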

In particular, heterojunction TFETs hold significant promise for delivering both low subthreshold swing and high on-state current. High on-current is essential for fast transistor operation, since charging a device to the on state takes longer at lower currents. Contrary to theoretical expectations, previously developed heterojunction TFETs show 100-100,000x lower on-state current (and thus 100-100,000x slower operation) than CMOS transistors due to interface problems in the heterojunction. This low operation speed has impeded the replacement of CMOS transistors with low-power TFETs.

Professor Cho said, "We have demonstrated for the first time, to the best of our knowledge, TFET optimization for both fast and ultra-low-power operations, which is essential to replace CMOS transistors for low-power applications." He said he is very delighted to extend Moore's Law, which may eventually affect almost every aspect of life and society. This study (https://doi.org/10.1038/s41565-019-0623-7) was supported by the National Research Foundation of Korea.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Scientists predict state of matter that can conduct both electricity and energy perfectly

image: From left: Shiva Safaei, David Mazziotti, and LeeAnn Sager discuss their finding that a dual state of matter with both fermion and exciton condensates could exist.

Image: 
University of Chicago

Three scientists from the University of Chicago have run the numbers, and they believe there may be a way to make a material that could conduct both electricity and energy with 100% efficiency--never losing any to heat or friction.

The breakthrough, published Feb. 18 in Physical Review B, suggests a framework for an entirely new type of matter, which could have very useful technological applications in the real world. Though the prediction is based on theory, efforts are underway to test it experimentally.

"We started out trying to answer a really basic question, to see if it was even possible--we thought these two properties might be incompatible in one material," said co-author and research adviser David Mazziotti, a professor of Chemistry and the James Franck Institute and an expert in molecular electronic structure. "But to our surprise, we found the two states actually become entangled at a quantum level, and so reinforce each other."

Since an untold amount of energy is lost every year from power lines, engines and machinery, scientists are eager to find more efficient alternatives. "In many ways, this is the most important question of the 21st century--how to generate and move energy with minimal loss," Mazziotti said.

We've known about superconductors--a kind of material that can conduct electricity forever with nearly zero loss--for more than a century. But it was only in the last few years that scientists managed to make a similar material in the laboratory which can conduct energy with nearly zero loss, called an exciton condensate.

But both superconductors and exciton condensates are tricky materials to make and to keep functioning--partly because scientists don't fully understand how they work and the theory behind them is incomplete. We do know, however, that both involve the action of quantum physics.

UChicago graduate student LeeAnn Sager began to wonder how the two states could be generated in the same material. Mazziotti's group specializes in exploring the properties and structures of materials and chemicals using computation, so she began plugging different combinations into a computer model. "We scanned through many possibilities, and then to our surprise, found a region where both states could exist together," she said.

It appears that in the right configuration, the two states actually become entangled--a quantum phenomenon in which systems become inextricably linked. This challenges the conventional notion that the two states are unrelated, and may open a new field of dual exciton and fermion pair condensates.

Using some advanced mathematics, they showed that thanks to the quantum entanglement, the dual condensates should theoretically exist even at the macroscopic size--that is, visible to the human eye.

"This implies that such condensates may be realizable in novel materials, such as a double layer of superconductors," Sager said.

The scientists are working with experimental groups to see if the prediction can be achieved in real materials.

"Being able to combine superconductivity and exciton condensates would be amazing for lots of applications--electronics, spintronics, quantum computing," said Shiva Safaei, a postdoctoral researcher and the third author on the paper. "Though this is a first step, it looks extremely promising."

Credit: 
University of Chicago

Fossils help identify a lone 'bright spot': a reef in a similar state to coral reefs before human impact

image: Fossil coral close-up

Image: 
Aaron O'Dea

Researchers at the Smithsonian Tropical Research Institute (STRI) discovered a massive, 7,000-year-old fossilized coral reef near the institute's Bocas del Toro Research Station in Panama. Spanning about 50 hectares, it rewards paleontologists with an unusual glimpse of a "pristine" reef that formed before humans arrived.

"All modern reefs in the Caribbean have been impacted in some way by humans," said STRI staff scientist Aaron O'Dea. "We wanted to quantify that impact by comparing reefs that formed before and after human settlement."

Using a large excavator, the team dug 4-meter-deep trenches into the fossil reef and bagged samples of rubble. They dated the reef with high-resolution radiometric dating.

"The fossils are exquisitely preserved," O'Dea said. "We found branching corals in life position with chemically pristine fossil preservation. Now we are classifying everything from snails and clams to sea urchins, sponge spicules and shark dermal denticles."

Archaeological evidence from Bocas del Toro indicates that settlers did not make extensive use of marine resources until around 2,000 years ago. So, the fossilized reef predates human impact by a few thousand years. After comparing fossilized corals with corals from nearby reefs, the team was surprised to find a modern reef that closely resembled the pre-settlement reef. They dubbed this a "bright spot," and asked why this reef is more similar to the prehistoric reef than the others.

"Most of the reefs in Bocas today look nothing like they did 7,000 years ago," said Andrew Altieri, former STRI scientist and now assistant professor at the University of Florida, Gainesville. "That confirmed our expectations given what we know about recent deterioration caused by humans. So we were really surprised when we discovered one modern reef that is indistinguishable in its community composition to the ancient reefs."

When the team cored this "bright spot" reef, they discovered that it had been in this state for centuries. "This suggests resilience," said Mauro Lepore, former STRI post-doctoral fellow. "And that kind of information can be really powerful for conservation."

"This finding begs the question of what's so special about this reef," O'Dea said. The team evaluated current environmental factors such as water quality, hypoxia, temperature, aspect and shape, but none of those explained why this reef is more like the pre-human impact reef. The only clues were that it was the furthest away from human activity and that the staghorn coral, which dominates the reef, had previously been shown to consist of clones resilient to white band disease.

More work is needed to understand why this bright spot persists in the face of human impacts. However, the team propose that fossil records of this kind can aid conservation by establishing which ecosystems have been irrevocably altered and which preserve elements of their natural state. Once identified, these "bright spots" could act as a guide for conserving other ecosystems.

Credit: 
Smithsonian Tropical Research Institute