Tech

Researchers give yeast a boost to make biofuels from discarded plant matter

image: Whitehead Institute Member Gerald Fink standing in front of a field of the grass Miscanthus giganteus, which is another potential source of cellulose that could be converted to ethanol.

Image: 
Photo courtesy of Felix Lam

More corn is grown in the United States than any other crop, but we only use a small part of the plant for food and fuel production; once people have harvested the kernels, the inedible leaves, stalks and cobs are left over. If this plant matter, called corn stover, could be efficiently fermented into ethanol the way corn kernels are, stover could be a large-scale, renewable source of fuel.

"Stover is produced in huge amounts, on the scale of petroleum," said Whitehead Institute Member and Massachusetts Institute of Technology (MIT) biology professor Gerald Fink. "But there are enormous technical challenges to using them cheaply to create biofuels and other important chemicals."

And so, year after year, most of the woody corn material is left in the fields to rot.

Now, a new study from Fink and MIT chemical engineering professor Gregory Stephanopoulos, led by MIT postdoctoral researcher Felix Lam, offers a way to more efficiently harness this underutilized fuel source. By changing the growth medium conditions of baker's yeast (Saccharomyces cerevisiae), the common model yeast, and adding a gene for a toxin-busting enzyme, the team was able to use the yeast to create ethanol and plastics from the woody corn material at nearly the same efficiency as typical ethanol sources such as corn kernels.

Sugarcoating the issue

For years, the biofuels industry has relied on microorganisms such as yeast to convert the sugars glucose, fructose and sucrose in corn kernels to ethanol, which is then mixed in with traditional gasoline to fuel our cars.

Corn stover and other similar materials are full of sugars as well, in the form of a molecule called cellulose. While these sugars can be converted to biofuels too, it's more difficult since the plants hold onto them tightly, binding the cellulose molecules together in chains and wrapping them in fibrous molecules called lignins. Breaking down these tough casings and disassembling the sugar chains results in a chemical mixture that is challenging for traditional fermentation microorganisms to digest.

To help the organisms along, workers in ethanol production plants pretreat high-cellulose material with an acidic solution to break down these complex molecules so yeast can ferment them. A side effect of this treatment, however, is the production of molecules called aldehydes, which are toxic to yeast. Researchers have explored different ways to reduce the toxicity of the aldehydes in the past, but solutions were limited considering that the whole process needs to cost close to nothing. "This is to make ethanol, which is literally something that we burn," Lam said. "It has to be dirt cheap."

Faced with this economic and scientific problem, industries have cut back on creating ethanol from cellulose-rich materials. "These toxins are one of the biggest limitations to producing biofuels at a low cost." said Gregory Stephanopoulos, who is the Willard Henry Dow Professor of Chemical Engineering at MIT.

Lending yeast a helping hand

To tackle the toxin problem, the researchers decided to focus on the aldehydes produced when acid is added to break down tough molecules. "We don't know the exact mechanism by which aldehydes attack microbes, so then the question was, if we don't really know what it attacks, how do we solve the problem?" Lam said. "So we decided to chemically convert these aldehydes into alcohol forms."

The team began looking for genes that specialized in converting aldehydes to alcohols, and landed on a gene called GRE2. They optimized the gene to make it more efficient through a process called directed evolution, and then introduced it into the yeast typically used for ethanol fermentation, Saccharomyces cerevisiae. When the yeast cells with the evolved GRE2 gene encountered aldehydes, they were able to convert them into alcohols by tacking on extra hydrogen atoms.

The resultant high levels of ethanol and other alcohols produced from the cellulose might have posed a problem in the past, but this is where Lam's earlier research came into play. In a 2015 paper from Lam, Stephanopoulos and Fink, the researchers developed a system to make yeast more tolerant to a wide range of alcohols, in order to produce greater volumes of fuel from less yeast. That system involved measuring and adjusting the pH and potassium levels in the yeast's growth media, which chemically stabilized the cell membrane.

By combining this method with their newly modified yeast, "we essentially channeled the aldehyde problem into the alcohol problem, which we had worked on before," Lam said. "We changed and detoxified the aldehydes into a form that we knew how to handle."

When they tested the system, the researchers were able to efficiently make ethanol and even plastic precursors from corn stover, miscanthus and other types of plant matter. "We were able to produce a high volume of ethanol per unit of material using our system," Fink said. "That shows that there's great potential for this to be a cost-effective solution to the chemical and economic issues that arise when creating fuel from cellulose-rich plant materials."

Scaling up

Alternative fuel sources often face challenges when it comes to implementing them on a nationwide scale; electric cars, for example, require a nationwide charging infrastructure in order to be a feasible alternative to gas vehicles.

An essential feature of the researchers' new system is that the infrastructure is already in place; ethanol and other liquid biofuels are compatible with existing gasoline vehicles, and so require little to no change in the automotive fleet or consumer fueling habits. "Right now [the US produces around] 15 billion gallons of ethanol per year, so it's on a massive scale," he said. "That means there are billions of dollars and many decades worth of infrastructure. If you can plug into that, you can get to market much faster."

And corn stover is just one of many sources of high-cellulose material. Other plants, such as wheat straw and miscanthus, also known as silvergrass, can be grown extremely cheaply. "Right now the main source of cellulose in this country is corn stover," Lam said. "But if there's demand for cellulose because you can now make all these petroleum-based chemicals in a sustainable fashion, then hopefully farmers will start planting miscanthus, and all these super dense straws."

In the future, the researchers hope to investigate the potential of modifying yeasts with these anti-toxin genes to create diverse types of biofuels such as diesel that can be used in typical fuel-combusting engines. "If we can [use this system for other fuel types], I think that would go a huge way toward addressing sectors such as ships and heavy machinery that continue to pollute because they have no other electric or non-emitting solution," Lam said.

Credit: 
Whitehead Institute for Biomedical Research

Test distinguishes SARS-CoV-2 from other coronaviruses with 100% accuracy

image: This point-of-care device uses the physics of fluids to draw a few drops of blood and biomedical lubricant through its components to test for COVID-19 antibodies and biomarkers without the need of any electricity.

Image: 
Jake Heggestad

DURHAM, N.C. - Biomedical engineers at Duke University have demonstrated a tablet-sized device that can reliably detect multiple COVID-19 antibodies and biomarkers simultaneously.

Initial results show the test can distinguish between antibodies produced in response to SARS-CoV-2 and four other coronaviruses with 100% accuracy.

The researchers are now working to see if the easy-to-use, energy-independent, point-of-care device can be used to predict the severity of a COVID-19 infection or a person's immunity against variants of the virus.

Having also recently shown the same "D4 assay" platform can detect Ebola infections a day earlier than the gold standard polymerase chain reaction (PCR) test, the researchers say the results show how flexible the technology can be to adapt to other current or future diseases.

The results appear online on June 25 in Science Advances.

"The D4 assay took six years to develop, but when the WHO declared the outbreak a pandemic, we began working to compress all of that work into a few months so we could explore how the test could be used as a public health tool," said Ashutosh Chilkoti, the Alan L. Kaganov Distinguished Professor and Chair of Biomedical Engineering at Duke. "Our test is designed to be both adaptable and truly point-of-care, and this is clearly a scenario when a portable, fast and cost-effective diagnostic would be most useful."

The technology hinges on a polymer brush coating that acts as a sort of non-stick coating to stop anything but the desired biomarkers from attaching to the test slide when wet. The high effectiveness of this non-stick shield makes the D4 assay incredibly sensitive to even low levels of its targets. The approach allows researchers to print different molecular traps on different areas of the slide to catch multiple biomarkers at once.

The current iteration of the platform also features tiny patterned tunnels that use the physics of liquids to draw samples through the channels without needing any electricity. With just a drop of blood and a drop of biomolecular lubricant, the test runs autonomously in a matter of minutes and can be read with a detector roughly the size of a very thick iPad.

"The detector is battery powered and the test doesn't require any power at all, so you can throw the whole thing into a backpack and truly test at the point-of-care with minimal resources," said Jason Liu, a PhD student working in the Chilkoti lab who designed and built the detector.

In the current study, the researchers tested the D4 assay's ability to detect and quantify antibodies produced against three parts of the COVID-19 virus -- a subunit of the spike protein, a binding domain within the spike protein that grabs on to cells, and the nucleocapsid protein that packages the virus's RNA. The test was able to spot the antibodies in all of the 31 patients tested with severe cases of COVID-19 after two weeks. It also reported zero false-positives in 41 samples taken from healthy people before the pandemic started as well as 18 samples taken from individuals infected with four other widely circulating coronaviruses.

With the pandemic on the downswing in the United States and hundreds of other COVID-19 antibody tests in development, the researchers don't believe this particular test is likely to be deployed in large numbers. But they say that the platform's proven accuracy and flexibility make it a prime candidate for developing into other types of tests or for use in future outbreaks.

For example, the platform could potentially be able to test whether or not people have immunity to the various strains of COVID-19 that continue to emerge.

"There's lots of questions from people about whether or not they're protected from new variants of COVID-19, and our test could answer some of those," said Jake Heggestad, a PhD student working in the Chilkoti lab who developed the chip for the test. "We believe that our platform should be able to distinguish between whether people have antibodies that can neutralize emerging variants of concern or if those antibodies aren't going to be protective against new variants."

The researchers are also working to develop the platform into a test for multiple prognostic markers of COVID-19 that together could indicate whether or not a patient is likely to have a severe case of the disease.

"We're platform builders, so we're working to show ways this technology can be easily modified to do different things," said David Kinnamon, a graduate student who developed the liquid handling system for the test. "We're showing this single platform can work as a diagnostic, assess immune response after infection and predict disease outcome, potentially all at the same time. I don't know of many tests that can do that."

"And it can do all of this on a platform that is super user-friendly and transportable," said Heggestad. "It's one thing to do all of this in a centralized facility like Duke, but it's another to be able to do large-scale testing and get good, sensitive results in remote locations around the world."

Credit: 
Duke University

Antacids may improve blood sugar control in people with diabetes

WASHINGTON--Antacids improved blood sugar control in people with diabetes but had no effect on reducing the risk of diabetes in the general population, according to a new meta-analysis published in the Endocrine Society's Journal of Clinical Endocrinology & Metabolism.

Type 2 diabetes is a global public health concern affecting almost 10 percent of people worldwide. Doctors may prescribe diet and lifestyle changes, diabetes medications, or insulin to help people with diabetes better manage their blood sugar, but recent data point to common over-the-counter antacid medicines as another way to improve glucose levels.

"Our research demonstrated that prescribing antacids as an add-on to standard care was superior to standard therapy in decreasing hemoglobin A1c (HbA1c) levels and fasting blood sugar in people with diabetes," said study author Carol Chiung-Hui Peng, M.D., of the University of Maryland Medical Center Midtown Campus in Baltimore, Md.

"For people without diabetes, taking antacids did not significantly alter their risk of developing the disease," said study author, Huei-Kai Huang M.D., of the Hualien Tzu Chi Hospital in Hualien, Taiwan.

The researchers performed a meta-analysis on the effects of proton pump inhibitors (PPIs)--a commonly used type of antacid medication--on blood sugar levels in people with diabetes and on whether these medications could prevent the new onset of diabetes in the general population. The analysis included seven studies (342 participants) for glycemic control and five studies (244,439 participants) for risk of incident diabetes. Based on the results of the seven clinical trials, the researchers found antacids can reduce HbA1c levels by 0.36% in people with diabetes and lower fasting blood sugar by 10 mg/dl. For those without diabetes, the results of the five studies showed that antacids had no effect on reducing the risk of developing diabetes.

"People with diabetes should be aware that these commonly used antacid medications may improve their blood sugar control, and providers could consider this glucose-lowering effect when prescribing these medications to their patients," said study author Kashif Munir, M.D., associate professor in the division of endocrinology, diabetes and nutrition at the University of Maryland School of Medicine in Baltimore, Md.

Credit: 
The Endocrine Society

Curtin research finds 'fool's gold' not so foolish after all

Curtin University research has found tiny amounts of gold can be trapped inside pyrite, commonly known as 'fool's gold', which would make it much more valuable than its name suggests.

This study, published in the journal Geology in collaboration with the University of Western Australia and the China University of Geosciences, provides an in-depth analysis to better understand the mineralogical location of the trapped gold in pyrite, which may lead to more environmentally friendly gold extraction methods.

Lead researcher Dr Denis Fougerouse from Curtin's School of Earth and Planetary Sciences said this new type of "invisible" gold has not previously been recognised and is only observable using a scientific instrument called an atom probe.

"The discovery rate of new gold deposits is in decline worldwide with the quality of ore degrading, parallel to the value of precious metal increasing," Dr Fougerouse said.

"Previously gold extractors have been able to find gold in pyrite either as nanoparticles or as a pyrite-gold alloy, but what we have discovered is that gold can also be hosted in nanoscale crystal defects, representing a new kind of "invisible" gold.

"The more deformed the crystal is, the more gold there is locked up in defects. The gold is hosted in nanoscale defects called dislocations - one hundred thousand times smaller than the width of a human hair - so a special technique called atom probe tomography is needed to observe it."

Dr Fougerouse said the team also explored gold extraction methods and possible ways to obtain the trapped gold with less adverse impacts on the environment.

"Generally, gold is extracted using pressure oxidizing techniques (similar to cooking), but this process is energy hungry. We wanted to look into an eco-friendlier way of extraction," Dr Fougerouse said.

"We looked into an extraction process called selective leaching, using a fluid to selectively dissolve the gold from the pyrite. Not only do the dislocations trap the gold, but they also behave as fluid pathways that enable the gold to be "leached" without affecting the entire pyrite."

Credit: 
Curtin University

A direct look at OLED films leads to some pretty exciton findings

image: University of Tsukuba researchers used time-resolved photoelectron emission microscopy (TR-PEEM) to probe the exciton dynamics of thermally activated delayed fluorescence organic light-emitting diodes (TADF-OLEDs). TADF-OLEDs based on solid-state substrates have significant potential for use in display technology owing to their high efficiency; however, their electron dynamics are not well understood. The TR-PEEM method showed electron accumulation that indicated exciton dissociation. It is hoped that the findings will contribute to advances in OLED displays.

Image: 
University of Tsukuba

Tsukuba, Japan - Organic light-emitting diodes (OLEDs) are widely used in display technology and are also being investigated for lighting applications. A comprehensive understanding of these devices is therefore important if their properties are to be harnessed to their full potential. Researchers from the University of Tsukuba have directly observed the photoexcited electron dynamics in an organic film using time-resolved photoelectron emission microscopy. Their findings are published in Advanced Optical Materials.

OLED displays are popular because they are bright, lightweight, and do not consume a lot of power. Their output is generated when an exciton--a combination of an electron and an electron hole--releases its energy. However, this release is not possible for all OLED excitons, which makes their overall efficiency low.

To address this limitation, researchers are focusing on OLEDs that exhibit thermally activated delayed fluorescence (TADF-OLEDs), which show efficiencies of up to 100%.

However, details of the electron dynamics that affect their performance are not fully understood. Attempts to learn more have used poorly defined models, meaning the findings have been difficult to interpret and apply to other systems.

The researchers focused on a single-component, solid-state film of a material known as 4CzIPN and investigated it using time-resolved photoemission electron microscopy (TR-PEEM). They compared their findings with observations made using the more commonly used time-resolved photoluminescence (TR-PL) method to try to establish details of the decay process that were previously unknown.

"Solid-state films are excellent materials for OLEDs because they make the device fabrication process simpler, reduce the degradation that is often seen, and exhibit excellent quantum efficiencies," explains study corresponding author Professor Yoichi Yamada. "The trouble is that we still don't fully understand what is happening to the excitons, so there's the possibility that we could be making them even better."

The researchers successfully detected the photoexcited electron dynamics of the TADF solid-state film using TR-PEEM. And by comparing with TR-PL results they identified long-lived electrons that they believe were formed by the dissociation of excitons.

They found that up to 4% of the excitons formed may dissociate and become trapped in the film. Very little evidence for this has been noted using other techniques.

"In addition to detecting a feature of exciton decay in TADF-OLEDs that has not been directly observed to date; we also demonstrated the potential of the TR-PEEM method," Professor Yamada explains. "We believe our findings will make a significant contribution to the development of efficient OLED-based products."

Credit: 
University of Tsukuba

A major addition to chemists' toolkit for building new molecules

LA JOLLA, CA--Chemists at Scripps Research have solved a long-standing problem in their field by developing a method for making a highly useful and previously very challenging type of modification to organic molecules. The breakthrough eases the process of modifying a variety of existing molecules for valuable applications such as improving the potency and duration of drugs.

The flexible new method, for "directed C-H hydroxylation with molecular oxygen," does what only natural enzymes have been able to do until now. It's described in a paper this week in Science.

"We expect this method to be adopted widely for building potential new drug molecules and for modifying and even repurposing existing drugs," says principal investigator Jin-Quan Yu, PhD, the Bristol Myers Squibb Chair in Chemistry at Scripps Research. Yu also is the Frank and Bertha Hupp Professor of Chemistry.

Most pharmaceuticals and countless other chemical products are small organic molecules based on backbone rings of carbon atoms. Sometimes the backbone ring includes a non-carbon atom such as nitrogen in place of a carbon, in which case it is called a heterocycle.

Chemists in the past century have made enormous progress in finding ways to assemble such molecules using step-by-step chemical reactions--a process they call organic synthesis. But some widely desired assembly steps have remained difficult or impossible using synthetic methods.

One of these is the replacement of a hydrogen atom, which adorns backbone carbons by default, with a pairing of an oxygen and hydrogen atom called a hydroxyl. Chemists would like to be able to make such a replacement anywhere on a carbon ring, using ordinary O2 as the source of oxygen atoms. However, borrowing one oxygen atom from O2 is very challenging, particularly when modifying heterocyclic compounds. Although highly specialized and dedicated enzymes in animal cells, known as cytochrome P450 enzymes, have evolved to catalyze this type of reaction, until now no chemist has managed the feat with the more flexible tools of organic synthesis.

Yu and his team, which included co-first authors Zhen Li and Zhen Wang, found a way to do this, using an unusual reaction-enabling "catalyst." The catalyst includes an atom of the precious metal palladium, which is widely used in organic synthesis for its ability to break the bonds tying hydrogen atoms to organic molecules' carbon backbones.

But the key ingredient in the catalyst is a small organic molecule called pyridone, which acts as a sort of handle--a "ligand"--on the palladium. This ligand essentially enables the palladium-driven removals of hydrogens and attachments of hydroxyls, in a manner more flexible than ever before, by changing its own identity--shape-shifting, back and forth, between pyridone and a closely related molecule called pyridine. Chemists call such pairs of inter-converting molecules "tautomers."

Yu and his colleagues demonstrated the ease and value of the new method by using it to modify a variety of existing drugs including the blood pressure-lowering telmisartan, the gout drug probenecid, and the anti-inflammatory meclofenamic acid.

"With this catalyst and its tautomeric ligand we can get around many of the traditional limitations on where hydroxylations can be made when building new molecules or modifying existing ones," Yu says.

Credit: 
Scripps Research Institute

AI breakthrough in premature baby care

James Cook University scientists in Australia believe they have made a breakthrough in the science of keeping premature babies alive.

As part of her PhD work, JCU engineering lecturer Stephanie Baker led a pilot study that used a hybrid neural network to accurately predict how much risk individual premature babies face.

She said complications resulting from premature birth are the leading cause of death in children under five, and over 50 per cent of neonatal deaths occur in preterm infants.

"Preterm birth rates are increasing almost everywhere. In neonatal intensive care units, assessment of mortality risk assists in making difficult decisions regarding which treatments should be used and if and when treatments are working effectively," said Ms Baker.

She said to better guide their care, preterm babies are often given a score that indicates the risk they face.

"But there are several limitations of this system. Generating the score requires complex manual measurements, extensive laboratory results, and the listing of maternal characteristics and existing conditions," said Ms Baker.

She said the alternative was measuring variables that do not change, such as birthweight, which prevents recalculation of the infant's risk on an ongoing basis and does not show the infant's response to treatment.

"An ideal scheme would be one that uses fundamental demographics and routinely measured vital signs to provide continuous assessment. This would allow for assessment of changing risk without placing unreasonable additional burden on healthcare staff," said Ms Baker.

She said the JCU team's research, published in the journal Computers in Biology and Medicine, had developed the Neonatal Artificial Intelligence Mortality Score (NAIMS), a hybrid neural network that relies on simple demographics and trends in heart and respiratory rate to determine mortality risk.

"Using data generated over a 12 hour period, NAIMS showed strong performance in predicting an infant's risk of mortality within 3, 7, or 14 days.

"This is the first work we're aware of that uses only easy-to-record demographics and respiratory rate and heart rate data to produce an accurate prediction of immediate mortality risk," said Ms Baker.

She said the technique was fast with no need for invasive procedures or knowledge of medical histories.

"Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends," said Ms Baker.

She said NAIMS had proved accurate when tested against hospital mortality records of preterm babies and had the added advantage over existing schemes of being able to perform a risk assessment based on any 12 hours of data during the patient's stay.

Ms Baker said the next step in the process was to partner with local hospitals to gather more data and undertake further testing.

"Additionally, we aim to conduct research into the prediction of other outcomes in neo-natal intensive care, such as the onset of sepsis and patient length of stay," said Ms Baker.

Credit: 
James Cook University

Scientists discover key player in brain development, cell communication

image: Two astrocyte neighbors communicating with each other and many other cells in the mouse brain.

Image: 
Baldwin Lab, UNC School of Medicine

CHAPEL HILL, NC - When we think of the brain, we think of neurons. But much of the brain is made of non-neuronal cells called glial cells, which help regulate brain development and function. For the first time, UNC School of Medicine scientist Katie Baldwin, PhD, and colleagues revealed a central role of the glial protein hepaCAM in building the brain and affecting brain function early in life.

The findings, published in Neuron, have implications for better understanding disorders, such as autism, epilepsy, and schizophrenia, and potentially for creating therapeutics for conditions such as the progressive brain disorder megalencephalic leukoencephalopathy with subcortical cysts (MLC).

For over one hundred years, the glia were thought of as a bunch of support cells, like a kind of brain glue keeping the more important neurons in place to do the brain's real work. But over the past 30 years, scientists have been teasing apart the importance of glial cells as regulators of brain development and function. Still, much remains unknown, especially about complex glial cells called astrocytes, which extend thousands of fine branches throughout the brain, directly interacting with neurons and other brain cells.

During brain development prior to birth and thereafter, astrocytes establish an intricate network of distinct territories, a sort of tiling of the brain, like the tiles of a soccer ball. These cells use specialized connections, called gap junctions, to communicate throughout this network.

"Both astrocyte tiling and communication through gap junctions are disrupted in different brain disorders and following injury, suggesting these features are important for normal brain function," said Baldwin, the corresponding author, member of the UNC Neuroscience Center, and assistant professor in the UNC Department of Cell Biology and Physiology. "But prior to our study, it was unknown how astrocytes established their territories and whether there was a link between astrocyte territory and gap junction communication, also known as coupling."

In this study, Baldwin and colleagues focused on hepaCAM, a protein abundantly expressed on the astrocyte membranes. They created a transgenic rodent model so the animals did not express any hepaCAM protein in astrocytes. The researchers used this model along with cutting-edge genetic and imaging techniques to study the developing astrocytes, their resulting tile-like territories, and gap junction coupling.

"Deleting hepaCAM from astrocytes disrupted astrocyte territories and impaired gap junction coupling. Essentially, these astrocytes no longer do a good job of communicating with their neighbors," Baldwin said. "We also found that, even though we did not make any disruptions to neurons, loss of hepaCAM in astrocytes altered the balance of synaptic excitation and inhibition." That is, without this protein, astrocyte branches in contact with neurons affected the how neurons behaved.

"We think our findings have important implications for understanding the pathogenesis of MLC, as well as the general role of astrocyte dysfunction as a driving cause of neurological disorders, such as epilepsy," Baldwin said.

Baldwin conducted this work while a postdoctoral fellow in Cagla Eroglu's lab at Duke University before joining UNC-Chapel Hill this past spring. Her lab will continue to focus on the impact of hepaCAM mutations on astrocyte function.

"We are building on this research to explore the bigger question of how astrocytes balance their connections with other cell types in the brain," she said, "with the goal of understanding how problems in astrocytes cause disease in humans, and how we might help people with these serious and complex disorders."

Credit: 
University of North Carolina Health Care

Scientists discover how dengue vaccine fails to protect against disease

CHAPEL HILL, NC - Developing a viable vaccine against dengue virus has proved difficult because the pathogen is actually four different virus types, or serotypes. Unless a vaccine protects against all four, it can wind up doing more harm than good.

To help vaccine developers overcome this hurdle, the UNC School of Medicine lab of Aravinda de Silva, PhD, professor in the UNC Department of Microbiology and Immunology, investigated samples from children enrolled in a dengue vaccine trial to identify the specific kinds of antibody responses that correlate with protection against dengue virus disease. In doing so, the researchers discovered that a small subpopulation of antibodies binding to unique sites on each serotype are linked to protection. The research, published in the Journal of Clinical Investigation, provides important information for vaccine developers to consider when creating a dengue vaccine, which has long eluded scientists.

The four dengue virus serotypes are mosquito-borne flaviviruses that infect hundreds of millions of individuals each year in Southeast Asia, western Pacific Islands, Africa, and Latin America. Nearly 100 million individuals report flu-like symptoms. Though rarely deadly, the virus can cause severe illness, especially when a person who was previously infected with one serotype (and recovers) is then infected by a second serotype. This happens because antibodies from the first infection help the virus replicate during the second infection through a process called antibody dependent enhancement. A dengue vaccine induced antibody response weighted towards a single dengue virus serotype can mimic this phenomenon.

Several vaccines have been in clinical development for years, and most show that they induce neutralizing antibodies against all four serotypes. Yet, research has also shown that the creation of neutralizing antibodies alone does not correlate to protection against clinical disease. The de Silva lab conducted experiments to compare the properties of antibodies against wild-type Dengue viruses and the properties of antibodies produced by a leading vaccine candidate - Dengvaxia - which the pharmaceutical company Sanofi Pasteur created using all four dengue virus serotypes in one formulation.

Experiments led by Sandra Henein, research associate in the UNC Department of Microbiology and Immunology, and Cameron Adams, a medical and graduate student in the UNC Medical Scientist Training Program (MD/PhD), showed that wild type infections induced neutralizing and protective antibodies that recognized a part of the virus - an epitope - unique to each serotype. The vaccine, though, mainly stimulated neutralizing antibodies that recognized epitopes common among all serotypes. In vaccine trials, these antibodies did not protect children from dengue. In the past, researchers have considered all dengue neutralizing antibodies to be protective in people. This appears to not be the case, according to this UNC-led research.

"Our results suggest that a safe and effective dengue virus vaccine needs to stimulate neutralizing antibodies targeting unique sites on each of the four dengue serotypes ," Adams said. "Not merely the neutralizing antibodies against cross-reactive epitopes common to all four dengue types."

Credit: 
University of North Carolina Health Care

Putting functional proteins in their place

image: An illustration showing the approach for assembling biologically functional proteins into ordered 2-D and 3-D arrays through programmable octahedral-shaped DNA frameworks. These frameworks can host and control the placement of the proteins internally -- for example, at the center (1) or off-center (2) -- and be encoded with specific sequences externally (color coding scheme) to create desired 2-D and 3-D lattices. For example, red only connects to red, blue to blue, and so on. The team demonstrated the preserved biological activity of ferritin lattices by adding a compound (ascorbate) that induced the release of iron ions forming the ferritin core.

Image: 
Nature Communications, volume 12, article number 3702 (2021)

UPTON, NY--Scientists have organized proteins--nature's most versatile building blocks--in desired 2-D and 3-D ordered arrays while maintaining their structural stability and biological activity. They built these designer functional protein arrays by using DNA as a programmable construction material. The team--representing the U.S. Department of Energy's (DOE) Brookhaven National Laboratory, Columbia University, DOE's Lawrence Berkeley National Laboratory, and City University of New York (CUNY)--described their approach in the June 17 issue of Nature Communications.

"For decades, scientists have dreamed about rationally assembling proteins into specific organizations with preserved protein function," said corresponding author Oleg Gang, leader of the Center for Functional Nanomaterials (CFN) Soft and Bio Nanomaterials Group at Brookhaven Lab and a professor of chemical engineering and of applied physics and materials science at Columbia Engineering. "Our DNA-based platform has enormous potential not only for structural biology but also for various bioengineering, biomedical, and bionanomaterial applications."

The primary motivation of this work was to establish a rational way to organize proteins into designed 2-D and 3-D architectures while preserving their function. The importance of organizing proteins is well known in the field of protein crystallography. For this technique, proteins are taken from their native solution-based environments and condensed to form an orderly arrangement of atoms (crystalline structure), which can then be structurally characterized. However, because of their flexibility and aggregation properties, many proteins are difficult to crystallize, requiring trial and error. The structure and function of proteins may change during the crystallization process, and they may become nonfunctional when crystallized by traditional methods. This new approach opens many possibilities for creating engineered biomaterials, beyond the goals of structural biology.

"The ability to make biologically active protein lattices is relevant to many applications, including tissue engineering, multi-enzyme systems for biochemical reactions, large-scale profiling of proteins for precision medicine, and synthetic biology," added first author Shih-Ting (Christine) Wang, a postdoc in the CFN Soft and Bio Nanomaterials Group.

Though DNA is best known for its role in storing our genetic information, the very same base-pairing processes used for this storage can be leveraged to construct desired nanostructures. A single strand of DNA is made of subunits, or nucleotides, of which there are four kinds (known by the letters A, C, T, and G). Each nucleotide has a complementary nucleotide it attracts and binds to (A with T and C with G) when two DNA strands are near each other. Using this concept in the technique of DNA origami, scientists mix multiple short strands of synthetic DNA with a single long strand of DNA. The short strands bind to and "fold" the long strand into a particular shape based on the sequence of bases, which scientists can specify.
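
As a concrete illustration of the base-pairing rule described above, the short sketch below computes the strand that would hybridize to a given scaffold segment; the example sequence is invented for illustration and is not taken from the study.

```python
# Minimal illustration of Watson-Crick base pairing (A-T, C-G): given a scaffold
# segment, compute the complementary strand that would bind to it.
# The example sequence is invented, not taken from the study.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that hybridizes to the input strand (read 5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

scaffold_segment = "ATGCGTTAC"
print(reverse_complement(scaffold_segment))  # GTAACGCAT pairs with the segment above
```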

In this case, the scientists created octahedral-shaped DNA origami. Inside these cage-like frameworks, they placed DNA strands with a particular "color," or coding sequence, at targeted locations (center and off center). To the surface of proteins--specifically, ferritin, which stores and releases iron, and apoferritin, its iron-free counterpart--they attached complementary DNA strands. By mixing the DNA cages and conjugated proteins and heating up the mixture to promote the reaction, the proteins went to the internal designated locations. They also created empty cages, without any protein inside.

To connect these nanoscale building blocks, or protein "voxels" (DNA cages with encapsulated proteins), in desired 2-D and 3-D arrays, second author and Columbia PhD student Brian Minevich designed different colors for the external bonds of the voxels. With this color scheme, the voxels would recognize each other in programmable, controllable ways, leading to the formation of specifically prescribed types of protein lattices. To demonstrate the versatility of the platform, the team constructed single- and double-layered 2-D arrays, as well as 3-D arrays.

"By arranging the colors in a particular way, we can program the formation of different lattices," explained Gang. "We have full control to design and build the protein lattice architectures we want."

To confirm that the proteins had been encapsulated inside the cages and the lattices had been constructed as designed, the team turned to various electron- and x-ray-based imaging and scattering techniques. These techniques included electron microscopy (EM) imaging at the CFN; small-angle x-ray scattering (SAXS) at the National Synchrotron Light Source II (NSLS-II) Complex Materials Scattering (CMS) and Life Science X-ray Scattering (LiX) beamlines at Brookhaven; and cryogenic-EM imaging at the Molecular Foundry (MF) of Lawrence Berkeley and the CUNY Advanced Science Research Center. The CFN, NSLS-II, and MF are all DOE Office of Science User Facilities; CFN and MF are two of five DOE Nanoscale Science Research Centers.

"The science was enabled by advanced synthesis and characterization capabilities at three user facilities within the national lab system and one university-based facility," said Gang. "Without these facilities and the expertise of scientists from each of them, this study wouldn't have been possible."

Following these assembly studies, they investigated the biological activity of ferritin. By adding a reducing reagent to the ferritin lattice, they induced the release of iron ions from the center of the ferritin proteins.

"By monitoring the evolution of SAXS patterns during iron release, we could quantify how much iron was released and how quickly it was released, as well as confirm that the integrity of the lattice was maintained during this protein operation," said Minevich. "According to our TEM studies, the proteins remained inside the frames."

"We showed that the proteins can perform the same function as they do in a biological environment while keeping the spatial organization we created," explained Wang.

Next, the team will apply their DNA-based platform to other types of proteins, with the goal of building more complex, operational protein systems.

"This research represents an important step in bringing together different components from real biological machinery and organizing them into desired 2-D and 3-D architectures to create engineered and bioactive materials," said Gang. "It's exciting because we see the rational path for fabricating desired functional bio-nano systems never-before produced by nature."

Credit: 
DOE/Brookhaven National Laboratory

Having the same nurse for home health visits may prevent rehospitalization for people with dementia

People with dementia receiving home health care visits are less likely to be readmitted to the hospital when there is consistency in nursing staff, according to a new study by researchers at NYU Rory Meyers College of Nursing. The findings are published in Medical Care, a journal of the American Public Health Association.

Home health care--in which health providers, primarily nurses, visit patients' homes to deliver care--has become a leading source of home- and community-based services caring for people living with dementia. These individuals often have multiple chronic conditions, take several medications, and need assistance with activities of daily living. In 2018, more than 5 million Medicare beneficiaries received home health care, including 1.2 million with Alzheimer's disease and related dementias.

"Nurses play a pivotal role in providing home health care," said Chenjuan Ma, PhD, MSN, assistant professor at NYU Meyers and the study's lead author. "As the population ages and older adults choose to 'age in place' as long as possible, the demand for home health care for people with dementia is expected to grow rapidly."

For most patients, their home health care often begins after being discharged from the hospital. Given that hospital readmissions are a significant quality, safety, and financial issue in healthcare, Ma and her colleagues wanted to understand if having continuity of care, or the same nurse coming to each home visit, could help prevent patients from being readmitted.

Using multiple years of data from a large, not-for-profit home health agency, the researchers studied 23,886 older adults with dementia who received home health care following a hospitalization. They measured continuity of care based on the number of nurses and visits during home health care, with a higher score indicating better continuity of care.
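
The study's exact scoring formula is not given here, but a widely used continuity-of-care measure, the Bice-Boxerman index, illustrates the idea: the score rises toward 1 when most visits come from the same nurse and falls toward 0 when every visit is from someone new. The sketch below is illustrative only and assumes this index rather than the authors' specific method.

```python
# Illustrative only: the Bice-Boxerman continuity-of-care index, one common way
# to score continuity from the number of visits handled by each nurse.
# Whether the study used this exact formula is an assumption.
from collections import Counter

def continuity_of_care(visit_nurses):
    """COC = (sum of squared visits per nurse - N) / (N * (N - 1)); 0 = no continuity, 1 = full."""
    counts = Counter(visit_nurses)
    n_total = sum(counts.values())
    if n_total < 2:
        return 1.0  # a single visit is trivially continuous
    return (sum(c * c for c in counts.values()) - n_total) / (n_total * (n_total - 1))

print(continuity_of_care(["A", "A", "A", "A"]))             # 1.0: same nurse every visit
print(continuity_of_care(["A", "B", "C", "D"]))             # 0.0: a different nurse each time
print(round(continuity_of_care(["A", "A", "B", "A"]), 2))   # 0.5: partial continuity
```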

Approximately one in four (24 percent) of the older adults with dementia in the study were rehospitalized from home health care. Infections, respiratory problems, and heart disease were the three most common reasons for being readmitted to the hospital.

The researchers found wide variations in continuity of nursing care in home health visits for people with dementia. Eight percent had no continuity of care, with a different nurse visiting each time, while 26 percent received all visits from one nurse. They also found that the higher the visit intensity, or more hours of care provided each week, the lower the continuity of care.

"This may suggest that it is hard to achieve continuity of care when a patient requires more care, though we cannot exclude the possibility that high continuity of care results in more efficient care delivery and thus fewer hours of care," explained Ma.

Notably, increased continuity of home health care led to a lower risk for rehospitalization, even after the researchers controlled for other clinical risk factors and the intensity of home health care (the average hours of care per week). Compared to those with a high continuity of nursing care, people with dementia receiving low or moderate continuity of nursing care were 30 to 33 percent more likely to be rehospitalized.

"Continuity of nursing care is valuable for home health care because of its decentralized and intermittent care model," said Ma. "While continuity of nursing care may benefit every home health care patient, it may be particularly critical for people with dementia. Having the same person delivering care can increase familiarity, instill trust, and reduce confusion for patients and their families."

To improve continuity of nursing care, the researchers recommend addressing the shortage of home health care nurses, improving care coordination, and embracing telehealth in home health care.

"Multiple structural factors present challenges for continuity of care for home health nurses and other staff. These can include long commute times, few full- or part-time staff, agencies relying mostly on per diem staff, and organizational cultures that do not foster retention of home health care staff," said Allison Squires, PhD, RN, FAAN, associate professor at NYU Meyers and the study's senior author. "Proposed legislation in Congress that seeks to increase nursing and home health care frontline staff salaries will pay for itself because agencies can improve continuity of care, and therefore reduce penalties associated with hospital readmissions."

A hybrid care model of in-person visits and telehealth visits could also help achieve more continuity of care, the researchers note. They encourage policymakers to consider expanding coverage for telehealth visits in home health care.

Credit: 
New York University

Texan voters unsure if state can tackle power grid issues

When Winter Storm Uri hit, many Texans lost power from February 14-20, resulting in loss of life, lost economic activity, and damage to homes that for some is still not completely repaired. Now, four months later, as demand for electricity has increased at the start of the summer amid tight supply, Texans continue to prioritize improvements to the power grid, albeit with doubt as to whether the Texas Legislature and Governor can get the job done.

In a survey by the Hobby School of Public Affairs and UH Energy at the University of Houston, fielded May 13-24, 1,500 individuals in Texas aged 18 and older responded to a series of questions regarding their experience during Winter Storm Uri and their evaluation of policy proposals for protecting the Texas electric grid from severe weather events in the future.

"Winter Storm Uri was a massive event, with widespread impact across the state" said Pablo M. Pinto, the principal investigator who serves as associate professor and director of the Center for Public Policy at the University of Houston's Hobby School of Public Affairs.

Two-thirds of those surveyed lost power following Winter Storm Uri, while roughly 30% sustained damage to their homes. Additionally, the bulk of sustained power outages were clustered near larger urban centers in Texas.

Several zip codes in larger urban centers had no power for more than 30 hours. According to the survey, these zip codes were more tightly clustered in the Houston area than in the other large metro areas in Texas.

The impact of the February storm is abundantly clear, but Texans' confidence in their state government's ability to prevent a repeat is less so.

"Three months after the storm, Texans remained frustrated and blamed government officers, power generators and natural gas producers for the power outages," Pinto said. "They signal this frustration in their demand that energy producers and the Texas governments, not consumers, should bear the costs of retrofitting the Texas grid to withstand extreme weather events, at least in the short term."

Roughly 40% of respondents disagreed that the Texas state government will adequately tackle issues related to the electric grid. A partisan and age divide emerged within this result: Republicans agreed more than Democrats and Independents, as did respondents older than 65.

"A salient concern among Texans is having access to a reliable supply of electric power, which means a power system to provide uninterrupted service at an acceptable price," said Sunny Wong, professor at the UH Hobby School of Public Affairs and one of the principal investigators of the study.

The survey showed a correlation between age and disapproval of unreliable electric service. Of those aged 45-65 and those older than 65, 48% and 53%, respectively, agreed that it is never acceptable for the power to go out.

Despite some doubt, eligible voters across party lines believe that wind, solar and other renewable energy sources will make a substantial contribution to a reliable and secure electricity supply in Texas in the future. Solar power led the way, with 56.3% of surveyed respondents selecting it as likely to make a substantial contribution, followed by wind power at 54%. The greatest support for renewables came from respondents aged 18-29, at 69%.

"Even among Republicans, who had the lowest level of support compared to Democrats and Independents, 42% still agreed that solar power would make a substantial contribution," said Gail Buttorff, Co-Director, Survey Research Institute and Assistant Instructional Professor, at the UH Hobby School of Public Affairs and co-principal investigator.

"Younger respondents are much more likely to believe that climate change is happening, though a majority of respondents believe it is happening across age groups. 91% of respondents aged 18-29 believe climate change is happening compared to 76%, 73%, and 60% in the three older age groups."

With so many affected by the February 2021 storm, and with ERCOT's recent request to conserve energy as tight supply meets increased demand for electricity to cool homes, Texans continue to keep the power grid at the forefront of their minds.

Senate Bills 2 and 3 were passed by the Texas Legislature in an effort to oversee appointments to the Public Utility Commission of Texas and the Electric Reliability Council of Texas (ERCOT), as well as to require the weatherization of some of the industry's infrastructure. Both measures fall in line with voter values, which point to reliability (40%) as a top priority, followed by cost (26%). Weatherization and winterization of the electricity system also emerged as a top policy preference among respondents.

"Although respondents preferred not to see the price of electricity increase, they realize that reliable access to electricity will require major investments and regulatory changes in the long run" said Ramanan Krishnamoorti, Chief Energy Officer at the University of Houston and Professor of Chemical and Biomolecular Engineering, who was one of the leaders of the study. "When offered a menu of investments and policy interventions, respondents revealed their willingness to pay modest increases in electricity prices to shorten power."

Time will tell whether legislation passed in the spring will satisfy Texas voters' demands for more reliable and affordable electricity. One point is certain: a majority of surveyed voters have pointed to renewables as a preferred path forward in Texas for diversifying the energy mix and improving reliable and sustainable electricity. Texas is already the leading state in wind power and among the top leaders in solar, and it appears Texas voters are in agreement to continue leading the pack.

Credit: 
University of Houston

NUST MISIS scientists create unique alloy for air, rail transports

image: Torgom Akopyan, senior researcher at NUST MISIS Department of Metal Forming

Image: 
Sergey Gnuskov, NUST MISIS

Scientists from the National University of Science and Technology "MISIS" (NUST MISIS) in cooperation with their colleagues from the Siberian Federal University and the Research and Production Centre of Magnetic Hydrodynamics (Krasnoyarsk) have developed a technology for producing a unique heat-resistant aluminium alloy with improved durability.

According to the researchers, this new alloy could replace more expensive and heavier copper conductors in aircraft and high-speed rail transport. The study results were published in the interdisciplinary, peer-reviewed journal Materials Letters (https://www.sciencedirect.com/science/article/abs/pii/S0167577X2100896X).

Researchers have created a method for producing a unique heat-resistant, high-strength wire. The wire is made from an aluminium alloy, initially cast as a long billet, about 10 mm in diameter, in an electromagnetic crystalliser. The authors have succeeded in obtaining a structure that is thermally stable up to and including 400°C, considerably superior to known thermally stable aluminium alloys, which retain their properties only up to 250-300°C.

"Before, alloys with such a structure were attempted to be produced using complicated and expensive technology involving ultrafast melt crystallisation, pellet production and subsequent methods of powder metallurgy", Nikolay Belov, Chief Scientist and Professor of Materials Science and Light Alloys at National University of Science and Technology "MISiS", explained.

The researchers have conducted direct deformation of a long billet - rolling and drawing - without using the traditional operations of homogenization and hardening for aluminium alloys. The key feature of their proposed technology lies in the casting and annealing regimes which produce a structure of thermally stable nanoparticles containing copper (Cu), manganese (Mn) and zirconium (Zr).

"We have been able to produce a high-strength heat-resistant wire from this alloy. We are now determining its physical and mechanical properties, and the first results are already very impressive. We are planning to patent the method of producing this type of wire", Torgom Akopyan, senior researcher at NUST MISIS Department of Metal Forming, noted.

Heat-resistant, high-strength conductors could find use in aircraft and high-speed rail transport instead of the significantly more expensive and heavier copper ones. According to the authors, this unique and inexpensive technology could interest producers of wrought aluminium alloy semi-finished products.

Credit: 
National University of Science and Technology MISIS

Hydrofracking environmental problems not that different from conventional drilling

image: Professor Tao Wen contributed to a study assessing groundwater contamination caused by oil and gas production.

Image: 
Syracuse University

Crude oil production and natural gas withdrawals in the United States have lessened the country's dependence on foreign oil and provided financial relief to U.S. consumers, but have also raised longstanding concerns about environmental damage, such as groundwater contamination.

A researcher in Syracuse University's College of Arts and Sciences, and a team of scientists from Penn State, have developed a new machine learning technique to holistically assess water quality data in order to detect groundwater samples likely impacted by recent methane leakage during oil and gas production. Using that model, the team concluded that unconventional drilling methods like hydraulic fracturing - or hydrofracking - do not necessarily incur more environmental problems than conventional oil and gas drilling.

The two common ways to extract oil and gas in the U.S. are conventional and unconventional methods. Conventional oil and gas are pumped from easily accessed sources using natural pressure. Unconventional oil and gas, by contrast, are acquired from hard-to-reach sources through a combination of horizontal drilling and hydraulic fracturing. Hydrofracking extracts natural gas, petroleum and brine from bedrock formations by injecting a mixture of sand, chemicals and water. Drilling into the earth and directing the high-pressure mixture into the rock releases the gas inside, which then flows out to the head of the well.

Tao Wen, assistant professor of earth and environmental sciences (EES) at Syracuse, recently led a study comparing data from different states to see which method might result in greater contamination of groundwater. They specifically tested levels of methane, which is the primary component of natural gas.

The team selected four U.S. states located in important shale zones for their study: Pennsylvania, Colorado, Texas and New York. One of those states - New York - banned hydrofracking in 2015 after a review by the NYS Department of Health found significant uncertainties about health risks, including increased water and air pollution.

Wen and his colleagues compiled a large groundwater chemistry dataset from multiple sources, including federal agency reports, journal articles, and oil and gas companies. The majority of tested water samples in their study were collected from domestic water wells. Although methane itself is not toxic, Wen says that methane contamination detected in shallow groundwater could pose a risk to homeowners: it is an explosion hazard, it can increase the levels of other toxic chemical species such as manganese and arsenic, and it contributes to global warming because methane is a greenhouse gas.

Their model used sophisticated algorithms to analyze almost all of the retained geochemistry data in order to predict if a given groundwater sample was negatively impacted by recent oil and gas drilling.
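The article does not name the specific algorithm the team used, but the workflow it describes - training a model on labelled geochemistry measurements and then predicting whether a given sample was impacted - is a standard supervised classification task. The sketch below is a minimal illustration of that idea; the feature names, synthetic data, thresholds and choice of gradient-boosted trees are assumptions for illustration only, not the study's actual model or dataset.

```python
# Minimal sketch of supervised classification of groundwater samples.
# All features, labels and data below are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical geochemistry features per water-well sample:
# dissolved methane (mg/L), chloride (mg/L), sodium (mg/L), manganese (ug/L), pH
n = 500
X = np.column_stack([
    rng.lognormal(mean=0.0, sigma=1.5, size=n),   # methane
    rng.lognormal(mean=3.0, sigma=1.0, size=n),   # chloride
    rng.lognormal(mean=3.0, sigma=1.0, size=n),   # sodium
    rng.lognormal(mean=2.0, sigma=1.0, size=n),   # manganese
    rng.normal(loc=7.0, scale=0.6, size=n),       # pH
])

# Toy labels: flag samples with elevated methane plus elevated salts,
# standing in for expert-labelled "impacted" samples in a real dataset.
y = ((X[:, 0] > 5) & (X[:, 1] > 25)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Fit a gradient-boosted tree classifier and report held-out performance.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```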

The data comparison showed that methane contamination cases in New York - a state without unconventional drilling but with a high volume of conventional drilling - were similar to those in Pennsylvania - a state with a high volume of unconventional drilling. Wen says this suggests that unconventional drilling methods like fracking do not necessarily lead to more environmental problems than conventional drilling, although the result might alternatively be explained by the different sizes of the groundwater chemistry datasets compiled for the two states.

The model also detected a higher rate of methane contamination cases in Pennsylvania than in Colorado and Texas. Wen says this difference could be attributed to differences in how drillers build the oil and gas wells in each state. According to previous research, most of the methane released into the environment from gas wells in the U.S. escapes because the cement that seals the well is not completed along the full length of the production casing. However, no data exist to show whether drillers in those three states use different technology. Wen says this requires further study and review of drilling data if they become available.

According to Wen, the machine learning model proved effective at detecting groundwater contamination, and applying it to other states and counties with ongoing or planned oil and gas production would make it an important resource for determining the safest methods of oil and gas drilling.

Credit: 
Syracuse University

Backscatter breakthrough runs near-zero-power IoT communicators at 5G speeds everywhere

image: Printed mmWave array prototype for Gbit-data rate backscatter communication.

Image: 
John Kimionis, Nokia Bell Labs

Realizing the promise of 5G Internet of Things (IoT) networks requires more scalable and robust communication systems, ones that deliver drastically higher data rates and lower power consumption per device.

Backscatter radios, passive sensors that reflect rather than radiate energy, are known for their low-cost, low-complexity, battery-free operation, making them a potential key enabler of this future, although they typically offer low data rates and their performance depends strongly on the surrounding environment.

Researchers at the Georgia Institute of Technology, Nokia Bell Labs, and Heriot-Watt University have found a low-cost way for backscatter radios to support high-throughput communication and 5G-speed Gb/sec data transfer using only a single transistor, where previous designs required multiple expensive stacked transistors.

Employing a unique modulation approach in the 5G 24/28 gigahertz (GHz) bands, the researchers have shown that these passive devices can transfer data safely and robustly from virtually any environment. The findings were reported earlier this month in the journal Nature Electronics.

Traditionally, mmWave communication, which occupies the extremely high frequency band, has been considered "the last mile" for broadband, relying on directive point-to-point and point-to-multipoint wireless links. This spectrum offers many advantages, including gigahertz of available bandwidth, which enables very high communication rates, and the ability to implement electrically large antenna arrays, enabling on-demand beamforming capabilities. However, such mmWave systems depend on high-cost components and systems.
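To give a sense of why gigahertz of bandwidth translates into very high communication rates, the short sketch below applies the Shannon capacity formula C = B * log2(1 + SNR). The channel widths and signal-to-noise ratio are assumed values chosen for illustration and are not taken from the paper.

```python
# Back-of-the-envelope illustration (not from the paper) of how channel bandwidth
# bounds achievable data rate. Bandwidths and SNR below are assumptions.
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Compare a Wi-Fi-like 20 MHz channel with typical mmWave channel widths.
for bw_mhz in (20, 400, 1000):
    c = shannon_capacity_bps(bw_mhz * 1e6, snr_db=10.0)
    print(f"{bw_mhz:5d} MHz @ 10 dB SNR -> {c / 1e9:.2f} Gbit/s upper bound")
```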

The Struggle for Simplicity Versus Cost

"Typically, it was simplicity against cost. You could either do very simple things with one transistor or you need multiple transistors for more complex features, which made these systems very expensive," said Emmanouil (Manos) Tentzeris, Ken Byers Professor in Flexible Electronics in Georgia Tech's School of Electrical and Computer Engineering (ECE). "Now we've enhanced the complexity, making it very powerful but very low cost, so we're getting the best of both worlds."

"Our breakthrough is being able to communicate over 5G/millimeter-wave (mmWave) frequencies without actually having a full mmWave radio transmitter - only a single mmWave transistor is needed along much lower frequency electronics, such as the ones found in cell phones or WiFi devices. Lower operating frequency keeps the electronics' power consumption and silicon cost low," added first author Ioannis (John) Kimionis, a Georgia Tech Ph.D. graduate now a member of technical staff at Nokia Bell Labs. "Our work is scalable for any type of digital modulation and can be applied to any fixed or mobile device."

The researchers are the first to use a backscatter radio for gigabit-data rate mmWave communications while minimizing the front-end complexity to a single high-frequency transistor. Their breakthrough included the modulation scheme as well as adding more intelligence to the signal driving the device.

"We kept the same RF front-end for scaling up the data rate without adding more transistors to our modulator, which makes it a scalable communicator," Kimionis said, adding that their demonstration showed how a single
mmWave transistor can support a wide range of modulation formats.
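The article does not detail the authors' modulator circuit, but the general principle of backscatter modulation is that the tag switches its antenna load among a few impedance states, so the reflected carrier is keyed with a constellation of complex reflection coefficients. The sketch below illustrates that principle with an assumed four-state (QPSK-like) constellation and an assumed symbol rate; it is a generic illustration, not the device reported in Nature Electronics.

```python
# Generic, simplified illustration of backscatter modulation: data is encoded by
# switching the antenna load among a small set of reflection coefficients.
# The 4-state constellation and symbol rate are assumptions for illustration.
import numpy as np

# Reflection coefficients for four load states (QPSK-like backscatter constellation)
reflection_states = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def backscatter_modulate(bits: np.ndarray) -> np.ndarray:
    """Map pairs of bits to reflection-coefficient symbols (2 bits per symbol)."""
    symbols = bits.reshape(-1, 2) @ np.array([2, 1])  # 00->0, 01->1, 10->2, 11->3
    return reflection_states[symbols]

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=1_000_000)   # 1 Mbit of payload
tx_symbols = backscatter_modulate(bits)

# At 2 bits/symbol, switching the load at 500 Msymbol/s would already reach
# 1 Gbit/s, the regime the article calls "gigabit-data rate" backscatter.
print(f"{len(tx_symbols)} symbols, {2 * len(tx_symbols) / 1e6:.1f} Mbit encoded")
```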

Powering a Host of 'Smart' IoT Sensors

The technology opens up a host of IoT 5G applications, including energy harvesting, which Georgia Tech researchers recently demonstrated using a specialized Rotman lens that collects 5G electromagnetic energy from all directions.

Tentzeris said additional applications for the backscatter technology could include "rugged" high-speed personal area networks with zero-power wearable/implantable sensors for monitoring oxygen or glucose levels in the blood or cardiac/EEG functions; smart home sensors that monitor temperature, chemicals, gases, and humidity; and smart agricultural applications for detecting frost on crops, analyzing soil nutrients, or even livestock tracking.

The researchers developed an early proof of concept of this backscatter modulation, which won third prize at the 2016 Nokia Bell Labs Prize. At the time, Kimionis was a Georgia Tech ECE doctoral researcher working with Tentzeris in the ATHENA lab, which advances novel technologies for electromagnetic, wireless, RF, millimeter-wave, and sub-terahertz applications.

Key Enabler of Low Cost: Additive Manufacturing

For Kimionis, the backscatter technology breakthrough reflects his goal to "democratize communications."

"Throughout my career I've looked for ways to make all types of communication more cost-efficient and more energy-efficient. Now, because the whole front end of our solution was created at such low complexity, it is compatible with printed electronics. We can literally print a mmWave antenna array that can support a low-power, low-complexity, and low-cost transmitter."

Tentzeris considers affordable printing crucial to making their backscattering technology market viable. Georgia Tech is a pioneer in inkjet printing on virtually every material (paper, plastics, glass, flexible/organic substrates) and was one of the first research institutes to use 3D printing up to millimeter-wave frequency ranges back in 2002.

Credit: 
Georgia Institute of Technology