
New 5G switch is 50 times more energy efficient than existing technology

image: With US Army funding, researchers at The University of Texas at Austin and the University of Lille in France develop a radio-frequency switch that is more than 50 times more energy efficient than what is used today.

Image: 
University of Texas

RESEARCH TRIANGLE PARK, N.C. -- As 5G hits the market, new U.S. Army-funded research has developed a radio-frequency switch that is more than 50 times more energy efficient than what is used today.

With funding from the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, researchers at The University of Texas at Austin and the University of Lille in France have built a new component that allows more efficient access to the highest 5G frequencies, increasing devices' battery life and speeding up tasks such as streaming HD media.

Smartphones are loaded with switches that perform a number of duties. One major task is jumping back and forth between different networks and spectrum frequencies: 4G, WiFi, LTE, Bluetooth, etc. The current radio-frequency switches that perform this task are always running, consuming precious processing power and battery life.

"Radio-frequency switches are pervasive in military communication, connectivity and radar systems," said Dr. Pani Varanasi, division chief, materials science program at ARO. "These new switches could provide a large performance advantage compared to existing components and can enable longer battery life for mobile communication, and advanced reconfigurable systems."

The journal Nature Electronics published the research team's findings.

"It has become clear that the existing switches consume significant amounts of power, and that power consumed is useless power," said Dr. Deji Akinwande, a professor in the Cockrell School of Engineering's Department of Electrical and Computer Engineering who led the research. "The switch we have developed can transmit an HDTV stream at a 100 GHz frequency, and that is an achievement in broadband switch technology."

The new switches stay off, saving battery life for other processes, unless they are actively helping a device jump between networks. They have also shown the ability to transmit data well above the baseline for 5G-level speeds.

Prior researchers have found success on the low end of the 5G spectrum - where speeds are slower but data can travel longer distances. This is the first switch that can function across the spectrum from the low-end gigahertz frequencies to high-end terahertz frequencies that could someday be key to the development of 6G.

The team's switches use the nanomaterial hexagonal boron nitride, a rapidly emerging nanomaterial from the same family as graphene. The structure of the switch involves a single layer of boron and nitrogen atoms in a honeycomb pattern sandwiched between a pair of gold electrodes. Hexagonal boron nitride is the thinnest known insulator with a thickness of 0.33 nanometers.

The impact of these switches extends beyond smartphones. Satellite systems, smart radios, reconfigurable communications and the Internet of Things are all potential applications. In addition, the switches can be realized on flexible substrates, making them suitable for Soldier-wearable radios and communication systems, which would benefit from the improved energy efficiency through longer battery life and faster data speeds, as well as for other defense technologies.

"This will be very useful for radio and radar technology," Akinwande said.

This research spun out of a previous project that created the thinnest memory device, also using hBN. Akinwande said sponsors encouraged the researchers to find other uses for the material, and that led them to pivot to RF switches.

Credit: 
U.S. Army Research Laboratory

Eye injury sets immune cells on surveillance to protect the lens

image: 3D surface structure imaging at one day post-corneal wounding shows immune cells (CD45+, green) migrating along ciliary zonule fibrils (MAGP1+, white) that extend along the surface of the matrix capsule that surrounds the lens (perlecan+, red). Also seen are the ciliary zonules (white) that link the lens to the ciliary body. Nuclei in both these tissues of the eye are labeled blue.

Image: 
JodiRae DeDreu, researcher in the lab of Sue Menko, Thomas Jefferson University

PHILADELPHIA - The lens of the eye is an unusual organ. Unlike most of the body's organs, blood vessels don't reach the lens. If they did, they'd obscure our vision and we wouldn't be able to see. The lack of vasculature led scientists to believe immune cells, which travel via the bloodstream, couldn't get to this part of the body either. But a few years ago, Jefferson researchers challenged this long-held assumption by demonstrating that immune cells populate the lens in response to degeneration. Now the Jefferson team finds the eye also launches an immune response in the lens after injury. The discovery adds to a growing body of evidence that is working to overturn the accepted dogma of the field.

"Why would we evolve a tissue that is so central to our being able to see without ways to ensure its protection, its ability to repair itself?" says Sue Menko, PhD, Professor in the Department of Pathology, Anatomy and Cell Biology at Thomas Jefferson University, who led the research. "Immune cells are central to that protection and repair."

The lens of the eye works like a camera lens. Its main purpose is to focus images coming in through the cornea - the transparent front layer of the eye - onto the retina at the back of the eye. The images are detected by the retina and then translated in the brain as what we see. That lens must be crystal clear. As a result, scientists have always described the lens as a tissue without vasculature and therefore no source of immune cells either.

"At some point, you think about it and you wonder how that's possible," Dr. Menko says. "It doesn't really make a lot of sense."

The puzzle led Dr. Menko and her team to investigate whether immune cells are present in the eye. In a previous study, they discovered that when the lens is in a diseased state, immune cells are not only recruited there, but they also show up in the cornea, retina, and vitreous body - all parts of the eye that don't normally have immune cells. Dr. Menko's work suggested that the immune cells come from the ciliary body, a sort of muscle that helps squeeze and pull the lens, changing its shape, and helping it focus.

"The ciliary body is also a place that is vascular rich so it seemed like the most obvious place to look," Dr. Menko says.

Now, in the latest work, Dr. Menko and colleagues show that after injury to the cornea, immune cells travel from the ciliary body to the lens along fibers known as ciliary zonules. The researchers used fluorescent markers and high-powered microscopes to observe structures of mouse eyes one day after receiving a scratch on the cornea. The high-tech imaging analysis Dr. Menko's team used revealed that following injury to the cornea, the immune system launches a response to protect the lens. Immune cells are recruited to the lens via the ciliary zonules, and crawl along the surface of the lens to surveil and protect from adverse impacts of the corneal wound.

"This is really the first demonstration of surveillance by immune cells of the lens in response to injury somewhere else in the eye," Dr. Menko says.

The researchers also found that some immune cells were able to cross the lens capsule, a membranous structure that helps to keep the lens under tension. The results could point to a role for immune cells in cataract formation.

Together, the findings indicate that in response to damage or disease, the eye utilizes alternative mechanisms - rather than direct contact with the bloodstream like non-transparent tissues do - to ensure that immune cells get to sites to provide healing and protection.

"We're excited to go from thinking this doesn't make sense to proving that the body is amazing and can adapt to anything. You just have to go in and look for it," Dr. Menko says.

"We should be willing to challenge dogma because that's where discovery is," she adds. "It can enlighten what we know if we always keep our mind open to what doesn't make sense and what maybe should be challenged to understand things better."

Credit: 
Thomas Jefferson University

MAVEN maps electric currents around Mars that are fundamental to atmospheric loss

image: This image is from a scientific visualization of the electric currents around Mars. Electric currents (blue and red arrows) envelop Mars in a nested, double-loop structure that wraps continuously around the planet from its day side to its night side. These current loops distort the solar wind magnetic field (not pictured), which drapes around Mars to create an induced magnetosphere around the planet. In the process, the currents electrically connect Mars' upper atmosphere and the induced magnetosphere to the solar wind, transferring electric and magnetic energy generated at the boundary of the induced magnetosphere (faint inner paraboloid) and at the solar wind bow shock (faint outer paraboloid).

Image: 
NASA/Goddard/MAVEN/CU Boulder/SVS/Cindy Starr

Five years after NASA's MAVEN spacecraft entered into orbit around Mars, data from the mission has led to the creation of a map of electric current systems in the Martian atmosphere.

"These currents play a fundamental role in the atmospheric loss that transformed Mars from a world that could have supported life into an inhospitable desert," said experimental physicist Robin Ramstad of the University of Colorado, Boulder. "We are now working on using the currents to determine the precise amount of energy that is drawn from the solar wind and powers atmospheric escape." Ramstad is lead author of a paper on this research published May 25 in Nature Astronomy.

Earth has such current systems, too: we can even see them in the form of colorful displays of light in the night sky near the polar regions known as the aurora, or northern and southern lights. Earth's aurora are strongly linked to currents, generated by the interaction of the Earth's magnetic field with the solar wind, that flow along vertical magnetic field lines into the atmosphere, concentrating in the polar regions. Studying the flow of electricity thousands of miles above our heads, though, only tells part of the story about the situation on Mars. The difference lies in the planets' respective magnetic fields, because while Earth's magnetism comes from within, Mars' does not.

Planetary magnetic fields

Earth's magnetism comes from its core, where molten, electrically conducting iron flows beneath the crust. Its magnetic field is global, meaning it surrounds the entire planet. Since Mars is a rocky, terrestrial planet like Earth, one might assume that the same kind of magnetic paradigm functions there, too. However, Mars does not generate a magnetic field on its own, outside of relatively small patches of magnetized crust. Something different from what we observe on Earth must be happening on the Red Planet.

What's going on above Mars?

The solar wind, made up largely of electrically charged electrons and protons, blows constantly from the Sun at around a million miles per hour. It flows around and interacts with the objects in our solar system. The solar wind is also magnetized and this magnetic field cannot easily penetrate the upper atmosphere of non-magnetized planets like Mars. Instead, currents that it induces in the planet's ionosphere cause a pile-up and strengthening of the magnetic field, creating a so-called induced magnetosphere. How the solar wind powers this induced magnetosphere at Mars has not been well understood until now.

As solar wind ions and electrons smash into this stronger induced magnetic field near Mars, they are forced to flow apart due to their opposite electric charge. Some ions flow in one direction, some electrons in the other direction, forming electric currents that drape around from the dayside to the nightside of the planet. At the same time, solar x-rays and ultraviolet radiation constantly ionize some of the upper atmosphere on Mars, turning it into a combination of electrons and electrically charged ions that can conduct electricity.

"Mars' atmosphere behaves a bit like a metal sphere closing an electric circuit," Ramstad said. "The currents flow in the upper atmosphere, with the strongest current layers persisting at 120-200 kilometers (about 75-125 miles) above the planet's surface." Both MAVEN and previous missions have seen localized hints of these current layers before, but they have never before been able to map the complete circuit, from its generation in the solar wind, to where the electrical energy is deposited in the upper atmosphere.

Directly detecting these currents in space is infamously difficult. Fortunately, the currents distort the magnetic fields in the solar wind, detectable by MAVEN's sensitive magnetometer. The team used MAVEN to map out the average magnetic field structure around Mars in three dimensions and calculated the currents directly from their distortions of the magnetic field structure.
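The physics behind this step is Ampère's law: the current density is proportional to the curl of the magnetic field, J = (∇ × B) / μ₀. A minimal sketch of how currents could be recovered from a gridded field map (illustrative only, not the mission's actual code; the array layout and grid spacings are assumptions):

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability, in T*m/A


def current_density(Bx, By, Bz, dx, dy, dz):
    """Estimate current density J = (curl B) / mu_0 from a gridded
    magnetic-field map using finite differences (Ampere's law).

    Bx, By, Bz: 3-D arrays of field components [T], indexed [x, y, z];
    dx, dy, dz: grid spacings [m]. Returns (Jx, Jy, Jz) in A/m^2.
    """
    # Partial derivatives needed for the curl, via central differences.
    dBz_dy = np.gradient(Bz, dy, axis=1)
    dBy_dz = np.gradient(By, dz, axis=2)
    dBx_dz = np.gradient(Bx, dz, axis=2)
    dBz_dx = np.gradient(Bz, dx, axis=0)
    dBy_dx = np.gradient(By, dx, axis=0)
    dBx_dy = np.gradient(Bx, dy, axis=1)

    Jx = (dBz_dy - dBy_dz) / MU_0
    Jy = (dBx_dz - dBz_dx) / MU_0
    Jz = (dBy_dx - dBx_dy) / MU_0
    return Jx, Jy, Jz
```

For a simple test field B = (-cy, cx, 0), the curl is a constant (0, 0, 2c), so the recovered current is uniform and points along z; applying the same operation to MAVEN's averaged field map is what lets the current structure "pop out", as Ramstad describes.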

"With a single elegant operation, the strength and paths of the currents pop out of this map of the magnetic field," Ramstad said.

The Red Planet's destiny

Without a global magnetic field surrounding Mars, the currents induced in the solar wind can form a direct electrical connection to the Martian upper atmosphere. The currents transform the energy of the solar wind into magnetic and electric fields that accelerate charged atmospheric particles into space, driving atmospheric escape. The new results reveal several unexpected features relevant to MAVEN's goal of understanding atmospheric escape: the energy that drives escape appears to be drawn from a much larger volume than was often assumed.

Solar-wind-driven atmospheric loss has been active for billions of years and contributed to the transformation of Mars from a warm and wet planet that could have harbored life into a global cold desert. MAVEN is continuing to explore how this process works and how much of the planet's atmosphere has been lost.

This research was funded by the MAVEN mission. MAVEN's principal investigator is based at the University of Colorado's Laboratory for Atmospheric and Space Physics, Boulder, and NASA Goddard manages the MAVEN project. NASA is exploring our Solar System and beyond, uncovering worlds, stars, and cosmic mysteries near and far with our powerful fleet of space and ground-based missions.

Credit: 
NASA/Goddard Space Flight Center

Triggered by light, a novel way to switch on an enzyme

image: In the model: blue light triggers a special monooxygenase reaction in an enzyme. This kind of activation was hitherto unknown in enzymology.

Image: 
Steffen L. Drees

Enzymes: they are the central drivers for biochemical metabolic processes in every living cell, enabling reactions to take place efficiently. It is this very ability which allows them to be used as catalysts in biotechnology, for example to create chemical products such as pharmaceuticals. A topic that is currently being widely discussed is photoinduced catalysis, in which researchers harness the ability of nature to start biochemical reactions with the aid of light. What they need for this purpose is enzymes which can be activated by means of light. It is not, however, a simple matter to incorporate the few naturally occurring light-activatable enzymes into biotechnological processes, as they are highly specialised and difficult to manipulate.

Researchers at the Universities of Münster (Germany) and Pavia (Italy) have now identified an enzyme which becomes catalytically active when exposed to blue light and which immediately triggers a reaction hitherto unknown in enzymology. The reaction in question is a special monooxygenase reaction, in which an oxygen atom is transferred to the substrate. The reaction is supported by a "helper molecule" which stepwise delivers two electrons. Up to now, it had been assumed that such a light-dependent reaction cannot occur in enzymes.

"The enzyme we have identified belongs to a very large family of enzymes, and it is realistic to assume that other enzymes can be produced, by means of genetic manipulations, which can be activated by light too and which can be used in a very wide range of applications," says Dr. Steffen L. Drees, who headed the study and works at the Institute of Molecular Microbiology and Biotechnology at Münster University. One possible application, for example, is in the field of medicine, where pharmaceuticals could be activated by means of light. The study has been published in the journal "Nature Communications".

Background and method:

In their study, the researchers investigated the enzyme PqsL, which is found in the opportunistic pathogen Pseudomonas aeruginosa and, originally, is not light-dependent. The researchers stimulated the enzyme with blue light and analysed the reaction using, for example, a combination of time-resolved spectroscopic and crystallographic techniques.

The enzyme examined belongs to the family of flavoproteins and - typically for this family of proteins - uses a derivative of vitamin B2 as a so-called cofactor for catalysing the incorporation of oxygen into organic molecules. The cosubstrate NADH (reduced nicotinamide adenine dinucleotide) is needed as a "helper molecule" for the enzymatic reaction, providing the necessary electrons. The reaction mechanism the researchers observed in their study is new, however, and so far, unique. Activated by the exposure to light in the flavin-NADH complex, NADH transfers a single electron to the protein-bound flavin. In this way, a flavin radical is created - a highly reactive molecule which is characterised by an unpaired electron. Using time-resolved spectroscopy, the researchers were able to observe how the molecule formed and changed its state.

The flavin radical has a very negative redox potential, which means that it has a large capacity for transferring electrons to reaction partners. "Because of this property, we assume that the flavin radical can also enable additional reactions to take place which would expand the catalytic potential of this enzyme - as well as of other enzymes too, perhaps," says group leader Prof. Susanne Fetzner.

The enzyme identified is so far the only known light-activatable enzyme that is not naturally photoactive; in the bacterial cell it carries out a light-independent reaction. "The three-dimensional structure of the enzyme shows that the outward-facing flavin co-factor might be the key to photoactivation," says Simon Ernst, first author of the study.

Photoactive enzymes enable a large number of applications - for example, multi-step catalysis in a one-vessel reaction, or spatially resolved catalysis such as functionalising surfaces in certain patterns. They can also be useful for so-called prodrug activation in the body or on the skin - a process in which a pharmacological substance becomes active only after metabolization in the organism.

Credit: 
University of Münster

Can interactive technology ease urban traffic jams?

image: A new analysis from University of Houston Bauer College of Business Dean Paul Pavlou and his colleagues found that interactive technologies can ease traffic congestion in cities that use them.

Image: 
University of Houston

Traffic congestion is a serious problem in the United States, but a new analysis shows that interactive technology - ranging from 511 traffic information systems and roadside cameras to traffic apps like Waze and Google Maps - is helping in cities that use it.

Potentially, the researchers said, technology could limit the need to widen and expand roadways while saving commuters time and money and lessening environmental damage.

"Technology has the potential to help society, and one way is to help us make better infrastructure decisions and put less pressure on roads," said Paul A. Pavlou, dean of the C.T. Bauer College of Business at the University of Houston and corresponding author for the report, published by the journal Information Systems Research.

Pavlou and colleagues Aaron Cheng of the London School of Economics and Min-Seok Pang of Temple University found that U.S. cities using Intelligent Transportation Systems (ITS) saved money, time and other resources, including:

- More than $4.7 billion a year in lost work or productivity
- 175 million hours a year in travel time
- 53 million gallons a year in fossil fuel consumption
- 10 billion pounds less CO2 emitted each year

The researchers analyzed longitudinal data from ITS technologies deployed in 99 urban areas in the United States from 1994 to 2014. That included the metropolitan areas of Chicago, Los Angeles, Atlanta, New York-Newark, Houston, Dallas-Fort Worth and Washington D.C., among others.

Pavlou noted that technology has advanced - and traffic has continued to grow - since 2014, the latest year in the dataset used for the research, making it likely that today's savings would be greater.

The U.S. Department of Transportation describes ITS as "an integrated system of advanced communications technologies embedded in the transportation infrastructure and in vehicles to improve transportation safety and mobility," and has awarded grants to cities to invest in the technologies. The research covered both technologies developed by DOT and commercial technologies designed to improve traffic safety and mobility.

The researchers found that the technology is most effective at reducing traffic congestion when two things happen: commuters use more online services for traffic information, including apps such as Waze, and state governments incorporate more advanced functions into their 511 traveler information systems.

But each city is different. Pavlou noted that while Houston has not adopted the 511 system, it does collaborate with private companies to design and build intelligent transportation systems, including messaging signs, roadside cameras and solar-powered radar detection sites.

Pavlou said the study suggests alternatives to simply building more and bigger roads to keep up with population and traffic growth. Using large-scale technology systems in conjunction with real-time traffic apps at the individual level is less expensive and more effective than only spending funds to expand and maintain roadways, he said.

Houston traffic and its freeway system, for example, have grown significantly since he was a student there in the 1990s, he said. "Traffic is even worse than before, since people move where the roads are built and drive more. The city is growing, but there are alternative ways that do not impose so much demand on roads, with the intelligent use of technology in parallel."

Credit: 
University of Houston

Montefiore and Einstein test a new drug combination to conquer COVID-19

May 26, 2020 - (BRONX, NY) Montefiore Health System and Albert Einstein College of Medicine have begun the next stage of the Adaptive COVID-19 Treatment Trial (ACTT), to evaluate treatment options for people hospitalized with severe COVID-19 infection. The new iteration of the trial, known as ACTT 2, is sponsored by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health.

In March, Montefiore was the first New York location to join the multicenter trial, which evaluated remdesivir, a broad-spectrum antiviral drug given intravenously. Preliminary results from the trial, announced last month and published on Friday in the New England Journal of Medicine, show that patients with COVID-19 who received remdesivir recovered in 11 days on average compared to 15 days for patients in the placebo group--a statistically significant improvement. Of the 1,063 clinical trial participants, 91 of them, nearly 10%, were from Montefiore and Einstein.

Following up on remdesivir's promising results, the trial is now studying remdesivir in combination with baricitinib or placebo in a double-blind, randomized trial. Baricitinib is marketed for reducing inflammation associated with rheumatoid arthritis. Researchers want to know if baricitinib combined with remdesivir can prevent or reduce the hyper-inflammatory "cytokine storm" that can fatally overwhelm the lungs and other parts of the body in people with COVID-19 when their immune system reacts to coronavirus infection.

"What concerns us is that in some people the immune response to coronavirus can be more deadly than the infection itself, and there is no known treatment for this yet," said Barry Zingman, M.D., professor of medicine at Einstein and clinical director, infectious diseases, at the Moses division of Montefiore Health System. "Including baricitinib in our trial may reduce COVID-19-related inflammation and combining baricitinib with remdesivir may yield an even more effective treatment option for people most severely affected by this illness." Dr. Zingman oversaw the original remdesivir study at Montefiore and is directing ACTT 2.

Patients enrolled in ACTT 2 are hospitalized with a laboratory-confirmed coronavirus infection and lung complications, including rattling sounds when breathing, a need for supplemental oxygen, abnormal chest X-rays showing pneumonia, or the need for a mechanical ventilator. All patients will receive remdesivir intravenously for up to 10 days. Half of the patients will also be given baricitinib by mouth, with the remaining half receiving an identical placebo, both for up to 14 days.

Remdesivir was developed by Gilead Sciences, Inc. Baricitinib was developed by Eli Lilly and Company.

Credit: 
Albert Einstein College of Medicine

Modern problems, primitive solutions: A glimpse into archaic protein synthesis systems

image: The study highlights the possible mechanisms of evolution of the current genetic code through the transition of the recognition site by aaRS.

Image: 
Tadashi Ando (Tokyo University of Science)

In cells, protein is synthesized based on the genetic code. Each protein is coded by the triplet combination of chemicals called "nucleotides," and a continuous "reading" of any set of triplet codes will, after a multi-step process, result in the creation of a chain of amino acids, a protein. The genetic code is matched with the correct amino acid by a special functional RNA aptly named transfer RNA or tRNA (which, incidentally, is itself composed of its own type of "codes"). An enzyme called "aminoacyl-tRNA synthetase" or aaRS accurately assigns a specific amino acid to the correct "code" through a tRNA by recognizing unique structural components called "identity elements" on the tRNA. In the case of the amino acid alanine, the identity element for recognition by the enzyme alanyl-tRNA synthetase (AlaRS) is an unusual base pair, "G3:U70," present in the minihelix structure (the amino acid-accepting upper half region) of tRNA. Considering its importance in the recognition of the code, the base pair is popularly known as the "operational RNA code."

The evolution of this complex tRNA-aaRS system is a fascinating enigma, as the existing evolutionary evidence suggests that the upper half of the tRNA containing this operational code appeared earlier in evolutionary history than the lower half that binds to the triplet code of mRNA. Interestingly, in a primitive microorganism, Nanoarchaeum equitans, the genes coding for the AlaRS subunits α and β are split, with the two genes being separated by half the length of the chromosome.

This interesting fact inspired a team of scientists at Tokyo University of Science, led by Prof. Koji Tamura, to hypothesize that these split forms of AlaRS in N. equitans might be connected with the evolutionary history of aaRS enzyme activity.

Prof. Tamura emphasizes the significance of their study, published in Journal of Molecular Evolution, in the evolutionary context, "AlaRS-α shows the G3:U70-independent addition of alanine to RNA minihelix regions. Our data indicate the existence of a simplified process of alanine addition to tRNA by AlaRS early in the evolutionary process, before the appearance of the G3:U70 base pair."

The minihelix parts of tRNAs were previously known to be the sites where many aaRSs add amino acids. To understand how the minihelix of alanine-specific tRNA (minihelixAla of tRNAAla) interacts with the AlaRS subunits, the researchers cloned the coding sequences of the α and β subunits of N. equitans AlaRS and then purified the synthesized proteins.

The researchers noticed that, at a relatively high concentration, AlaRS-α alone was capable of adding alanine to both tRNAAla and minihelixAla. They also observed that AlaRS-α alone interacts with the end of the alanine-accepting region of tRNAAla, but not with the G3:U70 base pair. This was in stark contrast to prior knowledge of the tRNAAla-AlaRS system. In brief, when both AlaRS-α and AlaRS-β were present, AlaRS behaved in a G3:U70-dependent manner, but working alone, AlaRS-α could add alanine to tRNAAla and minihelixAla in a G3:U70-independent manner. The researchers deduced that "the G3:U70 may be a late-arriving 'operational RNA code,' relevant to later alanylation systems incorporating further specificity through the evolution of the AlaRS-β subunit."

So, what makes the findings of this study so important? Prof. Tamura explains the significance of the striking results of their research: "Our findings reveal for the first time that a G3:U70-independent mechanism of alanine addition exists. Furthermore, using 'RNA minihelix' molecules, which are considered to be the primitive form of tRNA, we could also illuminate the 'morphology' of tRNA before the evolutionary appearance of the G3:U70 base pair."

While discussing the broader implications of their study, Prof. Tamura comments thoughtfully, "Breakthroughs in science almost always come from curiosity-driven research, and the results of our study approach the mystery of the origin of life. It has the potential to transform many areas." His team is now focusing on an extensive structural analysis using mutants of N. equitans AlaRS-α, but their current findings, selected as the cover of the journal's August issue, are already enough to prompt a rethinking of chapters of evolutionary history that scientists have long believed to be fundamental!

Credit: 
Tokyo University of Science

Lymph node metastases form through a wider evolutionary bottleneck than distant metastases

The evolutionary processes underlying metastasis - the development of secondary malignant growths away from the primary tumor site - in human patients are still incompletely understood.

Metastases can form in locoregional lymph nodes draining the primary tumor - a form of progression that portends a worse prognosis but can still be curable - or they can develop in distant organs. The latter case defines stage IV disease and treatments for it are typically considered palliative.

It is unknown whether lymph node and distant metastases are only distinguished by their different prognostic implications, or whether the biology underlying their formation is also distinct.

In a new study, published in Nature Genetics, Kamila Naxerova, PhD, of the Center for Systems Biology at Massachusetts General Hospital, Johannes Reiter, PhD, of the Canary Center for Cancer Early Detection at Stanford and colleagues now show that lymph node and distant metastases develop through different evolutionary mechanisms.

Reconstructing the evolutionary histories of dozens of primary colorectal cancers and their metastases, the team showed that lymph node metastases are a genetically highly diverse group. Their pronounced heterogeneity indicates that they can be seeded by many different primary tumor sub-lineages.

In contrast, distant metastases are homogeneous. They typically resemble each other and have a recent common ancestor, suggesting that fewer primary tumor cells possess the ability to form lesions in distant organs. Moreover, the genetic diversity within individual lymph node metastases is also higher than the genetic diversity within individual distant lesions.

These results show that the selective pressures shaping metastasis development in different anatomical sites differ substantially. Lymph node metastasis formation is comparatively "easy" and can be achieved by many cells. Dissemination to and outgrowth in distant organs, on the other hand, appears to be much more challenging and represents a major bottleneck in tumor progression. A significantly smaller fraction of genetic lineages within a primary tumor appears to be capable of this feat. Perhaps these differences can explain why diagnosis of lymph node metastases generally is a less ominous sign than the presence of distant metastases.

In future studies, it will be important to study the molecular and cell biological mechanisms underlying differential selection in lymph nodes and distant sites.

For example, it is possible that distant metastasis is more difficult to achieve because target organs like the liver are located further away from the primary tumor than locoregional lymph nodes, requiring cells to travel farther distances.

Or perhaps, the microenvironment of the lymph node is for some reason a more hospitable milieu for disseminating tumor cells than the parenchyma of distant organs. Understanding the molecular factors that are rate-limiting for metastasis formation in different sites could lead to novel preventative treatments.

Credit: 
Massachusetts General Hospital

Ozone disinfectants can be used to sterilize cloth and N95 masks against COVID-19

The COVID-19 pandemic is wreaking havoc across the whole world and posing an unending challenge for healthcare systems, hospitals, medics and paramedics. The most vivid example is the shortage of equipment and preventive kits, including N95 masks. Demand for N95 masks in hospitals has been high, and the shortage has forced many local hospitals to use ordinary masks as well. The bigger problem for healthcare professionals and others in hospitals is how best to sterilize (clean) the masks for use and, if the need arises, for reuse.

Dr. Craig G Burkhart, from the Department of Medicine, University of Toledo College of Medicine, Toledo, Ohio, USA, in his editorial published in The Open Dermatology Journal, describes a suitable way of sterilizing protective masks. According to him, the best cleansing approach for protective masks is to treat them with chemical sterilizing agents. These agents rely on oxidative chemistry: low-molecular-weight oxidizing molecules, applied with tested methodologies, can kill bacteria, mold, and viruses. Ozone sterilization is one very effective method.

Ozone, or activated oxygen (O3), is a sterilizing agent that has proved successful in destroying bacteria, fungi, viruses, and protozoa. When combatting viruses, ozone diffuses through the protein coat, all the way to the nucleic acid, damaging it and killing the organism. The activated oxygen destroys the unsaturated lipid envelope of the virus by breaking its multiple-bond configuration. Because the nuclear content of the virus cannot survive without an intact lipid envelope, the virus is killed. The COVID-19 virus is one such virus, with an unsaturated lipid envelope enclosing its nuclear content. Other viruses that cannot withstand activated oxygen include poliovirus 1 and 2, human rotavirus, Norwalk virus, parvoviruses, and hepatitis A, B, and non-A non-B.

SoClean is one sanitizer that uses this method to sterilize cloth, masks and similar items. It is designed to work with Continuous Positive Airway Pressure (CPAP) machines, and both can be used to sterilize N95 and other masks for reuse in hospitals and elsewhere.

This editorial is open access and can be read from the following link: https://benthamopen.com/ABSTRACT/TODJ-14-14

Credit: 
Bentham Science Publishers

What do ants and light rays have in common when they pass through lenses?

image: Figure 1. Similarities between dispersion of light and dispersion of ant trails across convex and concave lenses between light sources and target, or nest and food patch.
Light rays that originate from the light source and pass through a lens (a) to reach the target can travel through a wider area of the convex lens (b) than of the concave lens (c). Ants (d) travelling between their nest and a food source behave in a generally similar manner: they disperse more across the convex "lens" (e) than across the concave "lens" (f) made of Velcro that slows the ants down. The hypothetical reason underlying this similarity is the tendency of both the light and the ants to reduce travel time.

Image: 
Choi J, Lim H, Song W, Cho H, Kim HY, Lee SI, Jablonski PG. (b, c, e, f are adapted and modified from Choi et al., Scientific Reports (2020); https://www.nature.com/articles/s41598-020-65245-0)

Light and foraging ants seem totally unrelated, but they have one thing in common: they travel along time-reducing paths. According to Fermat's principle of refraction, a ray of light bends when it meets a medium with a different refractive index, and it travels along time-minimizing paths. Recently, similar behavior was reported in foraging ants in a lab setting: ants 'bend' their travel paths when they enter a substrate that slows them down. But would ants behave like light passing through convex or concave lenses when they travel through impediments with lens-like shapes? A multidisciplinary team of researchers from Seoul National University (SNU) and DGIST in Korea, composed of ecologists and an engineer, conducted field experiments to find out whether the ants indeed behave similarly to light.
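Fermat's time-minimization argument is easy to check numerically. The sketch below (illustrative geometry and speeds, not the authors' model of ant behavior) scans candidate crossing points on the boundary between a fast and a slow medium and confirms that the time-minimizing path satisfies Snell's law, sin θ1 / v1 = sin θ2 / v2:

```python
import math

# Geometry: start at (0, A) in the fast medium (speed V1), end at
# (D, -B) in the slow medium (speed V2); the path crosses the
# boundary y = 0 at some point (x, 0). All values are illustrative.
A, B, D = 1.0, 1.0, 2.0
V1, V2 = 1.0, 0.5  # the "Velcro" side is slower

def travel_time(x):
    """Total travel time of a straight-line path broken at (x, 0)."""
    return math.hypot(A, x) / V1 + math.hypot(B, D - x) / V2

# Brute-force scan for the time-minimizing crossing point.
xs = [D * i / 200000 for i in range(200001)]
x_best = min(xs, key=travel_time)

# Fermat's principle predicts Snell's law at the optimum:
# sin(theta1) / V1 == sin(theta2) / V2.
sin1 = x_best / math.hypot(A, x_best)
sin2 = (D - x_best) / math.hypot(B, D - x_best)
print(abs(sin1 / V1 - sin2 / V2) < 1e-3)  # True: the ratios agree
```

The minimizing path bends away from the straight line so as to spend less distance in the slow medium, which is the same qualitative behavior the ants showed on the Velcro.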

The study started with the following reasoning. The laws of optics predict that light rays reach the target on the other side of a convex lens by crossing the lens nearly everywhere; the crossing points can be quite far from the center of the lens (Fig. 1b). On the other hand, light rays crossing a concave lens can reach the target only when they pass through points near the center of the lens (Fig. 1c).

The researchers asked whether ants show similar trends when they cross lens-shaped impediments, made of Velcro tape, during their foraging trips. On the Velcro "lens", the ants cannot walk as fast as they can on a normal, flat surface. The researchers put these "lenses" between the nest entrance and the food source near several colonies of Japanese carpenter ants and observed what happened. It turned out that the trails of ants crossing the "convex lens" diverged away from the center more than they did on the "concave lens". That is, more ants avoided the thick central part of the convex impediment, and more ants walked through the narrow central part of the concave impediment. This suggests that ants tend to avoid the parts of impediments that considerably slow them down.

This general similarity to the behavior of light crossing through convex and concave lenses is consistent with the idea that foraging ants, like light rays, use time-reducing paths. "I studied math and physics as an undergraduate, and this helped me come up with this research idea after I was exposed to the wonders of ant behavior by my labmate, Dr. Woncheol Song," says Ph.D. candidate Jibeom Choi, who conducted the experiments and created a mathematical model of ant behavior. Collaboration between a behavioral ecologist, Prof. Piotr Jablonski (Laboratory of Behavioral Ecology and Evolution, SNU), and a theoretical engineer, Prof. Hoyoung Kim (Microfluids & Soft Matter Laboratory, SNU), additionally highlights the multidisciplinary merit of this study. "This is an example of the synergistic effect of multidisciplinary collaboration; by crossing the boundaries between disciplines, we gain a fuller understanding of the natural world," remarks the integrative ecologist in the study, Prof. Sang-im Lee (Laboratory of Integrative Animal Ecology, DGIST), who has been actively pursuing multidisciplinary research at SNU and DGIST and has been involved in research on ants for years.

There are, however, remaining questions to be answered. As the authors themselves point out, the behavior of individual ants on the Velcro impediment and at its borders has not been thoroughly investigated, and it may contribute to the observed pattern. Further studies should therefore focus on the behavior of individual ants at the edges between different substrates.

Credit: 
Laboratory of Behavioral Ecology and Evolution at Seoul National University

Can copying your friends help you achieve your goals?

Consumers often struggle to achieve self-set life improvement goals, but what if deliberately emulating the successful strategies used by their friends could help them?

A new paper published in the Journal of the Association for Consumer Research shows that encouraging people to find and mimic exercise strategies used by their friends increases the amount of time they spend exercising relative to receiving an exercise strategy passively. In the study, Katie S. Mehr, Amanda E. Geiser, Katherine L. Milkman, and Angela L. Duckworth introduce the "copy-paste prompt," a nudge that encourages consumers to seek out and mimic a goal-achievement strategy used by an acquaintance.

Copy-paste prompts "are easy to implement, virtually costless, and widely applicable with the potential to improve outcomes ranging from healthy eating to academic success," the authors write in "Copy-Paste-Prompts: A New Nudge to Promote Goal Achievement."

Copy-paste prompts may be more effective than other methods for bolstering goal achievement for several reasons: behaviors are more appealing when learned from observation, plus learning from models increases both a person's expectations of their own abilities and their likelihood of using information. However, consumers may not take full advantage of opportunities to observe and emulate others in their social network. In this case, copy-paste prompts may add value by helping consumers better take advantage of this resource. Plus, the information is more customized and goal relevant, since consumers select peers whose behavior they want to emulate.

In the authors' longitudinal study, over 1,000 participants were asked how many hours they spent exercising in the last week and were randomly assigned to one of three conditions: the copy-paste prompt condition, a quasi-yoked control condition, or a simple control condition.

In the copy-paste prompt condition, participants read the following:

"In this study, we want to help you learn about an effective hack or strategy that someone you know uses as motivation to exercise. Over the next two days, we'd like you to pay attention to how people you know get themselves to work out. If you want, you can ask them directly for their motivational tips and strategies."

In the quasi-yoked control condition, participants read the following:

"In this study, we're hoping to help you learn about an effective hack or strategy that motivates people to exercise. Over the next two days, we'd like you to get ready to learn a new strategy to motivate you to exercise."

The participants who received the copy-paste prompt spent more time exercising the following week than participants assigned to either a quasi-yoked or simple control condition. "The benefits of copy-paste prompts are mediated by the usefulness of the adopted exercise strategy, commitment to using it, effort put into finding it, and the frequency of social interaction with people who exercise regularly," the authors write.

Looking ahead, the authors write, "It may be that once a consumer learns to copy-paste in one domain (e.g., exercise), she will be able to apply this technique in a way that improves many other outcomes (e.g., retirement savings)."

Credit: 
University of Chicago Press Journals

How exposure to negative feedback influences goal-directed consumer behaviors

Threats to self-esteem and negative feedback are pervasive in today's society. Social media researchers, for example, have shown a link between frequent usage of social media websites and upward social comparison and negative affect.

How does this influence consumer behavior? A new paper published in the Journal of the Association for Consumer Research examines how single and repeated exposure to negative feedback in one domain influences goal-directed consumer behaviors.

In "The Motivating And Demotivating Effects Of Negative Feedback On Cross-Domain Goal Pursuit Behaviors," authors Alison Jing Xu, Shirley Y. Y. Cheng, and Tiffany Barnett White hypothesize that receiving negative feedback induces a general motive to boost one's self-view, which motivates people to pursue proving goals (i.e., those that allow them to demonstrate their competence), even in areas that are unrelated to the feedback. The authors propose that negative feedback also demotivates the pursuit of enjoyment goals (i.e., those that focus on the pursuit of pleasure and therefore have no self-restorative characteristics).

These motivational consequences not only influence consumers' goal pursuit behaviors when a single goal (either a proving goal ["Playing this game can prove my intellectual abilities"] or an enjoyment goal ["I would have a lot of fun playing this game"]) is activated, but also affect consumers' choice between a proving goal and an enjoyment goal.

Although receiving negative feedback may give rise to negative affect, the motivational consequences of negative feedback on goal pursuit behaviors were not driven by negative affect, per se. Instead, the motivation to boost one's self-view mediates the motivational influence of negative feedback on goal pursuit behaviors in other unrelated domains.

The authors conducted four experiments to support their hypothesis: manipulating negative feedback by providing performance feedback on a creativity test or an emotional intelligence test, and demonstrating its influence on consumers' motivation to pursue either a proving or an enjoyment goal, as well as on consumers' choice between a proving goal and an enjoyment goal.

"We showed that receiving negative feedback in an unrelated domain motivated consumers to spend more effort searching for product information if their search behavior was driven by the goal to identify the best option and prove their ability to make wise decisions (i.e., a proving goal)," the authors write. However, when consumers' search behavior was driven by the goal of having fun (i.e., an enjoyment goal), receiving negative feedback reduced search efforts.

Study participants who received negative performance feedback on a creativity task in one study responded by searching for information about options in an ostensibly unrelated study when this search behavior was framed as a proving goal. Similarly, receiving negative feedback on an emotional intelligence quiz increased participants' later likelihood of choosing to play a game that could demonstrate and improve their intellectual abilities (Clash of Clans) versus one they would enjoy more and have more fun playing (Fruit Ninja).

The findings also suggest that while consumers may be eager to self-improve when they receive initial negative feedback, repeated negative feedback exposure may undermine their confidence in their ability to self-repair, resulting in their being less motivated to pursue proving goals. This finding has important implications not only for the dynamic effects of self-repair motives on consumer behavior, but also for how to give negative feedback. "Specifically, it suggests that although people generally strive to self-improve after negative feedback, too much negative feedback can lead them to seek enjoyment rather than self-improvement, even in areas that have nothing to do with what they failed on previously," the authors write.

The findings may offer a path through which firms can have a positive impact on consumers' well-being. The findings suggest that, independent of potential effects of mood, receiving feedback that causes consumers to experience a threat to their self-concept (e.g., unflattering social comparison on a social media website) can influence not only their preferences for a given brand, but also the depth of their engagement with that brand. For example, the consumers in the first experiment were not only interested in trying the items in the assortment following negative (versus positive) feedback, they also literally searched more options for a longer period of time.

The drop of self-esteem as a result of upward social comparison may facilitate the marketing of goods, services, and activities that are associated with proving (but not enjoyment) goals (e.g., via Facebook ads) in the short run but not in the long run.

Credit: 
University of Chicago Press Journals

Renewable energy advance

image: New characterization techniques developed at the Catalysis Center for Energy Innovation may help improve electrochemical storage technologies, such as fuel cells used in UD's hydrogen fuel cell buses.

Image: 
Photo by Jon Cox and courtesy of Josh Lansford

Renewable technologies are a promising solution for addressing global energy needs in a sustainable way.

However, widespread adoption of renewable energy resources from solar, wind, biomass and more has lagged, in part because the energy they produce is difficult to store and transport.

As the search for materials to efficiently address these storage and transport needs continues, University of Delaware researchers from the Catalysis Center for Energy Innovation (CCEI) report new techniques for characterizing complex materials with the potential to overcome these challenges.

The researchers recently reported their technique in Nature Communications.

Seeing the parts, as well as the whole

Technologies currently exist for characterizing highly ordered surfaces with specific repeating patterns, such as crystals. Describing surfaces with no repeating pattern is a harder problem.

UD doctoral candidate and 2019-2020 Blue Waters Graduate Fellow Josh Lansford and Dion Vlachos, who directs both CCEI and the Delaware Energy Institute and is the Allan and Myra Ferguson Professor of Chemical and Biomolecular Engineering, have developed a method to observe the local surface structure of atomic-scale particles in detail while simultaneously keeping the entire system in view.

The approach, which leverages machine learning, data science techniques and models grounded in physics, enables the researchers to visualize the actual three-dimensional structure of a material they are interested in up close, but also in context. This means they can study specific particles on the material's surface, but also watch how the particle's structure evolves -- over time -- in the presence of other molecules and under different conditions, such as temperature and pressure.

Put to use, the research team's technique will help engineers and scientists identify materials that can improve storage technologies, such as fuel cells and batteries, which power our lives. Such improvements are necessary to help these important technologies reach their full potential and become more widespread.

"In order to optimize electrochemical storage technologies, such as fuel cells and batteries, we must understand how they work and what they look like," said Lansford, the paper's lead author, who is advised at UD by Vlachos, the project's principal investigator.

"We need to understand the structure of the materials we are generating, in detail, so that we can recreate them efficiently at a large scale or modify them to alter their stability."

Computational modeling

Lansford concedes that it is too costly and time-consuming to model complex structures directly. Instead, the researchers take data generated from a single spot on the surface of a material and scale it to be representative of a variety of catalysts on many surfaces of many different materials.

Imagine a cube made up of many atoms. The atoms located on the corners of the cube will have different properties than, say, the atoms located on one side of the cube. On the corners, fewer atoms are connected to each other, and atoms may be spaced closer together; on the side of the cube, more atoms are connected even though they may be spaced farther apart.

The same is true for catalyst materials. Even if we can't see them with the naked eye, the particles that make up a catalyst are adsorbed onto many different sites on the material -- and these sites have different edges, bumps and other variations that affect how materials located there will behave. Because of these differences, scientists can't just use a single number to try to quantify what's happening across a material's entire surface, so they have to estimate what these surfaces look like.

According to Lansford, this is where computational modeling can help.

The research team used experimental measurements of different wavelengths of infrared light and machine learning to predict and describe the chemical and physical properties of different surfaces of materials. The models were trained entirely on mathematically generated data, allowing them to visualize many different options under many different conditions.
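The train-on-generated-data idea can be illustrated with a toy surrogate. Everything below is an invented stand-in, not the authors' physics-based model: a synthetic "IR spectrum" whose peak position shifts with a made-up surface-site coordination number, and a simple ridge regression that learns to read that property back off a new spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
WN = np.linspace(1800, 2100, 120)  # wavenumber grid in cm^-1 (illustrative)

def synthetic_spectrum(cn):
    """Toy IR spectrum: one Gaussian peak whose position shifts
    linearly with an invented coordination number cn, plus noise."""
    center = 2060.0 - 15.0 * cn
    peak = np.exp(-0.5 * ((WN - center) / 8.0) ** 2)
    return peak + rng.normal(0.0, 0.01, WN.size)

# Training data generated entirely from the mathematical model,
# echoing the paper's idea of training on simulated spectra.
cn_train = rng.uniform(3.0, 9.0, 500)
X = np.vstack([synthetic_spectrum(c) for c in cn_train])

# Closed-form ridge regression maps a spectrum back to cn.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ cn_train)

# The fitted model recovers the property from an unseen noisy spectrum.
pred = float(synthetic_spectrum(6.0) @ w)
print(round(pred, 2))  # close to the true value, 6.0
```

The real work replaces this toy rule with physics-grounded simulated spectra and a far richer learner, but the workflow (simulate, train, then invert a measured spectrum into structure) is the same in spirit.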

They developed special open-source software to apply the technique on different metals, materials and adsorbates. The methodology is flexible enough to be used with other spectroscopic techniques beyond infrared light, so that other scientists and engineers can modify the software to advance their own work.

"This work introduces an entirely new way of thinking on how to bridge the gap between real-world materials and well-defined model systems, with contributions to surface science and machine learning that stand on their own," said Lansford.

Credit: 
University of Delaware

Beware of false negatives in diagnostic testing of COVID-19

image: Maribel Jose and Zhellann Aguilar test Covid-19 samples in the lab.

Image: 
Keith Weller/Johns Hopkins Medicine

One of the most commonly used diagnostic tools, particularly during this pandemic, is the reverse transcriptase polymerase chain reaction test (RT-PCR), which uses a person's respiratory sample to detect viral particles and determine if the person may have been exposed to a virus. Laboratory professionals across the U.S. and the globe have used RT-PCR to find out if a person has been infected with SARS-CoV-2, the virus that causes COVID-19. These tests have played a critical role in our nation's response to the pandemic. But, while they are important, researchers at Johns Hopkins have found that the chance of a false negative result -- when a virus is not detected in a person who actually is, or recently has been, infected -- is greater than 1 in 5 and, at times, far higher. The researchers caution that the predictive value of these tests may not always yield accurate results, and timing of the test seems to matter greatly in the accuracy.

In the report on the findings published May 13 in the journal Annals of Internal Medicine, the researchers found that the probability of a false negative result decreases from 100% on Day 1 of being infected to 67% on Day 4. The false negative rate decreased to 20% on Day 8 (three days after a person begins experiencing symptoms). They also found that on the day a person started experiencing actual symptoms of illness, the average false negative rate was 38%. In addition, the false negative rate began to increase again from 21% on Day 9 to 66% on Day 21.

The study, which analyzed seven previously published studies on RT-PCR performance, adds to evidence that caution should be used in the interpretation of negative test results, particularly for individuals likely to have been exposed or who have symptoms consistent with COVID-19.
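The practical consequence of a false negative rate can be made concrete with Bayes' rule. The numbers below borrow only the article's Day 8 false negative rate; the pretest probability and the assumption of perfect specificity are illustrative choices, not figures from the study.

```python
def prob_infected_given_negative(pretest, fnr, specificity=1.0):
    """P(infected | negative test) via Bayes' rule.

    pretest     -- prior probability of infection before testing
    fnr         -- false negative rate, i.e. 1 - sensitivity
    specificity -- P(negative | not infected); assumed perfect here
    """
    p_neg_and_infected = fnr * pretest
    p_neg_and_healthy = specificity * (1.0 - pretest)
    return p_neg_and_infected / (p_neg_and_infected + p_neg_and_healthy)

# With the study's best-case Day 8 false negative rate (20%) and an
# assumed 30% pretest probability, a negative result still leaves
# roughly an 8% chance of infection.
print(round(prob_infected_given_negative(0.30, 0.20), 3))  # 0.079
```

This is why the researchers caution against relying on a single negative result: for a patient with a high pretest probability, the residual risk after a negative test remains substantial.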

Credit: 
Johns Hopkins Medicine

New testing system predicts septic shock outcomes

More than 1.7 million Americans develop sepsis each year, and more than 270,000 die from it. The condition--which happens when the body has an extreme response to a bacterial or viral infection, causing a chain reaction that can lead to organ failure and death--has few strategies for treatment.

That's what Savas Tay found a few years ago, when his mother died from sepsis. "I learned that there is very little they can do to really monitor and diagnose these patients," said Tay, associate professor of molecular engineering at the Pritzker School of Molecular Engineering (PME) at the University of Chicago. "A good percentage of them will ultimately die, which is unacceptable, considering the high-quality facilities, physicians, and therapies we have available. I was kind of enraged with the situation."

So Tay set out to do something about it. Now, he and his collaborators have developed a new, extremely sensitive method that can quantify bacteria, an antibiotic resistant gene, and immune molecule levels within sepsis patients, far more rapidly than current protocols.

By deploying these tests at intervals, the researchers also found that it wasn't the absolute levels of these markers that mattered--it was the change in the levels. Using machine learning, they accurately predicted which patients with sepsis would recover quickly, recover later, or ultimately succumb to the condition. That information could ultimately help physicians diagnose and treat patients in a more personalized way.

"Our findings provide a new approach to the diagnosis of sepsis with the potential to identify the causal pathogen early," said Gokhan Mutlu, professor of medicine and chief of pulmonary and critical care medicine at UChicago and co-author of the research. "This will allow us to use the appropriate antibiotics earlier before the culture results are available and minimize the use of antibiotics that are needed to treat the infection. By combining the pathogen-related and host response data, we are able to predict outcomes in patients with sepsis."

The results were published May 25 in the journal Nature Communications.

Understanding how to treat sepsis

Because sepsis is often caused by microbial infections, the condition is usually initially treated with antibiotics. Treatment must happen quickly--any delay in the administration of correct antibiotics increases the chances of the patient dying. But doctors often aren't sure which bacteria is causing the infection, and growing cultures to pinpoint the bacteria can take days.

Even if doctors can treat the infection directly, the condition can cause the body's immune response to become exaggerated. By attacking the pathogens, the immune system can release too many immune system proteins called cytokines, which can ultimately overwhelm the body and kill the patient. Anti-inflammatory drugs can help treat this, but often physicians do not know when this "cytokine storm" is taking place until it's too late.

"The immune system has a gas and a brake," Tay said. "You need the gas to kill the pathogens, but you need the brake so you don't overshoot inflammation and harm the patient. In all of this, timing is critical. We wanted to know if we could monitor bacterial load and cytokines at the same time, and monitor their changes, to provide better guidance about who should get certain treatments."

Creating an extremely sensitive test

Tay, an expert in single-cell analysis and microfluidics, and his team developed a digital polymerase chain reaction (PCR) test that uses digital proximity ligation assays to quantify the levels of certain genes and proteins in the blood.

Specifically, the test uses a blood sample to test for gram-negative (GN) and gram-positive (GP) bacterial DNA, which is abundant in many septic patients. It also tests for levels of the IL-6 and TNF proteins, the cytokines that the immune system releases to attack pathogens. In addition, it tests for the blaTEM gene, which signifies antibiotic resistance.

The test is extremely sensitive--able to quantify very small changes in the concentrations of these molecules--and provides results within a few hours. Tay worked with pulmonologists at University of Chicago Medicine to try out the test on samples from septic patients.

The researchers took samples once a day for two days from 32 patients and tested their bacterial and protein levels. They found that the bacterial levels of the patients who lived decreased as time went on.

However, in almost every patient that died, IL-6 levels increased throughout their time at the hospital. Even patients who had low bacterial levels to begin with still died if their IL-6 levels increased, showing that the immune system potentially overshot and attacked their own body.

Though IL-6 has been considered a major biomarker in sepsis before, previous researchers did not realize that it was the change in the levels--not the levels themselves--that predicted this outcome.
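The "change, not level" observation is simple to operationalize. The sketch below uses invented numbers, not patient data from the study: it contrasts a naive fixed-threshold rule with a trend rule that flags a rising IL-6 trajectory between two daily draws.

```python
# Toy illustration (invented numbers): the trend in IL-6 between
# draws, not the absolute level, flags who is deteriorating.
patients = {
    "A": (500.0, 120.0),  # day-1, day-2 IL-6 (arbitrary units): high but falling
    "B": (80.0, 340.0),   # low but rising
}

def by_level(levels, threshold=200.0):
    """Naive rule: flag whoever exceeds a fixed concentration."""
    return "at risk" if max(levels) > threshold else "ok"

def by_trend(levels):
    """Rule suggested by the study's finding: flag a rising trajectory."""
    day1, day2 = levels
    return "at risk" if day2 > day1 else "ok"

for pid, levels in patients.items():
    print(pid, by_level(levels), by_trend(levels))
```

Patient A looks alarming by absolute level yet is improving, while patient B looks safe by level yet is the one deteriorating; only the trend rule separates the two correctly, mirroring the finding that direction of change carried the predictive signal.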

In addition, the researchers found several patients with the gene that indicates antibiotic resistance, which would be helpful information for the physicians treating them.

"Sepsis manifests itself differently in each person, therefore having a test like this to shed light on that variation could one day be used by providers to identify which patients may respond better to certain treatments or interventions," said Krysta Wolfe, a pulmonologist and assistant professor of medicine at UChicago, and co-author of the research.

Using machine learning algorithms, the researchers could ultimately use these biomarkers to predict who would recover early, recover late, or die, with nearly 100% accuracy.

"All of the sudden we have this method that allows us to really understand how these patients are going to fare," Tay said. "If there are patients that are going to do badly, then you can start treating these patients in different ways, perhaps with drugs that will help block the immune system from overshooting."

Extending the test to other diseases

Right now, the test happens in a lab, but Tay and his group are developing a machine that can quickly test samples on site at ICUs. They are proceeding with a clinical trial and hope to extend the test to include more groups of bacteria beyond just the GN and GP levels, to help physicians better understand which antibiotics are needed in order to help reduce antibiotic resistance.

This test could also be extended to other infections where cytokines can overtake the body, including viral infections like COVID-19.

"A rapid test like this is needed in many situations and could really change the game for treatment of sepsis," Tay said. "This is a disease that can kill everybody, regardless of your situation."

Credit: 
University of Chicago