Tech

VCU technology could upend DNA sequencing for diagnosing certain DNA mutations

image: From left, postdoctoral scholar Andrey Mikheykin, Ph.D., Jason Reed, Ph.D., and postdoctoral fellow Sean Koebley, Ph.D., worked together on the study.

Image: 
John Wallace, VCU Massey Cancer Center

Doctors are increasingly using genetic signatures to diagnose diseases and determine the best course of care, but using DNA sequencing and other techniques to detect genomic rearrangements remains costly or limited in capabilities. However, an innovative breakthrough developed by researchers at Virginia Commonwealth University Massey Cancer Center and the VCU Department of Physics promises to diagnose DNA rearrangement mutations at a fraction of the cost with improved accuracy.

Led by VCU physicist Jason Reed, Ph.D., the team developed a technique that combines a process called digital polymerase chain reaction (dPCR) with high-speed atomic force microscopy (HSAFM) to create an image with such nanoscale resolution that users can measure differences in the lengths of genes in a DNA sequence. These variations in gene length, known as polymorphisms, can be key to accurately diagnosing many forms of cancer and neurological diseases.

A study detailing the method was recently published in the journal ACS Nano, and the research team reported their results at the annual meetings for the Association of Molecular Pathology and the American Society of Hematology. Previous research detailing the HSAFM technology was described by VCU Massey Cancer Center in 2017.

"The technology needed to detect DNA sequence rearrangements is expensive and limited in availability, yet medicine increasingly relies on the information it provides to accurately diagnose and treat cancers and many other diseases," says Jason Reed, Ph.D., member of the Cancer Biology research program at VCU Massey Cancer Center and associate professor in the Department of Physics at the VCU College of Humanities and Sciences. "We've developed a system that combines a routine laboratory process with an inexpensive yet powerful atomic microscope that provides many benefits over standard DNA sequencing for this application, at a fraction of the cost."

dPCR uses the DNA polymerase enzyme to exponentially clone samples of DNA or RNA for further experimentation or analysis. The sample is then placed on an atomically flat plate for inspection using HSAFM, which drags an extremely sharp microscopic stylus similar to the needle on a record player across the sample to create precise measurements at a molecular level. The technique was adapted by Reed's team to use optical lasers, like those in a DVD player, to process samples at a rate thousands of times faster than typical atomic force microscopy. The researchers then developed computer code to trace the length of each DNA molecule.
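
To give a sense of what that last step involves, here is a minimal sketch in Python (not the VCU team's actual code) of how a traced molecule backbone from an AFM image could be converted into a length estimate and checked for a length polymorphism. The tracing itself, the tolerance value, and the function names are assumptions made for illustration; the conversion relies on the roughly 0.34-nanometer rise per base pair of B-form DNA.

import numpy as np

NM_PER_BP = 0.34  # approximate rise per base pair of B-form DNA

def contour_length_nm(backbone_xy):
    # Sum the segment lengths along a traced molecule backbone (coordinates in nanometers).
    pts = np.asarray(backbone_xy, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def length_in_bp(backbone_xy):
    # Convert the measured contour length into an approximate size in base pairs.
    return contour_length_nm(backbone_xy) / NM_PER_BP

def classify(backbone_xy, expected_bp, tolerance_bp=10):
    # Hypothetical helper: flag molecules whose length deviates from the expected amplicon size,
    # which is how an insertion-type length polymorphism would show up in the image data.
    delta = length_in_bp(backbone_xy) - expected_bp
    return "expected length" if abs(delta) <= tolerance_bp else f"length variant ({delta:+.0f} bp)"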

The team claims that each dPCR reaction costs less than $1 to scan using their technique.

To demonstrate the clinical utility of the process, Reed partnered with Amir Toor, M.D., hematologist-oncologist and member of the Developmental Therapeutics research program at Massey, and Alden Chesney, M.D., associate professor of pathology in the Department of Pathology at the VCU School of Medicine. Together, they compared Reed's technique to the current standard test to diagnose DNA length polymorphisms in the FLT3 gene in patients with acute myeloid leukemia. Patients with these mutations typically have a more aggressive disease and poor prognosis when compared to patients without the mutation.

Reed's technique accurately identified FLT3 gene mutations in all samples and matched the results of the current gold standard test (LeukoStrat® CDx FLT3 Mutation Assay) in measuring the lengths of the gene segments. However, unlike the current test, Reed's analysis also reports the variant allele fraction (VAF). The VAF can show whether the mutation is inherited and allows the detection of mutations that could potentially be missed by the current test.

"We chose to focus on FLT3 mutations because they are difficult to diagnosis, and the standard assay is limited in capability," says Reed. "We plan to continue developing and testing this technology in other diseases involving DNA structural mutations. We hope it can be a powerful and cost-effective tool for doctors around the world treating cancer and other devastating diseases driven by DNA mutations."

Credit: 
Virginia Commonwealth University

Historically redlined neighborhoods are more likely to lack greenspace today

Historically redlined neighborhoods are more likely to lack greenspace today than other neighborhoods. The study, by researchers at the Columbia University Mailman School of Public Health and the University of California, Berkeley and San Francisco, demonstrates the lasting effects of redlining, a racist mortgage appraisal practice of the 1930s that established and exacerbated racial residential segregation in the United States. Results appear in Environmental Health Perspectives.

In the 1930s, the Home Owners' Loan Corporation (HOLC) assigned risk grades to neighborhoods across the country based on racial demographics and other factors. "Hazardous" areas--often those whose residents included people of color--were outlined in red on HOLC maps. In the decades since, redlined neighborhoods experienced lower levels of private and public investment and have remained segregated.

Mounting evidence links historically redlined neighborhoods to worse health outcomes, as well as to elevated exposures to air pollution and other environmental hazards. Areas lacking green space often also have elevated levels of air and noise pollution, as well as higher rates of racial segregation and poverty.

"Though redlining is now outlawed, its effects on urban neighborhoods persist in many ways, including by depriving residents of green space, which is known to promote health and buffer stress," says first author Anthony Nardone, MS, a medical student at the University of California, San Francisco.

"We find lingering effects of racist redlining policies from the 1930s. Future policies should, with the input of local leaders, strive to expand the availability of green space, a health-promoting amenity, in communities of color," adds senior author Joan Casey, PhD, assistant professor of environmental health sciences at the Columbia Mailman School.

The researchers examined 72 urban areas in the United States, estimating the association between HOLC grades and greenspace as measured by satellite imagery from 2010. To isolate the effect of HOLC grades, including redlining, they compared greenspace between neighborhoods that received different HOLC grades but had otherwise similar sociodemographic characteristics according to the 1940 Census. They limited the analysis to neighborhoods that overlapped with 1940 census tract boundaries and accounted for ecoregion, because greenspace is qualitatively different in, for example, the Southwest compared to the Northeast.
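
A minimal sketch of the kind of adjusted comparison described above, assuming a satellite-derived greenspace measure such as NDVI: the file name, column names, specific covariates, and error-clustering choice below are illustrative assumptions, not details taken from the paper.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: ndvi_2010 (greenspace), holc_grade (A-D), 1940 Census covariates, ecoregion, city.
df = pd.read_csv("holc_greenspace.csv")  # hypothetical file

model = smf.ols(
    "ndvi_2010 ~ C(holc_grade, Treatment(reference='A'))"
    " + pct_homeownership_1940 + median_home_value_1940 + pop_density_1940"
    " + C(ecoregion)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})  # cluster errors by city (assumed)

# The coefficients on grades B-D estimate how much less greenspace those neighborhoods have
# relative to grade A, holding the 1940 characteristics and ecoregion fixed.
print(model.summary())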

HOLC risk grades were one part of a larger pattern of racist policies. The Federal Housing Administration, tasked with stimulating the private real estate market during the New Deal, would not underwrite insurance on private mortgages that would have desegregated neighborhoods. Similarly, racially restrictive covenants, which were clauses in homeownership deeds, prohibited the future sale of many homes to people of color. More recently--even after the passage of the 1968 Fair Housing Act, which explicitly made redlining illegal--racist banking and real estate practices have persisted and are reflected by the fallout of the subprime mortgage crisis, in which those in communities of color, particularly Black and Latino individuals, were disproportionately targeted with predatory loans and foreclosures by banks.

The authors note that their analysis of satellite imagery does not provide an indication of greenspace quality (for example, green space in places with arid climates may not be a reasonable proxy for proximity to natural environments and their health-related benefits). They also do not distinguish between public and private greenspace, or between untended forest and manicured parks. In some areas, the presence of green space in the 1930s may also have decreased the likelihood that a neighborhood was redlined.

The researchers say future studies could apply similar methods to analyze metropolitan- and regional-specific associations and assess the extent to which state, county, or city-level policies may modify observed relationships between HOLC grade and green space. Co-authors include Kara E. Rudolph at Columbia University and Rachel Morello-Frosch at the University of California, Berkeley.

Credit: 
Columbia University's Mailman School of Public Health

Mira's last journey: Exploring the dark universe

image: Visualization of the Last Journey simulation. Shown is the large-scale structure of the universe as a thin slice through the full simulation (lower left) and zoom-ins at different levels. The lower right panel shows one of the largest structures in the simulation.

Image: 
Argonne National Laboratory

A massive simulation of the cosmos and a nod to the next generation of computing

A team of physicists and computer scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory performed one of the five largest cosmological simulations ever. Data from the simulation will inform sky maps to aid leading large-scale cosmological experiments.

The simulation, called the Last Journey, follows the distribution of mass across the universe over time — in other words, how gravity causes a mysterious invisible substance called “dark matter” to clump together to form larger-scale structures called halos, within which galaxies form and evolve.

“We’ve learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time.” — Adrian Pope, Argonne physicist

The scientists performed the simulation on Argonne’s supercomputer Mira. The same team of scientists ran a previous cosmological simulation called the Outer Rim in 2013, just days after Mira turned on. After running simulations on the machine throughout its seven-year lifetime, the team marked Mira’s retirement with the Last Journey simulation.

The Last Journey demonstrates how far observational and computational technology has come in just seven years, and it will contribute data and insight to experiments such as the Stage-4 ground-based cosmic microwave background experiment (CMB-S4), the Legacy Survey of Space and Time (carried out by the Rubin Observatory in Chile), the Dark Energy Spectroscopic Instrument and two NASA missions, the Roman Space Telescope and SPHEREx.

“We worked with a tremendous volume of the universe, and we were interested in large-scale structures, like regions of thousands or millions of galaxies, but we also considered dynamics at smaller scales,” said Katrin Heitmann, deputy division director for Argonne’s High Energy Physics (HEP) division.

The code that constructed the cosmos

The six-month span for the Last Journey simulation and major analysis tasks presented unique challenges for software development and workflow. The team adapted some of the same code used for the 2013 Outer Rim simulation with some significant updates to make efficient use of Mira, an IBM Blue Gene/Q system that was housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

Specifically, the scientists used the Hardware/Hybrid Accelerated Cosmology Code (HACC) and its analysis framework, CosmoTools, to enable incremental extraction of relevant information at the same time as the simulation was running.

“Running the full machine is challenging because reading the massive amount of data produced by the simulation is computationally expensive, so you have to do a lot of analysis on the fly,” said Heitmann. “That’s daunting, because if you make a mistake with analysis settings, you don’t have time to redo it.”

The team took an integrated approach to carrying out the workflow during the simulation. HACC would run the simulation forward in time, determining the effect of gravity on matter during large portions of the history of the universe. Once HACC determined the positions of trillions of computational particles representing the overall distribution of matter, CosmoTools would step in to record relevant information — such as finding the billions of halos that host galaxies — to use for analysis during post-processing.

“When we know where the particles are at a certain point in time, we characterize the structures that have formed by using CosmoTools and store a subset of data to make further use down the line,” said Adrian Pope, physicist and core HACC and CosmoTools developer in Argonne’s Computational Science (CPS) division. “If we find a dense clump of particles, that indicates the location of a dark matter halo, and galaxies can form inside these dark matter halos.”

The scientists repeated this interwoven process — where HACC moves particles and CosmoTools analyzes and records specific data — until the end of the simulation. The team then used features of CosmoTools to determine which clumps of particles were likely to host galaxies. For reference, around 100 to 1,000 particles represent single galaxies in the simulation.

“We would move particles, do analysis, move particles, do analysis,” said Pope. “At the end, we would go back through the subsets of data that we had carefully chosen to store and run additional analysis to gain more insight into the dynamics of structure formation, such as which halos merged together and which ended up orbiting each other.”

Using the optimized workflow with HACC and CosmoTools, the team ran the simulation in half the expected time.

Community contribution

The Last Journey simulation will provide data necessary for other major cosmological experiments to use when comparing observations or drawing conclusions about a host of topics. These insights could shed light on topics ranging from cosmological mysteries, such as the role of dark matter and dark energy in the evolution of the universe, to the astrophysics of galaxy formation across the universe.

“This huge data set they are building will feed into many different efforts,” said Katherine Riley, director of science at the ALCF. “In the end, that’s our primary mission — to help high-impact science get done. When you’re able to not only do something cool, but to feed an entire community, that’s a huge contribution that will have an impact for many years.”

The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing models and the development of new ones, impacting both ongoing and upcoming cosmological surveys.

“We are not trying to match any specific structures in the actual universe,” said Pope. “Rather, we are making statistically equivalent structures, meaning that if we looked through our data, we could find locations where galaxies the size of the Milky Way would live. But we can also use a simulated universe as a comparison tool to find tensions between our current theoretical understanding of cosmology and what we’ve observed.”

Looking to exascale

“Thinking back to when we ran the Outer Rim simulation, you can really see how far these scientific applications have come,” said Heitmann, who performed Outer Rim in 2013 with the HACC team and Salman Habib, CPS division director and Argonne Distinguished Fellow. “It was awesome to run something substantially bigger and more complex that will bring so much to the community.”

As Argonne works towards the arrival of Aurora, the ALCF’s upcoming exascale supercomputer, the scientists are preparing for even more extensive cosmological simulations. Exascale computing systems will be able to perform a billion billion calculations per second — 50 times faster than many of the most powerful supercomputers operating today.

“We’ve learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time,” said Pope. “When preparing for simulations on exascale machines and a new decade of progress, we are refining our code and analysis tools, and we get to ask ourselves what we weren’t doing because of the limitations we have had until now.”

The Last Journey was a gravity-only simulation, meaning it did not consider interactions such as gas dynamics and the physics of star formation. Gravity is the major player in large-scale cosmology, but the scientists hope to incorporate other physics in future simulations to observe the differences they make in how matter moves and distributes itself through the universe over time.

“More and more, we find tightly coupled relationships in the physical world, and to simulate these interactions, scientists have to develop creative workflows for processing and analyzing,” said Riley. “With these iterations, you’re able to arrive at your answers — and your breakthroughs — even faster.”

Credit: 
DOE/Argonne National Laboratory

Juicing technique could influence healthfulness of fresh-squeezed juice

With the New Year, many people are making resolutions to eat healthier, for example by eating more vegetables. But those who don't like the taste or texture of some vegetables might prefer to drink them in a home-squeezed juice. Now, researchers reporting in ACS Food Science & Technology have found that the choice of household juicing technique can influence the phytochemical content and antioxidant activity of common vegetable juices.

Home juicing machines have become popular in recent years, with different types available. For example, blenders crush vegetables with fast, spinning blades, and the resulting juice is typically thick, with much pulp and dietary fiber. In contrast, high-speed centrifugal juicers quickly pulverize veggies and separate out pulp and fiber, making for a thinner juice. Low-speed juice extractors squeeze juice with a horizontal auger that rotates vegetables at a low speed, producing the least heat of the three methods and also removing pulp and fiber. Juicing can alter the levels of health-promoting phytochemicals and antioxidants in raw vegetables by exposing inner tissues to oxygen, light and heat and releasing enzymes. Therefore, Junyi Wang, Guddadarangavvanahally Jayaprakasha and Bhimanagouda Patil at Texas A&M University wanted to compare the phytochemical and antioxidant contents of 19 vegetables juiced with these three techniques.

After preparing juices with the different methods, the researchers observed that, in general, blending produced juices with the lowest amounts of some beneficial compounds, such as vitamin C, antioxidants and phenolics, probably because the technique produced the most heat. Low-speed juicing generated the highest amounts of beneficial compounds, although exceptions were found for certain vegetables. However, likely because of their higher fiber content, blended vegetable juices had the highest amounts of α-amylase inhibitors, which could help reduce hyperglycemia after a meal. The researchers then used mass spectrometry and chemometrics to identify and quantify 85 metabolites in juices prepared by the three methods, finding that the low-speed juicer produced more diverse metabolites than the other two methods, but the relative abundances for the three juicing methods differed based on the veggie type. Therefore, different vegetables and juicing methods could produce unique health benefits, the researchers say.

Credit: 
American Chemical Society

Scientists develop perovskite solar modules with greater size, power and stability

video: Scientists from the OIST Energy Materials and Surface Sciences Unit show off the perovskite solar modules in action, powering a fan and toy car.

Image: 
OIST

Perovskites are projected to be a game-changer in future solar technology but currently suffer from a short operational lifespan and drops in efficiency when scaled up to a larger size

Scientists have improved the stability and efficiency of solar cell modules by mixing the precursor materials with ammonium chloride during fabrication

The perovskite active layer in the improved solar modules is thicker and has larger grains, with fewer defects

Both 5 x 5 cm2 and 10 x 10 cm2 perovskite modules maintained high efficiencies for over 1000 hours

Researchers from the Okinawa Institute of Science and Technology Graduate University (OIST) have created perovskite solar modules with improved stability and efficiency by using a new fabrication technique that reduced defects. Their findings were published on the 25th January in Advanced Energy Materials.

Perovskites are one of the most promising materials for the next generation of solar technology, with efficiencies soaring from 3.8% to 25.5% in slightly over a decade. Perovskite solar cells are cheap to produce and have the potential to be flexible, increasing their versatility. But two obstacles still block the way to commercialization: their lack of long-term stability and difficulties with upscaling.

"Perovskite material is fragile and prone to decomposition, which means the solar cells struggle to maintain high efficiency over a long time," said first author Dr. Guoqing Tong, a postdoctoral scholar in the OIST Energy Materials and Surface Sciences Unit, led by Professor Yabing Qi. "And although small-sized perovskite solar cells have a high efficiency and perform almost as well as their silicon counterparts, once scaled up to larger solar modules, the efficiency drops."

In a functional solar device, the perovskite layer lies in the center, sandwiched between two transport layers and two electrodes. As the active perovskite layer absorbs sunlight, it generates charge carriers which then flow to the electrodes via the transport layers and produce a current.

However, pinholes in the perovskite layer and defects at the boundaries between individual perovskite grains can disrupt the flow of charge carriers from the perovskite layer to the transport layers, reducing efficiency. Humidity and oxygen can also start to degrade the perovskite layer at these defect sites, shortening the lifespan of the device.

"Scaling up is challenging because as the modules increase in size, it's harder to produce a uniform layer of perovskite, and these defects become more pronounced," explained Dr. Tong. "We wanted to find a way of fabricating large modules that addressed these problems."

Currently, most solar cells produced have a thin perovskite layer - only 500 nanometers in thickness. In theory, a thin perovskite layer improves efficiency, as the charge carriers have less distance to travel to reach the transport layers above and below. But when fabricating larger modules, the researchers found that a thin film often developed more defects and pinholes.

The researchers therefore opted to make 5 x 5 cm2 and 10 x 10 cm2 solar modules that contained perovskite films with double the thickness.

However, making thicker perovskite films came with its own set of challenges. Perovskites are a class of materials that are usually formed by reacting many compounds together as a solution and then allowing them to crystallize.

However, the scientists struggled to dissolve lead iodide - one of the precursor materials used to form perovskite - at the high concentration needed for the thicker films. They also found that the crystallization step was fast and uncontrollable, so the thick films contained many small grains, with more grain boundaries.

The researchers therefore added ammonium chloride to increase the solubility of lead iodide. This also allowed the lead iodide to be dissolved more evenly in the organic solvent, resulting in a more uniform perovskite film with much larger grains and fewer defects. Ammonia was later removed from the perovskite solution, lowering the level of impurities within the perovskite film.

Overall, the solar modules sized 5 x 5 cm2 showed an efficiency of 14.55%, up from 13.06% in modules made without ammonium chloride, and were able to work for 1600 hours - over two months - at more than 80% of this efficiency.

The larger 10 x 10 cm2 modules had an efficiency of 10.25% and remained at high levels of efficiency for over 1100 hours, or almost 46 days.

"This is the first time that a lifespan measurement has been reported for perovskite solar modules of this size, which is really exciting," said Dr. Tong.

This work was supported by the OIST Technology Development and Innovation Center's Proof-of-Concept Program. These results are a promising step forward in the quest to produce commercial-sized solar modules with efficiency and stability to match their silicon counterparts.

In the next stage of their research, the team plans to optimize the technique further by fabricating the perovskite solar modules using vapor-based methods rather than solution-based ones, and is now working to scale up to 15 x 15 cm2 modules.

"Going from lab-sized solar cells to 5 x 5 cm2 solar modules was hard. Jumping up to solar modules that were 10 x 10 cm2 was even harder. And going to 15 x 15 cm2 solar modules will be harder still," said Dr. Tong. "But the team is looking forward to the challenge."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Mount Sinai researchers build models using machine learning technique to enhance predictions of COVID-19 outcomes

Mount Sinai researchers have published one of the first studies using a machine learning technique called "federated learning" to examine electronic health records to better predict how COVID-19 patients will progress. The study was published in the Journal of Medical Internet Research - Medical Informatics on January 27.

The researchers said the emerging technique holds promise to create more robust machine learning models that extend beyond a single health system without compromising patient privacy. These models, in turn, can help triage patients and improve the quality of their care.

Federated learning is a technique that trains an algorithm across multiple devices or servers holding local data samples while avoiding clinical data aggregation, which is undesirable for reasons including patient privacy. Mount Sinai researchers implemented and assessed federated learning models using data from electronic health records at five separate hospitals within the Mount Sinai Health System to predict mortality in COVID-19 patients. They compared the performance of the federated model against models built using data from each hospital separately, referred to as local models. After training models across the federated network and testing them against local data at each hospital, the researchers found that the federated models demonstrated enhanced predictive power and outperformed the local models at most of the hospitals.
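
The sketch below shows the basic federated-averaging idea in Python using synthetic data: each "hospital" updates a shared model on its own records, and only the model weights, never patient-level data, are sent back and averaged. This is a simplified illustration under those assumptions, not the Mount Sinai pipeline or its actual models.

import numpy as np

def local_update(w, X, y, lr=0.1, epochs=50):
    # A few epochs of gradient descent on logistic loss, starting from the shared global weights.
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(global_w, sites):
    # One round of federated averaging: weight each site's update by its sample count.
    updates = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Synthetic stand-ins for five hospitals' records (features X, binary outcome y).
rng = np.random.default_rng(1)
sites = []
for _ in range(5):
    X = rng.standard_normal((200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(200) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(10)
for _ in range(20):               # communication rounds between the server and the sites
    w = federated_round(w, sites)
print(np.round(w[:3], 2))         # learned global weights (synthetic example)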

"Machine learning models in health care often require diverse and large-scale data to be robust and translatable outside the patient population they were trained on," said the study's corresponding author, Benjamin Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, and member of the Hasso Plattner Institute for Digital Health at Mount Sinai and the Mount Sinai Clinical Intelligence Center. "Federated learning is gaining traction within the biomedical space as a way for models to learn from many sources without exposing any sensitive patient data. In our work, we demonstrate that this strategy can be particularly useful in situations like COVID-19."

Machine learning models built within a hospital are not always effective for other patient populations, partially due to models being trained on data from a single group of patients which is not representative of the entire population.

"Machine learning in health care continues to suffer a reproducibility crisis," said the study's first author, Akhil Vaid, MD, postdoctoral fellow in the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai, and member of the Hasso Plattner Institute for Digital Health at Mount Sinai and the Mount Sinai Clinical Intelligence Center. "We hope that this work showcases benefits and limitations of using federated learning with electronic health records for a disease that has a relative dearth of data in an individual hospital. Models built using this federated approach outperform those built separately from limited sample sizes of isolated hospitals. It will be exciting to see the results of larger initiatives of this kind."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Up-trending farming and landscape disruptions threaten Paris climate agreement goals

Irvine, Calif., Jan. 27, 2021 -- One of President Joe Biden's first post-inauguration acts was to realign the United States with the Paris climate accord, but a new study led by researchers at the University of California, Irvine demonstrates that rising emissions from human land-use will jeopardize the agreement's goals without substantial changes in agricultural practices.

In a paper published today in Nature, the team presented the most thorough inventory yet of land-use contributions to carbon dioxide and other greenhouse gases (including nitrous oxide and methane) from 1961 to 2017, taking into account emissions from agricultural production activities and modifications to the natural landscape.

"We estimated and attributed global land-use emissions among 229 countries and areas and 169 agricultural products," said lead author Chaopeng Hong, UCI postdoctoral scholar in Earth system science. "We looked into the processes responsible for higher or lower emissions and paid particularly close attention to trends in net CO2 emitted from changes in land use, such as converting forested land into farm acreage."

The researchers learned that poorer countries in Latin America, Southeast Asia and sub-Saharan Africa experienced the most pronounced surge in these "land-use change" emissions.

East Asia, South Asia and the Middle East produced fewer greenhouse gases as a result of land-use change, according to the study, but the regions' agricultural emissions were growing strongly as output raced to keep up with population expansion. And more affluent North America, Europe and Oceania showed negative land-use change emissions but nonetheless substantial farm-originated pollution.

"While the situation in low-income countries is critical, mitigation opportunities in these places are large and clear," said senior author Steve Davis, UCI associate professor of Earth system science. "Improving yields on already cultivated land can avoid clearing more carbon-dense forests for cultivation of soybeans, rice, maize and palm oil, thereby drastically reducing land-use emissions in these countries."

The authors suggest that nations in emerging and developed markets also can lessen the emissions intensity of agriculture by adopting more efficient tilling and harvesting methods, by better soil and livestock waste management, and by reducing food waste.

In addition, dietary changes could help, according to the study, which says that while red meat supplies only about 1 percent of calories produced globally, it's responsible for up to a quarter of the world's land-use greenhouse gas emissions.

Europe has the lowest land-use emissions, at 0.5 tons per person per year, the researchers note, but the figure is substantially higher almost everywhere else, and as the planet's population continues to increase, farmers and policymakers will need to meet and exceed current best practices.

The paper highlights some promising technological solutions, such as new ways of cultivating rice that create less methane and dietary supplements for cattle that reduce their harmful emissions by up to 95 percent.

"Feeding the planet may always generate substantial greenhouse gas emissions," said Davis, a member of the executive board of UCI's Solutions that Scale initiative which seeks answers to the planet's most pressing climate and environmental problems. "Even if we get emissions down to European levels worldwide, with expected population growth, we could still be looking at more than 5 gigatons of land-use emissions per year in 2100, an amount at odds with ambitious international climate goals unless offset by negative emissions."

The project - funded by the National Science Foundation, the German Research Foundation, and the Gordon and Betty Moore Foundation - also included researchers from the University of California, San Diego; Colorado State University; Stanford University; and Germany's Max Planck Institute for Meteorology.

Credit: 
University of California - Irvine

Optical scanner design for adaptive driving beam systems can lead to safer night driving

image: Headlights Infographic: ADB with MEMS 2D optical scanner, based on the piezoelectric effect.

Image: 
SPIE

Car accidents are responsible for approximately a million deaths each year globally. Among the many causes, driving at night, when vision is most limited, leads to accidents with higher mortality rates than accidents during the day. Therefore, improving visibility during night driving is critical for reducing the number of fatal car accidents.

An adaptive driving beam (ADB) can help to some extent. This advanced drive-assist technology for vehicle headlights can automatically adjust the driver's visibility based on the car speed and traffic environment. ADB systems that exist commercially are a marked improvement over manually controlled headlights, but they suffer from limited controllability. Whereas spatial light modulators, like liquid crystal pixels or digital micromirrors, can alleviate this problem, they are often expensive to implement and lead to heat loss from unutilized light power.

In a recent study published in the Journal of Optical Microsystems, researchers from Japan have come up with an alternative to conventional ADB systems: a microelectromechanical systems (MEMS) optical scanner that relies on the piezoelectric effect, in which an applied voltage induces mechanical vibrations. The design consists of a thin film of lead zirconate titanate (PZT), which drives mechanical vibrations of the scanner in synchronization with a laser diode. The optical scanner spatially steers the laser beam to form structured light on a phosphor plate, where it is converted into bright white light. The light intensity is, in turn, modulated by the ADB controller based on the traffic, steering wheel angle, and vehicle cruising speed. University of Tokyo researcher Hiroshi Toshiyoshi, one of the authors on the paper, explains, "What is unique about this setup is that the laser beam is converted into white light at high efficiency, which reduces heating of the ADB system."

The researchers designed the optical scanner on a single chip consisting of a bonded silicon-on-insulator wafer with the PZT layer grown on it and laminated with metal to form piezoelectric actuators. They arranged the actuators as suspensions to allow for large-angle horizontal and vertical deflections of the scanner. This, in turn, enabled two-dimensional scanning of the headlight beam. Further, they designed the modes so that they don't react to low-frequency noise, such as from other vehicles. Their ADB system also accounts for temperature variations. Finally, they mounted the module on a vehicle and evaluated its performance for actual driving.
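
To make the beam-steering idea concrete, the toy Python sketch below traces a two-axis resonant scan and dims the laser over a region that would glare an oncoming driver. The scan frequencies, the Lissajous-style pattern, and the "keep-dark" window are invented for illustration; the device's actual drive scheme and control logic are not described in this release.

import numpy as np

fx, fy = 1000.0, 61.0                       # assumed horizontal/vertical scan frequencies (Hz)
t = np.linspace(0.0, 1.0 / fy, 20_000)      # one vertical period of the scan

# Beam angles traced by the two resonant axes, normalized to [-1, 1].
x = np.sin(2 * np.pi * fx * t)
y = np.sin(2 * np.pi * fy * t)

# Hypothetical "keep-dark" window around a detected oncoming vehicle.
glare_zone = (x > 0.2) & (x < 0.6) & (y > -0.1) & (y < 0.3)
intensity = np.where(glare_zone, 0.0, 1.0)  # laser diode duty cycle along the scan

print(f"fraction of the scan kept dark: {intensity.mean():.2%}")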

The researchers found that the ADB with a MEMS scanner provided the driver with better visibility, especially when it comes to seeing pedestrians. It could also reduce the glare from oncoming vehicles and reconfigure the illumination area depending on the cruising speed of the vehicle.

While this technology certainly advances drive-assist systems, it also has potential applications in light detection and ranging as well as inter-vehicle optical communication links, which means the system could find use in the self-driving and intelligent traffic systems of the future, taking us another step toward risk-free driving.

Credit: 
SPIE--International Society for Optics and Photonics

T cells can mount attacks against many SARS-CoV-2 targets--even on new virus variant

image: Transmission electron micrograph of SARS-CoV-2 virus particles, isolated from a patient. Image captured and color-enhanced at the NIAID Integrated Research Facility (IRF) in Fort Detrick, Maryland.

Image: 
NIAID

LA JOLLA--A new study led by scientists at La Jolla Institute for Immunology (LJI) suggests that T cells try to fight SARS-CoV-2 by targeting a broad range of sites on the virus--beyond the key sites on the virus's spike protein. By attacking the virus from many angles, the body has the tools to potentially recognize different SARS-CoV-2 variants.

The new research, published January 27, 2021 in Cell Reports Medicine, is the most detailed analysis so far of which proteins on SARS-CoV-2 stimulate the strongest responses from the immune system's "helper" CD4+ T cells and "killer" CD8+ T cells.

"We are now armed with the knowledge of which parts of the virus are recognized by the immune system," says LJI Professor Alessandro Sette, Dr. Biol. Sci., who co-led the new study with LJI Instructor Alba Grifoni, Ph.D.

Sette and Grifoni have led research into immune responses to the virus since the beginning of the pandemic. Their previous studies, co-led by members of the LJI Coronavirus Task Force, show that people can have a wide range of responses to the virus--some people have strong immune responses and do well. Others have disjointed immune responses and are more likely to end up in the hospital.

As COVID-19 vaccines reach more people, LJI scientists are keeping an eye on how different people build immunity to SARS-CoV-2. They are also studying how T cells could combat different variants of SARS-CoV-2. This work takes advantage of the lab's expertise in predicting and studying T cell responses to viruses such as dengue and Zika.

"This is even more important with COVID-19 because it is a global pandemic, so we need to account for immune responses in different populations," says Grifoni.

The immune system is very flexible. By re-scrambling genetic material, it can make T cells that respond to a huge range of targets, or epitopes, on a pathogen. Some T cell responses will be stronger against some epitopes than others. Researchers call the targets that prompt a strong immune cell response "immunodominant."

For the new study, the researchers examined T cells from 100 people who had recovered from SARS-CoV-2 infection. They then took a close look at the genetic sequence of the virus to separate the potential epitopes from the epitopes that these T cells would actually recognize.

Their analysis revealed that not all parts of the virus induce the same strong immune response in everyone. In fact, T cells can recognize dozens of epitopes on SARS-CoV-2, and these immunodominant sites also change from person to person. On average, each study participant had the ability to recognize about 17 CD8+ T cell epitopes and 19 CD4+ T cell epitopes.

This broad immune system response serves a few purposes. The new study shows that while the immune system often mounts a strong response against a particular site on the virus's "spike" protein called the receptor binding domain, this region is actually not as good at inducing a strong response from CD4+ helper T cells.

Without a strong CD4+ T cell response, however, people may be slow to mount the kind of neutralizing immune response that quickly wipes out the virus. Luckily, the broad immune response comes in handy, and most people have immune cells that can recognize sites other than the receptor binding domain.

Among the many epitopes they uncovered, the researchers identified several additional epitopes on the SARS-CoV-2 spike protein. Grifoni says this is good news. By targeting many vulnerable sites on the spike protein, the immune system would still be able to fight infection, even if some sites on the virus change due to mutations.

"The immune response is broad enough to compensate for that," Grifoni says.

Since the announcement of the fast-spreading UK variant of SARS-CoV-2 (called SARS-CoV-2 VUI 202012/01), the researchers have compared the mutated sites on that virus to the epitopes they found. Sette notes that the mutations described in the UK variant for the spike protein affect only 8% of the epitopes recognized by CD4+ T cells in this study, while 92% of the responses are conserved.

Sette emphasized that the new study is the result of months of long hours and international collaboration between labs at LJI; the University of California, San Diego; and Australia's Murdoch University. "This was a tremendous amount of work, and we were able to do it really fast because of our collaborations," he says.

Credit: 
La Jolla Institute for Immunology

Going Organic: uOttawa team realizing the limitless possibilities of wearable electronics

Benoît Lessard and his team are developing carbon-based technologies which could lead to improved flexible phone displays, make robotic skin more sensitive and allow for wearable electronics that could monitor the physical health of athletes in real-time.

With the help of the Canadian Light Source (CLS) at the University of Saskatchewan (USask), a team of Canadian and international scientists has evaluated how thin-film structure correlates with the performance of organic thin-film transistors.

Organic electronics use carbon-based molecules to create more flexible and efficient devices. The display of our smart phones is based on organic-LED technology, which uses organic molecules to emit bright light and others to respond to touch.

Lessard, the corresponding author of a recent paper published in ACS Applied Materials and Interfaces, is excited about the data his team has collected at the HXMA beamline. As Canada Research Chair in Advanced Polymer Materials and Organic Electronics and Associate Professor at the University of Ottawa in the Department of Chemical and Biological Engineering, Lessard is working on furthering the technology behind organic thin-film transistors.

To improve on this technology the team is engineering the design and processing of phthalocyanines, molecules used traditionally as dyes and pigments.

"The features that make a molecule bright and colourful are features that make them able to absorb and emit light effectively." Lessard said. "A lot of things we want in a dye or pigment is the same thing we are looking for in your OLED display --brightly coloured things that make light."

Phthalocyanines have been used in photocopiers and similar technologies since the 1960s. Repurposing these molecules for use in organic electronics helps keep costs down and makes the manufacturing of these devices more practical, allowing for their use in many unusual applications.

"The computer we are using has a billion transistors, but if you want to have artificial skin for robotics or wearable sensors, you are going to need flexible, bendable electronics and the best way to do that is to go organic," Lessard said.

Organic electronic technologies can be used in artificial skin for burn victims or electronic skin for robots. Organic sensors could be embedded in athletic clothing and could send information to coaches, who could observe an athlete's hydration levels by monitoring what is lost in their sweat.

"The applications are sort of anything you can dream of," Lessard said.

Lessard has also used this technology in the creation of sensors that detect cannabinoids, the active molecules in cannabis. He is co-founder of a spin-off company called Ekidna Sensing, which develops rapid tests for the cannabis industry based on similar technologies.

"Everything we are learning at the synchrotron could help us towards this goal of the start-up company," Lessard said.

While there are table-top technologies available, they aren't powerful enough to reveal what happens at the interface, which is only a couple of nanometers thick. The team couldn't have generated the data needed for understanding how the transistors perform without the help of the CLS.

Credit: 
University of Ottawa

A little soap simplifies making 2D nanoflakes

image: The image displays the exfoliation of hexagonal boron nitride into atomically thin nanosheets aided by surfactants, a process refined by chemists at Rice University.

Image: 
Ella Maru Studio

HOUSTON - (Jan. 27, 2021) - Just a little soap helps clean up the challenging process of preparing two-dimensional hexagonal boron nitride (hBN).

Rice University chemists have found a way to get the maximum amount of quality 2D hBN nanosheets from its natural bulk form by processing it with surfactant (aka soap) and water. The surfactant surrounds and stabilizes the microscopic flakes, preserving their properties.

Experiments by the lab of Rice chemist Angel Martí identified the "sweet spot" for making stable dispersions of hBN, which can be processed into very thin antibacterial films that handle temperatures up to 900 degrees Celsius (1,652 degrees Fahrenheit).

The work led by Martí, alumna Ashleigh Smith McWilliams and graduate student Cecilia Martínez-Jiménez is detailed in the American Chemical Society journal ACS Applied Nano Materials.

"Boron nitride materials are interesting, particularly because they are extremely resistant to heat," Martí said. "They are as light as graphene and carbon nanotubes, but you can put hBN in a flame and nothing happens to it."

He said bulk hBN is cheap and easy to obtain, but processing it into microscopic building blocks has been a challenge. "The first step is to be able to exfoliate and disperse them, but research on how to do that has been scattered," Martí said. "When we decided to set a benchmark, we found the processes that have been extremely useful for graphene and nanotubes don't work as well for boron nitride."

Sonicating bulk hBN in water successfully exfoliated the material and made it soluble. "That surprised us, because nanotubes or graphene just float on top," Martí said. "The hBN dispersed throughout, though they weren't particularly stable.

"It turned out the borders of boron nitride crystals are made of amine and nitric oxide groups and boric acid, and all of these groups are polar (with positive or negative charge)," he said. "So when you exfoliate them, the edges are full of these functional groups that really like water. That never happens with graphene."

Experiments with nine surfactants helped them find just the right type and amount to keep 2D hBN from clumping without cutting individual flakes too much during sonication. The researchers used 1% by weight of each surfactant in water, added 20 milligrams of bulk hBN, then stirred and sonicated the mix.

Spinning the resulting solutions at low and high rates showed the greatest yield came with the surfactant known as PF88 under 100-gravity centrifugation, but the highest-quality nanosheets came from all the ionic surfactants under 8,000 g centrifugation, with the greatest stability from common ionic surfactants SDS and CTAC.

DTAB -- short for dodecyltrimethylammonium bromide -- under high centrifugation proved best at balancing the yield and quality of 2D hBN.

The researchers also produced a transparent film from hBN nanosheets dispersed in SDS and water to demonstrate how they can be processed into useful products.

"We describe the steps you need to do to produce high-quality hBN flakes," Martí said. "All of the steps are important, and we were able to bring to light the consequences of each one."

Credit: 
Rice University

A metalens for virtual and augmented reality

image: A metalens fabricated on a 2-inch glass wafer (left) and a scanning fiber mounted through a piezo tube (right). The fiber tip sits within the focal length of the metalens. Light travels along the fiber and is emitted from the scanning fiber tip, where a display pattern forms.

Image: 
Photo Zhaoyi Li/Harvard University

Despite all the advances in consumer technology over the past decades, one component has remained frustratingly stagnant: the optical lens. Unlike electronic devices, which have gotten smaller and more efficient over the years, the design and underlying physics of today's optical lenses haven't changed much in about 3,000 years.

This challenge has caused a bottleneck in the development of next-generation optical systems such as wearable displays for virtual reality, which require compact, lightweight, and cost-effective components.

At the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), a team of researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, has been developing the next generation of lenses that promise to open that bottleneck by replacing bulky curved lenses with a simple, flat surface that uses nanostructures to focus light.

In 2018, Capasso's team developed achromatic, aberration-free metalenses that work across the entire visible spectrum of light. But these lenses were only tens of microns in diameter, too small for practical use in VR and augmented reality systems.

Now, the researchers have developed a two-millimeter achromatic metalens that can focus RGB (red, green, blue) colors without aberrations and developed a miniaturized display for virtual and augmented reality applications.

The research is published in Science Advances.

"This state-of-the-art lens opens a path to a new type of virtual reality platform and overcomes the bottleneck that has slowed the progress of new optical device," said Capasso, the senior author of the paper.

"Using new physics and a new design principle, we have developed a flat lens to replace the bulky lenses of today's optical devices," said Zhaoyi Li, a postdoctoral fellow at SEAS and first author of the paper. "This is the largest RGB-achromatic metalens to date and is a proof of concept that these lenses can be scaled up to centimeter size, mass produced, and integrated in commercial platforms."

Like previous metalenses, this lens uses arrays of titanium dioxide nanofins to focus all wavelengths of light equally and eliminate chromatic aberration. By engineering the shape and pattern of these nanoarrays, the researchers could control the focal lengths of red, green and blue light. To incorporate the lens into a VR system, the team developed a near-eye display using a method called fiber scanning.
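
For reference, the focusing behavior of a flat lens at a single design wavelength λ is usually described by the textbook hyperbolic phase profile below; this is the standard idealization rather than a formula from the paper, and the RGB-achromatic design additionally engineers how the profile varies across red, green and blue.

φ(r) = -(2π/λ) (√(r² + f²) - f)

Here r is the distance from the lens center and f is the focal length; each nanofin is sized, shaped and placed so that it imparts approximately this local phase at its position, which is what brings collimated light to a focus at f.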

The display, inspired by fiber-scanning-based endoscopic bioimaging techniques, uses an optical fiber threaded through a piezoelectric tube. When a voltage is applied to the tube, the fiber tip scans left and right and up and down to display patterns, forming a miniaturized display. The display has high resolution, high brightness, high dynamic range, and a wide color gamut.

In a VR or AR platform, the metalens would sit directly in front of the eye, and the display would sit within the focal plane of the metalens. The patterns scanned by the display are focused onto the retina, where the virtual image forms, with the help of the metalens. To the human eye, the image appears as part of the landscape in the AR mode, some distance from our actual eyes.

"We have demonstrated how meta-optics platforms can help resolve the bottleneck of current VR technologies and potentially be used in our daily life," said Li.

Next, the team aims to scale up the lens even further, making it compatible with current large-scale fabrication techniques for mass production at a low cost.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Putting bugs on the menu, safely

image: Man eating an insect

Image: 
Edith Cowan University

The thought of eating insects is stomach-turning for many, but new Edith Cowan University (ECU) research is shedding light on allergy-causing proteins that could pose serious health risks for those suffering from shellfish allergy.

The research, published in the journal Food Chemistry, identified 20 proteins found in cricket food products which could cause serious allergic reactions.

The project was led by Professor Michelle Colgrave from ECU's School of Science and the CSIRO.

Professor Colgrave said crickets and other insects could be the key to feeding the estimated 9.7 billion people on Earth in 2050.

"More than 2 billion people around the world already eat insects on a daily basis and they could be a sustainable solution, providing protein that complements traditional animal-based protein sources," she said.

"Crickets are high in protein, nutrient dense and considered environmentally friendly.

"Numerous studies have shown eating insects provide benefits to gut health, lowering blood pressure while being high in antioxidants."

Insects might cause a strong reaction

While insects show promise as an alternative protein source, and are identified by Agrifutures as a high potential emerging industry, their allergenic properties are a concern.

As the world searches for novel and more sustainable forms of food, consideration must also be given to foods with allergenic properties, and that is where Professor Colgrave's research fits in.

"This research showed a significant overlap in allergenic proteins found in cricket food products and those found in shellfish like crabs and prawns," she said.

"That's because crickets, mealworms and other insects are closely related to crustaceans.

"Shellfish allergies affect up to two per cent of people globally, but varies according to age and region, and there's a good chance that people allergic to shellfish will also react to insects."

Being an allergen does not prevent insects being used as a food source, however it does mean that insect-based foods need to be tested and labelled correctly to ensure people with allergies don't unwittingly eat them.

Breaking down the bugs

The research team from ECU, CSIRO, James Cook University and Singapore's Agency for Science, Technology and Research compared proteins from roasted whole crickets and cricket powder products to known allergens.

Their results can now be used to detect cricket-derived allergens in food products that can support allergen labelling and safe food manufacture.

Credit: 
Edith Cowan University

Is there a link between cashless payments and unhealthy consumption?

The widespread use of cashless payments, including credit cards, debit cards, and mobile apps, has made transactions more convenient for consumers. However, results from previous research have shown that such cashless payments can increase consumers' spending on unhealthy food. "Why Do Cashless Payments Increase Unhealthy Consumption? The Decision-Risk Inattention Hypothesis," a newly published article in the Journal of the Association for Consumer Research, explains this phenomenon by showing how changes in bodily responses to cashless payments influence consumers' purchase decisions.

Authors Joowon Park, Clarence Lee, and Manoj Thomas propose that cash and cashless payments elicit different levels of negative arousal when making shopping decisions. "Most people experience a spontaneous negative emotional response to the loss of wealth, particularly when such loss is concrete and vivid," the authors note. In contrast, when a person swipes a card or uses mobile payment, it is difficult to visualize the money changing hands. The payment occurs at a later date, which presumably does not entail a physical handover of money. "Because such transactions are not concrete," the authors write, "cashless payments are less likely to elicit the negative arousal that is appraised as the 'pain of paying.'"

Since arousal has been shown to direct people's attention to risky factors in the environment, the authors suggest that the lower level of arousal caused by cashless payments can direct consumers' attention away from decision risks. This makes shoppers less attentive, for example, to risks relating to food (e.g., the risk that a product might have adverse effects on health in the long run). The authors refer to this process as "decision risk inattention" caused by cashless payments.

To test this idea, the authors invited participants to a lab for a simulated grocery shopping task in which some participants were told to imagine making cash payments and others were told to imagine making cashless payments. During the shopping simulation, participants wore a device on their hands that measured changes in their physical arousal level. The authors found that participants thinking of making cashless payments experienced lower arousal than those thinking of making cash payments. The higher arousal from cash payments made participants more attentive to the health risks associated with the grocery items, and consequently less likely to add unhealthy items such as cookies and candies to their shopping baskets. On the other hand, the lower arousal from cashless payments made participants pay less attention to the health risks, and thus they were more likely to purchase unhealthy items. That is, cashless payments made participants pay less attention to decision risks. The changes in arousal did not affect purchase decisions for healthy food items such as apples and salad, which do not carry such decision risks.

In a similar study, participants were told to imagine a dessert bar opening up in major cities in the U.S. They were told that the company is interested in understanding the popularity of several desserts. Participants viewed pictures and descriptions of several desserts and indicated how much they would be willing to pay for each one. Similar to the earlier result, participants who were thinking of making cashless payments were willing to pay more for the desserts than those thinking of making cash payments. Furthermore, this gap was more prominent for participants that had higher levels of education, who are presumably better aware of the health risks from the consumption of desserts. The authors found that inattention to such risks caused by cashless payments increased the amount that participants with more education were willing to pay for the desserts. However, for participants with less education, the different payment methods did not affect how much they were willing to pay. The authors found that for these participants, the level of attention paid to health risks did not matter, possibly because they were not well aware of such health risks.

One of the takeaways from the research is that the authors see the potential for their hypothesis to be tested in other situations involving different types of decision risks. "Compared to brick-and-mortar shoppers, would shoppers in Amazon's cashless stores be more willing to try radically new products because of lower risk sensitivity? If casinos start giving out chips on mobile apps, instead of physical chips, would gamblers be willing to bet their money on riskier gambles?"

Credit: 
University of Chicago Press Journals