Culture

NYU Abu Dhabi researchers design simulator to help stop the spread of 'fake news'


Abu Dhabi, UAE, April 27, 2021: As people around the world increasingly get their news from social media, online misinformation has emerged as an area of great concern. To improve news literacy and reduce the spread of misinformation, NYUAD Center for Cybersecurity researcher and lead author Nicholas Micallef is part of a team that designed Fakey, a game that emulates a social media news feed and prompts players to use available signals to recognize and scrutinize suspicious content and focus on credible information. Players can share, like, or fact-check individual articles.

In a new study, "Fakey: A Game Intervention to Improve News Literacy on Social Media," published in the ACM Digital Library, Micallef and his colleagues Mihai Avram, Filippo Menczer, and Sameer Patil from the Luddy School of Informatics, Computing, and Engineering at Indiana University analyze interactions with Fakey, which was released to the general public as a web and mobile app, drawing on data gathered over 19 months of use. Interviews were conducted to verify that players understood the game elements. The researchers found that the more players interacted with articles in the game, the better their skills at spotting credible content became. However, playing the game did not affect players' ability to recognize questionable content. Further research will help determine how much gameplay is needed to reliably distinguish between legitimate and questionable content.

Games like Fakey, which was designed and developed by researchers at Indiana University, could be offered as a tool to social media users. For example, social media platforms could conduct regular exercises (akin to 'phishing drills' used in organizations for employee security training) wherein users practice identifying questionable articles. Or, the researchers say, such games could be integrated into media literacy curricula in schools. "The impact of misinformation could be substantially reduced if people were given tools to help them recognize and ignore such content," said Micallef. "The principles and mechanisms used by Fakey can inform the design of social media functionality in a way that empowers people to distinguish between credible and fake content in their news feeds and increase their digital literacy."

Credit: 
New York University

Household aerosols now release more harmful smog chemicals than all UK vehicles

Aerosol products used in the home now emit more harmful volatile organic compound (VOC) air pollution than all the vehicles in the UK, new research shows.

A new study by the University of York and the National Centre for Atmospheric Science reveals a globally damaging picture, with the world's population now using huge numbers of disposable aerosols - more than 25 billion cans per year.

This is estimated to lead to the release of more than 1.3 million tonnes of VOC air pollution each year, and could rise to 2.2 million tonnes by 2050.
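A rough back-of-the-envelope calculation (illustrative only, not a figure from the study) shows what those two numbers imply per can:

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
cans_per_year = 25e9            # disposable aerosol cans used worldwide each year
voc_tonnes_per_year = 1.3e6     # estimated annual VOC release, tonnes

# One tonne is 1,000,000 grams.
voc_grams_per_can = voc_tonnes_per_year * 1e6 / cans_per_year
print(f"Average VOC emitted per can: ~{voc_grams_per_can:.0f} g")   # roughly 50 g per can
```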

The chemicals now used in compressed aerosols are predominantly volatile organic compounds (VOCs), which are also released from cars and fuels. The report says the VOCs currently used in aerosols are less damaging than the ozone-depleting CFCs they replaced in the 1980s. However, when key international policy decisions were made in the 1980s, no one foresaw such a large rise in global consumption.

In the presence of sunlight, VOCs combine with a second pollutant, nitrogen oxides, to cause photochemical smog which is harmful to human health and damages crops and plants.

In the 1990s and 2000s by far the largest source of VOC pollution in the UK was gasoline cars and fuel, but these emissions have reduced dramatically in recent years through controls such as catalytic converters on vehicles and fuel vapour recovery at filling stations.

Researchers found that on average in high-income countries 10 cans of aerosol are used per person per year with the largest contributor being personal care products. The global amount emitted from aerosols every year is surging as lower and middle-income economies grow and people in these countries buy more.

The report authors are calling on international policymakers to reduce the use of VOCs in compressed aerosols, either by encouraging the use of less damaging propellants such as nitrogen or by advocating non-aerosol versions of products. At present, VOCs are used in around 93 per cent of aerosol cans.

Professor Alastair Lewis from the Department of Chemistry and a Director of the National Centre for Atmospheric Science said: "Virtually all aerosol-based consumer products can be delivered in non-aerosol form, for example as dry or roll-on deodorants, bars of polish not spray. Making just small changes in what we buy could have a major impact on both outdoor and indoor air quality, and have relatively little impact on our lives.

"The widespread switching of aerosol propellant with non-VOC alternatives would lead to potentially meaningful reductions in surface ozone.

"Given the contribution of VOCs to ground-level pollution, international policy revision is required and the continued support of VOCs as a preferred replacement for halocarbons is potentially not sustainable for aerosol products longer term."

The report says there are already non-aerosol alternatives that can easily be applied in their liquid or solid forms, for example as roll-on deodorant, hair gel, solid furniture polish, bronzing lotion, and room fragrance.

Study authors conclude that the continued use of aerosols when non-aerosol alternatives exist is often down to the continuation of past consumer habits, and that the role played by aerosol VOC emissions needs to be articulated much more clearly in public messaging on air pollution and its management.

Professor Lewis added: "Labelling of consumer products as high VOC emitting--and clearly linking this to poor indoor and outdoor air quality--may drive change away from aerosols to their alternatives, as has been seen previously with the successful labelling of paints and varnishes."

Amber Yeoman, a PhD student from the Wolfson Atmospheric Chemistry Laboratories, was a co-author of the study, which used data from industry and regulatory bodies from around the world.

Credit: 
University of York

Exposure to high heat neutralizes SARS-CoV-2 in less than one second

image: If the coronavirus-containing solution is heated to around 72 degrees Celsius for about half a second, the titer, or quantity of the virus in the solution, can be reduced by 100,000 times.

Image: 
Texas A&M University College of Engineering

Arum Han, professor in the Department of Electrical and Computer Engineering at Texas A&M University, and his collaborators have designed an experimental system showing that exposing the coronavirus to a very high temperature, even for less than a second, can be sufficient to neutralize the virus so that it can no longer infect another human host.

Applying heat to neutralize COVID-19 has been demonstrated before, but in previous studies temperatures were applied for anywhere from one to 20 minutes. This length of time is not a practical solution, as applying heat for a long period is both difficult and costly. Han and his team have now demonstrated that heat treatment for less than a second completely inactivates the coronavirus -- providing a possible solution to mitigating the ongoing spread of COVID-19, particularly through long-range airborne transmission.

The Medistar Corporation approached leadership and researchers from the College of Engineering in the spring of 2020 to collaborate and explore the possibility of applying heat for a short amount of time to kill COVID-19. Soon after, Han and his team got to work, and built a system to investigate the feasibility of such a procedure.

Their process works by heating one section of a stainless-steel tube, through which the coronavirus-containing solution is run, to a high temperature and then cooling the section immediately afterward. This experimental setup allows the coronavirus running through the tube to be heated only for a very short period of time. Through this rapid thermal process, the team found the virus to be completely neutralized in a significantly shorter time than previously thought possible. Their initial results were released within two months of proof-of-concept experiments.

Han said if the solution is heated to nearly 72 degrees Celsius for about half a second, it can reduce the virus titer, or quantity of the virus in the solution, by 100,000 times, which is sufficient to neutralize the virus and prevent transmission.
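To put those figures in more standard terms, a simple calculation (illustrative only, not part of the published analysis) converts the reported reduction into log-reduction and decimal reduction time (D-value) units:

```python
import math

# Figures quoted in the press release (approximate).
fold_reduction = 100_000    # reduction in virus titer
exposure_time_s = 0.5       # heating time in seconds at ~72 degrees Celsius

# Convert the fold reduction to log10 units (a 100,000-fold drop is 5 logs).
log_reduction = math.log10(fold_reduction)

# D-value: time for a single 1-log10 (90%) reduction at this temperature,
# assuming roughly log-linear inactivation over the exposure.
d_value_s = exposure_time_s / log_reduction

print(f"log10 reduction: {log_reduction:.1f}")
print(f"implied D-value at ~72 C: {d_value_s:.2f} s")
```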

"The potential impact is huge," Han said. "I was curious of how high of temperatures we can apply in how short of a time frame and to see whether we can indeed heat-inactivate the coronavirus with only a very short time. And, whether such a temperature-based coronavirus neutralization strategy would work or not from a practical standpoint. The biggest driver was, 'Can we do something that can mitigate the situation with the coronavirus?'"

Their research was featured on the cover of the May issue of the journal Biotechnology and Bioengineering.

Not only is this sub-second heat treatment a more efficient and practical solution to stopping the spread of COVID-19 through the air, but it also allows for the implementation of this method in existing systems, such as heating, ventilation and air conditioning systems.

It also can lead to potential applications with other viruses, such as the influenza virus, that are also spread through the air. Han and his collaborators expect that this heat-inactivation method can be broadly applied and have a true global impact.

"Influenza is less dangerous but still proves deadly each year, so if this can lead to the development of an air purification system, that would be a huge deal, not just with the coronavirus, but for other airborne viruses in general," Han said.

In their future work, the investigators will build a microfluidic-scale testing chip that will allow them to heat-treat viruses for much shorter periods of time, for example, tens of milliseconds, with the hope of identifying a temperature that will allow the virus to be inactivated even with such a short exposure time.

Credit: 
Texas A&M University

Don't go fracking my heart

The Marcellus Formation straddles the New York State and Pennsylvania border, a region that shares similar geography and population demographics. However, on one side of the state line unconventional natural gas development - or fracking - is banned, while on the other side it represents a multi-billion dollar industry. New research takes advantage of this 'natural experiment' to examine the health impacts of fracking and finds that people who live in areas with a high concentration of wells are at higher risk for heart attacks.

"Fracking is associated with increased acute myocardial infarction hospitalization rates among middle-aged men, older men and older women as well as with increased heart attack-related mortality among middle-aged men," said Elaine Hill, Ph.D., an associate professor in the University of Rochester Medical Center (URMC) Department of Public Health Sciences, and senior author of the study that appears in the journal Environmental Research. "Our findings lend support for increased awareness about cardiovascular risks of unconventional natural gas development and scaled-up heart attack prevention, as well as suggest that bans on hydraulic fracturing can be protective for public health."

Natural gas extraction, including hydraulic fracturing, is a well-known contributor to air pollution. Fracking wells operate around the clock, and the processes of drilling, gas extraction, and flaring - the burning off of natural gas byproducts - release organic compounds, nitrogen oxide, and other chemicals and particulates into the air. Additionally, each well requires the constant transportation of equipment, water, and chemicals, as well as the removal of waste water from the fracking process, further contributing to air pollution levels. Fracking wells remain in operation for several years, prolonging exposure for people who work at the well sites and those who live nearby.

Instead of the typical single source of industrial air pollution, such as a factory or power plant, fracking entails multiple well sites spread across a large, and often rural, geographic area. In 2014, there were more than 8,000 fracking well sites in Pennsylvania. Some areas of the state have a dense population of fracking wells - three Pennsylvania counties have more than 1,000 sites. Contrast that with New York State, which has essentially banned the process of hydraulic fracking since 2010.

Exposure to air pollution is recognized as a significant risk factor for cardiovascular disease. Other research has shown that the intensity of oil and gas development and production is positively associated with diminished vascular function, blood pressure, and inflammatory markers associated with stress and short-term air pollution exposure. Light and noise pollution from the continuous operation of the wells are also associated with increasing stress, which is another contributor to cardiovascular disease.

The research team decided to measure the impact of fracking on cardiovascular health by studying heart attack hospitalization and death rates in 47 counties on either side of the New York and Pennsylvania state line. Using data from 2005 to 2014, they observed that heart attack rates were 1.4 to 2.8 percent higher in Pennsylvania, depending upon the age group and level of fracking activity in a given county.

The associations between fracking and heart attack hospitalization and death were most consistent among men aged 45-54, a group most likely to be in the unconventional gas industry workforce and probably the most exposed to fracking-related air pollutants and stressors. Heart attack deaths also increased in this age group by 5.4 percent or more in counties with high concentrations of well sites. Hospitalization and mortality rates also jumped significantly in women over the age of 65.

Fracking is more concentrated in rural communities, which the authors speculate may further compromise cardiovascular health due to the trend of rural hospital closures. People who suffer from cardiovascular disease in these areas may be at increased risk of adverse health outcomes, including death, because of reduced access to care. The authors suggest that more should be done to raise awareness about fracking-related risks for cardiovascular disease, and that physicians should keep a closer eye on high-risk patients who reside in areas with fracking activity. They also contend that the study should inform policymakers about the tradeoffs between public health and the economic activity generated by the industry.

"These findings contribute to the growing body of evidence on the adverse health impact of fracking," said Alina Denham, a Ph.D. candidate in Health Policy at the University of Rochester School of Medicine and Dentistry and first author of the study. "Several states, including New York, have taken the precaution of prohibiting hydraulic fracturing until more is known about the health and environmental consequences. If causal mechanisms behind our findings are ascertained, our findings would suggest that bans on hydraulic fracturing can be protective for human health."

Credit: 
University of Rochester Medical Center

The science of sound, vibration to better diagnose, treat brain diseases

image: Georgia Tech researchers align the electrodynamic exciter and Laser Doppler Vibrometer setup for vibration experiments.

Image: 
Allison Carter, Georgia Tech

A team of engineering researchers at the Georgia Institute of Technology hopes to uncover new ways to diagnose and treat brain ailments, from tumors and stroke to Parkinson's disease, by leveraging vibrations and ultrasound waves.

The five-year, $2 million National Science Foundation (NSF) project initiated in 2019 already has resulted in several published journal articles that offer promising new methods to focus ultrasound waves through the skull, which could lead to broader use of ultrasound imaging -- considered safer and less expensive than magnetic resonance imaging (MRI) technology.

Specifically, the team is researching a broad range of frequencies, from low-frequency vibrations (the audio frequency range) and moderate-frequency guided waves (100 kHz to 1 MHz) to the high frequencies employed in brain imaging and therapy (in the MHz range).

"We're coming up with a unique framework that incorporates different research perspectives to address how you use sound and vibration to treat and diagnose brain diseases," explained Costas Arvanitis, an assistant professor in Georgia Tech's George W. Woodruff School of Mechanical Engineering and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "Each researcher is bringing their own expertise to explore how vibrations and waves across a range of frequencies could either extract information from the brain or focus energy on the brain."

Accessing the Brain Is a Tough Challenge

While it is possible to treat some tumors and other brain diseases non-invasively if they are near the center of the brain, many other conditions are harder to access, the researchers say.

"The center part of the brain is most accessible; however, even if you are able to target the part of the brain away from the center, you still have to go through the skull," Arvanitis said.

He added that moving just 1 millimeter in the brain constitutes "a huge distance" from a diagnostic perspective. The science community widely acknowledges the brain's complexity, with each part associated with a different function and brain cells differing from one another.

According to Brooks Lindsey, a biomedical engineering assistant professor at Georgia Tech and Emory, there is a reason why brain imaging or therapy works well in some people but not in others.

"It depends on the individual patient's skull characteristics," he said, noting that some people have slightly more trabecular bone - the spongy, porous part of the bone - that makes it more difficult to treat.

Using ultrasound waves, the researchers are tackling the challenge on multiple levels. Lindsey's lab uses ultrasound imaging to assess skull properties for effective imaging and therapy. He said his team conducted the first investigation that uses ultrasound imaging to measure the effects of bone microstructure -- specifically, the degree of porosity in the inner, trabecular bone layer of the skull.

"By understanding transmission of acoustic waves through microstructure in an individual's skull, non-invasive ultrasound imaging of the brain and delivery of therapy could be possible in a greater number of people," he said, explaining one potential application would be to image blood flow in the brain following a stroke.

Refocusing Ultrasound Beams on the Fly

Arvanitis' lab recently found a new way to focus ultrasound through the skull and into the brain, which is "100-fold faster than any other method," Arvanitis said. His team's work in adaptive focusing techniques would allow clinicians to adjust the ultrasound on the fly to focus it better.

"Current systems rely a lot on MRIs, which are big, bulky, and extremely expensive," he said. "This method lets you adapt and refocus the beam. In the future this could allow us to design less costly, simpler systems, which would make the technology available to a wider population, as well as be able to treat different parts of the brain."

Using 'Guided Waves' to Access Peripheral Brain Areas

Another research cohort, led by Alper Erturk, Woodruff Professor of Mechanical Engineering at Georgia Tech, and former Georgia Tech colleague Massimo Ruzzene, Slade Professor of Mechanical Engineering at the University of Colorado Boulder, performs high-fidelity modeling of skull bone mechanics along with vibration-based elastic parameter identification. They also leverage guided ultrasonic waves in the skull to expand the treatment envelope in the brain. Erturk and Ruzzene are mechanical engineers by background, which makes their exploration of vibrations and guided waves in difficult-to-reach brain areas especially fascinating.

Erturk noted that guided waves are used in other applications such as aerospace and civil structures for damage detection. "Accurate modeling of the complex bone geometry and microstructure, combined with rigorous experiments for parameter identification, is crucial for a fundamental understanding to expand the accessible region of the brain," he said.

Ruzzene compared the brain and skull to the Earth's core and crust, with the cranial guided waves acting as an earthquake. Just as geophysicists use earthquake data on the Earth's surface to understand the Earth's core, so are Erturk and Ruzzene using the guided waves to generate tiny, high frequency "earthquakes" on the external surface of the skull to characterize what comprises the cranial bone.

Trying to access the brain periphery via conventional ultrasound poses added risks from heating of the skull. Fortunately, approaches such as cranial leaky Lamb waves are increasingly recognized for their ability to transmit wave energy to that region of the brain.

These cranial guided waves could complement focused ultrasound applications to monitor changes in the cranial bone marrow from health disorders, or to efficiently transmit acoustic signals through the skull barrier, which could help access metastases and treat neurological conditions in currently inaccessible regions of the brain.

Ultimately, the four researchers hope their work will make full brain imaging feasible while stimulating new medical imaging and therapy techniques. In addition to transforming diagnosis and treatment of brain diseases, the techniques could better detect traumas and skull-related defects, map brain function, and enable neurostimulation. Researchers also see the potential for ultrasound-based blood-brain barrier opening for drug delivery to manage and treat diseases such as Alzheimer's.

Credit: 
Georgia Institute of Technology

New duckbilled dinosaur discovered in Japan

image: This artist's illustration of Yamatosaurus izanagii (center) depicts its ancestral relationship to more advanced hadrosaurs (in the background).

Image: 
Artwork by Masato Hattori.

DALLAS (SMU) - An international team of paleontologists has identified a new genus and species of hadrosaur or duck-billed dinosaur, Yamatosaurus izanagii, on one of Japan's southern islands.

The fossil discovery yields new information about hadrosaur migration, suggesting that the herbivores migrated from Asia to North America instead of vice versa. The discovery also illustrates an evolutionary step as the giant creatures evolved from walking upright to walking on all fours. Most of all, the discovery provides new information and raises new questions about dinosaurs in Japan.

The research, "A New Basal Hadrosaurid (Dinosauria: Ornithischia) From the latest Cretaceous Kita-ama Formation in Japan implies the origin of Hadrosaurids," was recently published in Scientific Reports. Authors include Yoshitsugu Kobayashi of Hokkaido University Museum, Ryuji Takasaki of Okayama University of Science, Katsuhiro Kubota of Museum of Nature and Human Activities, Hyogo and Anthony R. Fiorillo of Southern Methodist University.

Hadrosaurs, known for their broad, flattened snouts, are the most commonly found of all dinosaurs. The plant-eating dinosaurs lived in the Late Cretaceous period more than 65 million years ago and their fossilized remains have been found in North America, Europe, Africa and Asia.

Uniquely adapted to chewing, hadrosaurs had hundreds of closely spaced teeth in their cheeks. As their teeth wore down and fell out, new teeth in the dental battery - rows of teeth below the existing teeth - grew in as replacements. Hadrosaurs' efficient ability to chew vegetation is among the factors that led to their diversity, abundance and widespread population, researchers say.

The Yamatosaurus' dental structure distinguishes it from known hadrosaurs, says Fiorillo, senior fellow at SMU's Institute for the Study of Earth and Man. Unlike other hadrosaurs, he explains, the new hadrosaur has just one functional tooth in several battery positions and no branched ridges on the chewing surfaces, suggesting that it evolved to devour different types of vegetation than other hadrosaurs.

Yamatosaurus also is distinguished by the development of its shoulder and forelimbs, an evolutionary step in hadrosaurids' gait change from bipedal to quadrupedal, he says.

"In the far north, where much of our work occurs, hadrosaurs are known as the caribou of the Cretaceous," says Fiorillo. They most likely used the Bering Land Bridge to cross from Asia to present-day Alaska and then spread across North America as far east as Appalachia, he says. When hadrosaurs roamed Japan, the island country was attached to the eastern coast of Asia. Tectonic activity separated the islands from the mainland about 15 million years ago, long after dinosaurs became extinct.

The partial specimen of the Yamatosaurus was discovered in 2004 by an amateur fossil hunter in an approximately 71- to 72-million-year-old layer of sediment in a cement quarry on Japan's Awaji Island. The preserved lower jaw, teeth, neck vertebrae, shoulder bone and tail vertebra were found by Mr. Shingo Kishimoto and given to Japan's Museum of Nature and Human Activities in the Hyogo Prefecture, where they were stored until studied by the team.

"Japan is mostly covered with vegetation with few outcrops for fossil-hunting," says Yoshitsugu Kobayashi, professor at Hokkaido University Museum. "The help of amateur fossil-hunters has been very important."

Kobayashi has worked with SMU paleontologist Tony Fiorillo since 1999 when he studied under Fiorillo as a Ph.D. student. They have collaborated to study hadrosaurs and other dinosaurs in Alaska, Mongolia and Japan. Together they created their latest discovery's name. Yamato is the ancient name for Japan and Izanagi is a god from Japanese mythology who created the Japanese islands, beginning with Awaji Island, where Yamatosaurus was found.

Yamatosaurus is the second new species of hadrosaurid that Kobayashi and Fiorillo have identified in Japan. In 2019 they reported the discovery of the largest dinosaur skeleton found in Japan, another hadrosaurid, Kamuysaurus, discovered on the northern Japanese island of Hokkaido.

"These are the first dinosaurs discovered in Japan from the late Cretaceous period," Kobayashi says. "Until now, we had no idea what dinosaurs lived in Japan at the end of the dinosaur age," he says. "The discovery of these Japanese dinosaurs will help us to fill a piece of our bigger vision of how dinosaurs migrated between these two continents," Kobayashi says.

Credit: 
Southern Methodist University

University of Chicago scientists design "Nanotraps" to catch, clear coronavirus

image: Cartoon rendering of Nanotrap binding SARS-CoV-2. Nanotrap is shown with a yellow core, green phospholipid shell, and red functionalized particles to bind the virus (either ACE2 or Neutralizing Antibody). Virus protein coats are shown in gray, and are decorated with the Spike protein (green) and glycoprotein (red).

Image: 
Huang Lab

Researchers at the Pritzker School of Molecular Engineering (PME) at the University of Chicago have designed a completely novel potential treatment for COVID-19: nanoparticles that capture SARS-CoV-2 viruses within the body and then use the body's own immune system to destroy them.

These "Nanotraps" attract the virus by mimicking the target cells the virus infects. When the virus binds to the Nanotraps, the traps then sequester the virus from other cells and target it for destruction by the immune system.

In theory, these Nanotraps could also be used on variants of the virus, leading to a potential new way to inhibit the virus going forward. Though the therapy remains in early stages of testing, the researchers envision it could be administered via a nasal spray as a treatment for COVID-19.

The results were published April 19 in the journal Matter.

"Since the pandemic began, our research team has been developing this new way to treat COVID-19," said Asst. Prof. Jun Huang, whose lab led the research. "We have done rigorous testing to prove that these Nanotraps work, and we are excited about their potential."

Designing the perfect trap

To design the Nanotrap, the research team - led by postdoctoral scholar Min Chen and graduate student Jill Rosenberg - looked into the mechanism SARS-CoV-2 uses to bind to cells: a spike-like protein on its surface that binds to a human cell's ACE2 receptor protein.

To create a trap that would bind to the virus in the same way, they designed nanoparticles with a high density of ACE2 proteins on their surface. Similarly, they designed other nanoparticles with neutralizing antibodies on their surfaces. (These antibodies are created inside the body when someone is infected and are designed to latch onto the coronavirus in various ways).

Both ACE2 proteins and neutralizing antibodies have been used in treatments for COVID-19, but by attaching them to nanoparticles, the researchers created an even more robust system for trapping and eliminating the virus.

Made of FDA-approved polymers and phospholipids, the nanoparticles are about 500 nanometers in diameter - much smaller than a cell. That means the Nanotraps can reach more areas inside the body and more effectively trap the virus.

The researchers tested the safety of the system in a mouse model and found no toxicity. They then tested the Nanotraps against a pseudovirus - a less potent model of a virus that doesn't replicate - in human lung cells in tissue culture plates and found that they completely blocked entry into the cells.

Once the pseudovirus bound itself to the nanoparticle - which in tests took about 10 minutes after injection - the nanoparticles used a molecule that calls the body's macrophages to engulf and degrade the Nanotrap. Macrophages will generally eat nanoparticles within the body, but the Nanotrap molecule speeds up the process. The nanoparticles were cleared and degraded within 48 hours.

The researchers also tested the nanoparticles with a pseudovirus in an ex vivo lung perfusion system - a pair of donated lungs that is kept alive with a ventilator - and found that they completely blocked infection in the lungs.

They also collaborated with researchers at Argonne National Laboratory to test the Nanotraps with a live virus (rather than a pseudovirus) in an in vitro system. They found that their system inhibited the virus 10 times better than neutralizing antibodies or soluble ACE2 alone.

A potential future treatment for COVID-19 and beyond

Next the researchers hope to further test the system, including more tests with a live virus and on the many virus variants.

"That's what is so powerful about this Nanotrap," Rosenberg said. "It's easily modulated. We can switch out different antibodies or proteins or target different immune cells, based on what we need with new variants."

The Nanotraps can be stored in a standard freezer and could ultimately be given via an intranasal spray, which would place them directly in the respiratory system and make them most effective.

The researchers say the Nanotraps could also serve as a vaccine if the formulation is optimized, creating an ultimate therapeutic system for the virus.

"This is the starting point," Huang said. "We want to do something to help the world."

The research involved collaborators across departments, including chemistry, biology, and medicine.

Credit: 
University of Chicago

Incentives could turn costs of biofuel mandates into environmental benefits

image: Miscanthus is harvested from a CABBI facility at Iowa State University. CABBI researchers from ISU and the University of Illinois Urbana-Champaign found the biofuel mandates of the Renewable Fuels Standard will lead to significant economic and environmental costs without targeted policies and incentives that value the sustainability benefits of perennial bioenergy crops like miscanthus over cheaper options.

Image: 
Center for Advanced Bioenergy and Bioproducts Innovation (CABBI)

New studies from the Center for Advanced Bioenergy and Bioproducts Innovation (CABBI) shed more light on the economic and environmental costs of mandates in the Renewable Fuels Standard (RFS), a federal program to expand the nation's biofuels sector.

Researchers said the studies indicate the need to adopt more targeted policies that value the environmental and ecosystem benefits of perennial bioenergy crops over cheaper options -- and provide financial incentives for farmers to grow them.

The RFS was issued in 2005 and updated through the Energy Independence and Security Act of 2007 to enhance U.S. energy security, reduce greenhouse gas (GHG) emissions, and promote rural development. The 2007 standards mandated blending 36 billion gallons of first-generation biofuels (such as ethanol made from food crops like corn) and second-generation biofuels (made from the biomass of miscanthus or other energy feedstocks) with fossil fuels by 2022, to replace petroleum-based heating oil and fuel. The corn ethanol mandate has been met, with 15 billion gallons produced annually, but production of cellulosic biofuels has been negligible. Targets beyond 2022 are yet to be determined.

The biofuel mandates impact the environment in multiple ways -- affecting land use, GHG emissions, nitrogen (N) application, and leakage of harmful nitrogen compounds into the soil, air, and water. Those impacts vary by feedstock, as do the economic costs and benefits for consumers buying food and fuel and for producers, depending on cultivation costs and the competition for cropland for alternative uses.

The first study calculated the net economic and environmental costs of the RFS mandates and found that maintaining the corn ethanol mandate would lead to a cumulative net cost to society of nearly $200 billion from 2016 to 2030 compared to having no RFS. The social cost of nitrogen damage from corn ethanol production substantially offsets the social benefits from GHG savings.

On the other hand, implementation of the additional cellulosic mandate could provide substantial economic and environmental benefits, given technological innovations that lower the costs of converting biomass to cellulosic ethanol and policies that place a high monetized value on GHG mitigation benefits. That study, published in Environmental Research Letters, was led by CABBI Sustainability Theme Leader Madhu Khanna and Ph.D. student Luoye Chen from the University of Illinois Urbana-Champaign.

The second study examined how full implementation of the RFS mandates will affect water quality in the Mississippi/Atchafalaya River Basin (MARB) and Gulf of Mexico, which are plagued by nitrogen runoff from corn and soybean fields. Rising N levels have depleted oxygen and created a hypoxic dead zone in the gulf. Specifically, this study looked at whether diversifying cropland with perennial energy crops -- such as CABBI grows -- could reduce N loss associated with corn production and thus improve water quality while meeting RFS goals.

It found that the most economical place to grow perennial bioenergy crops, which typically require less nitrogen fertilizer and produce lower N runoff, was on idle cropland. This limited their potential to reduce N runoff, which would be highest if they replaced N-intensive row crops on cropland. The N reduction benefits of bioenergy crops would also be more than offset by the increase in runoff generated by the harvesting of low-cost crop residues such as corn stover - leaves and stalks of corn left after the grain is harvested - for cellulosic biomass. The findings suggest that targeted incentives for reducing N loss are needed to persuade growers to replace N-intensive row crops, as well as biomass from corn stover, with bioenergy crops. Published in Environmental Science and Technology, the study was led by Associate Professor of Agronomy Andy VanLoocke and Ph.D. student Kelsie Ferin of Iowa State University.

Together, the studies showed that maintaining the corn ethanol mandate pushes more land into corn production, which increases the market price of other agricultural commodities. While producers might benefit from higher market prices, consumers who buy fuel or agricultural products pay the cost. And although the corn ethanol mandate can help mitigate GHG by displacing fossil fuels with biofuels, it increases nitrogen leaching because of increased fertilizer use with expanded corn production. That worsens water quality in the MARB and Gulf of Mexico and leads to a huge environmental and social cost.

In contrast, the cellulosic ethanol mandate could provide an overall benefit with the right policies. Supporting research and development to lower the cost of converting biomass to cellulosic ethanol would substantially reduce production costs and increase social benefits, and a high monetized value for GHG mitigation could offset all other costs.

These findings should lead policymakers to question the effectiveness of technology mandates like the RFS that treat all cellulosic feedstocks as identical. Such a mandate incentivizes cheaper options like corn stover and limits incentives to grow high-yielding perennial energy crops that have lower carbon intensity and N-leakage but are more costly under current technology.

CABBI researchers hope performance-based policies -- including the low carbon fuel standard, carbon and nitrogen leakage taxes, or limits on crop-residue harvest and N application -- can be implemented to supplement the RFS mandates after 2022.

The complexity of biofuel policies requires expertise from both agronomists and economists, as in these studies. Both research teams developed integrated economic and biophysical models incorporating a broad range of factors into their analyses.

"CABBI provides a great opportunity for this kind of research, inspiring collaborations from different disciplines," Khanna said.

Credit: 
University of Illinois at Urbana-Champaign Institute for Sustainability, Energy, and Environment

Antiviral response: Eosinophils active in immediate defense during influenza A infection

MEMPHIS, Tenn. - For the first time in published literature, Le Bonheur Children's Hospital and University of Tennessee Health Science Center (UTHSC) researchers showed that a type of white blood cell known as the eosinophil modifies the respiratory barrier during influenza A (IAV) infection, according to a recent paper in the journal Cells. This research could have implications for understanding SARS-CoV-2 (COVID-19) infection in asthmatic patients.

The Le Bonheur/UTHSC study found that eosinophils immunomodulate airway epithelial cells during IAV infection, helping to neutralize the virus and protect the airway. The study was led by UTHSC Postdoctoral Fellow Meenakshi Tiwary, PhD, from the lab of Amali Samarasinghe, PhD, Director of the Pediatric Asthma Research Program and Plough Foundation Chair of Excellence in Pediatrics, in collaboration with Robert Rooney, PhD, assistant professor of pediatrics at UTHSC and director of the Biorepository and Integrative Genomics Initiative at Le Bonheur, and Swantje Liedmann, PhD, a postdoctoral fellow at St. Jude Children's Research Hospital.

"We examined eosinophil responses to influenza A virus during the early phase of infection and found that eosinophils exhibit multiple functions as active mediators of antiviral host defense," said Samarasinghe. "These include virus neutralization, trafficking to draining lymphoid organs and, most importantly, protecting the airway barrier from virus-induced cytopathology."

The study used both mouse models and cell culture models to observe eosinophil responses during the early phases of IAV infection. Investigators found that eosinophils altered the respiratory epithelial transcriptome to enhance epithelial cell defense against virus-induced damage. Because eosinophil-deficient allergic mice had heightened virus-induced damage to the epithelial barrier, the findings indicate that eosinophil-epithelial cell interactions are necessary for host protection during influenza.

Further results included the following:

Eosinophils are activated within 20 minutes of virus infection. As a result of IAV infection, eosinophil movement into and out of the lungs increased, and activated eosinophils expressed markers necessary to migrate from the site of infection into lymphoid organs.

Crosstalk between airway epithelial cells and eosinophils promotes activation in both cell types. The presence of eosinophils reduced expression of specific surface markers in epithelial cells when placed in close proximity during IAV infection. This is especially important given that this study provides direct evidence that eosinophils are not toxic to host tissue.

This study builds on research from Samarasinghe's lab investigating why asthmatics were less likely than non-asthmatics to suffer from severe disease during the swine flu pandemic of 2009. Previous research has shown that eosinophils are more prevalent in asthmatic lungs and that they aided patients during infection.

"Reports from the COVID-19 pandemic have early indicators that patients with allergic asthma are not at increased risk of severe COVID-19," said Samarasinghe. "It is tempting to speculate that eosinophils may play an antiviral role against SARS-CoV-2, similar to their function against influenza A and other virus infections."

Credit: 
Le Bonheur Children's Hospital

Do senior faculty publish as much as their younger colleagues?

An Academic Analytics Research Center (AARC) study published in the journal Scientometrics found that the research publication activity of senior faculty (scholars who earned their terminal degree 30 or more years ago) exceeded expectations based on age-cohort population for book chapters and books, and that senior scholars largely kept pace in terms of journal article publications. "Across all disciplines, senior faculty may be uniquely positioned to invest their time in a longer-term publication effort, shifting their research focus to the review and synthesis of ideas through the publication of books and chapters," said AARC Senior Researcher and co-author of the study, Bill Savage, Ph.D.

The study explored the publishing activity of 167,299 unique faculty members at American Ph.D.-granting universities in six broad fields over a five-year period (2014-2018). Comparing the publication activity of early-career, mid-career, and senior faculty, the study describes a generalized career research trajectory as a gradual shift of focus away from journal article publications and toward book and book chapter publication.

Peter Lange, former Provost and current Thomas A. Langford University Professor of Political Science and Public Policy at Duke University and Senior Advisor to AARC, commented: "This article shows convincingly that a widespread belief - that as scholars age, especially in fields driven by grants and journal publication, their scholarly productivity declines - is incorrect. It changes but does not necessarily decline, and it changes in ways that enrich the scholarly literature."

AARC Director and co-author of the study, Anthony Olejniczak, Ph.D., commented: "Synthesizing years of knowledge production to explore new topics or re-examine old ones is an important service to one's field, one that senior scholars are especially able to perform." The study identified important implications for voluntary separation programs, departmental and program review processes, and retirement planning strategies.

Credit: 
Academic Analytics Research Center (AARC)

New mouse model provides first platform to study late-onset Alzheimer's disease

Irvine, Calif., April 27, 2021 -- University of California, Irvine biologists have developed a new genetically engineered mouse model that, unlike its predecessors, is based on the most common form of Alzheimer's disease. The advance holds promise for making new strides against the neurodegenerative disease as cases continue to soar. Their study appears in the journal Nature Communications.

Link to study: https://www.nature.com/articles/s41467-021-22624-z

While over 170 Alzheimer's mouse models have been in use since the 1990s, those models mimic early-onset AD, also known as "familial AD," which accounts for less than 5 percent of total AD cases. Until recently, scientists introduced mutations found in human familial-risk genes, such as the amyloid precursor protein and presenilin 1, into the mouse genome to generate the mouse models. The UCI team decided to take a new approach by developing a mouse model better positioned to analyze the causes of late-onset AD, also called "sporadic AD," which encompasses the remaining 95 percent of cases.

"We believed models developed on the rare familial type might be a reason therapies have worked in the lab but haven't translated into clinical trial success," said Frank LaFerla, professor of neurobiology & behavior and the study's co-senior author; he is also dean of the UCI School of Biological Sciences and director of the UCI Alzheimer's Disease Research Center. "We decided it was time to begin developing a model that embodies the far more common late-onset form of AD."

"The existing AD models have been invaluable to the field by helping provide a better understanding of the pathogenesis of the disease." said David Baglietto-Vargas, assistant research professor of neurobiology & behavior and the study's first author. "Unfortunately, many of those same models have unique physiological alterations that have led scientists to misinterpret some experimental findings."

Rather than introduce mutations from familial-AD risk genes into the mouse, the UCI team generated the new model by using genetic engineering to switch three amino acids in the mouse amyloid precursor protein to make it more closely mirror its human counterpart. The result is what the researchers are calling a "platform model" for late-onset AD.

While the mice don't display the more advanced brain plaque and tangle pathology linked with the disease, the model manifests AD-associated conditions that precede and likely underlie more advanced pathology. These AD-related conditions include age-related alterations in cognition, inflammation, brain volume and others consistent with human changes.

"This model can be used to understand very early events in the brain that may be relevant," said Kim Green, professor and vice-chair of neurobiology & behavior and co-senior author of the study. "We can use environmental and genetic factors to examine which aspects of aging are important in developing late-onset AD."

"This mouse is a foundational step toward modeling late-onset AD with its hallmark features of plaques and tangles," said Grant MacGregor, professor of developmental & cell biology and study co-author. "It will take a lot more time and inclusion of additional subtle AD-associated genetic changes to achieve it, but the result of this study suggests we are on the right path and that our approach may bear fruit."

Unless there are medical breakthroughs, the number of Americans 65 and older with Alzheimer's could rise from 6.2 million currently to 12.7 million by 2050, according to the Alzheimer's Association.

Credit: 
University of California - Irvine

Canola growth environments and genetics shape their seed microbiomes

image: Canola seeds vary not only in color, but also in their microbiomes.

Image: 
Zayda Morales Moreira

Just as humans receive the first members of their microbiomes from their mothers, seeds may harbor some of the first microorganisms plants encounter. While these initial microbes could become influential players in the plants' microbiomes, the microbial communities that colonize seeds have not received as much attention as root, shoot, or soil microbiomes. To understand how seed microbiomes are assembled, a group of researchers at the University of Saskatchewan (Canada) examined the relative effects of growth environment and plant genotype on the seed microbiome of canola, a globally important crop grown in diverse environments.

In their recently published paper in Phytobiomes Journal, Zayda Morales Moreira, Bobbi Helgason, and James Germida characterized the seed microbiomes of eight canola lines harvested in different years and from different sites in Saskatchewan that are representative of local canola growing regions. They found that seed microbiomes differed not only by harvest year and location, but also by the variety of canola, indicating that both genetic and environmental factors play important roles in seed microbiome assembly. These findings suggest that both plant breeding and seed production conditions could be used to shape beneficial seed microbiomes. Lead author Zayda Morales Moreira believes "increasing our knowledge of how microbial communities carried by seeds are assembled, transmitted, and preserved offers a promising way for breeding programs to consider microbial communities when selecting for more resilient and productive cultivars."

This paper lays the groundwork for future research into the seed microbiome, such as examining how these differently assembled communities of microbes affect plant growth and resilience. Their results also suggest the presence of a core microbiome in all seed samples that includes potentially beneficial microbes such as Pseudomonas spp., which are known to promote plant growth. If seed-derived microbes improve seedling survival or become key components of the microbiome throughout the plants' life cycles, optimizing seed microbiomes--through breeding or seed inoculants--could be an important tool for managing plant microbiomes in agriculture. The authors hope that understanding the seed microbiome will advance current sustainable agricultural practices and breeding techniques.

Credit: 
American Phytopathological Society

NIST study suggests how to build a better 'nanopore' biosensor

video: To identify molecules, scientists can use a type of biosensor called a nanopore -- a tiny hole in a membrane that allows fluid to flow through it. When a molecule of interest is driven into the pore, it partially blocks the flow of current, providing a signal researchers can use to identify the molecule. But in order to get a good measurement, the molecule must sit inside the pore for long enough. NIST researchers are using laser light to measure the energy of molecules as they transition into and out of nanopores. The resulting information can help scientists design optimized pores for detecting particular molecules.

Image: 
Sean Kelley/Inform Studio

Researchers have spent more than three decades developing and studying miniature biosensors that can identify single molecules. In five to 10 years, when such devices may become a staple in doctors' offices, they could detect molecular markers for cancer and other diseases and assess the effectiveness of drug treatment to fight those illnesses.

To help make that happen and to boost the accuracy and speed of these measurements, scientists must find ways to better understand how molecules interact with these sensors. Researchers from the National Institute of Standards and Technology (NIST) and Virginia Commonwealth University (VCU) have now developed a new approach. They reported their findings in the current issue of Science Advances.

The team built its biosensor by making an artificial version of the biological material that forms a cell membrane. Known as a lipid bilayer, it contains a tiny pore, about 2 nanometers (billionths of a meter) in diameter, surrounded by fluid. Ions that are dissolved in the fluid pass through the nanopore, generating a small electric current. However, when a molecule of interest is driven into the membrane, it partially blocks the flow of current. The duration and magnitude of this blockade serve as a fingerprint, identifying the size and properties of a specific molecule.

To make accurate measurements for a large number of individual molecules, the molecules of interest must stay in the nanopore for an interval that is neither too long nor too short (the "Goldilocks" time), ranging from 100 millionths to 10 thousandths of a second. The problem is that most molecules only stay in the small volume of a nanopore for this time interval if the nanopore somehow holds them in place. This means that the nanopore environment must provide a certain barrier -- for instance, the addition of an electrostatic force or a change in the nanopore's shape -- that makes it more difficult for the molecules to escape.

The minimum energy required to breach the barrier differs for each type of molecule and is critical for the biosensor to work efficiently and accurately. Calculating this quantity involves measuring several properties related to the energy of the molecule as it moves into and out of the pore.

Critically, the goal is to measure whether the interaction between the molecule and its environment arises primarily from a chemical bond or from the ability of the molecule to wiggle and move freely throughout the capture and release process.

Until now, reliable measurements to extract these energetic components have been missing for a number of technical reasons. In the new study, a team co-led by Joseph Robertson of NIST and Joseph Reiner of VCU demonstrated the ability to measure these energies with a rapid, laser-based heating method.

The measurements must be conducted at different temperatures, and the laser heating system ensures that these temperature changes occur rapidly and reproducibly. That enables researchers to complete measurements in less than 2 minutes, compared to the 30 minutes or more it would otherwise require.
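Temperature-dependent measurements of this kind are commonly analyzed with a van't Hoff- or Arrhenius-style fit: the logarithm of a molecule's mean residence time is plotted against inverse temperature, and the slope and intercept separate the bond-like (enthalpic) and mobility-related (entropic) contributions described above. The sketch below illustrates that general idea with made-up numbers; it is not the study's actual analysis or data.

```python
import numpy as np

# Illustrative residence-time data (made-up values, NOT from the study):
# mean time a molecule stays in the pore, measured at several temperatures.
T = np.array([295.0, 305.0, 315.0, 325.0])         # temperature, K
tau = np.array([2.0e-3, 1.1e-3, 6.5e-4, 4.0e-4])   # mean residence time, s

R = 8.314  # gas constant, J/(mol K)

# For a thermally activated escape, tau ~ tau0 * exp(Ea / (R T)), so ln(tau)
# is linear in 1/T: the slope carries the enthalpy-like barrier term and the
# intercept carries the entropic / prefactor contribution.
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)

barrier_J_per_mol = slope * R
print(f"enthalpic barrier ~ {barrier_J_per_mol / 1000:.1f} kJ/mol")
print(f"intercept (entropic/prefactor term): {intercept:.2f}")
```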

"Without this new laser-based heating tool, our experience suggests that the measurements simply won't be done; they would be too time consuming and costly," said Robertson. "Essentially, we have developed a tool that may change the development pipeline for nanopore sensors to rapidly reduce the guesswork involved in sensor discovery," he added.

Once the energy measurements are performed, they can help reveal how a molecule interacts with the nanopore. Scientists can then use this information to determine the best strategies for detecting molecules.

For example, consider a molecule that interacts with the nanopore primarily through chemical -- essentially electrostatic -- interactions. To achieve the Goldilocks capture time, the researchers experimented with modifying the nanopore so that its electrostatic attraction to the target molecule was neither too strong nor too weak.

With this goal in mind, the researchers demonstrated the method with two small peptides, short chains of compounds that form the building blocks of proteins. One of the peptides, angiotensin, stabilizes blood pressure. The other peptide, neurotensin, helps regulate dopamine, a neurotransmitter that influences mood and may also play a role in colorectal cancer. These molecules interact with nanopores primarily through electrostatic forces. The researchers inserted into the nanopore gold nanoparticles capped with a charged material that boosted the electrostatic interactions with the molecules.

The team also examined another molecule, polyethylene glycol, whose ability to move determines how much time it spends in the nanopore. Ordinarily, this molecule can wiggle, rotate and stretch freely, unencumbered by its environment. To increase the molecule's residence time in the nanopore, the researchers altered the nanopore's shape, making it more difficult for the molecule to squeeze through the tiny cavity and exit.

"We can exploit these changes to build a nanopore biosensor tailored to detecting specific molecules," says Robertson. Ultimately, a research laboratory could employ such a biosensor to identify biological molecules of interest or a doctor's office could use the device to identify markers for disease.

"Our measurements provide a blueprint for how we can modify the interactions of the pore, whether it be through geometry or chemistry, or some combination of both, to tailor a nanopore sensor for detecting specific molecules, counting small numbers of molecules, or both," said Robertson.

Credit: 
National Institute of Standards and Technology (NIST)

Skoltech researchers propose a new data-driven tool to better understand startups

image: Google Trends and valuation data for Kabbage (a) and Lending Club (b)

Image: 
Malyy et al., 2021

Skoltech researchers used Google Trends, a source of big data generated by people's interactions with the Internet, to develop a new methodology, both a tool and a data source, for analyzing and researching the growth of startups. A paper reporting these findings was published in the technology management journal Technological Forecasting and Social Change.

Startups and the high-growth technology-based ventures they transform into are regarded as key drivers of economic development, innovation, and job creation at the national and global levels. However, despite their crucial importance for the economy and high interest from researchers and policy-makers, startups display growth patterns that are difficult to analyze. These fragile, early-stage private businesses, which may scale up quickly, have neither the time, the interest, nor the obligation to share much data about what they have achieved, when, or how. Thus, to outside observers, startups look like "black boxes" whose progress can hardly be assessed due to a lack of objective information.

Maksim Malyy, a PhD student at the Skoltech Center for Entrepreneurship and Innovation (CEI), has been intrigued by this problem since he worked in a startup accelerator in St. Petersburg before joining Skoltech. Having explored the theoretical and practical aspects of the problem over the last three years, Maksim, his supervisor Professor Zeljko Tekic, and Skoltech Assistant Professor Tatiana Podladchikova arrived at valuable insights into how to deal with the scarcity of data on startups. Some of their findings are reported in the paper.

Maksim explains why this research is so important: "We demonstrate that web-search traffic information, in particular Google Trends data, can serve as a valuable source of high-quality data for analyzing the advancement of startups and the growth-oriented technology-based new ventures they evolve into. We analyzed a large and transparently selected set of US-based companies and showed the existence of a strong correlation between the curves based on Google searches by company name and those depicting valuations achieved through a series of investment rounds."

According to the authors, this correlation enables using Google Trends data as a proxy measure of growth instead of non-public and rarely available measures like sales, employee and market share growth. Google Trends data, which are public, easy to collect and available for almost any company since its inception, can help in building more accurate and even real-time data-driven growth paths for startups. With these evolution curves, one could revisit some old answers, ask new questions, and come up with more solid concepts, theories, and predictions.
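To give a concrete, hedged sense of what such an analysis might look like (this is not the authors' exact pipeline), the sketch below pulls Google Trends interest for a company name using the unofficial pytrends library and correlates it with a valuation series built from funding rounds. The timeframe, the valuation figures, and the alignment of the two series are invented for illustration only.

```python
# Illustrative sketch, not the study's actual methodology: correlate
# Google Trends search interest with a hypothetical valuation curve.
import pandas as pd
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["Lending Club"], timeframe="2010-01-01 2015-12-31")

# Monthly search interest, resampled to quarters to match funding-round cadence
interest = pytrends.interest_over_time()["Lending Club"].resample("Q").mean()

# Hypothetical quarterly valuations (USD millions); in practice these would
# come from valuations disclosed at successive investment rounds.
valuation = pd.Series(
    [80, 120, 180, 270, 400, 600, 900, 1300],
    index=interest.index[:8],
)

# Pearson correlation between the search-interest and valuation curves
print(interest.loc[valuation.index].corr(valuation))
```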

Maksim believes that this study has strong implications for startup research: "Our findings suggest that for startups, especially thriving unicorns or B2C digital platforms, the proposed approach may become an equivalent of an X-ray scan, offering a cheap, easy, and non-invasive way to understand the workings of a technology-based new venture."

Commenting on the work, Professor Tekic and Assistant Professor Podladchikova cite the report of one of the paper's reviewers: "I think this paper will stand the test of time and be useful for many years to come. It truly is a fascinating study."

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

An atlas of HIV's favorite targets in the blood of infected individuals

image: Scientists at Gladstone Institutes have identified the blood cells that are most susceptible to HIV infection.

Image: 
Photo: Gladstone Institutes

SAN FRANCISCO, CA--April 27, 2021--In the 40-some years since the beginning of the HIV/AIDS epidemic, scientists have learned a lot about the virus, the disease, and ways to treat it. But one thing they still don't completely understand is which exact cells are most susceptible to HIV infection.

Without this knowledge, it is difficult to envision targeting these cells to protect the millions of people who encounter the virus for the first time every year, or the infected people in whom infection will likely rebound if they go off therapy.

Scientists have known for a long time that the virus homes in on so-called memory CD4+ T cells, a type of cell that helps the human body build lasting immunity against pathogens. But that is still too broad a category to target for therapy.

"CD4+ T cells orchestrate the immune response against all kinds of pathogens, so you can't just eliminate them to prevent HIV infections," says Gladstone Associate Investigator Nadia Roan, PhD. "But if you can find the more specific subsets of CD4+ T cells that are highly susceptible to HIV infection, you may be able to specifically target those cells without detrimental side effects."

Much knowledge about HIV infection comes from in vitro experiments (in a petri dish), where scientists expose CD4+ T cells cultured in the lab to the virus. These cell cultures are not a perfect model for the human body's complex ecosystems in which infection normally takes place. Might in vitro infection yield a skewed view of the virus's preference?

To answer this question, Roan and her team compared CD4+ T cells infected in vitro to the CD4+ T cells circulating in the blood of 11 individuals at various stages of infection. Some blood samples were taken before the donors had started treatment with antiretroviral therapy, some after. Yet others came from individuals who had stopped their treatment and were experiencing new rounds of infection.

Using technology they have honed over the years, the researchers established a detailed atlas of the CD4+ T cells in individuals not on antiretroviral treatment, which they have now published in the scientific journal Cell Reports.

"Our work affords novel insight into the basics of how HIV behaves in the human body, rather than just in a lab dish," says Roan, who is also an associate professor of urology at UC San Francisco. "It informs our understanding of what really happens during an active infection, which is interesting in its own right. Moreover, we know that some infected cells become reservoirs of latent virus, so our work could help us better understand how the reservoir forms during an infection."

The technology Roan and her team deployed, called CyTOF/PP-SLIDE, distinguishes cells with exquisite precision based on the proteins they contain or carry on their surface. With this information, the scientists can classify CD4+ T cells into myriad subsets, and then determine whether some subsets are more susceptible to infection than others.

A crucial perk of this technology is that it can trace infected cells back to their original state prior to infection.

"That's important," says Guorui Xie, PhD, a postdoctoral researcher in Roan's lab and the first author of the study. "We know that when HIV infects cells, it remodels the cells such that they no longer contain the exact same levels of proteins as they did before infection. With CyTOF/PP-SLIDE, we can identify the uninfected cells that most closely match the infected ones in the same patient. These uninfected cells can give us important information about what the cells targeted by HIV resembled before the virus remodeled them."

Roan's team found that remodeling was indeed extensive in blood CD4+ T cells infected in vivo (in people) as well as in vitro. In the process, they made a surprising finding about one of HIV's preferred targets. Prior studies have suggested that HIV prefers to infect a subtype of CD4+ T cells, called Tfh, and Roan's team confirmed these cells to be susceptible to HIV. However, they also discovered that the virus can infect non-Tfh cells and remodel them such that they adopt features of Tfh cells.

"This result strikes a cautionary note in our field," says Roan. "You really can't tell which cells HIV prefers to target simply by looking at infected cells. You need to know what the cells looked like before remodeling."

The scientists also found that remodeling causes infected blood cells to alter their surface in ways that may change how they move through the body. Roan prudently speculates that this might help the virus steer infected cells toward sites where it can infect even more cells.

"Whatever its exact purpose, remodeling is probably not just a chance event," adds Roan. "A virus as small as HIV depends crucially on the resources provided by its host to grow and spread. It's likely that nothing the virus does to its host cell is an accident."

The profile of HIV's favorite cells differed somewhat between in vitro and in vivo infections. Nevertheless, the researchers found one subset of cells that was preferentially infected in both cases, and could become a useful model for further lab studies.

The team also confirmed that not all CD4+ T cells are equally susceptible to HIV infection in vivo, which gives them hope that the most susceptible cells could eventually become targets of preventive interventions.

Xie and Roan are now planning to obtain blood samples from more donors to see whether HIV's targets differ between a first infection and the return of the virus after a lapse in therapy, or between men and women. Ultimately, they would also like to look at in vivo-infected cells from mucosal tissues such as the gut and genital tract, where most HIV infections begin. But these samples are much harder to procure.

In the meantime, the researchers are making public the atlas of all the cells they have analyzed, along with the dozens of proteins they found to be affected in these cells after HIV infection, which they hope will be a valuable resource for the HIV research community.

"There is still much to discover in this atlas that may help uncover new insights into HIV infection and how it develops, and perhaps lead to the identification of new approaches for HIV/AIDS prevention," says Roan.

Credit: 
Gladstone Institutes