Tech

New research highlights how plants are slowing global warming

Chi Chen, a Boston University graduate researcher, and Ranga Myneni, a BU College of Arts & Sciences professor of earth and environment, released a new paper that reveals how humans are helping to increase the Earth's plant and tree cover, which absorbs carbon from the atmosphere and cools our planet. The boom of vegetation, fueled by greenhouse gas emissions, could be skewing our perception of how fast we're warming the planet.

Taking a closer look at 250 scientific studies, land-monitoring satellite data, climate and environmental models, and field observations, a team of Boston University researchers and international collaborators have illuminated several causes and consequences of a global increase in vegetation growth, an effect called greening.

In a new study, published in Nature Reviews Earth & Environment, the researchers report that climate-altering carbon emissions and intensive land use have inadvertently greened half of the Earth's vegetated lands. And while that sounds like it may be a good thing, this phenomenal rate of greening, together with global warming, sea-level rise, and sea-ice decline, represents highly credible evidence that human industry and activity are dramatically impacting the Earth's climate, say the study's first authors, Shilong Piao and Xuhui Wang of Peking University.

Green leaves convert sunlight to sugars while replacing carbon dioxide in the air with water vapor, which cools the Earth's surface. The reasons for greening vary around the world, but often involve intensive use of land for farming, large-scale planting of trees, a warmer and wetter climate in northern regions, natural reforestation of abandoned lands, and recovery from past disturbances.

And the chief cause of the global greening we're experiencing? It seems to be that rising carbon dioxide emissions are providing more and more fertilizer for plants, the researchers say. As a result, the boom in global greening since the early 1980s may have slowed the rate of global warming, possibly by as much as 0.2 to 0.25 degrees Celsius.

"It is ironic that the very same carbon emissions responsible for harmful changes to climate are also fertilizing plant growth, which in turn is somewhat moderating global warming," says study coauthor Dr. Jarle Bjerke of the Norwegian Institute for Nature Research.

Boston University researchers previously discovered that, based on near-daily NASA and NOAA satellite imaging observations since the early 1980s, vast expanses of the Earth's vegetated lands from the Arctic to the temperate latitudes have gotten markedly more green.

"Notably, the NASA [satellite data] observed pronounced greening during the 21st century in the world's most populous and still-developing countries, China and India," says Ranga Myneni, the new study's senior author.

Even regions far, far removed from human reach have not escaped the global warming and greening trends. "Svalbard in the high Arctic, for example, has seen a 30 percent increase in greenness [in addition to] an increase in [summer temperatures] from 2.9 to 4.7 degrees Celsius between 1986 and 2015," says study coauthor Rama Nemani of NASA's Ames Research Center.

Over the last 40 years, carbon emissions from fossil fuel use and tropical deforestation have added 160 parts per million (ppm, a measure of concentration in the air) of CO2 to Earth's atmosphere. About 40 ppm of that has diffused passively into the oceans and another 50 ppm has been actively taken up by plants, the researchers say. But 70 ppm remains in the atmosphere, and together with other greenhouse gases, is responsible for the land warming patterns that have been observed since the 1980s.
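To make the budget arithmetic above explicit, here is a minimal sketch using only the figures quoted in the paragraph; the percentage conversions at the end are an illustrative addition, not numbers from the study.

```python
# Back-of-the-envelope check of the CO2 budget described above (all values in ppm).
emitted = 160        # added by fossil fuel use and tropical deforestation over ~40 years
ocean_uptake = 40    # diffused passively into the oceans
land_uptake = 50     # actively taken up by plants (the greening effect)

remaining = emitted - ocean_uptake - land_uptake
print(f"Remaining in atmosphere: {remaining} ppm")                # 70 ppm, as stated above
print(f"Share left in the air: {remaining / emitted:.0%}")        # ~44% of the added CO2
print(f"Share taken up by plants: {land_uptake / emitted:.0%}")   # ~31% removed by greening
```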

"Plants are actively defending against the dangers of carbon pollution by not only sequestering carbon on land but also by wetting the atmosphere through transpiration of ground water and evaporation of precipitation intercepted by their bodies," says study coauthor Philippe Ciais, of the Laboratory of Climate and Environmental Sciences, Gif-sur-Yvette, France. "Stopping deforestation and sustainable, ecologically sensible afforestation could be one of the simplest and cost-effective, though not sufficient, defenses against climate change," he adds.

It is not easy to accurately estimate the cooling benefit from global greening because of the complex interconnected nature of the climate system, the researchers say. "This unintended benefit of global greening, and its potential transitory nature, suggests how much more daunting, and urgent, is the stated goal of keeping global warming to below 1.5 to 2 degrees Celsius, especially given the trajectory of carbon emissions and history of inaction during the past decades," says study coauthor Hans Tømmervik of the Norwegian Institute for Nature Research, Norway.

Credit: 
Boston University

Zoo improvements should benefit all animals

image: A mix of corals and anemones provides a dynamic environment for fish to interact with

Image: 
Paul Rose

Zoo improvements should benefit all animals and include a wide range of "enrichment" techniques, researchers say.

Zoos have made great advances in "environmental enrichment" - making changes to encourage natural behaviour and improve animal wellbeing.

But researchers from the University of Exeter and the University of Winchester say efforts disproportionately focus on large, "popular" animals - with less focus on creatures such as invertebrates, fish and reptiles.

The study, based on interviews with zoo professionals, revealed support for enrichment - but a lack of evaluation and evidence to measure the effectiveness of changes.

"There are a range of different types of enrichment, and it seems that only certain types are used for certain species," said Dr Paul Rose, of the University of Exeter.

"For example, enrichment for large predators will often focus on the way they are fed.

"But nutrition is only one of the five categories of enrichment - along with the physical environment, sensory stimulation, occupation (activities) and social structure."

Previous Exeter research showed that research carried out in zoos focusses disproportionately on animals that are popular with zoo visitors - and a similar pattern exists in enrichment.

"It's common to see a lot of effort devoted to enriching the environment for lions or tigers," said Dr Rose.

"But who considers giving enrichment to invertebrates?

"We wanted to investigate what enrichment is out there for the 'less exciting' species we house in the zoo.

"Invertebrates, birds, reptiles and fish are all complex beings, and each species has evolved for a particular niche - so it's possible to enrich their environments to reflect their natural habitats and social structures."

He added: "Different planting and features make enclosures rich and varied - and not just to human eyes.

"By considering natural history and a species' social structure we increase the appeal of this enriched environment to the animals themselves, and to the zoo's visitors."

The paper says environmental enrichment must be "underpinned by an evidence-based approach".

"Zoos work hard to enrich environments, but they need to further evaluate its effectiveness," Dr Rose said.

"As there is little published information on how well enrichment works, to get best practice we need to keep researching what animals 'get' out of the enrichment they are provided with, so we can see its long-term effect.

"What's great to see is that zoo professionals appreciate that a species' natural behaviour and its ecology are the driving force behind the design of enrichment, so we are giving enrichment to zoo animals that enables them to behave in a natural way.

"We just need to measure the effect of this.

"The more we can encourage people to do science at the zoo, the more information we will have on how zoo animals like or enjoy the enrichment they are provided with."

Credit: 
University of Exeter

Young people putting music to the crisis: the role of music as a political expression

image: In Tunis, during the Revolution (June 2011), rap was a way of expressing anger at the authoritarian regime of Ben Ali

Image: 
Carles Feixa and José Sánchez-García, UPF researchers

Songs that Sing the Crisis: Music, Words, Youth Narratives and Identities in Late Modernity is the title of a special issue of the journal Young (Nordic Journal of Youth Research) to be published on 1 February, now available online, that reflects on the role of music as an expression of the crisis. It contains case studies of musical genres including rap, punk, folk metal, black metal, fado, reggaeton and mahraganat in countries such as Spain, Portugal, Finland, Ireland and Egypt.

The special issue includes studies by researchers from the Youth, Society and Communication Research Group (JOVIS.com) at the UPF Department of Communication: one by Mònica Figueras (together with URV lecturers Núria Araüna and Iolanda Tortajada) on feminist reggaeton in Spain, and another by the researchers José Sánchez-García and Carles Feixa about rap and mahraganat in Egypt after the revolution.

As well as being an author, Carles Feixa Pàmpols (UPF) is also the editor of the special issue, together with Paula Guerra (University of Porto, Portugal), Shane Blackman (Canterbury Christ Church University, UK) and Jeanette Østergaard (The Danish National Centre for Social Research, Denmark).

The expansion of reggaeton in Spain

The article by Núria Araüna, Iolanda Tortajada and Mònica Figueras-Maz focuses on a non-Western musical style, reggaeton, which became commercialized and globalized at the turn of the century but which, in Spain after the crisis, adopted a more politicized stance. It originated as an underground hybrid style belonging to the lower classes of a peripheral region (the Caribbean) and was considered a male (and lower-class) domain, but it quickly spread from marginalized sectors to the middle classes and the mainstream. Reggaeton can be seen as an exercise in resignification and empowerment, a tactic to subvert discriminatory gender representations.

The study examines the expansion of reggaeton in Spain from the point of view of gender relations and the mainstreaming of popular feminism. It focuses on three young popular artists: Brisa Fenoy, Ms. Nina and Tremenda Jauría, who have appropriated the style as a subversive tool to convey feminist messages, through the lyrics and body movements.

Both in the commercial slant of the former two and in the alternative stance of the latter, the lyrics and their dissemination in political contexts, such as the #MeToo demonstrations on International Women's Day (8 March) 2018, allow the authors to conclude that reggaeton can be seen as an exercise in resignification and empowerment, a tactic to subvert discriminatory gender representations: "these songs and performances are a manifestation of a complex underlying process (...) the so-called revival of feminist movements in Spain as a result of the crisis, which caused greater insecurity and poverty in the working classes (but especially among women and young people)", the authors of the work explain.

Analysis of the music of the Arab Spring

The article by José Sánchez-García and Carles Feixa focuses on the politics of a popular world music, rap, and a glocal music, mahragan, in Tunisia and Egypt, respectively. The research is part of the European project TRANSGANG. Based on a comparative research design, the study combines the analysis of song lyrics with ethnographic data from the two countries after the so-called Arab Spring. These hybrid musical styles could be seen as the soundtrack to the revolution, but also as a factor motivating the protests.

In Tunis, during the Jasmine Revolution (June 2011), rap was the means of spreading discontent with Ben Ali's authoritarian regime, as the songs of El General clearly depict. There was even a division of gender and class: institutionalized politics for middle-class young people versus marginalized young people. In Egypt, Cairo mahragan was a transformation of Sufi music and dance, mixed with commercial and electronic rhythms, popular in the poorer neighbourhoods but regarded by the ruling classes as "tasteless", crude and Western-influenced.

The lyrics changed with the anti-Mubarak uprising that erupted on 25 January 2011: mahragan songs were politicized and attracted different social groups and generations. As one singer says: "We made music to make people dance, but we also talk about their concerns". In both cases, these musical styles were re-signified from a generational and gender perspective, moving from resistance to compulsory resilience: rap music in Tunisia and mahragan in Cairo allow lower-class young people to imagine hope and bring a critical focus to their multiple marginalizations.

A special issue that links music, identity, political and artistic protest

The presentation of the special issue states that "young people are often at the forefront of important contemporary social and political change, and they believe that music has been a central element in these events, whether as a promoter of political mobilization or as an important indicator of the profound changes and reconstructions of youth identity in late modernity". This special issue of Young has therefore sought to explore the issues raised by this dilemma, crossing notions of music, identity, and political and artistic protest through interdisciplinary analysis in the fields of sociology, anthropology, literature, cultural studies, media studies and history, among others; most importantly, it places music at the centre of studies on youth.

Credit: 
Universitat Pompeu Fabra - Barcelona

Autonomous vehicles could benefit health if cars are electric and shared

What impact will self-driving cars have on public health? The Barcelona Institute for Global Health (ISGlobal), an institute supported by "la Caixa", has taken part in a study that analysed the potential risks and benefits of autonomous vehicles for public health. The conclusions of the study, published in the Annual Review of Public Health, indicate that this new type of mobility could benefit public health if the cars are electric and the model used is based on ridesharing.

Forecasts indicate that, in 2020, 5% of car sales will involve self-driving vehicles and that this figure could rise to 40% (fully autonomous vehicles) by 2030. 'Autonomous technology' refers to technology that can drive a vehicle without the need for any active physical control or monitoring by a human driver. Car autonomy is classified on a six-level scale, starting at level zero (a vehicle with no automation, in which the driver performs all operating tasks and controls the driving environment) and going up to level five (a fully autonomous, completely automated vehicle).
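As a rough illustration of the six-level scale described above, here is a minimal sketch; the names for levels 1 through 4 follow commonly used SAE-style terminology and are assumptions, since the article only spells out levels zero and five.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Six-level vehicle automation scale (names for levels 1-4 are assumed,
    SAE-style labels; the article only describes levels 0 and 5)."""
    NO_AUTOMATION = 0          # driver performs all operating tasks
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5        # fully autonomous, completely automated vehicle

def is_fully_autonomous(level: AutomationLevel) -> bool:
    # Only level 5 requires no human control or monitoring at all.
    return level == AutomationLevel.FULL_AUTOMATION

print(is_fully_autonomous(AutomationLevel.PARTIAL_AUTOMATION))  # False
print(is_fully_autonomous(AutomationLevel.FULL_AUTOMATION))     # True
```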

David Rojas, first author of the paper and a researcher at ISGlobal and Colorado State University, explains the current situation: "At the international level, we are still seeing very little research or planning by the authorities in anticipation of the advent of these new transport technologies, despite the fact that autonomous vehicles have the potential to significantly modify our cities and the way we travel. And this innovative autonomous technology will also have an impact on public health."

The authors of the study synthesised data from published research to identify the possible direct and indirect health impacts of autonomous vehicles on the population. The study also includes a series of recommendations aimed at policy makers, health professionals and researchers in the field.

"The advent of autonomous vehicles may result in either health benefits or risks depending on a number of factors, such as how the technology is implemented, what fuel and engines are used, how self-driving cars are used and how they are integrated with other modes of transport," asserts Rojas.

The use of autonomous vehicles is likely to reduce the number of road accidents. One of the studies discussed in this paper estimated that if 90% of the cars in the United States were to become fully autonomous, about 25,000 lives could be saved every year, with economic savings of over $200 billion a year.

As well as providing benefits in terms of road safety, autonomous vehicles would also offer major opportunities for public health if the vehicles are electric and are used in a ridesharing format and integrated into a model that also prioritises public transport, cycling and walking. Such a model would promote physical activity, reduce air and noise pollution, and provide more public space for a healthy urban design.

However, self-driving vehicles could have a negative impact on public health if the future model is based on fossil fuel engines and individual ownership, leading to an increase in motorised traffic, greater sedentarism and worse air quality.

Author Mark J. Nieuwenhuijsen, researcher and Director of ISGlobal's Urban Planning, Environment and Health Initiative, concludes: "We need to start planning the implementation of autonomous technology as soon as possible so as to minimise the risks and maximise the health benefits. This technology should be used to support public and active transport, prioritising the most disadvantaged communities and contributing to a shift in urban planning and transport models that will lead to a healthier urban environment."

Credit: 
Barcelona Institute for Global Health (ISGlobal)

Pre-eruption seismograms recovered for 1980 Mount St. Helens event

image: 14-track recorder used to record the telemetered data from the 1980 Mount St. Helens pre-eruption seismic sequence

Image: 
Stephen Malone

Nearly 40 years ago, analog data tapes faithfully recorded intense seismic activity in the two months before the historic eruption of Mount St. Helens in Washington State in May 1980. It took some lengthy and careful restoration efforts--including a turn in a kitchen oven for some of the tapes--to recover their data.

The data provide a near-continuous sequence of seismic activity leading up to the 18 May eruption, but they do not appear to contain any significant change in the seismic signals that would have hinted to researchers "that something big was coming," said Stephen Malone, an emeritus professor at the University of Washington and the former director of the Pacific Northwest Seismic Network.

In Seismological Research Letters, Malone describes how he worked from 2005 to 2014 to laboriously recover the data from the tapes into a digital form. His paper is part of the journal's focus section on the challenges, successes and analyses of historical seismograms.

The Mount St. Helens tapes "are a unique data set for the time, in that there aren't very many cases where you have a volcanic earthquake sequence, certainly not one that was as wild and crazy and active as St. Helens, for which you have data other than on paper records," Malone said.

Digitally transformed, the tape data have been archived at the Incorporated Research Institutions for Seismology (IRIS) Data Management Center. Malone said few researchers have accessed them so far.

The quality of the data is less impressive than today's digital recordings, he noted, but combined with modern software and techniques, they could give researchers new insights into volcanic systems, and potentially into the May 1980 eruption.

Even though the initial analysis of the tapes did not reveal anything that would have suggested an imminent major eruption, Malone said, "if someone did a systematic look at these data using much more modern analysis tools, they might see a gradual change or something that did progress in a subtle way."

The tapes come from seismic stations installed by the Pacific Northwest Seismic Network after a March 1980 magnitude 4.2 earthquake near the volcano. Every five days, the tapes at the stations had to be serviced by hand. Some of the stations used radio telemetry to transfer their data to a digital recording system at the PNSN labs, but the system was designed so that only the largest earthquakes triggered recordings for later analysis.

Malone recalls the frantic activity at the lab at the time, where he and his colleagues were attempting to keep up with the incoming seismic data "but also interpreting it in real time for the benefit of the volcano hazards people," he said. "We kept on top of things the best we could, but retaining data for longevity was a lower priority then."

The five-day tapes, along with larger tapes that held some of the telemetered data and were changed irregularly at the lab, were forgotten in storage until Malone's retirement in 2007. He worked with them off and on for years to see what data might still be recovered.

Techniques from the audio recording industry helped Malone figure out how to proceed. For instance, he learned that baking the tapes could help stabilize them so that they could be spun and read. "If you bake the tapes, actually cook them at low temperatures for a day, it sets the binder oxide on the tape such that it won't be scraped off as easily." Malone and his wife experimented with this technique using a home oven, "and we were able to recover data with pretty good fidelity," he said.

For the larger tapes, Malone turned to a Canadian audio recording professional who had some experience with analog seismic data tapes used by oil and gas companies, and who had the equipment to spin the tape reels. The recovery was paid for with the help of a U.S. Geological Survey grant that provides funds to recover old seismic data.

Malone said the recovery "was a little like gambling. I didn't know how good the data would be until we had processed the whole tape. But I've been modestly pleased by the results."

All of the analog tapes were discarded after Malone determined that no more data could be recovered from them.

Credit: 
Seismological Society of America

Lost in translation: Organic matter cuts plant-microbe links

ITHACA, N.Y. - Soil scientists from Cornell and Rice Universities have dug around and found that although adding carbon-rich organic matter to agricultural fields is usually advantageous, it may muddle the beneficial underground communication between legume plants and microorganisms.

In a symbiotic relationship, microbes called rhizobia act like agricultural 'butlers' to fetch nitrogen from the air for the legume plants. When carbon is added to the soil, it helps the soil retain nutrients, but it can repress plant-microbe communication by up to 70%, according to new research published in Science Advances.

"The communication connection gets a lot of static, you might say," said Johannes Lehmann, professor of soil and crop sciences and senior author. "With carbon amendments to the soil, the plants and the microbes cannot chemically communicate as well anymore. They can't 'hear' each other."

For more than a century, scientists have known about the symbiotic relationship between legumes and rhizobial microorganisms. To help the soil's microorganisms and plants interact, flavonoids (plant and fungus metabolites) act as chemical 'telephones,' but higher amounts of organic carbon - such as compost or wood chip mulch - in the soil hinder that communication.

Credit: 
Cornell University

Advanced medical imaging combined with genomic analysis could help treat cancer patients

PHOENIX, Ariz. -- Jan. 30, 2019 -- Melding the genetic and cellular analysis of tumors with how they appear in medical images could give physicians and other cancer therapy specialists new insights into how to best treat patients, especially those with brain cancer, according to a new study led by the Translational Genomics Research Institute (TGen), an affiliate of City of Hope.

Published in the scientific journal PLOS ONE, this study suggests that the tumor microenvironment -- essentially all the cells both in and surrounding a tumor -- plays a vastly under-studied role in the development and growth of cancer.

"This study is a bridge between genetic sequencing, single-cell analysis and high-resolution medical imaging," said Dr. Michael Berens, Professor and Director of TGen's Cancer and Cell Biology Division, head of the institute's Glioma Research Lab, and one of the study's lead authors. "By literally focusing on how tumors look on the outside, as well as spelling out their DNA cell characteristics on the inside, we believe we can provide physicians, oncologists, radiologists, surgeons and others with timely information about how to best attack each patient's cancer."

This type of analysis is especially needed for brain cancer patients, whose fast-growing tumors are among the most difficult to diagnose, treat, remove and monitor. Because of the way brain tumors infiltrate surrounding healthy brain tissue, it is difficult for surgeons to remove all of the cancer without causing potentially catastrophic damage to a patient's memory and ability to function. And because of a matrix of tiny blood vessels that surround and protect the brain, only the smallest molecules can enter, limiting the types of drugs that could help shrink brain tumors.

Adding to the difficulty is the fact that each patient's brain tumor cells are different. Even the individual cancer cells within the tumor can vary.

"(The) diverse cellular composition and cellular interactions have not been well characterized," according to the study.

To better understand these cancers, Dr. Berens and others correlated the genetic and protein fingerprints of brain cancer cells with how those cells, and the surrounding cells, looked using MRI (magnetic resonance imaging), a test that is routinely performed as soon as brain cancer is suspected.

Drawback of DNA analysis without imaging

Currently, to characterize tumors at the molecular level, scientists commonly grind up many cells from a biopsy and extract DNA, RNA and other genomic materials so that the tumor can be sequenced and researchers can tell what genes, or set of genes, might be misbehaving to cause the cancer. But they can't tell how those cancer cells may have interacted with other nearby cells. The spatial and cellular context is lost.

Even the new revolution of single-cell sequencing leaves researchers with no context as to what cells were adjacent to the individual diseased cells they analyze.

"The resolution of MRI can't 'see' individual cell differences. But we were able to find evidence for correlations between genetic and cellular changes. We can see the consequences of specific genetic changes in brain cancer tumors that show up on a medical image," Dr. Berens said.

This type of information could potentially help surgeons decide how much tissue must be removed to extract the cancer, the dosage and frequency a radiologist might use to treat the cancer, and what specific drugs might be best suited for each patient at differing points in time.

It could help answer other questions. For example: How is the tumor infiltrating adjacent tissue? How is it interfering with the body's immune system? How is it generating new adjacent blood vessels to obtain a surge of nutrients to keep it growing?

Using fluorescent tags to identify biomarkers

To help answer these questions, TGen's partnering investigators at General Electric Global Research Center deployed a novel imaging tool, which they developed over the past decade together with GE Healthcare, called "Cell DIVE™", or multiplexed immunofluorescence imaging (MxIF), which is used to repeatedly stain tumor samples with antibodies attached to fluorescent dyes. Through an iterative process of staining and imaging, the method allows cell-level quantification of over 60 biomarkers in a single sample.

In this study, researchers analyzed more than 100,000 cells in brain tumor cases, using MxIF to uncover differences between two types of brain tumors based on mutations in the gene IDH1.

"Using this platform, we can visualize and analyze various cell types and cell states present in the tumor tissue as well as how they interact with each other and their microenvironment," said Dr. Anup Sood, a senior scientist at GE Global Research, and also a lead author of the study.

"Visualizing the microenvironment, especially, is key to understanding tumor behavior and the response to therapy, which has been difficult to analyze with conventional methods," Dr. Sood said. "The platform's unique capabilities, which allows deeper insights into cancer, were the result of a more than decade's long effort by a multidisciplinary team of more than 50 chemists, biologists, software and hardware engineers, computer scientists, statisticians with key industrial and academic partners."

Dr. Fiona Ginty, another GE senior scientist and the study's senior author, added: "The more cell-level data we analyze, the more we learn about tumor biology, cell-to-cell interactions, immune response and how tumors progress. Further, with the integration of cellular, medical imaging and genomic data, we gain a more holistic understanding of why certain tumor types progress more rapidly, and others are more slow-growing, and ultimately which drugs a patient may respond to."

Researchers' next step will be to use this new technology on a large cohort of patients to prove that it works.

Credit: 
The Translational Genomics Research Institute

Health: Vegetarian diet linked with lower risk of urinary tract infections

A vegetarian diet may be associated with a lower risk of urinary tract infections (UTIs), a study in Scientific Reports suggests.

UTIs are usually caused by gut bacteria, such as E. coli, which enter the urinary tract through the urethra and affect the kidneys and bladder. Previous research has shown that meat is a major reservoir for E. coli strains known to cause UTIs, but it is unknown whether avoiding meat reduces the risk of UTIs.

Chin-Lon Lin and colleagues assessed the incidence of UTIs in 9,724 Buddhists in Taiwan, who participated in the Tzu Chi Vegetarian Study, a study investigating the role of a vegetarian diet on health outcomes in Taiwanese Buddhists. The authors found that the overall risk of UTIs was 16% lower in vegetarians than in non-vegetarians. Of the 3,040 vegetarians in the study, 217 developed a UTI compared to 444 UTI cases in 6,684 non-vegetarians studied. The reduced UTI risk associated with a vegetarian diet was greater in men than women, although overall UTI risk for men was 79% lower than for women, regardless of diet.

The authors suggest that by not eating common sources of E. coli, such as poultry and pork, vegetarians may avoid ingesting E. coli that may cause UTIs. They also propose that the higher fibre diet of many vegetarians may prevent the growth of E. coli in the gut and decrease UTI risk by making the intestine more acidic.

Credit: 
Scientific Reports

Brain's 'GPS system' toggles between present and possible future paths in real time

Survival often depends on animals' ability to make split-second decisions that rely on imagining alternative futures: If I'm being chased by a hungry predator, do I zag left to get home safely or zig right to lead the predator away from my family? When two paths diverge in a yellow wood, which will lead me to breakfast and which will lead me to become breakfast? Both look really about the same, but imagination makes all the difference.

In a study of rats navigating a simple maze, neuroscientists at UC San Francisco have discovered how the brain may generate such imagined future scenarios. The work provides a new grounding for understanding not only how the brain makes decisions but also how imagination works more broadly, the researchers say.

"One of the brain's most amazing abilities is to imagine things that aren't right in front of it," said Loren Frank, PhD, a professor of physiology and Howard Hughes Medical Institute investigator in the UCSF Center for Integrative Neuroscience, co-director of the UCSF Kavli Institute for Fundamental Neuroscience, and member of the UCSF Weill Institute for Neurosciences. "Imagination is fundamental to decision-making, but so far neuroscience hasn't given a good explanation of how the brain generates imagined futures in real time to inform various kinds of everyday decisions -- while keeping track of reality at the same time."

In the new study, published January 30, 2020, in Cell, Frank's team had rats explore an M-shaped maze, while recording the firing of neurons in the hippocampus called "place cells", which are traditionally thought to keep track of an animal's location -- like a neural GPS system. But as the rats approached a fork in the maze, the researchers discovered that their place-cell activity began to switch back and forth extremely rapidly -- at a rate of eight times per second -- between representing the animal's current position and its two alternative future paths, as if to say: "Here I am -- go left? -- here I am -- go right?"

The team also extended this finding to another type of imagined scenario. Apart from location, place cells have also been known to keep track of an animal's travel direction. The team found that place cells representing opposite travel directions could also switch back and forth extremely rapidly, as if to say, "I'm going this way, but I could also turn around and go the other way."

"The cells' fast switching between present and possible paths was unmistakable because it was so regular," said Kenneth Kay, PhD, a post-doctoral researcher at Columbia University who led the study as a graduate student in Frank's lab. "It was exciting to see because speed plus consistency is exactly what's needed in any number of real-world settings, for both animals and humans."

The place cells' oscillations between the present and possible futures didn't appear to be directly controlling rats' decisions about which path to choose, but did become stronger as the rats approached the decision point, Kay and colleagues found. This suggested to the researchers that the role of the hippocampus in decision-making might be to generate a "menu" of imagined scenarios for other parts of the brain that can associate these options with past experience of their value or potential danger, then make an appropriate decision based on the animal's current drives -- hungry or thirsty, fearful or bold.

"We think this shows that the hippocampus is not just responsible for the recording the past and processing the present, but for imagining the future as well," Frank said. "This study is just a first step, but it opens new avenues for us to study how imagined scenarios are generated and evaluated in the brain as animals make decisions."

Study Suggests New Conception of Hippocampus as Source of Imagination

The hippocampus, a seahorse-shaped structure found on each side of the brain deep in the temporal lobes, is among the most intensively studied parts of the brain.

Hippocampal damage -- whether by brain injury or in a disease such as Alzheimer's -- robs people of the ability to form new memories, leading 20th century scientists to describe the hippocampus as the brain's memory center. In the 1970s, scientists identified hippocampal place cells, which spontaneously create maps of new environments as animals explore them, then store these maps for later use. This discovery, which was awarded the 2014 Nobel Prize in Physiology or Medicine, prompted scientists to recognize that the hippocampus is also a navigation center -- responsible for, say, allowing an animal to find its way back to the place where it remembers eating those delicious blackberries last summer.

Along these lines, previous work by Frank and others has shown that place cell activity can replay an animal's recent movements or even anticipate where an animal may be headed next, but such activity had only been seen intermittently -- typically when animals were resting or pausing during ongoing movement, as if actively considering their next move.

The new Cell study is the first to show how hippocampal cells can represent different hypothetical scenarios consistently and systematically over time. Such a system could allow animals on the move to make extremely rapid decisions in the moment based on these imagined alternatives while also keeping track of the animal's present reality, the researchers say. It could even play a role in the brain's ability to generate hypothetical scenarios or thoughts more broadly.

"The regular switching between present and possible -- or actual and imagined -- looks like be a robust system for generating lots of ideas, not just for mechanically remembering or predicting," Kay said. "The hippocampus could be at the root of our ability to imagine."

Credit: 
University of California - San Francisco

Researchers combine X-rays and laser light to image sprays

image: Researchers from Lund University developed an imaging method that provides an unprecedented view of sprays such as the ones used for liquid fuel combustion. Pictured (from the left) are PhD student Kristoffer Svendsen, postdoctoral researcher Diego Guénot, group leader at the Division of Combustion Physics Edouard Berrocal, group leader at the Division of Atomic Physics Olle Lundh and PhD student Jonas Björklund Svensson.

Image: 
Edouard Berrocal, Lund University

WASHINGTON -- Researchers have developed a new laser-based method that provides an unprecedented view of sprays such as the ones used for liquid fuel combustion in vehicle, ship and plane engines. The technique could provide new insights into these atomizing sprays, which are also used in a variety of industrial processes such as painting and producing food powders and drugs.

"We developed a new imaging method to better understand the transition from liquid to gas that occurs before fuel combustion," said research team leader Edouard Berrocal from the Division of Combustion Physics, Department of Physics at Lund University in Sweden. "This information could be used to develop smarter fuel injection strategies, better fuel-air mixing, more efficient combustion and, ultimately, reduce pollutant emissions from combustion devices typically used for transportation."

In Optica, The Optical Society's journal for high impact research, Berrocal and colleagues from the Department of Physics' Division of Atomic Physics describe a novel approach that combines x-rays and laser-induced fluorescence to observe and quantify atomizing spray phenomena that were not previously accessible. The fluorescence images provide details on the sprayed liquid's form, including its size and shape, while the x-ray radiographs quantify how the liquid is distributed.

"Usually, images of atomizing sprays are blurry and don't contain information about the spray's interior," said Diego Guénot, first author of the paper. "Our new imaging approach solves these problems and can even detect smaller amounts of liquid than have ever been detected before with x-rays."

Seeing into a spray

Sprays are very difficult to visualize with normal light because their thousands of small droplets scatter light in all directions. X-rays, however, pass through the droplets with little scattering and are instead absorbed, making it possible to measure the amount of liquid present by detecting how much x-ray radiation is transmitted through the spray.
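In simplified terms, the liquid amount along each x-ray path can then be recovered by inverting Beer-Lambert attenuation. The sketch below illustrates that inversion; the attenuation coefficient and intensity values are illustrative assumptions, not figures from the study.

```python
import numpy as np

def equivalent_liquid_path_length(i_transmitted, i_incident, mu_liquid):
    """Invert Beer-Lambert attenuation, I = I0 * exp(-mu * L), to estimate the
    equivalent path length L of liquid traversed by the x-rays.

    i_transmitted : intensity measured behind the spray (scalar or array)
    i_incident    : reference intensity measured without the spray
    mu_liquid     : linear attenuation coefficient of the liquid at the x-ray
                    energy used (the value below is illustrative only)
    """
    transmission = np.asarray(i_transmitted, dtype=float) / i_incident
    return -np.log(transmission) / mu_liquid

# Example: 5% attenuation and an assumed mu of 0.5 per mm gives ~0.1 mm of liquid.
length_mm = equivalent_liquid_path_length(i_transmitted=0.95, i_incident=1.0, mu_liquid=0.5)
print(f"Equivalent liquid path length: {length_mm:.3f} mm")
```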

This type of analysis usually requires x-rays generated by large synchrotrons, which are available at only a few specialized facilities around the world. However, the researchers overcame this barrier by using a new table-top laser-plasma accelerator developed by Olle Lundh's team in the Division of Atomic Physics. It was designed to produce x-rays tailored for high-resolution and time-resolved x-ray imaging.

"Even though they are much smaller than a synchrotron, the new laser accelerators produce x-rays in the right energy range to be absorbed by liquids and can deliver it in femtosecond pulses that essentially freeze the spray motion for imaging," said Lundh. "Also, the x-ray flux is high enough to produce a good signal over a wide area."

In the laser-plasma accelerator, x-rays are generated by focusing an intense femtosecond laser pulse into a gas or a preformed plasma. The researchers also used these femtosecond laser pulses to perform two-photon fluorescence imaging. This fluorescence approach is often used in life science microscopy to provide high contrast images of submillimeter areas but has rarely been used to image sprays, which usually require an imaging area of a few square centimeters.

"Two-photon imaging of a relatively large area requires higher energy, ultrashort laser pulses," said Berrocal. "The fact that we used an intense femtosecond laser beam to generate x-rays meant we could simultaneously perform x-ray and two-photon fluorescence imaging. Performing these two imaging modalities at the same time with a relatively large viewed area has not been done before."

Getting a clear view

The researchers first tested the technique by generating x-rays and placing a spray in front of the x-ray camera. With the first image, it was immediately apparent that the spray could be clearly visualized. The researchers then modified the setup to add the two-photon fluorescence imaging. Using the combined technique to image water jets created by an automotive fuel injector produced a higher measurement sensitivity than has been achieved with the large synchrotron x-ray sources.

"This imaging approach will make studying sprays much easier for both academic and industry researchers because they will be able to perform studies, not only at the handful of synchrotron facilities, but also at various laser plasma accelerator laboratories over the world." explained Guénot.

The researchers plan to expand the technique to obtain 3D images of sprays and study how they evolve over time. They also want to apply it to more challenging and realistic sprays such as biodiesel or ethanol direct-injection sprays as well as for spray systems used for gas turbines.

Credit: 
Optica

UNC Lineberger discovery would allow researchers to fine-tune CAR-T activity

CHAPEL HILL -- A discovery by University of North Carolina Lineberger Comprehensive Cancer Center researchers could allow scientists to fine-tune genetically engineered immune cells to heighten their killing power against tumors or to decrease their activity level in the case of severe side effects.

In a study published in Cancer Cell, researchers led by UNC Lineberger's Gianpietro Dotti, MD, reported new findings about the regulation of co-stimulatory molecules that could be used to activate cancer-killing immune cells - chimeric antigen receptor T-cells, or CAR-T - or decrease their activity.

"In immunology, it's always about balance; you don't want to have too much T-cell activation, and you don't want T-cell activation to be too low," said Peishun Shou, PhD, postdoctoral research associate at UNC Lineberger and the study's co-first author. "We wanted to keep the T-cell activation and tumor killing at a suitable or sustainable level."

Cellular immunotherapy, or CAR-T immunotherapy, involves extracting specific immune cells from patients, engineering the cells in the lab to hunt tumor cells displaying a specific molecular target, and then re-infusing them to fight their cancer.

Through the Clinical Immunotherapy Program, UNC Lineberger researchers have designed novel investigational CAR-T therapies for Hodgkin and non-Hodgkin lymphoma, multiple myeloma, neuroblastoma and leukemia that are being studied in clinical trials.

"We are conducting and developing clinical studies with CAR-T cells in both liquid and solid tumors. In these studies, we are testing what we call the 'new generation' of CAR-T cells, hoping to further enhance the therapeutic index of this technology," said Dotti, the study's corresponding author, a professor in the UNC School of Medicine Department of Microbiology and Immunology and director of the UNC Lineberger Cellular Immunotherapy Program. "This latest study highlights how when translational and basic science come together, we can hopefully improve therapeutic strategies."

In the Cancer Cell study, researchers revealed new strategies for engineering investigational CAR-T to either increase the activity of modified T-cells to more effectively kill tumor cells or decrease their activity in case the therapies trigger severe side effects.

They developed strategies for improving two different types of modified T-cells. These two types of CAR-T cells are differentiated by the signals that activate them. First, they have a receptor that recognizes a specific marker on the tumor - the first signal. They also need a second signal that helps to fully activate them and increase their response. The two types of CAR-T cells rely on different "second signals" to provide this activation.

One type of CAR-T is co-stimulated by the CD28 protein, and another is stimulated by 4-1BB. UNC Lineberger researchers wanted to find a way to regulate these proteins in order to "fine-tune" the cells' disease-fighting response, since researchers reported each type of CAR-T has differences in terms of how long it typically lasts in the body to fight cancer, how quickly it responds and the strength of its response.

"T-cells have to be activated to kill tumor cells," Shou said. "If you have better activation, you have more cytokine release ... and the cells can better target a tumor and kill it. In some cases, we want to make the T cells stronger, more active, and depending on the tumor type, we may want to tune down the T-cell activation to help the T-cells survive and expand."

For CAR-T co-stimulated by 4-1BB, scientists found they could increase expression of the LCK molecule to increase the cells' activity.

"What we found is that the LCK molecule can bind to the CAR, enhancing the CAR-T cell activation and signaling transduction, which therefore will help CAR-T cells get a better tumor-killing effect," Shou said.

They also reported on the discovery of a new "safety switch" mechanism to reduce activity of CAR-T co-stimulated by CD28. Doctors could use the safety switch should patients experience severe side effects from the experimental therapy.

They found they could use a molecule called SHP1 to reduce T-cell activity. When they added a certain drug, SHP1 bound to the CAR to reduce the activity of CAR-T cells.

"In the presence of the drug, we can cool down or tune down the CAR-T cell activation," Shou said. "The advantage of this switch is that it will not kill the CAR-T cells; it's just temporarily tuning down the activity."

Researchers want to investigate using these findings to improve CAR-T treatments against blood cancers like leukemia, and to potentially improve experimental treatments for solid tumors.

"Researchers in the CAR-T immunotherapy field now want to solve the solid tumor problem," Shou said. "Solid tumors have an immunosuppressive microenvironment, so you need stronger CAR-T activation."

Credit: 
UNC Lineberger Comprehensive Cancer Center

Machine learning automates identification of crystal structures in new materials

Providing a method for eliminating some of the guesswork from crystal structure determination, a machine learning-based approach to determining crystal symmetry and structure from unknown samples may greatly improve the speed and accuracy of this process. The new method brings crystallography into the high-throughput world of artificial intelligence (AI). From geology to biology to materials science, identifying a material's crystal structure is critical to understanding its general characteristics and properties. Electron backscatter diffraction (EBSD) is the standard technique for identifying crystal structure. However, while powerful, EBSD requires user input concerning critical elements of structure, such as crystal phase, which can be both time-consuming and prone to error. A fully autonomous alternative to this hands-on crystallography would open the door to high-throughput evaluation of a material's properties, according to the authors. Kevin Kaufmann and colleagues developed an autonomous, machine learning-based method for rapidly determining crystal structure from EBSD data with high accuracy. The authors used a convolutional neural network to identify unique crystal symmetries in EBSD pattern images using the same symmetry features a crystallographer would use. According to the results, the trained algorithm was capable of accurately identifying and classifying various aspects of crystal structure from diffraction patterns of materials it was not trained on, with almost no human input. The platform opens the door to high-throughput determination of structures across multiple fields.

Credit: 
American Association for the Advancement of Science (AAAS)

Machine learning technique speeds up crystal structure determination

image: Illustration of the inner workings of a convolutional neural network that computes the probability that the input diffraction pattern belongs to a given class (e.g. Bravais lattice or space group).

Image: 
Vecchio lab/Science

Nanoengineers at the University of California San Diego have developed a computer-based method that could make it less labor-intensive to determine the crystal structures of various materials and molecules, including alloys, proteins and pharmaceuticals. The method uses a machine learning algorithm, similar to the type used in facial recognition and self-driving cars, to independently analyze electron diffraction patterns, and do so with at least 95% accuracy.

The work is published in the Jan. 31 issue of Science.

A team led by UC San Diego nanoengineering professor Kenneth Vecchio and his Ph.D. student Kevin Kaufmann, who is the first author of the paper, developed the new approach. Their method involves using a scanning electron microscope (SEM) to collect electron backscatter diffraction (EBSD) patterns. Compared to other electron diffraction techniques, such as those in transmission electron microscopy (TEM), SEM-based EBSD can be performed on large samples and analyzed at multiple length scales. This provides local sub-micron information mapped to centimeter scales. For example, a modern EBSD system enables determination of fine-scale grain structures, crystal orientations, relative residual stress or strain, and other information in a single scan of the sample.

However, the drawback of commercial EBSD systems is the software's inability to determine the atomic structure of the crystalline lattices present within the material being analyzed. This means a user of the commercial software must select up to five crystal structures presumed to be in the sample and then the software attempts to find probable matches to the diffraction pattern. The complex nature of the diffraction pattern often causes the software to find false structure matches in the user selected list. As a result, the accuracy of the existing software's determination of the lattice type is dependent on the operator's experience and prior knowledge of their sample.

The method that Vecchio's team developed does this all autonomously, as the deep neural network independently analyzes each diffraction pattern to determine the crystal lattice, out of all possible lattice structure types, with a high degree of accuracy (greater than 95%).
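As a rough sketch of the kind of classifier described here, the toy network below maps a diffraction pattern image to a probability over crystal-structure classes; the architecture, input size and class count are illustrative placeholders, not the authors' actual model.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 14  # e.g. the 14 Bravais lattices; the study's actual label sets may differ

class EBSDClassifier(nn.Module):
    """Toy convolutional network mapping an EBSD diffraction pattern image
    to logits over crystal-structure classes (illustrative architecture)."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of grayscale diffraction patterns, shape (N, 1, H, W)
        h = self.features(x).flatten(1)
        return self.classifier(h)  # apply softmax to obtain class probabilities

# Usage: classify a batch of four 128x128 patterns (random data as a stand-in).
model = EBSDClassifier()
patterns = torch.rand(4, 1, 128, 128)
probs = torch.softmax(model(patterns), dim=1)
print(probs.argmax(dim=1))  # predicted lattice class index for each pattern
```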

A wide range of research areas including pharmacology, structural biology, and geology are expected to benefit from using similar automated algorithms to reduce the amount of time required for crystal structural identification, researchers said.

Credit: 
University of California - San Diego

In Cuba, cleaner rivers follow greener farming

video: University of Vermont geologist Paul Bierman discusses the background, results and implications of a new study of water quality in 25 rivers in Cuba that he co-led with a team of Cuban and US scientists. 'We found a lot less nutrients are coming off the Cuban landscape than are coming off the American landscape,' he says. The new study, '¡Cuba! River Water Chemistry Reveals Rapid Chemical Weathering, the Echo of Uplift, and the Promise of More Sustainable Agriculture,' was published in the journal GSA Today, from the Geological Society of America.

Image: 
Ian Thomas Jansen-Lonnquist

When the Soviet Union collapsed in the early 1990s, food production on the island of Cuba was disrupted--as the supply of Russian fertilizers, pesticides, tractors, and oil dried up. Under the stress of an imminent food crisis, the island quickly built a new form of diversified farming--including many urban organic gardens--that depended less on imported synthetic chemicals. Over the last two decades, Cuba blossomed into a world-class showcase of conservation agriculture, with improved soils and cleaner water.

At least that's been a popular story among journalists.

Now--for the first time in more than fifty years--a team of Cuban and U.S. field scientists have worked together to rigorously test a key aspect of this story: the impacts of contemporary agriculture on water quality in Cuba's rivers. Despite centuries of sugarcane plantations and other intensive farming, the international team discovered that none of the rivers they explored show deep damage.

Instead, the scientists measured much lower nutrient concentrations in all the twenty-five Cuban rivers they studied than are found in the U.S.'s Mississippi River. And they think Cuba's transition toward sustainable agriculture--and its reduced use of fertilizers on cropland--may be a primary cause.

"A lot of stories about the value of Cuba's shift to conservation agriculture have been based on fuzzy, feel-good evidence," say University of Vermont geologist Paul Bierman, who co-led the new research, "this study provides hard data that a crucial part of this story is true."

Bierman and geoscientist Amanda Schmidt from Oberlin College led the American half of the international team, while Rita Yvelice Sibello Hernández, a scientist with CEAC (Centro de Estudios Ambientales de Cienfuegos), an ecological research group, headed up the Cuban effort with CEAC science director Carlos Alonso-Hernández.

The new study, "¡Cuba! River Water Chemistry Reveals Rapid Chemical Weathering, the Echo of Uplift, and the Promise of More Sustainable Agriculture," was published January 30, in the early online edition of the journal GSA Today, the leading publication of the Geological Society of America.

POLLUTION PROBLEMS

The scientists from both countries worked side-by-side as one team doing extensive fieldwork--with support from the U.S. National Science Foundation--and then coordinated lab work and analysis to look at many measures of river water across central Cuba. The team found high levels of E. coli bacteria in the waters--likely the result of large numbers of livestock and Cuba's intensive use of horses and other draft animals for transportation and farm work.

However, the scientists also found much lower levels of phosphorus and nitrogen pollution in Cuban rivers than in the United States where intensive farming and chemical fertilizer use is widespread. The new study shows dissolved nitrogen levels in Cuban rivers running at roughly a quarter to a third of those found in the Mississippi River--where excess nitrogen is a primary engine of the dead zone in the Gulf of Mexico. "Cuban river waters provide evidence that agriculture need not overload rivers, and thus reservoirs and coastal zones, with nutrients," writes the 15-person research team that included seven Cuban scientists and students and eight U.S. scientists and students from UVM, Oberlin, and Williams College.

"This research can help the people of Cuba," said the CEAC's Rita Yvelice Sibello Hernández, "and may give a good example to other people in the Caribbean and all over the world."

SCIENTIFIC DIPLOMACY

Cuba is a motorboat trip from Florida--less than a hundred miles. And the island nation is the most populous in the Caribbean with more than 11 million citizens and a long and tortuous history of complex relations--cooperation and conflict--with the United States. But there has been vanishingly little collaboration between U.S. and Cuban scientists since the 1960s--much less than with other, more-potent geopolitical foes of the United States, from Iran to China.

"We have much to learn from each other," says Cuban scientist Alejandro Garcia Moya, a co-author on the new study. The kind of river data that the team collected "are needed to guide sustainable development in Cuba, and by example, in other tropical and island nations," the team writes. Not only did the U.S. team provide important technical expertise and verification of results--but the joint research reveals that Cuba also has a lot of opportunity to improve its river water quality. The new study points toward the need for improved management strategies to reduce animal manure and sediment loads going into rivers--such as fencing to keep cattle off river banks--that "could further and rapidly improve central Cuban river water quality," the scientists note.

Conversely, "Cuba has been having a forced experiment in organic agriculture since the late 1980s," says Oberlin's Amanda Schmidt. "So Cuba is a very interesting place to look at the effects of both conventional agriculture and the effects of organic agriculture at a national scale,"--and may suggest pathways to improve U.S. agriculture. Fertilizer use in Cuba peaked in 1978 and has been lower since, according to World Bank and other data. U.S. fertilizer use spiked after the 1960s and has remained at more than twice the Cuban use rate.

"There's a takeaway we bring back to the U.S.: our river waters do not need to look the way they do," says Paul Bierman--a professor in UVM's Geology Department, Rubenstein School of Environment and Natural Resources, and Gund Institute for Environment--"we can manage fertilizer differently." There are, of course, complex questions about yields, farm policy and more, but this newly reported data on the low levels of nutrient pollution found in twenty-five Cuban rivers, "suggests the benefits of Cuba's shift to conservation agriculture after 1990," the US/Cuban team writes, "and provides a model for more sustainable agriculture worldwide."

Credit: 
University of Vermont

ASU scientists boost gene-editing tools to new heights in human stem cells

image: David Brafman's Arizona State University lab has developed TREE (transient reporter for editing enrichment), a method that allows for bulk enrichment of DNA base-edited cell populations--and, for the first time, high-efficiency base editing in human stem cell lines. The lab has used the technology to better understand the causes of Alzheimer's disease.

Image: 
Arizona State University

During the past decade, the gene-editing tool CRISPR has transformed biology and opened hopeful avenues to correct deadly inherited diseases. Last fall, scientists began the first human clinical trials using CRISPR to combat diseases like cancer. They remove some of a person's cells, edit the DNA with CRISPR, and then inject the cells back in, where, ideally, they will cure the disease.

But along with this promise of regenerative, personalized medicine, CRISPR can also have significant safety and efficiency limitations. It may not edit in the right place (so-called off-target effects), or it may not be terribly efficient (successful editing may be achieved in only about 10% of cells for a given target).

These limitations have frustrated scientists such as Arizona State University's David Brafman, a cell bioengineer. Brafman's initial hope was to use gene editing in his lab's studies to get at the root causes of neurodegenerative diseases like Alzheimer's.

"We study neurodegenerative diseases like Alzheimer's and use stem cells to study specific mutations or risk factors associated with Alzheimer's disease," said Brafman, a biomedical engineering faculty member in ASU's Ira A. Fulton Schools of Engineering. "We are not necessarily a gene-editing tool development lab, but we were running into difficulty generating stem cell lines by using a traditional CRISPR-based editing approach. For reasons that are still unknown, stem cells are really resistant to that sort of genetic modification."

Green light means go

Now Brafman, using a new update to the CRISPR base-editing technology originally developed in the lab of David Liu at Harvard, has vastly outperformed previous efforts, making highly accurate single-DNA-base edits in up to 90% of human stem cells. The results were published in the journal Stem Cell Reports.

"Previously, with CRISPR, it's just been a random guess," said Brafman. "And so, if you are picking at random stem cells and the efficiency is low, you'll likely get only 10% or 5% because you have no idea if the edits have been made --- the cell isn't telling you."

Brafman's lab has developed TREE (transient reporter for editing enrichment), a method that allows for bulk enrichment of DNA base-edited cell populations--and, for the first time, high-efficiency base editing in human stem cell lines.

""Most of the studies are done in immortalized cell lines or cancer cell lines, which are relatively easy to edit," said Brafman. "This is the first example of using base editors in pluripotent stem cells, which is a very valuable cell population to genetically modify. We envision this method will have important implications for the use of human stem cell lines in developmental biology, disease modeling, drug screening and tissue engineering applications,"

Last year, they showed that their TREE approach can work in human cell lines, but they wanted to push the technology further to find a way to rapidly and efficiently edit human stem cell lines.

Unlike standard CRISPR, which cuts across both DNA strands, the TREE method makes only a single-strand nick in the DNA. When a single DNA base is successfully edited from a C to a T, for example, a reporter protein switches its signal from blue to green.

"Now, if a cell is telling you, 'if I'm glowing green I have a 90% chance of being edited you are going to have better luck identifying edited populations," said Brafman. "Then, you can exclude all of the cells that are not edited. We isolate single cells that are glowing green, then grow those up into clonal populations that you are able to expand indefinitely."

Targeting Alzheimer's

Pluripotent stem cells are valued for regenerative medicine because they have the ability to become or differentiate into any cell type in the human body.

Brafman explains that there are two general sources, "embryonic stem cells, which are derived from the inner cell mass of a preimplantation blastocyst, and then there are induced pluripotent stem cells, which are derived from taking somatic cells like skin or blood from patients."

Brafman's lab uses the induced pluripotent stem cells for their research.

"For this study, we used pluripotent stem cells from both healthy patients and then patients with Alzheimer's disease. Some of the genes that we were interested in modulating are related to Alzheimer's disease. The majority of the patients suffering from Alzheimer's disease suffer from late onset, or sporadic Alzheimer's disease."

To provide their proof of concept, they targeted the APOE gene, which comes in three variants. One of the three, called APOE4, has been associated with a higher risk for late onset Alzheimer's disease. For the study, they introduced single DNA base edits into the APOE gene.

"That's why we are interested in having these cells," said Brafman. "They are representative of the neurons and the various cell types in the central nervous system with patients with these various risk factors. Then, we can understand why an APOE variant can increase or decrease risk, and then we can start targeting those pathways that are affected."

Not only could TREE make single DNA edits to the APOE4 gene; unlike conventional CRISPR, it could also make highly accurate corrections to both copies of the gene that humans possess.

"The traditional CRISPR approach is that you have to edit once to get a heterozygous edit , then isolate that clone, edit again to get another heterozygous edit," said Brafman. "So, it's very inefficient in that way. We are generating homozygous edits at an efficiency approaching 90%. I haven't seen any other technologies that can do that in pluripotent stem cells."

In addition, TREE could also be used to engineer critical gene knockout mutations into stem cell lines. "The most fundamental experiment you can do if a gene has important implications in disease, development or physiology is knock it out," said Brafman. "That opens up a whole bunch of questions that we can address. Using APOE as a case study, now we can knock out APOE in these cells and ask: if you don't have APOE at all, is it beneficial? Detrimental? Or no difference?"

Complex cases

While diseases like sickle-cell anemia or cystic fibrosis are caused by single mutations in DNA, most diseases and leading causes of death, like heart disease or high blood pressure, are complex and involve multiple genes. Brafman wanted to also address the complex, root causes of Alzheimer's.

"Especially as it related to Alzheimer's disease, there can be multiple risk factors that act in concert, so we wanted a way to introduce multiple edits simultaneously in pluripotent stem cells. Because otherwise, you would have to take this sequential iterative approach, where you introduce one edit, isolate a clonal population introduce another edit, and so on.

They successfully demonstrated that TREE could be used to make new stem cell lines that had been simultaneously edited at multiple gene locations. Their results showed that more than 80% of stem cell clones had been targeted at all three gene sites, with both gene copies edited in every clone.
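For a rough sense of what that multiplex figure implies, one can compare it with what independent, per-site editing would predict. The short calculation below is purely illustrative; the per-site efficiencies are assumptions, not values reported in the paper.

```python
# Purely illustrative: assumed per-site editing efficiencies, compared against
# the observation that >80% of clones carry edits at all three targeted sites.
for per_site in (0.90, 0.93, 0.95):
    all_three = per_site ** 3  # probability all three independent sites are edited
    print(f"per-site efficiency {per_site:.0%} -> all three sites edited in ~{all_three:.0%} of clones")
```

Under these assumed rates, three independent sites would all be edited in roughly 73% to 86% of clones, which is in line with the quote that follows: multiplexing does not appear to reduce per-site efficiency.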

"We found that if you multiplex you still get the same efficiency of editing as you would if you just edited a single allele," said Brafman. "Now, we can use these cells as in vitro models to study the disease and screen drugs."

Brafman is hopeful that their new tools will generate excitement in the gene editing community, and spur others on to make new discoveries.

"We want to keep expanding on that toolbox," said Brafman. "We've already gotten a high level of interest from other scientists who will be using this to generate their own cell lines. That's a good thing."

Credit: 
Arizona State University