Missing molecule hobbles cell movement

Cells missing a certain protein on their surface can't move normally, UConn researchers report in Science Signaling. The research could give insight into how cells move and repair wounds in normal tissue, as well as how cancer spreads through the body.

Cells are the body's workers, and they often need to move around to do their jobs. Frequently, a cell will move through a tissue - say, the wall of a blood vessel - the way a rock climber scales a cliff, using a protein called integrin to grab onto a spot and pull itself in that direction. When the cell moves forward, it releases the integrin grip at its rear and brings it inside itself for recycling to the front, where it is then reused to make a new grip and move forward.

This type of movement is important when cancers metastasize, breaking away from the primary tumor and spreading through the rest of the body. Cancer cells need to crawl through a tissue using integrin until they reach a blood vessel they can use to travel long distances. Disabling the integrin mode of transport might be one method of preventing cancer from spreading.

UConn Health vascular biologists Mallika Ghosh and Linda Shapiro wondered how a common protein found in a cell's skin, called the cell membrane, affected this type of movement. The protein, called CD13, spikes through the cell's membrane, with one end interacting with the inside of the cell and the other with the outside world. CD13 has many different functions, including binding a cell in place and helping cells communicate with each other.

To test CD13's role in cellular movement, Ghosh, Shapiro, and their colleagues first looked at mouse fibroblasts, a type of cell that makes the scaffolding that holds tissues and organs together. They added the fibroblasts to petri dishes filled with fibronectin, a material found outside of the cell that integrin grasps. Integrin, remember, is the protein that cells use to grab on and drag themselves through a tissue. Some of the fibroblasts were normal; others had had the gene for CD13 knocked out.

The researchers found that normal fibroblasts could move through the petri dish using their integrin method with no trouble, but CD13 knock-out fibroblasts couldn't move at all.

Then they stained the cell nucleus blue and the integrin on the cell surface green, and watched to see what happened. The normal fibroblasts pulled all their integrin inside, and after about two hours for recycling, it reappeared on the surface. The CD13 knock-out fibroblasts also pulled all their integrin inside after two hours, but the integrin never reappeared.

They tried the same experiment with human cervical cancer cells and got the same result. What appears to be happening is that CD13 acts as an organizer, gathering freshly recycled integrin and other necessary proteins at the cell membrane so they are ready to be pushed out when the cell needs to move.

"Without CD13, the integrins go inside and don't come back out," Shapiro says. The details of how CD13 gathers the integrin in the right place involve assembly of the cell's recycling machinery by the part of CD13 that extends inside the cell, in response to signals detected by the segment of CD13 that protrudes outside.

"And all these steps are critical for the cells to process information from the outside environment and move forward," Ghosh says.

The researchers are now looking at different versions of integrin proteins and various binding materials, such as collagen and laminin, to see whether CD13 plays the same role in cell movement in tissues that use those proteins for structure.

Credit: 
University of Connecticut

Researchers putting the brakes on lethal childhood cancer

Malignant rhabdoid tumor (MRT) is one of the most aggressive and lethal childhood cancers.

Although rare -- about 20 to 25 new cases are diagnosed annually in the United States -- there is no standard effective treatment for the disease, which is driven by loss of an anti-cancer protein called SNF5. The chances are very small that a child will survive a year after MRT diagnosis.

Now researchers at Vanderbilt University have discovered that a pro-cancer protein, MYC, is normally inhibited by SNF5. Loss of SNF5 effectively "takes the brakes off" MYC, thus accelerating cancerous growth.

Reporting this week in the journal Nature Communications, the researchers conclude that blocking MYC could be "unexpectedly effective" in treating MRT as well as other cancers driven by inactivation of SNF5.

"One of the difficulties in treating a cancer like MRT is that it's driven by the loss of a particular protein from the tumor cell," said William Tansey, PhD, Ingram Professor of Cancer Research and Professor of Cell and Developmental Biology.

"Showing that MYC is activated by SNF5 loss identifies a target you can conceivably go after in these cancers," he said.

Tansey, co-leader of the Genome Maintenance Research Program in the Vanderbilt-Ingram Cancer Center, is an internationally known expert on MYC, a family of three related proteins that are overexpressed in cancer and contribute to an estimated 100,000 cancer deaths annually in the United States.

MYC proteins function as transcriptional regulators, controlling the expression of thousands of genes linked to cell growth, proliferation, metabolism and genomic instability. The Tansey lab is focused on determining basic mechanisms of MYC action that can lead to new strategies to target MYC in the clinic.

Using biochemical and genomic approaches, Tansey and his colleagues demonstrated that SNF5 selectively inhibited binding of MYC to DNA, something that is required for its tumorigenic function. Accordingly, reintroduction of SNF5 into MRT cells also displaced MYC from chromatin (the complex of DNA, RNA and protein that form chromosomes), inhibiting pro-cancerous gene expression programs.

Credit: 
Vanderbilt University Medical Center

Justin Trudeau's French isn't bad; Quebecers just don't think he belongs

image: This is Yulia Bosworth, assistant professor of French linguistics at Binghamton University, State University of New York.

Image: 
Binghamton University, State University of New York

BINGHAMTON, N.Y. -- Quebec's criticism of Justin Trudeau's French serves to position him as an "outsider" to Quebecois identity, according to a professor at Binghamton University, State University of New York.

Yulia Bosworth, assistant professor of French linguistics at Binghamton, studied Quebec's "obsession" with Justin Trudeau's French, which pundits and scholars perceive as terrible and construe as a major failure on his part, both personal and professional. She determined that Quebec's criticism reflects its view of Canada's current prime minister as an outsider and its denial of him as a francophone, or someone who speaks French. Bosworth determined that Quebec's language attitudes and ideologies are dominated by linguistic purism and French monolingualism.

"This collective stance denying Justin Trudeau the status of a francophone speaks to the larger issue of what it takes to 'belong' in today's Quebec," said Bosworth. "It is not at all surprising that being a francophone serves as the basis for membership in Quebec's collective identity. What is controversial is the way that Quebec appears to construe what it means to be a francophone, which seems to be strongly biased in favor of monolingual French speakers of French-Canadian ancestry. This conceptualization of identity is likely to find itself at odds with Quebec's current and projected demographic situation, including the trend of increasing bilingualism, and its commitment to a pluralistic society that prizes diversity, by othering those who do not conform to this identity, which potentially describes a fairly large segment of Quebec's population, in particular, in urban settings."

Bosworth added that language is frequently used to discriminate against others.

"Language can be and often is used as a proxy for fostering negative judgments or making negative comments about an individual or a group," she said. "It is not okay to do so based on gender, race, sexual identity, etc., but criticizing the way someone speaks is not perceived in the same way. This is not a new idea, but it is crucial to our understanding of what I demonstrate to be Quebec's near-obsession with Trudeau's 'bad' French. Namely, language is used here as a way to criticize Trudeau as a politician and as an individual, with an underlying objective to position him as an outsider in Quebec, despite the fact that Trudeau himself strongly identifies with French and with Quebec, as evidenced in his personal interviews and his recent autobiography."

Bosworth investigated Quebec's fixation with the "bad" French of Trudeau and what it means in the larger context of identity, linguistic insecurity and Canadian bilingualism. Quebec's "obsession" with Trudeau's linguistic transgressions is demonstrated in an analysis of the harsh commentary generated in Quebec's mainstream press during the 2015 Canadian Federal Election. The intensely negative judgment, fostered and propagated by Quebec's educated elites, is shown to be rooted in the interplay of language ideologies and a complex sociolinguistic history of Quebec, which features the linguistic legacy of Trudeau's famous father, Pierre Elliott Trudeau. Bosworth said that Quebec's negative view of his father's politics influences the way they perceive Trudeau.

"In fact, it is not that the language makes them view him as an outsider, but that they use his French as a means to portray him as such," she said. "For better or worse, in Quebec, Justin Trudeau is inextricably linked to the legacy of his father, Pierre Elliott Trudeau, whose tenure left a bitter taste in the mouths of Quebecers, to say the least. Curiously, his Quebec-born father was criticized for not speaking like a Quebecer, because he made a choice to align his French with the so-called International French spoken by educated Parisian elites. Justin Trudeau's French, on the other hand, is very much Quebec French, but he does not get credit for the aspects of it that are viewed positively by Quebecers at large and is acrimoniously criticized for those that are viewed negatively, although widely present in the speech of many, if not most, Quebecers."

Bosworth analyzed the roots of this fixation and extreme negativity, explaining and "debunking" this collective behavior. She suggested that the criticism is partly conditioned by Quebec's long-standing adherence to linguistic purism, which stems from its historical struggle against widespread negative perceptions of Quebec French.

"The negative judgments are also conditioned by the long-standing tradition of linguistic purism and normativism, which tells speakers that there is only one 'correct' way to use the French language, that aligned with the usage of educated Parisian elites," she said. "Any deviation from this usage constitutes a violation, a deviation, incorrect usage. I show that when Trudeau is being accused of speaking French badly, it is either due to the use of anglicisms, or to the use of language - words, expressions, sentence structure and pronunciation - that characterize everyday use of an average Quebecer in a familiar setting."

Although Quebec's criticism of Trudeau's French was the most severe in 2015, Bosworth said the topic is still relevant today.

"On a larger scale, this forces us to draw a parallel between Quebec's denial of the status of francophone to Justin Trudeau and that of Quebec's other bilinguals, representing other French-speaking minorities in Canada, as well as immigrants from other French-speaking regions, strongly favored by Quebec's immigration practices, where French is spoken differently and carries with it cultural connotations that differ from those internalized by Quebecers of French-Canadian ancestry," said Bosworth. "Very recently, a discussion of Justin Trudeau's 'bad' French has made its way back into the media, albeit on a much more limited basis, and what is interesting about it this time around is that we see testimonies from other young English-French bilinguals representing French-speaking communities in Alberta, Manitoba, Ontario, who state that they very much identify with Justin Trudeau's French and its negative perception by Quebecers, because it strays from what Quebec views as 'correct,' 'good,' authentic French."

Bosworth plans to continue studying language attitudes and perceptions in Quebec and their connection to how Quebecers collectively construe identity.

The article, "The 'Bad' French of Justin Trudeau: When Language, Ideology, and Politics Collide," was published in the American Review of Canadian Studies.

Credit: 
Binghamton University

How to purify water with graphene

image: 1) Graphene oxide, added in water
2) Water after purification with graphene oxide
3) Graphene oxide 'flakes' with bacteria before extraction

Image: 
© NUST MISIS

Scientists from the National University of Science and Technology "MISIS", together with colleagues from Derzhavin Tambov State University and Saratov Chernyshevsky State University, have shown that graphene oxide is capable of purifying water, making it drinkable without further chlorination. By "capturing" bacterial cells, it forms flakes that can be easily extracted from the water. Graphene oxide separated by ultrasound can be reused. The research is published in Materials Science & Engineering C.

Graphene and graphene oxide (a more stable version of the material in colloidal solutions) are carbon nanostructures that are extremely promising for biomedicine. For example, they can be used for targeted drug delivery on graphene "scales" and for tumor imaging. Another interesting property of graphene and graphene oxide is their ability to destroy bacterial cells, even without the additional use of antibiotic drugs.

In their experiment, the scientists injected graphene oxide into two solutions containing E. coli: a nutrient medium and saline. Under the terms of the experiment, the saline "simulated" water, while the nutrient medium simulated the environment of the human body. The results showed that graphene oxide, together with both living and destroyed bacteria, forms flakes inside the solutions. The resulting mass can be easily extracted, leaving the water almost completely free of bacteria. If the extracted mass is then treated with ultrasound, the graphene oxide can be separated and reused.

"As working solutions, we chose a nutrient medium for the cultivation of bacteria (it is close to the natural habitat of bacteria), as well as ordinary saline, which is used for injections. As the test bacterial culture, we used E. coli modified with a luminescent agent to make the experiments easier to visualize," comments Aleksandr Gusev, one of the authors, Associate Professor at the NUST MISIS Department of Functional Nanosystems and High-Temperature Materials.

Graphene oxide was added to the solutions in different concentrations: 0.0025 g/l, 0.025 g/l, 0.25 g/l, and 2.5 g/l. As it turned out, even at the minimum concentration of graphene oxide, the antibacterial effect observed in the saline ("water") was significantly higher than in the nutrient medium ("human body"). The scientists believe this points to a biochemical rather than a purely mechanical mechanism of action: since there are far fewer nutrients in the saline, the bacteria moved more actively and were "captured" by the graphene oxide flakes more often.
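The four concentrations tested form a simple tenfold dilution series from the 2.5 g/l stock. A few lines of Python illustrate the spacing (this is purely illustrative, not code from the study):

```python
# Illustrative only: the reported graphene-oxide concentrations are a
# tenfold dilution series starting from the 2.5 g/l stock.
stock = 2.5  # g/l, highest concentration tested
series = [round(stock / 10**i, 4) for i in range(4)]
print(series)  # [2.5, 0.25, 0.025, 0.0025]
```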

According to the fluorescence test data, confirmed by laser confocal microscopy and scanning electron microscopy, at a graphene oxide concentration of 2.5 g/l the number of bacteria decreased severalfold compared with the control group and approached zero.

While it is not yet known exactly how the further destruction of bacteria occurs, researchers believe that graphene oxide provokes the formation of free radicals that are harmful to bacteria.

According to the scientists, if such a purification system is used for water, it will be possible to avoid additional chlorination. There are other advantages: decontamination with graphene oxide is inexpensive, and the technology is easy to scale to the format of large urban wastewater treatment plants.

Credit: 
National University of Science and Technology MISIS

DNA folds into a smart nanocapsule for drug delivery

video: The pH responsive nanocapsule.

Image: 
Linko, Shen, Ijäs/Aalto University

A new study by the University of Jyväskylä and Aalto University shows that nanostructures constructed of DNA molecules can be programmed to function as pH-responsive cargo carriers, paving the way towards functional drug-delivery vehicles.

Researchers from University of Jyväskylä and Aalto University in Finland have developed a customized DNA nanostructure that can perform a predefined task in human body-like conditions. To do so, the team built a capsule-like carrier that opens and closes according to the pH level of the surrounding solution. The nanocapsule can be loaded--or packed--with a variety of cargo, closed for delivery and opened again through a subtle pH increase.

The function of the DNA nanocapsule is based on pH-responsive DNA residues.

To make this happen, the team designed a capsule-like DNA origami structure functionalized with pH-responsive DNA strands. Such dynamic DNA nanodesigns are often controlled by the simple hydrogen-bonding of two complementary DNA sequences. Here, one half of the capsule was equipped with specific double-stranded DNA domains that could further form a DNA triple helix -- in other words, a helical structure composed of three, rather than the usual two, DNA strands -- by attaching to a suitable single-stranded DNA in the other half.

"The triplex formation can happen only when the surrounding pH of the solution is right. We call these pH-responsive strands 'pH latches', because when the strands interact, they function similarly to their macroscopic counterparts and lock the capsule in a closed state. We included multiple motifs in our capsule design to facilitate capsule opening/closing based on the cooperative behaviour of the latches. The opening of the capsule is actually very rapid and requires only a slight pH increase in the solution," explains the first author of the study, doctoral student Heini Ijäs from the Nanoscience Center at the University of Jyväskylä.

Nanoparticles and enzymes could be loaded and encapsulated within the capsules

To harness the nanocapsules for transporting molecular payloads or therapeutic substances, the team designed the capsule with a cavity that could host different materials. They demonstrated that both gold nanoparticles and enzymes could be loaded (at high pH), encapsulated within the capsules (at low pH), and then released again (at high pH). By monitoring the enzyme activity, the researchers found that the cargo remained fully functional over the course of the process.

'The most intriguing thing about the DNA origami capsules is that the threshold pH at which the opening and closing take place is fully adjustable by selecting the base sequences of the pH latches. We designed the threshold pH to be 7.2-7.3, close to the blood pH. In the future, this type of drug carrier could be optimized to selectively open inside specific cancer cells, which can maintain a higher pH than normal healthy ones,' says Veikko Linko, Adjunct Professor at Aalto University.

Further, the capsules remained functional at physiological magnesium and sodium concentrations, and in 10% blood plasma, and may continue to do so at even higher plasma concentrations. Together, these findings help pave the way for developing smart and fully programmable drug-delivery vehicles for nanomedicine.

The work was carried out in Professor Mauri Kostiainen's laboratory and led by Veikko Linko, both based at Aalto University.

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

Children, their parents, and health professionals often underestimate children's higher weight status

More than half of parents underestimated their children's classification as overweight or obese -- and children themselves and health professionals often share this misperception, according to new research being presented at this year's European Congress on Obesity (ECO) in Glasgow, UK (28 April-1 May).

The systematic review and meta-analysis, synthesising the available evidence from the scientific literature, included 87 studies conducted worldwide between 2000 and 2018, involving 24,774 children aged 0-19 years old and their parents.

"Despite attempts to raise public awareness of the obesity problem, our findings indicate that underestimation of child higher weight status is very common", says Abrar Alshahrani from the University of Nottingham, UK, who led the research.

"This misperception is important because the first step for a health professional in supporting families is a mutual recognition of higher weight status. This is particularly important for the child themselves, the parents, and the health professionals who look after them. Our study also found a tendency for health professionals to underestimate weight which suggests that overweight children may not be offered the support they need to ensure good health."

Worldwide, there has been a more than 10-fold increase in the number of children and adolescents with obesity in the past four decades, rising from 5 million girls in 1975 to 50 million in 2016, and from 6 million to 74 million for boys. In Europe, 19-49% of boys and 18-43% of girls are overweight or have obesity, representing approximately 12-16 million overweight youth, very few of whom receive adequate treatment [1].
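As a quick sanity check on the figures quoted above (simple arithmetic on the text's numbers, not an analysis from the study itself), the combined increase across girls and boys is indeed more than tenfold:

```python
# Children and adolescents with obesity, in millions, per the text.
girls_1975, girls_2016 = 5, 50
boys_1975, boys_2016 = 6, 74

# Combined fold increase from 1975 to 2016.
fold = (girls_2016 + boys_2016) / (girls_1975 + boys_1975)
print(round(fold, 1))  # 11.3 -- consistent with "more than 10-fold"
```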

Previous research in adults has shown that accuracy of body weight perception is associated with lifestyle behaviours, efforts to lose weight, and medical visits.

In this study, Alshahrani and colleagues from the University of Nottingham investigated the prevalence of, and risk factors associated with, underestimation of children's higher weight status. They conducted a systematic review and meta-analysis of qualitative and quantitative studies that assessed caregivers', children's, and healthcare professionals' perceptions of children's weight and compared these with recognised medical standards for defining overweight, including the International Obesity Task Force cut-offs based on anthropometric measurements (height, weight, and waist and hip circumference).

Results showed that over half (55%) of parents underestimated the degree of overweight in their children, whilst over a third (34%) of children and adolescents also underestimated their own weight status. Healthcare professionals shared this misperception, but limited studies prevented quantification.

Parents of younger children were less likely to perceive their child as overweight, and were less accurate at judging the weight of boys than girls.

Additionally, parents who were overweight themselves, and with less education, were also less likely to accurately assess their child's higher weight. The authors note that ethnicity and cultural norms may also have an effect on parental misperception, as some cultures prefer a larger body type and may not identify their child as overweight.

Interestingly, in qualitative studies, parents commonly described their children as "big boned", "thick", or "solid" rather than using the medical term obese, and expressed a strong desire to avoid labelling their child with medical terminology.

"Identifying weight problems in childhood and adolescence is a unique window of opportunity to have a lifetime impact on health", says Alshahrani. "The results suggest that underestimation of child overweight status is highly prevalent. Addressing the factors which lead to inaccuracy in assessing child weight will have a positive impact on communication between children, parents, and health professionals, and aid the mutual recognition of children's higher weight status."

The authors acknowledge that their findings show observational associations, so conclusions about cause and effect cannot be drawn. They point to several limitations, including the lack of statistical examination of healthcare professionals' perceptions due to the scarcity of relevant studies, and potential gender bias as the majority of studies only examined mother-child body weight perceptions.

Credit: 
European Association for the Study of Obesity

Diamonds reveal how continents are stabilized, key to Earth's habitability

image: A raw diamond from Sierra Leone with sulfur-containing mineral inclusions.

Image: 
Courtesy of the Gemological Institute of America.

Washington, DC-- The longevity of Earth's continents in the face of destructive tectonic activity is an essential geologic backdrop for the emergence of life on our planet. This stability depends on the underlying mantle attached to the landmasses. New research by a group of geoscientists from Carnegie, the Gemological Institute of America, and the University of Alberta demonstrates that diamonds can be used to reveal how a buoyant section of mantle beneath some of the continents became thick enough to provide long-term stability.

"We've found a way to use traces of sulfur from ancient volcanoes that made their way into the mantle and eventually into diamonds to provide evidence for one particular process of continent building," explained Karen Smit of the Gemological Institute of America, lead author on the group's paper, which appears this week in Science. "Our technique shows that the geologic activity that formed the West African continent was due to plate tectonic movement of ocean crust sinking into the mantle."

Diamonds may be beloved by jewelry collectors, but they are truly a geologist's best friend. Because they originate deep inside the Earth, tiny mineral grains trapped inside of a diamond, often considered undesirable in the gem trade, can reveal details about the conditions under which it formed.

"In this way, diamonds act as mineralogical emissaries from the Earth's depths," explained Carnegie co-author Steve Shirey.

About 150 to 200 kilometers (93 to 124 miles) beneath the surface, geologic formations called mantle keels act as stabilizers for the continental crust. The material that comprises them must thicken, stabilize, and cool under the continent to form a strong, buoyant keel that is fundamental for preserving the surface landmass against the relentless destructive forces of Earth's tectonic activity. But how this is accomplished has been a matter of debate in the scientific community.
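The mile figures above follow from the standard kilometer-to-mile conversion (a trivial check, not part of the research):

```python
# 1 km is approximately 0.621371 mi (standard conversion factor).
KM_TO_MI = 0.621371
depths_km = (150, 200)
depths_mi = [round(d * KM_TO_MI) for d in depths_km]
print(depths_mi)  # [93, 124], matching the range given in the text
```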

"Solving this mystery is key to understanding how the continents came to exist in their current incarnations and how they survive on an active planet," Shirey explained. "Since this is the only tectonically active, rocky planet that we know, understanding the geology of how our continents formed is a crucial part of discerning what makes Earth habitable."

Some scientists think mantle keels form by a process called subduction, by which oceanic plates sink from the Earth's surface into its depths when one tectonic plate slides beneath another. Others think keels are created by a vertical process in which plumes of hot magma rise from much deeper in the Earth.

A geochemical tool that can detect whether the source of a mantle keel's makeup originated from surface plates or from upwelling of deeper mantle material was needed to help resolve this debate. Luckily, mantle keels have the ideal conditions for diamond formation. This means scientists can reveal a mantle keel's origin by studying inclusions from diamonds that formed in it.

The research group's analysis of sulfur-rich minerals, called sulfides, in diamonds mined in Sierra Leone indicates that the region experienced two subduction events during its history.

They were able to make this determination because the chemistry of the sulfide mineral grains is only seen in samples from Earth's surface more than 2.5 billion years ago--before oxygen became so abundant in our planet's atmosphere. This means that the sulfur in these mineral inclusions must have once existed on the Earth's surface and was then drawn down into the mantle by subduction.

The team's comparison to diamonds from Botswana showed similar evidence of keel-creation through subduction. But comparison to diamonds mined from northern Canada does not show the same sulfur chemistry, meaning that the mantle keel in this region originated in some way that did not incorporate surface material.

The group's findings suggest that thickening and stabilization of the mantle keel beneath the West African continent happened when this section of mantle was squeezed by collision with the sinking ocean floor material. This method of keel thickening and continent stabilization is not responsible for forming the keel under a portion of northern Canada. The sulfide minerals inside Canadian diamonds do not tell the researchers how this keel formed, only how it didn't.

"Our work shows that sulfide inclusions in diamonds are a powerful tool to investigate continent construction processes," Smit concluded.

Credit: 
Carnegie Institution for Science

Fixing a broken heart: Exploring new ways to heal damage after a heart attack

For people who survive a heart attack, the days immediately following the event are critical for their longevity and long-term healing of the heart's tissue. Now researchers at Northwestern University and University of California, San Diego (UC San Diego) have designed a minimally invasive platform to deliver a nanomaterial that turns the body's inflammatory response into a signal to heal rather than a means of scarring following a heart attack.

Tissue engineering strategies to replace or supplement the extracellular matrix that degrades following a heart attack are not new, but most promising hydrogels cannot be delivered to the heart using minimally invasive catheter delivery because they clog the tube. The Northwestern-UC San Diego team has demonstrated a novel way to deliver a bioactivated, biodegradable, regenerative substance through a minimally invasive catheter without clogging.

The research, which was conducted in vivo in a rat model, was published recently in the journal Nature Communications. Northwestern's Nathan C. Gianneschi and UC San Diego's Karen Christman are the co-principal investigators.

"This research centered on building a dynamic platform, and the beauty is that this delivery system now can be modified to use different chemistries or therapeutics," Gianneschi said.

Gianneschi is the Jacob and Rosaline Cohn Professor in the department of chemistry in the Weinberg College of Arts and Sciences and in the departments of materials science and engineering and of biomedical engineering in the McCormick School of Engineering.

When a person has a heart attack, the extracellular matrix is stripped away and scar tissue forms in its place, decreasing the heart's functionality. Because of this, most heart attack survivors have some degree of heart disease, the leading cause of death in America.

"We sought to create a peptide-based approach because the compounds form nanofibers that look and mechanically act very similar to native extracellular matrix. The compounds also are biodegradable and biocompatible," said first author Andrea Carlini. She is now a postdoctoral fellow in the lab of John Rogers, in Northwestern's department of materials science and engineering.

"Most preclinical strategies have relied on direct injections into the heart, but because this is not a feasible option for humans, we sought to develop a platform that could be delivered via intracoronary or transendocardial catheter," said Carlini, who was a graduate student in Gianneschi's lab when the study was conducted.

Peptides are short chains of amino acids instrumental for healing. The team's approach relies on a catheter to deliver self-assembling peptides -- and eventually a therapeutic -- to the heart following myocardial infarction, or heart attack.

"What we've created is a targeting-and-response type of material," said Gianneschi, associate director of Northwestern's International Institute of Nanotechnology and a member of the Simpson Querrey Institute.

"We inject a self-assembling peptide solution that seeks out a target -- the heart's damaged extracellular matrix -- and the solution is then activated by the inflammatory environment itself and gels," he said. "The key is to have the material create a self-assembling framework, which mimics the natural scaffold that holds cells and tissues together."

The team's preclinical research was conducted in rats and segmented into two proof-of-concept tests. The first test established that the material could be fed through a catheter without clogging and without interacting with human blood. The second determined whether the self-assembling peptides could find their way to the damaged tissue, bypassing healthy heart tissue. Researchers created and attached a fluorescent tag to the self-assembling peptides and then imaged the heart to see where the peptides eventually settled.

"In previous work with responsive nanoparticles, we produced speckled fluorescence in the heart attack region, but in this case, we were able to see large continuous hydrogel assemblies throughout the tissue," Carlini said.

Researchers now know that when they remove the fluorescent tag and replace it with a therapeutic, the self-assembling peptides will travel to the affected area of the heart. One hurdle is that catheter delivery in a rodent model is far more complicated -- because of the animal's much smaller body -- than the same procedure in a human. This is one area where Christman's lab at UC San Diego has deep knowledge.

If the research team can prove their approach to be efficacious, then there is "a fairly clear path" in terms of progressing toward a clinical trial, Gianneschi said. The process, however, would take several years.

"We started working on this chemistry in 2012, and it took immense effort to produce a modular and synthetically simple platform that would reliably gel in response to the inflammatory environment," Carlini said. "A major breakthrough occurred when we developed sterically constrained cyclic peptides, which flow freely during delivery and then rapidly assemble into hydrogels when they come in contact with disease-associated enzymes."

By programming in a spring-like switch, Carlini was able to unfurl these naturally circular compounds to create a flat substance with much more surface area and greater stickiness. The process creates conditions for the peptides to better self-assemble, or stack, atop one another and form the scaffold that so closely resembles the native extracellular matrix.

Having demonstrated the platform's ability to activate in the presence of specific disease-associated enzymes, Gianneschi's lab also has validated analogous approaches in peripheral artery disease and in metastatic cancer, each of which produces similar chemical and biological inflammatory responses.

Credit: 
Northwestern University

Study shows zoos and aquariums increase species knowledge index 800 percent

image: The Species Knowledge Index maps what we know for 32,144 known species of mammals, birds, reptiles, and amphibians - in this case, with an eightfold gain after adding data from the Zoological Information Management System curated by 1200 zoos and aquariums worldwide.

Image: 
Species360 Conservation Science Alliance

Despite volumes of data currently available on mankind, it is surprising how little we know about other species. A paper published this week in the journal Proceedings of the National Academy of Sciences (PNAS) confirms that critical information, such as fertility and survival rates, is missing from global data for more than 98 percent of known species of mammals, birds, reptiles, and amphibians.

It's a gap with far-reaching implications for conservationists seeking to blunt the impact of the Earth's sixth mass extinction event. Scientists working worldwide on behalf of IUCN Red List, IUCN Species Survival Commission, Convention on International Trade in Endangered Species of Flora and Fauna (CITES), TRAFFIC, Monitor, and others, require demographic data to assess species populations and intervene where needed.

"It seems inconceivable. Yet scientists tasked with saving species often have to power through with best-guess assumptions that we hope approximate reality," said lead researcher and Species360 Conservation Science Alliance director Dalia A. Conde.

A multidisciplinary team led by researchers from the Interdisciplinary Center on Population Dynamics (CPop), Oxford, the Max Planck Institute for Demographic Research, the University of Southern Denmark, San Diego Zoo Global, and Species360 Conservation Science Alliance, with participants from 19 institutions, believes we can substantially increase what we know about species population dynamics by applying new analytics to data that has been long overlooked.

Predicting when species are at risk, and how best to bolster diversity and numbers, requires knowing at what age females reproduce, how many hatchlings or juveniles survive to adolescence, and how long adults live. To understand what data are currently available, and to measure the void, researchers developed a Species Knowledge Index (SKI) that classifies available demographic information for 32,144 known species of mammals, birds, reptiles and amphibians.
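The idea behind such an index can be sketched in a few lines: for each species, score which key demographic quantities are on record, then aggregate across the database. The following is a minimal illustration only; the field names, species records, and scoring rule are hypothetical, not taken from the study.

```python
# Toy Species Knowledge Index: score each species by how many of the key
# demographic quantities are on record, then report overall coverage.
KEY_FIELDS = ("age_first_reproduction", "juvenile_survival", "adult_lifespan")

def ski_score(record):
    """Fraction of key demographic fields present for one species."""
    return sum(record.get(f) is not None for f in KEY_FIELDS) / len(KEY_FIELDS)

def coverage(database):
    """Share of species with complete demographic information."""
    complete = sum(1 for rec in database.values() if ski_score(rec) == 1.0)
    return complete / len(database)

# Hypothetical records before and after adding a new data source.
before = {
    "Rhinoceros unicornis": {"age_first_reproduction": 6.0,
                             "juvenile_survival": None, "adult_lifespan": None},
    "Panthera leo": {"age_first_reproduction": 4.0,
                     "juvenile_survival": 0.56, "adult_lifespan": 16.0},
}
after = dict(before)
after["Rhinoceros unicornis"] = {"age_first_reproduction": 6.0,
                                 "juvenile_survival": 0.85, "adult_lifespan": 35.0}

print(coverage(before), coverage(after))  # coverage rises as gaps fill
```

In the study itself, the jump from go-to global sources to ZIMS-augmented data plays the role of the "after" database here.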

"The demographic knowledge of species index provides significant information that, in conjunction with genetic data, allows estimations of events that affect population viability. Severe population declines, sometimes called genetic bottlenecks, influence the sustainability of populations, as we have found in studying endangered rhinos," said Oliver Ryder, Ph.D., Director of Conservation Genetics, San Diego Zoo Global.

Turning first to go-to global sources of information, the index registers comprehensive birth and death rates for just 1.3 percent of these major classes of species. On the map, which illustrates demographic knowledge for individual species, many species remain blank.

That changed when researchers added data from a previously untapped source, the Zoological Information Management System (ZIMS). Across classes of species, key blanks fill with salient data.

"Adding ZIMS was like turning on the lights in an otherwise very dim room," said Conde. "Class by class, from mammals through amphibians, we saw large gaps fill with fundamental data needed to help conservationists assess populations and advocate for threatened, endangered, and vulnerable species."

Incorporating ZIMS boosted the Species Knowledge Index eightfold for comprehensive life table information used to assess populations. Information on the age of first reproduction for females, a key piece to estimating how a population will fare in coming years, grew as much as 73 percent.

ZIMS is curated by wildlife professionals working within zoos, aquariums, refuges, and research and education centers in 97 countries. It is maintained by Species360, a non-profit, member-driven organization that facilitates information sharing among its nearly 1,200 institutional members, and it is the world's largest set of wildlife data.

The study, "Data gaps and opportunities for comparative and conservation biology," suggests a value far beyond the data itself. As Conservation Science Alliance and other researchers apply analytics to data aggregated across global sources, including ZIMS, they glean insights that impact outcomes for species in danger of extinction. Moreover, this can provide key insights for comparative and evolutionary biology, such as understanding the evolution of aging.

The team of 33 scientists, including data analysts, biologists, and population dynamics specialists, developed the first Species Knowledge Index to map just how much we know about species worldwide. The index aggregates, analyzes, and maps data from 22 databases and the IUCN Red List of Threatened Species.

Credit: 
Species360

Oregon researchers map sound, response and reward anticipation in mouse brain

EUGENE, Ore. - April 18, 2019 - University of Oregon neuroscientists report that two areas of the mouse brain combine representations of what is heard and anticipated, guiding behavior that leads mice to the best reward.

Researchers have known that signals go from the ears to the brain stem, the thalamus and auditory cortex and then onward. What was not known is how these signals about sounds are used by other brain areas to make decisions and drive behavior.

In a series of studies using mice, researchers in the lab of Santiago Jaramillo, a professor of biology and member of the Institute of Neuroscience, identified the posterior tail of the dorsal striatum as a key player. In an April 2018 paper in Nature Communications, Jaramillo and colleagues found evidence that neurons in this region provide a stable representation of sounds during auditory tasks.

Follow-up studies, published in the Journal of Neuroscience, have sought to further understand what is happening in the mouse brain's auditory sensory system, Jaramillo said.

In January, his lab reported that the dorsal posterior striatum receives signals from two parallel pathways, one from the auditory thalamus and the other from the auditory cortex. The second study, published online March 5, looked more closely at how these signals are integrated.

"Signals from both pathways tell you the frequency of sounds very well," Jaramillo said. "This explains why if you shut down the auditory cortex you still get the signals you need from the auditory thalamus. In our second study, we investigated the integration of sound, action and reward. We knew that the activity of neurons in these brain regions represents sounds, but what about actions and expectations about reward?"

It turns out that the integration of reward response, or the expectation of a learned reward, is enhanced in the posterior striatum, the researchers found by using electrophysiological recordings in simple two-choice scenarios.

Initially, 11 adult male mice heard brief bursts of high- and low-frequency sounds over 100 trials. As a reward, one or two drops of water awaited the mice if they moved right or left based on a sound's frequency. Researchers then changed the reward-sound association to see whether the programmed anticipation in the mice could be reprogrammed and drive a change in directional behavior.

Over time, the enhanced response in the posterior dorsal striatum emerged as the mice adapted their movements to seek the bigger reward. The new study suggests that the involved auditory neurons build representations about sounds, actions and reward expectation.
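The reversal protocol described above can be caricatured with a toy value-learning model. This is offered purely as an illustration of how a reward association can be relearned; it is not the authors' model or analysis, and all parameters below are invented for the sketch.

```python
import random

random.seed(0)

# Toy reward-reversal task: a high or low tone maps to a rewarded side.
# The agent tracks a value for each (sound, action) pair with a simple
# delta rule; halfway through, the reward mapping flips.
ALPHA = 0.2    # learning rate
EPSILON = 0.1  # exploration rate, needed to discover the reversal

values = {(s, a): 0.0 for s in ("high", "low") for a in ("left", "right")}

def choose(sound, epsilon=EPSILON):
    if random.random() < epsilon:
        return random.choice(["left", "right"])   # explore
    left, right = values[(sound, "left")], values[(sound, "right")]
    if left == right:
        return random.choice(["left", "right"])
    return "left" if left > right else "right"    # exploit

def run(trials, reward_map):
    for _ in range(trials):
        sound = random.choice(["high", "low"])
        action = choose(sound)
        reward = 1.0 if action == reward_map[sound] else 0.0
        values[(sound, action)] += ALPHA * (reward - values[(sound, action)])

run(200, {"high": "left", "low": "right"})   # initial association
run(200, {"high": "right", "low": "left"})   # reversed association

# After reversal training, the greedy choice for each sound has flipped.
print(choose("high", epsilon=0.0), choose("low", epsilon=0.0))
```

The flip in the learned values mirrors, very loosely, what the recordings show: after the reward contingency changes, activity comes to favor the action that now yields the bigger reward.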

"You can tell from the firing of neurons which action the mouse expects will yield the best reward," Jaramillo said.

Research in Jaramillo's lab aims to understand how the brain learns to make better decisions. Brain regions and circuitry are similar in humans, he noted, but whether sound signals are reaching the posterior striatum from two pathways isn't known.

Eventually, Jaramillo said, his research could provide avenues for therapeutic strategies, potentially including specialized devices to treat human auditory disorders or damages associated with strokes or injuries.

"What we do in the lab is foundational science," he said. "We are trying to understand how the healthy brain works, so future research can use this knowledge to develop better diagnoses and therapies."

Credit: 
University of Oregon

Researchers improve method to recycle and renew used cathodes from lithium-ion batteries

image: Nanoengineering professor Zheng Chen holds vials of recycled cathode particles

Image: 
UC San Diego Jacobs School of Engineering

Researchers at the University of California San Diego have improved their recycling process that regenerates degraded cathodes from spent lithium-ion batteries. The new process is safer and uses less energy than their previous method in restoring cathodes to their original capacity and cycle performance.

Zheng Chen, a professor of nanoengineering who is affiliated with the Sustainable Power and Energy Center at UC San Diego, led the project. The work was published in Advanced Energy Materials.

"Due to the rapid growth of electric vehicle markets, the worldwide manufacturing capacity of lithium-ion batteries is expected to reach hundreds of gigawatt hours per year in the next five years," Chen said. "This work presents a solution to reclaim the values of end-of-life lithium-ion batteries after 5 to 10 years of operation."

Chen's team previously developed a direct recycling approach to recycle and regenerate degraded cathodes. It replenishes lithium ions that cathodes lose over extended use and restores their atomic structures back to their original states. However, that process involves pressurizing a hot lithium salt solution of cathode particles to around 10 atmospheres. The problem is this pressurizing step raises costs and requires extra safety precautions and special equipment, said Chen.

So the team developed a milder process that does the same job at ambient pressure (1 atmosphere). The key was using eutectic lithium salts -- a mixture of two or more salts that melts at a much lower temperature than any of its individual components. This combination of solid lithium salts produces a solvent-free liquid that the researchers can use to dissolve degraded cathode materials and restore lithium ions without adding any extra pressure in the reactors.

The new recycling method involves collecting cathode particles from spent lithium-ion batteries and then mixing them with a eutectic lithium salt solution. The mixture is then heat treated in two steps: it is first heated to 300°C, then it goes through a short annealing process in which it is heated to 850°C for several hours before cooling naturally.

Researchers used the method to regenerate NMC (LiNi0.5Mn0.3Co0.2O2), a popular cathode containing nickel, manganese and cobalt, which is used in many of today's electric vehicles.

"We made new cathodes from the regenerated particles and then tested them in batteries built in the lab. The regenerated cathodes showed the same capacity and cycle performance as the originals," said Yang Shi, the first author who performed this work as a postdoctoral researcher in Chen's lab at UC San Diego.

"In an end-of-life lithium-ion battery, the cathode material loses some of its lithium. The cathode's crystal structure also changes such that it's less capable of moving ions in and out. The recycling process that we developed restores both the cathode's lithium concentration and crystal structure back to their original states," Shi said.

The team is tuning this process so that it can be used to recycle any type of cathode materials used in lithium-ion and sodium-ion batteries.

"The goal is to make this a universal recycling process for all cathode materials," Chen said. The team is also working on a process to recycle degraded anodes, such as graphite, as well as other materials.

Chen is also collaborating with UC San Diego nanoengineering professor Shirley Meng, who is the director of the Sustainable Power and Energy Center, to identify subtle changes in the cathode microstructure and local composition using high-resolution microscopic imaging tools.

Credit: 
University of California - San Diego

New software tool could provide answers to some of life's most intriguing questions

A University of Waterloo researcher has spearheaded the development of a software tool that can provide conclusive answers to some of the world's most fascinating questions.

The tool, which combines supervised machine learning with digital signal processing (ML-DSP), could for the first time make it possible to definitively answer questions such as how many different species exist on Earth and in the oceans. How are existing, newly-discovered, and extinct species related to each other? What are the bacterial origins of human mitochondrial DNA? Do the DNA of a parasite and its host have a similar genomic signature?

The tool also has the potential to positively impact the personalized medicine industry by identifying the specific strain of a virus and thus allowing for precise drugs to be developed and prescribed to treat it.

ML-DSP is an alignment-free software tool that works by transforming a DNA sequence into a digital (numerical) signal, then uses digital signal processing methods to process these signals and distinguish them from one another.
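The pipeline can be sketched in outline: map each base to a number, take the magnitude spectrum of the resulting signal, and classify sequences by the distance between their spectra. The numeric mapping, spectrum length, distance, and example sequences below are illustrative choices, not the specific ones used in ML-DSP.

```python
import numpy as np

# Sketch of an alignment-free DSP pipeline: DNA -> numerical signal ->
# magnitude spectrum -> nearest-neighbor classification by spectral distance.
MAPPING = {"A": -1.5, "C": 0.5, "G": -0.5, "T": 1.5}  # illustrative mapping

def spectrum(seq, n=64):
    """Normalized FFT magnitude spectrum of a DNA sequence."""
    signal = np.array([MAPPING[b] for b in seq], dtype=float)
    mag = np.abs(np.fft.fft(signal, n))          # zero-pad/truncate to n
    return mag / (np.linalg.norm(mag) + 1e-12)   # normalize for comparison

def classify(query, labeled):
    """Return the label of the training sequence with the closest spectrum."""
    q = spectrum(query)
    return min(labeled, key=lambda item: np.linalg.norm(q - spectrum(item[0])))[1]

train = [("ATATATATATATATAT", "alternating"), ("AAAACCCCGGGGTTTT", "blocky")]
print(classify("ATATATATATAT", train))
```

Because the comparison happens in the frequency domain, even a fragment shares the spectral signature of its family, which is why no alignment step is needed.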

"With this method, even if we only have small fragments of DNA, we can still classify DNA sequences, regardless of their origin, or whether they are natural, synthetic, or computer-generated," said Lila Kari, a professor in Waterloo's Faculty of Mathematics. "Another important potential application of this tool is in the healthcare sector, as in this era of personalized medicine we can classify viruses and customize the treatment of a particular patient depending on the specific strain of the virus that affects them."

In the study, researchers performed a quantitative comparison with other state-of-the-art classification software tools on two small benchmark datasets and one large 4,322 vertebrate mitochondrial genome dataset. "Our results show that ML-DSP overwhelmingly outperforms alignment-based software in terms of processing time, while having classification accuracies that are comparable in the case of small datasets and superior in the case of large datasets," Kari said. "Compared with other alignment-free software, ML-DSP has significantly better classification accuracy and is overall faster."

The authors also conducted preliminary experiments indicating the potential of ML-DSP to be used for other datasets, by classifying 4,271 complete dengue virus genomes into subtypes with 100 per cent accuracy, and 4,710 bacterial genomes into divisions with 95.5 per cent accuracy.

Credit: 
University of Waterloo

RIT researcher collaborates with UR to develop new form of laser for sound

The optical laser has grown to a $10 billion global technology market since it was invented in 1960, and has led to Nobel prizes for Art Ashkin for developing optical tweezing and Gerard Mourou and Donna Strickland for work with pulsed lasers. Now a Rochester Institute of Technology researcher has teamed up with experts at the University of Rochester to create a different kind of laser - a laser for sound, using the optical tweezer technique invented by Ashkin.

In the newest issue of Nature Photonics, the researchers propose and demonstrate a phonon laser using an optically levitated nanoparticle. A phonon is a quantum of energy associated with a sound wave. Optical tweezers make it possible to test the limits of quantum effects in isolation, eliminating physical disturbances from the surrounding environment. The researchers studied the mechanical vibrations of the nanoparticle, which is levitated against gravity by the force of radiation at the focus of an optical laser beam.

"Measuring the position of the nanoparticle by detecting the light it scatters, and feeding that information back into the tweezer beam allows us to create a laser-like situation," said Mishkat Bhattacharya, associate professor of physics at RIT and a theoretical quantum optics researcher. "The mechanical vibrations become intense and fall into perfect sync, just like the electromagnetic waves emerging from an optical laser."

Because the waves emerging from a laser pointer are in sync, the beam can travel a long distance without spreading in all directions - unlike light from the sun or from a light bulb. In a standard optical laser the properties of the light output are controlled by the material from which the laser is made. Interestingly, in the phonon laser the roles of light and matter are reversed - the motion of the material particle is now governed by the optical feedback.
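The feedback idea can be caricatured with a toy oscillator model. This is purely illustrative, not the experiment's actual dynamics: feedback gain proportional to velocity overcomes damping, so a tiny seed vibration grows until a nonlinear saturation term clamps it at a steady amplitude -- the mechanical analogue of a laser reaching threshold. All parameters are invented for the sketch.

```python
# Toy model of feedback-driven self-oscillation (van der Pol-style):
#   x'' = -w0^2 x + (g - gamma - c x^2) x'
# Below threshold (g < gamma) motion decays; above it, a tiny seed
# vibration grows and saturates at a steady limit-cycle amplitude.
def simulate(g, gamma=0.05, w0=1.0, c=0.5, x0=1e-3, dt=1e-3, steps=200_000):
    x, v = x0, 0.0
    peak = 0.0
    for i in range(steps):
        a = -w0 * w0 * x + (g - gamma - c * x * x) * v
        v += a * dt                 # semi-implicit Euler step
        x += v * dt
        if i > steps // 2:          # record amplitude after transients
            peak = max(peak, abs(x))
    return peak

below = simulate(g=0.01)   # net damping: vibration dies out
above = simulate(g=0.20)   # net gain: sustained "phonon lasing"
print(below, above)
```

The sharp difference between the two runs is the threshold behavior: below it the oscillation decays to nothing, above it the amplitude locks to a steady value set by the saturation, just as an optical laser only emits coherently once gain exceeds loss.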

"We are very excited to see what the uses of this device are going to be - especially for sensing and information processing given that the optical laser has so many, and still evolving, applications," said Bhattacharya. He also said the phonon laser promises to enable the investigation of fundamental quantum physics, including engineering of the famous thought experiment of Schrödinger's cat, which can exist in two states simultaneously.

Credit: 
Rochester Institute of Technology

A novel data-compression technique for faster computer programs

A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.

Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory.

Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate. Software, however, doesn't naturally store its data in fixed-size chunks. Instead, it uses "objects," data structures that contain various types of data and have variable sizes. Therefore, traditional hardware compression techniques handle objects poorly.

In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems this week, the MIT researchers describe the first approach to compress objects across the memory hierarchy. This reduces memory usage while improving performance and efficiency.

Programmers could benefit from this technique when programming in any modern programming language -- such as Java, Python, and Go -- that stores and manages data in objects, without changing their code. Consumers, in turn, would see computers that run faster or support more apps at the same speeds: because each application consumes less memory, it runs faster, and a device can fit more applications within its allotted memory.

In experiments using a modified Java virtual machine, the technique compressed twice as much data and reduced memory usage by half over traditional cache-based methods.

"The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that's how most modern programming languages manage data," says first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

"All computer systems would benefit from this," adds co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL. "Programs become faster because they stop being bottlenecked by memory bandwidth."

The researchers built on their prior work that restructures the memory architecture to directly manipulate objects. Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories, called "caches." Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory. While this organization is flexible, it is costly: To access memory, each cache needs to search for the address among its contents.

"Because the natural unit of data management in modern programming languages is objects, why not just make a memory hierarchy that deals with objects?" Sanchez says.

In a paper published last October, the researchers detailed a system called Hotpads that stores entire objects, tightly packed into hierarchical levels, or "pads." These levels reside entirely on efficient, on-chip, directly addressed memories -- with no sophisticated searches required.

Programs then directly reference the location of all objects across the hierarchy of pads. Newly allocated and recently referenced objects, and the objects they point to, stay in the faster level. When the faster level fills, it runs an "eviction" process that keeps recently referenced objects but kicks down older objects to slower levels and recycles objects that are no longer useful, to free up space. Pointers are then updated in each object to point to the new locations of all moved objects. In this way, programs can access objects much more cheaply than searching through cache levels.
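The eviction step can be sketched as follows. This is a simplified illustration of the idea, not the published Hotpads design; in real hardware the "pointer rewriting" updates addresses, whereas Python references simply stay valid after the move.

```python
# Simplified sketch of a Hotpads-style eviction: when the fast pad fills,
# keep recently referenced live objects, demote older live objects to the
# next level, and recycle the space of unreachable ones.
class Obj:
    def __init__(self, name, refs=()):
        self.name, self.refs = name, list(refs)
        self.recently_used = False

def evict(fast_pad, slow_pad, roots):
    # Objects reachable from the program's roots are live.
    live, stack = set(), list(roots)
    while stack:
        o = stack.pop()
        if o not in live:
            live.add(o)
            stack.extend(o.refs)
    kept, moved = [], []
    for o in fast_pad:
        if o not in live:
            continue                  # garbage: recycle its space
        (kept if o.recently_used else moved).append(o)
    slow_pad.extend(moved)            # demote cold objects
    for o in kept:
        o.recently_used = False       # reset for the next epoch
    return kept, slow_pad

# Example: b (hot) points to a (cold); c is unreachable garbage.
a, b, c = Obj("a"), Obj("b"), Obj("c")
b.refs = [a]
b.recently_used = True
fast, slow = evict([a, b, c], [], roots=[b])
print([o.name for o in fast], [o.name for o in slow])
```

After eviction the hot object stays in the fast pad, the cold-but-live object lands in the slower pad, and the unreachable object's space is reclaimed without a separate garbage-collection pass.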

For their new work, the researchers designed a technique, called "Zippads," that leverages the Hotpads architecture to compress objects. When objects first start at the faster level, they're uncompressed. But when they're evicted to slower levels, they're all compressed. Pointers in all objects across levels then point to those compressed objects, which makes them easy to recall back to the faster levels and lets them be stored more compactly than with prior techniques.

A compression algorithm then leverages redundancy across objects efficiently. This technique uncovers more compression opportunities than previous techniques, which were limited to finding redundancy within each fixed-size block. The algorithm first picks a few representative objects as "base" objects. Then, in new objects, it only stores the different data between those objects and the representative base objects.
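The base-object idea can be shown with a byte-level sketch. This illustrates delta-against-base compression in general, not the paper's exact encoding; the layouts and byte values are made up.

```python
# Sketch of base+delta compression across similar objects: store one
# representative "base" object in full; for each new object keep only the
# (position, byte) pairs where it differs from the base.
def compress(obj: bytes, base: bytes):
    assert len(obj) == len(base)       # same object layout assumed
    return [(i, b) for i, (b, p) in enumerate(zip(obj, base)) if b != p]

def decompress(delta, base: bytes):
    out = bytearray(base)
    for i, b in delta:
        out[i] = b
    return bytes(out)

base = bytes([0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80])
obj  = bytes([0x10, 0x20, 0x31, 0x40, 0x50, 0x60, 0x70, 0x80])

delta = compress(obj, base)
assert decompress(delta, base) == obj
print(len(delta), "differing byte(s) stored instead of", len(obj))
```

Because many objects of the same type differ from a representative in only a few fields, each extra object costs only its delta, which is where the cross-object redundancy that fixed-size-block schemes miss gets captured.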

Credit: 
Massachusetts Institute of Technology

This gene could play a major role in reducing brain swelling after stroke

Could a medication someday help the brain heal itself after a stroke, or even prevent damage following a blow to the head? A new USC study lends support to the idea.

When a person has a stroke, the brain responds with inflammation, which expands the area of injury and leads to more disability. In the April 9 issue of Cell Reports, USC researchers describe a key gene involved with tamping down inflammation in the brain, as well as what happens when the injured brain gets an added boost of that gene.

The gene -- called TRIM9 -- is abundant in the youthful brain but grows scarce with age, just as people become more at risk from stroke. In a lab model of stroke, researchers found that older brains with low TRIM9 levels -- or engineered brains missing the TRIM9 gene entirely -- were prone to extensive swelling following stroke.

But when the scientists used a harmless virus to carry a dose of the gene directly into TRIM9-deficient brains, the swelling decreased dramatically and recovery improved.

Jae Jung, lead author and chair of the Department of Molecular Microbiology and Immunology at the Keck School of Medicine of USC, says it's unlikely that gene therapy delivered by viruses will become the go-to treatment for strokes, head injuries or encephalitis. It's too slow, he said, and the best shot at treating stroke is within the first 30 minutes to one hour. Jung says the next step will be identifying what, exactly, flips on the switch for TRIM9 gene expression.

"Maybe there will be a way to chemically activate TRIM9 right after a stroke," Jung said. "Or maybe a football player can take a medication that turns on TRIM9 gene expression right after they get a blow to the head."

Not all inflammation in the brain is bad, Jung added. Inflammation plays a role in fighting infection and helps clear away dead tissue. But when it goes on too long, neurons die; inflammation causes the brain's blood vessels to become permeable, allowing white blood cells to enter tissue where they don't belong.

Credit: 
University of Southern California