Tech

Regulating blood supply to limbs improves stroke recovery

image: Remote ischemic limb conditioning reduces swelling and lesion size (bottom).

Image: 
Yang et al., JNeurosci 2019

Cutting off and then restoring blood supply to a limb following a stroke reduces tissue damage and swelling and improves functional recovery, according to a new study in mice published in JNeurosci. The simple, noninvasive technique could be developed into a treatment for strokes of varying severity.

Remote ischemic limb conditioning utilizes a blood pressure cuff and has been shown to aid in stroke recovery, potentially by affecting monocytes. Monocytes are a type of white blood cell involved in immune responses; they can either reduce or promote inflammation, a necessary part of the tissue recovery process.

Sunghee Cho and colleagues at Burke Neurological Institute treated mice that experienced a stroke with remote ischemic limb conditioning and tested the monocyte levels in their blood. The research team found that the ratio of inflammatory to non-inflammatory monocytes circulating in the blood increased, resulting in more available inflammatory cells.

Surprisingly, the increase in circulating inflammatory cells was associated with reduced brain tissue damage and swelling and improved motor function. The symptoms improved for both moderate and severe strokes, indicating the potential for wide application as a stroke treatment.

Credit: 
Society for Neuroscience

Engaging educational videos elicit similar brain activity in students

image: Illustration of the video watching procedure.

Image: 
Zhu et al., eNeuro 2019

The most engaging educational videos are correlated with similar brain activity across learners, according to research in young adults recently published in eNeuro.

Yi Hu and colleagues at East China Normal University showed university students two-minute introduction clips for 15 online classes and monitored their brain activity via electroencephalogram. The students ranked the clips based on their desire to learn the material and then rated how interesting and valuable the class seemed.

The students displayed highly similar brain activity while they watched the clips that were universally ranked as the most appealing, while the lowest-ranked videos correlated with the greatest variability in brain activity. Additionally, students chose the highest-ranked videos for their interest rather than their value. These results build on previous studies that found that the most effective political speeches and public service announcements are correlated with the most similar brain activity among observers.
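
A common way to quantify "similar brain activity across viewers" is an inter-subject correlation: how strongly each viewer's signal tracks every other viewer's during the same clip. The Python sketch below is a minimal illustration of that idea with simulated data, not the authors' actual analysis pipeline:

```python
import numpy as np

def intersubject_correlation(eeg):
    """Mean pairwise Pearson correlation across subjects.

    eeg: (n_subjects, n_samples) array holding the same EEG channel
    recorded from each subject while watching the same clip.
    """
    n_subj, n_samp = eeg.shape
    # z-score each subject's time course
    z = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
    corr = (z @ z.T) / n_samp                          # subject-by-subject correlations
    return (corr.sum() - n_subj) / (n_subj * (n_subj - 1))  # mean off-diagonal value

# Toy demo: five "viewers" share one underlying signal plus private noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(2000)
viewers = shared + 0.8 * rng.standard_normal((5, 2000))
print(round(float(intersubject_correlation(viewers)), 2))  # ~0.6; higher = more similar
```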

Credit: 
Society for Neuroscience

Global change is triggering an identity switch in grasslands

image: Konza Prairie Biological Station in northeastern Kansas. Humans and animals alike depend on grasslands for survival. In addition to providing land for cattle and sheep to graze, grasslands can also store up to 30 percent of the world's carbon.

Image: 
Kim Komatsu, Smithsonian Environmental Research Center

Since the first Homo sapiens emerged in Africa roughly 300,000 years ago, grasslands have sustained humanity and thousands of other species. But today, those grasslands are shifting beneath our feet. Global change--which includes climate change, pollution and other widespread environmental alterations--is transforming the plant species growing in them, and not always in the ways scientists expected, a new study published Monday revealed.

Grasslands make up more than 40 percent of the world's ice-free land. In addition to providing food for human-raised cattle and sheep, grasslands are home to animals found nowhere else in the wild, such as the bison of North America's prairies or the zebras and giraffes of the African savannas. Grasslands also can hold up to 30 percent of the world's carbon, making them critical allies in the fight against climate change. However, changes in the plants that comprise grasslands could put those benefits at risk.

"Is it good rangeland for cattle, or is it good at storing carbon?" said lead author Kim Komatsu, a grassland ecologist at the Smithsonian Environmental Research Center. "It really matters what the identities of the individual species are....You might have a really invaded weedy system that would not be as beneficial for these services that humans depend on."

The new paper, a meta-analysis published in the Proceedings of the National Academy of Sciences, offers the most comprehensive evidence to date on how human activities are changing grassland plants. The team looked at 105 grassland experiments around the world. Each experiment tested at least one global change factor--such as rising carbon dioxide, hotter temperatures, extra nutrient pollution or drought. Some experiments looked at three or more types of changes. Komatsu and the other authors wanted to know whether global change was altering the composition of those grasslands, both in the total plant species present and the kinds of species.

They discovered grasslands can be surprisingly tough--to a point. In general, grasslands resisted the effects of global change for the first decade of exposure. But once they hit the 10-year mark, their species began to shift. Half of the experiments lasting 10 years or more found a change in the total number of plant species, and nearly three-fourths found changes in the types of species. By contrast, a mere one-fifth of the experiments that lasted under 10 years picked up any species changes at all. Experiments that examined three or more aspects of global change were also more likely to detect grassland transformation.

"I think they're very, very resilient," said Meghan Avolio, co-author and assistant professor of ecology at Johns Hopkins University. "But when conditions arrive that they do change, the change can be really important."

To the scientists' surprise, the identity of grassland species can change drastically without altering the number of species. In half the plots where individual plant species changed, the total number of species remained the same. In some plots, nearly all the species had changed.

"Number of species is such an easy and bite-sized way to understand a community...but what it doesn't take into account is species identity," Avolio said. "And what we're finding is there can be a turnover."

For Komatsu, it's a sign of hope that most grasslands could resist the experimentally induced global changes for at least 10 years.

"They're changing slowly enough that we can prevent catastrophic changes in the future," she said.

However, time may not be on our side. In some experiments, the current pace of global change transformed even the "control plots" that were not exposed to experimentally higher global change pressures. Eventually, many of those plots looked the same as the experimental plots.

"Global change is happening on a scale that's bigger than the experiments we're doing....The effects that we would expect through our experimental results, we're starting to see those effects occurring naturally," Komatsu said.

Credit: 
Smithsonian

A map of the brain can tell what you're reading

image: These color-coded maps of the brain show the semantic similarities during listening (top) and reading (bottom).

Image: 
Image by Fatma Deniz

Too busy or lazy to read Melville's Moby Dick or Tolstoy's Anna Karenina? That's OK. Whether you read the classics, or listen to them instead, the same cognitive and emotional parts of the brain are likely to be stimulated. And now, there's a map to prove it.

Neuroscientists at the University of California, Berkeley, have created interactive maps that can predict where different categories of words activate the brain. Their latest map is focused on what happens in the brain when you read stories.

The findings, to appear Aug. 19 in the Journal of Neuroscience, provide further evidence that different people share similar semantic -- or word-meaning -- topography, opening yet another door to our inner thoughts and narratives. They also have practical implications for learning and for speech disorders, from dyslexia to aphasia.

"At a time when more people are absorbing information via audiobooks, podcasts and even audio texts, our study shows that, whether they're listening to or reading the same materials, they are processing semantic information similarly," said study lead author Fatma Deniz, a postdoctoral researcher in neuroscience in the Gallant Lab at UC Berkeley and former data science fellow with the Berkeley Institute for Data Science.

For the study, people listened to stories from "The Moth Radio Hour," a popular podcast series, and then read those same stories. Using functional MRI, researchers scanned their brains in both the listening and reading conditions, compared their listening-versus-reading brain activity data, and found the maps they created from both datasets were virtually identical.

The results can be viewed in an interactive, 3D, color-coded map, where words -- grouped in such categories as visual, tactile, numeric, locational, violent, mental, emotional and social -- are presented like vibrant butterflies on flattened cortices. The cortex is the coiled surface layer of gray matter of the cerebrum that coordinates sensory and motor information.

The interactive 3D brain viewer is scheduled to go online this month.

As for clinical applications, the maps could be used to compare language processing in healthy people and in those with stroke, epilepsy and brain injuries that impair speech. Understanding such differences can aid recovery efforts, Deniz said.

The semantic maps can also inform interventions for dyslexia, a widespread, neurodevelopmental language-processing disorder that impairs reading.

"If, in the future, we find that the dyslexic brain has rich semantic language representation when listening to an audiobook or other recording, that could bring more audio materials into the classroom," Deniz said.

And the same goes for auditory processing disorders, in which people cannot distinguish the sounds or "phonemes" that make up words. "It would be very helpful to be able to compare the listening and reading semantic maps for people with auditory processing disorder," she said.

Nine volunteers each spent a couple of hours inside functional MRI scanners, listening and then reading stories from "The Moth Radio Hour" as researchers measured their cerebral blood flow.

Their brain activity data, in both conditions, were then matched against time-coded transcriptions of the stories, the results of which were fed into a computer program that scores words according to their relationship to one another.

Using statistical modeling, researchers arranged thousands of words on maps according to their semantic relationships. Under the animals category, for example, one can find the words "bear," "cat" and "fish."
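
As a toy illustration of that arrangement (the vectors and similarity measure below are invented stand-ins, not the study's fitted model), words whose vectors point in similar directions count as semantically related and land near each other on the map:

```python
import numpy as np

# Invented 3-d word vectors; the real study derives vectors from word
# co-occurrence statistics and fits them to fMRI responses.
vectors = {
    "bear": np.array([0.9, 0.8, 0.1]),
    "cat":  np.array([0.8, 0.9, 0.2]),
    "fish": np.array([0.7, 0.8, 0.1]),
    "loan": np.array([0.1, 0.2, 0.9]),   # unrelated word, for contrast
}

def cosine(u, v):
    """Cosine similarity: near 1.0 = related, near 0.0 = unrelated."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

for word in ("cat", "fish", "loan"):
    print(f"bear vs {word}: {cosine(vectors['bear'], vectors[word]):.2f}")
# bear/cat and bear/fish score high and cluster in the "animals" region
# of the map; bear/loan scores low and lands elsewhere.
```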

The maps, which covered at least one-third of the cerebral cortex, enabled the researchers to predict with accuracy which words would activate which parts of the brain.

The results of the reading experiment came as a surprise to Deniz, who had anticipated some changes in the way readers versus listeners would process semantic information.

"We knew that a few brain regions were activated similarly when you hear a word and read the same word, but I was not expecting such strong similarities in the meaning representation across a large network of brain regions in both these sensory modalities," Deniz said.

Her study is a follow-up to a 2016 Gallant Lab study that recorded the brain activity of seven study subjects as they listened to stories from "The Moth Radio Hour."

Future mapping of semantic information will include experiments with people who speak languages other than English, as well as with people who have language-based learning disorders, Deniz said.

Credit: 
University of California - Berkeley

Need a mental break? Avoid your cellphone, Rutgers researchers say

Using a cellphone to take a break during mentally challenging tasks does not allow the brain to recharge effectively and may result in poorer performance, Rutgers researchers found.

The experiment, published in the Journal of Behavioral Addictions, assigned college undergraduates to solve challenging sets of word puzzles. Halfway through, some were allowed to take breaks using their cellphones. Others took breaks using paper or a computer, while some took no break at all.

The participants who took phone breaks experienced the highest levels of mental depletion and were among the least capable of solving the puzzles afterwards. Their post-break efficiency and speed were comparable to those of participants who took no break at all, and the number of word problems they solved after the break was only slightly better than the no-break group's, and worse than every other group's.

Participants who took a break on their cell phone took 19% longer to do the rest of the task, and solved 22% fewer problems than did those in the other break conditions combined.

"The act of reaching for your phone between tasks, or mid-task, is becoming more commonplace. It is important to know the costs associated with reaching for this device during every spare minute. We assume it's no different from any other break - but the phone may carry increasing levels of distraction that make it difficult to return focused attention to work tasks," said Terri Kurtzberg, co-author and associate professor of management and global business at Rutgers Business School.

"Cellphones may have this effect because even just seeing your phone activates thoughts of checking messages, connecting with people, access to ever-refilling information and more, in ways that are different than how we use other screens like computers, and laptops," she continued.

The 414 participants were given sets of 20 word puzzles. Some were given a break halfway through, during which they were told to choose three items to buy within a specific budget, using either their cellphone, a paper circular or a computer. They were told to type or write the reasons for their selections.

Credit: 
Rutgers University

Single protein plays important dual transport roles in the brain

MADISON, Wis. -- Just as a packaging breakdown can hamstring delivery of cables, switches and connectors to a house under construction, removing a protein from neurons can block the "shipment" of proteins to developing axons.

Axons are the telephone wires of the nervous system. They convey information to dendrites on other nerve cells, in a processing network of phenomenal complexity that is the backbone of the entire nervous system.

In a paper published Aug. 6 in Nature Communications, Edwin Chapman of the Howard Hughes Medical Institute and the University of Wisconsin-Madison reports that halting production of synaptotagmin 17 (syt-17) blocks growth of axons.

Equally significant, when cells made more syt-17, axon growth accelerated. A wide range of neurological conditions could benefit from the growth of axons, including spinal cord injuries and some neurodegenerative diseases.

The protein in question, syt-17, is made by the 17th (and last) synaptotagmin gene to be identified.

"Lots of work has been done on this family since it was discovered in 1981," Chapman says.

In many cases, synaptotagmin proteins serve as calcium sensors: when calcium ions are present, they trigger the release of chemical messengers called neurotransmitters, which nerve cells use to communicate.

"Calcium ions are a basic signal in the nervous system, and so synaptotagmin proteins have been intensely studied," Chapman says.

In a search to locate synaptotagmin proteins in neurons, Chapman and first author David Ruhl, then his graduate student, traced syt-17 to the Golgi apparatus. The Golgi is a shipping center inside the neuron that "packages" proteins for delivery from another part of the cell to the end of an axon, where growth occurs.

"It's a bit of a simplification," says Chapman, a professor of neuroscience, "but basically, you can't build without supplies, and one of the ways that neurons are able to build such long, complicated axons is through syt-17 speeding up the production line."

The key observation relating axon growth to syt-17 occurred about six years ago, when Chapman's lab was doing basic work to find the different synaptotagmin proteins inside neurons. "We made an accidental discovery that it makes axons grow really long," Chapman says. "Well, that was interesting! We decided to work on it."

One standard way to learn what a gene does is to "knock out," or silence, it. In syt-17 knockout mice, the axons barely grew, Chapman says. "But in mice genetically programmed to make an abnormally large quantity of syt-17, the axons grew much faster than normal."

The interaction is much like a construction project, Chapman says. "To grow an axon, you've got to send a lot of stuff down pipelines that supply the growing end of an axon. Think of building a house: You need shipments of studs, floor joists and roof shingles. A growing axon needs its own parcels, though they are much smaller."

In 2016, Ruhl discovered a second pool of syt-17 in the neuron. Finding two stashes "was weird," Chapman says. Ruhl began to notice that the protein had a split personality and eventually discovered it does two completely unrelated things in the same cell.

While the first stash was on the signal-shipping side, the second pool of syt-17 was in the dendrite, the signal-sensing side of the synapse. The synapse is the communication junction between two neurons.

"It's the exact opposite of what we'd have guessed," says Ruhl, now a postdoctoral researcher at the University of California, San Diego. "I think the second function is pretty cool."

Ruhl eventually discovered that syt-17 at the dendrite tunes down synaptic communication by keeping a reserve of receptors inside the cell. Receptors bind to neurotransmitters in the synapse.

"Without syt-17 (at the dendrite) most of the receptors wind up on the surface and synapses are turned up to 11," he says.

This is not a bad thing in terms of brain plasticity -- the ability of an adult brain to adapt and learn. "In plasticity, an important feature is increasing or decreasing receptivity to neurotransmitters," Chapman says.

Without this type of dampening, neurons could begin to fire uncontrollably, leading to problems like seizures.

And so syt-17 turns out to be "a key player in the negative half of the balance sheet," Chapman says. "It doesn't just help axons grow; it regulates how existing synapses respond to signals."

There's a certain amount of truth to the old saw that the brain, like a hard disk, needs to forget in order to make "room" for the new, Chapman says. "Remembering is important, but forgetting is important, too."

Credit: 
University of Wisconsin-Madison

How ergonomic is your warehouse job? Soon, an app might be able to tell you

image: To test how well the algorithm might work in a warehouse, the researchers had a robot (white arm) monitor participants performing activities in a warehouse-like setting. Within three seconds of the end of each activity, the robot showed a score on its display (right).

Image: 
Parsa et al./IEEE Robotics and Automation Letters

In 2017 there were nearly 350,000 incidents of workers taking sick leave due to injuries affecting muscles, nerves, ligaments or tendons -- like carpal tunnel syndrome -- according to the U.S. Bureau of Labor Statistics. Among the workers with the highest number of incidents: people who work in factories and warehouses.

Musculoskeletal disorders happen at work when people use awkward postures or perform repeated tasks. These behaviors generate strain on the body over time. So it's important to point out and minimize risky behaviors to keep workers healthy on the job.

Researchers at the University of Washington have used machine learning to develop a new system that can monitor factory and warehouse workers and tell them how risky their behaviors are in real time. The algorithm divides up a series of activities -- such as lifting a box off a high shelf, carrying it to a table and setting it down -- into individual actions and then calculates a risk score associated with each action.

The team published its results June 26 in IEEE Robotics and Automation Letters and will present the findings Aug. 23 at the IEEE International Conference on Automation Science and Engineering in Vancouver, British Columbia.

"Right now workers can do a self-assessment where they fill out their daily tasks on a table to estimate how risky their activities are," said senior author Ashis Banerjee, an assistant professor in both the industrial & systems engineering and mechanical engineering departments at the UW. "But that's time consuming, and it's hard for people to see how it's directly benefiting them. Now we have made this whole process fully automated. Our plan is to put it in a smartphone app so that workers can even monitor themselves and get immediate feedback."

For these self-assessments, people currently use a snapshot of a task being performed. The position of each joint gets a score, and the sum of all the scores determines how risky that pose is. But workers usually perform a series of motions for a specific task, and the researchers wanted their algorithm to be able to compute an overall score for the entire action.
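
In code, such a snapshot assessment reduces to summing per-joint scores, along the lines of the hedged sketch below; the joints, angle thresholds and scores are invented for illustration and do not reproduce any published instrument:

```python
# Illustrative snapshot scoring in the spirit of RULA/REBA-style
# self-assessment sheets. All thresholds here are made up.

def joint_score(angle_deg, thresholds):
    """Score one joint: larger deviation from neutral -> higher score."""
    score = 1
    for limit in thresholds:          # e.g. (20, 45, 90) degrees
        if angle_deg > limit:
            score += 1
    return score

def pose_risk(joint_angles):
    """Sum the per-joint scores for a single snapshot (video frame)."""
    thresholds = {"trunk": (20, 60), "upper_arm": (20, 45, 90), "knee": (30, 60)}
    return sum(joint_score(a, thresholds[j]) for j, a in joint_angles.items())

# Stooped lift: trunk bent 65 degrees, arm raised 50, knees bent 20.
print(pose_risk({"trunk": 65, "upper_arm": 50, "knee": 20}))  # higher = riskier
```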

Moving to video is more accurate, but it requires a new way to add up the scores. To train and test the algorithm, the team created a dataset containing 20 three-minute videos of people doing 17 activities that are common in warehouses or factories.

"One of the tasks we had people do was pick up a box from a rack and place it on a table," said first author Behnoosh Parsa, a UW mechanical engineering doctoral student. "We wanted to capture different scenarios, so sometimes they would have to stretch their arms, twist their bodies or bend to pick something up."

The researchers captured their dataset using a Microsoft Kinect camera, which recorded 3D videos that allowed them to map out what was happening to the participants' joints during each task.

Using the Kinect data, the algorithm first learned to compute risk scores for each video frame. Then it progressed to identifying when a task started and ended so that it could calculate a risk score for an entire action.
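
Conceptually, that second stage collapses the per-frame scores within each detected action into a single number. The sketch below assumes the frame scores and segment boundaries are already produced by the learned models, and uses an invented aggregation rule rather than the paper's actual one:

```python
import numpy as np

def action_scores(frame_scores, segments):
    """One risk score per action, aggregated from per-frame scores.

    frame_scores: 1-D array of learned per-frame risk scores.
    segments: (start, end) frame indices per detected action, as
        produced by the (assumed given) segmentation model.
    The mean/peak blend below is an illustrative choice.
    """
    return [float(0.5 * frame_scores[s:e].mean() + 0.5 * frame_scores[s:e].max())
            for s, e in segments]

# Toy demo: a safe carry, a risky overhead lift, then another carry.
frames = np.concatenate([np.full(90, 2.0), np.full(60, 6.5), np.full(90, 3.0)])
print(action_scores(frames, [(0, 90), (90, 150), (150, 240)]))  # [2.0, 6.5, 3.0]
```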

The algorithm labeled three actions in the dataset as risky behaviors: picking up a box from a high shelf, and placing either a box or a rod onto a high shelf.

Now the team is developing an app that factory workers and supervisors can use to monitor in real time the risks of their daily actions. The app will provide warnings for moderately risky actions and alerts for high-risk actions.

Eventually the researchers want robots in warehouses or factories to be able to use the algorithm to help keep workers healthy. To see how well the algorithm could work in a hypothetical warehouse, the researchers had a robot monitor two participants performing the same activities. Within three seconds of the end of each activity, the robot showed a score on its display.

"Factories and warehouses have used automation for several decades. Now that people are starting to work in settings where robots are used, we have a unique opportunity to split up the work so that the robots are doing the risky jobs," Banerjee said. "Robots and humans could have an active collaboration, where a robot can say, 'I see that you are picking up these heavy objects from the top shelf and I think you may be doing that a lot of times. Let me help you.'"

Credit: 
University of Washington

Optic nerve stimulation to aid the blind

image: OpticSELINE electrode array for intraneural stimulation of the optic nerve, developed in the Translational Neural Engineering Lab, and used in preliminary studies.

Image: 
© 2019 EPFL / Markus Ding

Scientists from EPFL in Switzerland and Scuola Superiore Sant’Anna in Italy are developing technology for the blind that bypasses the eyeball entirely and sends messages to the brain. They do this by stimulating the optic nerve with a new type of intraneural electrode called OpticSELINE. The electrode was successfully tested in rabbits, and the results are reported in Nature Biomedical Engineering.

“We believe that intraneural stimulation can be a valuable solution for several neuroprosthetic devices for sensory and motor function restoration. The translational potentials of this approach are indeed extremely promising”, explains Silvestro Micera, EPFL’s Bertarelli Foundation Chair in Translational Neuroengineering, and Professor of Bioelectronics at Scuola Superiore Sant’Anna, who continues to innovate in hand prosthetics for amputees using intraneural electrodes.

Blindness affects an estimated 39 million people in the world. Many factors can induce blindness, like genetics, retinal detachment, trauma, stroke in the visual cortex, glaucoma, cataract, inflammation or infection. Some blindness is temporary and can be treated medically. How do you help someone who is permanently blind?

The idea is to produce phosphenes, the sensation of seeing light in the form of white patterns, without seeing light directly. Retinal implants, prosthetic devices for helping the blind, suffer from exclusion criteria. For example, half a million people worldwide are blind due to retinitis pigmentosa, a genetic disorder, but only a few hundred patients qualify for retinal implants for clinical reasons. A brain implant that stimulates the visual cortex directly is another strategy, albeit a risky one. A priori, the new intraneural solution minimizes exclusion criteria, since the optic nerve and the pathway to the brain are often intact.

Previous attempts to stimulate the optic nerve in the 1990s provided inconclusive results. EPFL’s Medtronic Chair in Neuroengineering Diego Ghezzi explains, “Back then, they used cuff nerve electrodes. The problem is that these electrodes are rigid and they move around, so the electrical stimulation of the nerve fibers becomes unstable. The patients had a difficult time interpreting the stimulation, because they kept on seeing something different. Moreover, they probably have limited selectivity because they recruited superficial fibers.”

Intraneural electrodes may indeed be the answer for providing rich visual information to the subjects. They are also stable and less likely to move around once implanted in a subject, according to the scientists. Cuff electrodes are surgically placed around the nerve, whereas intraneural electrodes pierce through the nerve.

Together, Ghezzi, Micera and their teams engineered the OpticSELINE, an array of 12 electrodes. In order to understand how effective these electrodes are at stimulating the various nerve fibers within the optic nerve, the scientists delivered electric current to the optic nerve via the OpticSELINE and measured the brain’s activity in the visual cortex. They developed an elaborate algorithm to decode the cortical signals. They showed that each stimulating electrode induces a specific and unique pattern of cortical activation, suggesting that intraneural stimulation of the optic nerve is selective and informative.
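
One way to picture what "selective and informative" means: if every electrode evokes its own cortical pattern, a classifier should be able to recover which electrode was stimulated from the cortical response alone, well above the 1-in-12 chance level. The following simulation illustrates that logic with a simple nearest-centroid decoder; it is a sketch of the idea, not the paper's algorithm, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n_electrodes, trials, channels = 12, 40, 32

# Each electrode gets its own invented cortical "template" pattern,
# and each trial is that template plus recording noise.
templates = rng.standard_normal((n_electrodes, channels))
X = templates[:, None, :] + 0.7 * rng.standard_normal((n_electrodes, trials, channels))

centroids = X[:, :trials // 2].mean(axis=1)        # fit on the first half of trials
test = X[:, trials // 2:]                          # decode the held-out half
dists = np.linalg.norm(test[:, :, None, :] - centroids[None, None], axis=-1)
pred = dists.argmin(axis=2)                        # nearest centroid = decoded electrode
truth = np.arange(n_electrodes)[:, None]
print("decoding accuracy:", (pred == truth).mean())  # ~1.0, vs. 1/12 chance
```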

Because this was a preliminary study, the visual perception behind these cortical patterns remains unknown. Ghezzi continues, “For now, we know that intraneural stimulation has the potential to provide informative visual patterns. It will take feedback from patients in future clinical trials in order to fine-tune those patterns. From a purely technological perspective, we could do clinical trials tomorrow.”

With current electrode technology, a human OpticSELINE could consist of 48 to 60 electrodes. This limited number of electrodes is not sufficient to restore sight entirely, but these limited visual signals could be engineered to provide a visual aid for daily living.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

New lipid signaling target may improve T cell immunotherapy

image: Hollings Cancer Center researchers Dr. Ogretmen (left) and Dr. Mehrotra aim to regulate the fate of T cells.

Image: 
Emma Vought (left) and Sarah Pack (right)

The immune system surveils our body looking for things that don't belong, often bacteria and viruses. While cancer cells are abnormal cells that undergo unregulated cell growth, they are good at evading detection by the immune system. T cell immunotherapy uses the body's own T cells but reprograms them to target cancer cells. Three different signaling pathways are known to be important for regulating T cell function: the cytokine interleukin-15 (IL-15) promotes a central memory-like T cell (Tcm) phenotype that can kill unwanted cells, transforming growth factor beta (TGF-β) pushes T cells to differentiate into T regulatory cells (Tregs), and peroxisome proliferator-activated receptor gamma (PPARγ) regulates lipid metabolism, which is important for providing energy to T cells. The mechanism by which these pathways determine T cell function, however, remains unknown.

In recent work published by two collaborative research groups at the Medical University of South Carolina (MUSC) who study lipid signaling in the context of cancer biology and cancer immunology, these three seemingly disparate pathways have been linked. The two groups collaborated to examine the role of sphingosine 1-phosphate (S1P), a lipid generated by sphingosine kinase 1 (SphK1), in regulating T cell differentiation. Their results, published online on August 13, 2019 by Cell Reports, showed that loss of SphK1 from T cells, and the resulting decrease in S1P levels, fosters the maintenance of a Tcm phenotype and inhibits their differentiation into Tregs. Ultimately, this signaling pathway improves T cell-mediated immunotherapy.

"A lot of information is known about SphK1 in tumors, but there is little known about how SphK1 regulates T cell function," says Shikhar Mehrotra, Ph.D., co-senior author, Hollings Cancer Center (HCC) researcher, associate director of the Cell Therapy Unit, and associate professor at MUSC.

To evaluate the impact of SphK1 on T cells, the researchers inhibited SphK1 function both genetically and with a chemical drug. They found that inhibition of SphK1, and the resulting reduction in S1P levels, led to a Tcm phenotype that reduced tumor size and decreased mortality in preclinical cancer models.

"When we inhibit S1P, generated by SphK1, we can make these T cells more active for killing tumors," says Besim Ogretmen, Ph.D., co-senior author, HCC researcher, program coleader of HCC's Developmental Cancer Therapeutics Research Program, professor and SmartState endowed chair of biochemistry and molecular biology at MUSC. "I think this was the first discovery that internal lipid signaling can play an important role in regulating the function of T cells against cancer cells."

They next worked out the mechanism of how SphK1 influences the T cell phenotype. Depletion of S1P levels increased the activity of a transcription factor that turns on genes associated with the memory phenotype. Additionally, loss of S1P reduced the activity of PPARγ, with two consequences: reduced PPARγ activity prevented T cells from differentiating into Tregs, and reduced PPARγ activity led to an increase in lipid utilization for energy production. Cumulatively, the multiple impacts of S1P depletion led to the Tcm phenotype.

"This is an upstream molecule that regulates T cells in many different ways," says Mehrotra.

These molecular details explain the different impacts of T cell regulation that were known previously. IL-15 leads to a Tcm phenotype by inhibiting SphK1 and S1P; conversely, TGF-β pushes cells towards the Treg phenotype by activating SphK1. Furthermore, these different pathways influence each other to intricately control T cell fate.

"Everything has to be in balance, and it remains that way until an infection increases the signaling when the immune response needs to be hyperactive," says Mehrotra. "Then Tregs need to tolerize our immune system and prevent autoimmunity. However, to combat cancer cells, we need to break that tolerance because we need the T cells to be hyperactive."

Common cancer treatments often center around chemotherapy, which not only targets and kills cancer cells but also kills immune cells. Targeting SphK1 allows the immune cells to stick around to target and kill cancer cells. Furthermore, Mehrotra and Ogretmen have shown that combination therapy, pairing an anti-PD-1 drug with compounds that inhibit SphK1, increased the efficacy of treatment in preclinical models.

"There is a lot of communication between the cancer cells in the body and the immune cells," says Ogretmen. "We don't really understand this communication yet and whether the cancer cells signal to the T cells to increase their S1P levels, making them more inactive."

Interestingly, S1P levels are high in cancer cells, allowing them to survive better. This might also impact the ability of T cells to target the cancer cells. This new work suggests that depletion of S1P might function in two ways, both inhibiting cancer cell survival and promoting T cell activity.

"This has opened up many interesting areas of further research," says Mehrotra. "Now we know that just by modulating intrinsic levels of S1P you can reach a different phenotype."

"The key is understanding the mechanism of how this pathway regulates T cell function and differentiation," adds Ogretmen.

This work paves the way to calibrate T cell immunotherapy for cancer by dampening the accumulation of S1P. Future work will be aimed at validating this pathway in several preclinical cancer models. While the mechanism of action should not change across the various model systems, this is an important next step in bringing this therapy to the clinic. Furthermore, Mehrotra and Ogretmen think this pathway has the potential to modulate autoimmune diseases such as multiple sclerosis, lupus and colitis.

Credit: 
Medical University of South Carolina

Materials that can revolutionize how light is harnessed for solar energy

image: Magnetic field data that shows the formation and decay of the excitons generated by singlet fission.

Image: 
A. Asadpoor Darvish, McCamey Lab

Researchers at Columbia University have developed a way to harness more power from singlet fission to increase the efficiency of solar cells, providing a tool to help push forward the development of next-generation devices.

In a study published this month in Nature Chemistry, the team details the design of organic molecules that are capable of generating two excitons per photon of light, a process called singlet fission. The excitons are produced rapidly and can live for much longer than those generated from their inorganic counterparts, which leads to an amplification of electricity generated per photon that is absorbed by a solar cell.

"We have developed a new design rule for singlet fission materials," said Luis Campos, an associate professor of chemistry and one of three principal investigators on the study. "This has led us to develop the most efficient and technologically useful intramolecular singlet fission materials to date. These improvements will open the door for more efficient solar cells."

All modern solar panels operate by the same process - one photon of light generates one exciton, Campos explained. The exciton can then be converted into electric current. However, some molecules that can be implemented in solar cells have the ability to generate two excitons from a single photon - a process called singlet fission. These solar cells form the basis for next-generation devices, which are still in their infancy. One of the biggest challenges of working with such molecules, though, is that the two excitons "live" for very short periods of time (tens of nanoseconds), making it difficult to harvest them as a form of electricity.

In the current study, funded in part by the Office of Naval Research, Campos and colleagues designed organic molecules that can quickly generate two excitons that live much longer than the state-of-the-art systems. It is an advancement that can not only be used in next-generation solar energy production, but also in photocatalytic processes in chemistry, sensors, and imaging, Campos explained, as these excitons can be used to initiate chemical reactions, which can then be used by industry to make drugs, plastics, and many other types of consumer chemicals.

"Intramolecular singlet fission has been demonstrated by our group and others, but the resulting excitons were either generated very slowly, or they wouldn't last very long," Campos said. "This work is the first to show that singlet fission can rapidly generate two excitons that can live for a very long time. This opens the door to fundamentally study how these excitons behave as they sit on individual molecules, and also to understand how they can be efficiently put to work in devices that benefit from light-amplified signals."

The team's design strategy should also prove useful in separate areas of scientific study and have many other yet-unimaginable applications, he added.

Credit: 
Columbia University

AJR publishes gender affirmation surgery primer for radiologists

image: Scout image from contrast-enhanced CT shows erectile implant; stainless steel and silicone anchors (arrow) transfixed to pubic bone are asymmetric.

Image: 
American Journal of Roentgenology (AJR)

Leesburg, VA, August 19, 2019--An article published ahead of print, and scheduled for the December issue of the American Journal of Roentgenology (AJR), provides a much-needed overview of gender affirmation surgical therapies encountered in diagnostic imaging, defining normal postsurgical anatomy and describing select complications using a multidisciplinary, multimodality approach.

With gender incongruence now categorized as a sexual health condition--no longer a mental illness--in the most recent revision to the International Classification of Diseases, lead author Florence X. Doo and colleagues at Mount Sinai West in New York City contend that all subspecialties must be prepared to identify radiologic correlates and distinguish key postoperative variations in the three major categories of gender affirmation surgery.

Genital Reconstruction

For trans-females, pelvic MRI remains the most reliable modality to evaluate the two most common complications arising from vaginoplasty: hematomas and fluid collection. Cellulitis, abscess, neovaginal prolapse, and focal skin necrosis can occur, as well. As Doo cautions, "at the end of the procedure, radiopaque vaginal packing is inserted, which should not be mistaken for other foreign bodies on postoperative imaging." Neovaginal fistulas present less frequently, and for most trans-female patients, these complications may be diagnosed on the basis of clinical symptoms and physical examinations. Although vaginoplasty typically preserves the prostate, it may have atrophied from adjuvant hormonal therapy with estrogen and progesterone, so regular prostate cancer screening guidelines should still be followed.

When evaluating urethral complications from phalloplasty in trans-males, because the neo-to-native urethral anastomosis site will show differences in diameter, retrograde urethrograms can result in overdiagnosis of stricture. Accordingly, preliminary assessment should be for functional stricture, alongside the performance of urodynamic studies. "However," notes Doo, "for confirmation of stricture with abnormal function tests and also for evaluation for fistula, a retrograde urethrogram or voiding cystourethrogram can be obtained." Should a patient desire erectile potential with the fully healed neophallus, an implant may be placed, which is prone to infection, attrition, malposition, and constituent separation.

For trans-males instead pursuing metoidioplasty (i.e., hormone-induced clitoral hypertrophy, followed by clitoral degloving and ligament detachment for neophallus lengthening), no penile implant presently exists that can sustain erectile rigidity for sexual function.

Body Contouring

Silicone or saline breast implants placed as part of gender affirmation therapy in trans-females often appear as incidental findings on chest radiography, CT, and MRI, yet the most common body contouring gender affirmation surgery is subcutaneous mastectomy. Since the nipple-areola complex is preserved, retaining malignant transformation risk, Doo et al. recommend trans-males undergo regular postsurgical breast cancer screening. Likewise, trans-female patients who have undergone neoadjuvant hormone replacement therapy have an increased risk for breast cancer and should be routinely screened.

Regarding soft-tissue placement for desired aesthetic results, according to Doo: "the sequelae of fat augmentation, including complications such as fat necrosis, are seen incidentally on radiologic imaging and are not routinely evaluated postoperatively. Because of many factors, patients may instead choose to obtain gluteal silicone injections or implants, which may also be incidentally encountered on routine imaging, either as stand-alone findings or as complications including granulomas and emboli to the brain or lungs."

Maxillofacial Contouring

Preoperative medical imaging, especially for facial feminization, is utilized to assess the anatomical need for frontal eminence reduction, with surgeons downstream referencing a skull radiograph to evaluate sinus cavity size and anterior table thickness. Meanwhile, illegal silicone injections, long targeted toward all transgender populations, typically register incidentally on imaging studies, as do facial augmentations achieved via neurotoxin injections or fillers, such as calcium hydroxylapatite or hyaluronic acid. As Doo explains, "postoperative imaging is not typically obtained because external aesthetic results can be adequately evaluated by the surgeon," unless unique complications with radiologic correlates--bony erosions from impaction of alloplastic silicone prostheses or bone and cartilage autografts, embolization from injection or filler materials, etc.--present themselves.

"Otherwise," Doo says, "radiologists may typically see incidental uncomplicated postsurgical findings on routine head and neck imaging."

Credit: 
American Roentgen Ray Society

Microgravity changes brain connectivity

An international team of Russian and Belgian researchers, including scientists from HSE University, has found out that space travel has a significant impact on the brain: they discovered that cosmonauts demonstrate changes in brain connectivity related to perception and movement.

Some areas, such as regions in the insular and parietal cortices, work more synchronously with other brain areas after the space flight. On the other hand, connectivity of some other regions, such as the cerebellum and vestibular nuclei, decreases. The results of the study were published in Frontiers in Physiology.

While Roscosmos is discussing future manned flights to Mars, NASA plans to open the International Space Station for commercial tourism, and SpaceX is testing its Starship Mars prototype, scientists are seriously concerned about the impact of a prolonged stay in space on the human body. During flights, astronauts are continuously exposed to weightlessness, which requires adaptation and causes changes within the body. Life on colonised planets and satellites - humanity's likely future - will demand special conditions to become safe for our body. While the effects of weightlessness on bones, muscles and the vestibular system are well known, how the human brain copes with microgravity has yet to be fully examined. Recent studies using neuroimaging show that space travel does not leave the brain unaffected.

In a groundbreaking research project, an international team which included scientists from HSE University, the RAS Institute of Biomedical Problems, the Federal Center of Treatment and Rehabilitation, Lomonosov Moscow State University, the Gagarin Cosmonaut Training Centre and several Belgian research organisations used functional magnetic resonance imaging (fMRI) to measure functional brain connectivity in a group of eleven cosmonauts. It turned out that adaptation to microgravity and the related changes in motor activity can modify functional connectivity between brain areas.

The researchers performed brain fMRIs on the cosmonauts before and after space missions lasting on average six months and then compared their data to those of healthy volunteers who had stayed on Earth. The researchers were looking for changes in connectivity between brain areas underlying sensorimotor functions such as movement and perception of body position. These brain areas were activated using gait-imitating plantar stimulation.
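
For readers new to the method, "functional connectivity" here means the degree to which two regions' fMRI time series rise and fall together, typically quantified as a correlation coefficient. A minimal sketch with simulated data (the region names and coupling strength are invented for illustration):

```python
import numpy as np

# Simulate BOLD time series for two regions; "vestibular" partly
# tracks "insula" plus its own noise, mimicking coupled regions.
rng = np.random.default_rng(1)
timepoints = 200
insula = rng.standard_normal(timepoints)
vestibular = 0.6 * insula + 0.8 * rng.standard_normal(timepoints)

connectivity = np.corrcoef(insula, vestibular)[0, 1]
print(f"insula-vestibular connectivity: {connectivity:.2f}")
# Comparing such values before vs. after flight (and against Earth-bound
# controls) reveals which couplings strengthened or weakened.
```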

The researchers discovered changes in the cosmonauts' brain connections. To compensate for the lack of information from the organs of balance, which cannot provide reliable information in microgravity, the brain develops an auxiliary system of somatosensory control, with increased reliance on visual and tactile feedback instead of vestibular input.

On the one hand, decreased connectivity between the cerebral cortex and vestibular nuclei has been revealed. Under Earth's gravity, vestibular nuclei are responsible for processing signals coming from the vestibular system. But in space, according to the researchers, the brain may downweight the activity of these structures to avoid conflicting information about the environment. They also found that after space flight, the connections of the cerebellum and a number of other structures, particularly those responsible for movement, decrease.

On the other hand, fMRI showed increased connections between the insular cortex in the left and right hemispheres, as well as between the insular cortex and other areas of the brain. Insular lobes, among other things, are responsible for the integration of signals coming from different sensor systems. Similar functions are performed by the area of parietal cortex in the right supramarginal gyrus, which also demonstrated increased connectivity with other areas of the brain after the flight.

'It's an interesting fact that connectivity increase between the right supramarginal gyrus and the left insular cortex was greater among those cosmonauts who experienced a less comfortable initial adaptation process on the space station (those who experienced vertigo, the illusion of body position, etc.),' says Ekaterina Pechenkova, Leading Research Fellow at the HSE Laboratory for Cognitive Research. The researchers believe that this kind of information will eventually help to better understand why it takes different lengths of time for different people to adapt to the conditions of space flight, and will help to develop more effective individual training programmes for space travelers.

Credit: 
National Research University Higher School of Economics

University of Michigan study indicates negative outcomes for Native American children who are spanked

ANN ARBOR--Some people may believe that if you live in a community with different cultural values, spanking might not be harmful--an assumption that does not appear to be correct, according to a new University of Michigan study.

In the first longitudinal examination of the effects of spanking among the Native American population, U-M researchers say that spanking is just as harmful for them as it is for black and white children. They say it can lead to greater externalizing behavior (e.g., being defiant, hitting others, throwing temper tantrums).

The findings appear in the recent issue of the Journal of Interpersonal Violence.

Research has increasingly recommended that parents avoid spanking, concluding that the harms of physical punishment outweigh the benefits. When race and ethnicity are factored into studies, most of the research focuses on whites and African Americans, but not Native Americans.

In the current study, researchers analyzed data on more than 3,600 mothers from 20 U.S. cities with more than 200,000 residents. Three waves of data were collected when children were 1, 3 and 5 years old. Participants disclosed how frequently they spanked their children.

Among white, African American and Native American groups, spanking was associated with greater child externalizing behavior. In other words, spanking is harmful for all three racial groups despite the fact that the practice may be considered "acceptable" or "normal" in some groups.

"Contrary to the idea that spanking may be 'normal,' and therefore not harmful in some groups, these results demonstrate that spanking is similarly associated with detrimental outcomes among white, black and American Indian children in the United States," said the study's lead author Kaitlin Ward, U-M doctoral student in social work and developmental psychology.

Native American and white mothers were equally likely to use spanking, the study indicated. Additionally, the effects of spanking on Native American children were statistically indistinguishable from the effects found among white and African American children.

The research showed that across all groups, maternal spanking of children at age 1 predicted child behavior issues at age 3, which then made spanking more likely to happen at age 5.

Ward said mental health workers and practitioners working with the Native American population--when recommending other discipline alternatives to spanking--should be very mindful of the historical trauma and oppression associated with the use of physical punishment.

Credit: 
University of Michigan

Increasing blood pressure medications at hospital discharge may pose serious risk

Increasing medications for blood pressure when discharging older patients from the hospital may pose a greater risk of falls, fainting and acute kidney injury that outweighs the potential benefits, according to a study by researchers at UC San Francisco and the affiliated San Francisco VA Health Care System.

Among more than 4,000 VA patients who were at least 65 years old and hospitalized for non-cardiac conditions, the researchers found that being discharged with intensified antihypertensives did not reduce cardiovascular events or improve blood pressure control after a year, but did increase the risk for readmission and serious adverse events within 30 days. Findings appear Aug. 19, 2019, in JAMA Internal Medicine.

"Blood pressure management is about long-term control, but during hospitalization, patients' blood pressure can be temporarily elevated in response to illness and stress," said lead author Timothy Anderson, MD, MAS, MA, a primary care research fellow in the Division of General Internal Medicine at UCSF.

"Our findings suggest that making medication changes during this period is not beneficial," Anderson continued. "Instead, deferring medication adjustments to outpatient doctors to consider once patients are recovered from their acute illness is likely to be a safer course."

Blood pressure is measured frequently during hospitalizations and often fluctuates. Previous research has shown that higher blood pressure due to pain, stress, anxiety and exposure to new medications while in the hospital may lead clinicians to intensify antihypertensive treatment, potentially without knowledge of other patient factors, such as prior medication history, drug intolerance, barriers to medication adherence and long-term success at disease control.

In the JAMA Internal Medicine study, Anderson and his colleagues used national VA and Medicare data to examine the clinical outcomes of 4,056 veterans with hypertension who were hospitalized between January 2011 and December 2013 for common, non-cardiac conditions that typically do not require intensified hypertension treatment. The patients were equally split between those discharged home from the hospital on intensified antihypertensives and those who were not.

At 30 days after discharge, veterans discharged with intensified blood pressure medications had a significantly higher risk for readmission to the hospital than patients who did not receive additional antihypertensives -- 21.4 percent (434 of 2,028 patients who received intensified antihypertensives) vs. 17.7 percent (358 patients who did not) -- and a higher risk of experiencing medication-related serious adverse events, such as falls, fainting and acute kidney injury, at 4.5 percent (91 patients) vs. 3.1 percent (62 patients).

The study found no reduction in blood pressure or readmission to the hospital for cardiovascular conditions within a year after discharge among patients who received intensified antihypertensives compared to those who did not, at 13.8 percent (280 patients) vs. 11.9 percent (242 patients).

"The goal of starting patients on new blood pressure medications is to reduce their long-term risk of heart attacks, heart failure and strokes, but our finding suggests the right time to start these medications is not when patients are hospitalized for other conditions," said senior author Michael Steinman, MD, a UCSF professor of geriatrics and clinician in the geriatrics clinic and inpatient general medicine service at the San Francisco VA Medical Center. "It is possible that we observed no benefit to these medications because patients stopped the intensified medications after returning home due to the side effects, overtreatment or because their outpatient doctors felt they were not indicated."

The authors recommend that hospital clinicians review patients' prior blood pressure and medication records, as well as communicate elevated inpatient blood pressure readings to patients' outpatient providers for further management following discharge, rather than simply prescribing more blood pressure medications.

Anderson cautioned that the findings do not apply to people admitted to the hospital for heart conditions, in which changing blood pressure medications may be beneficial. They also may not apply to younger or healthier populations than those in the VA study.

"Our study was focused on blood pressure, but medications for other chronic conditions may also be adjusted during hospitalization with uncertain outcomes," Anderson said.

The researchers currently are exploring how diabetes medications are impacted by hospitalization and the long-term outcomes associated with those decisions.

Credit: 
University of California - San Francisco

Wired for sound: A third wave emerges in integrated circuits

image: Conceptual illustration of integrated circuit incorporating stimulated Brillouin scattering devices.

Image: 
Nature Photonics

Optical fibres are our global nervous system, transporting terabytes of data across the planet in the blink of an eye.

As that information travels at the speed of light across the globe, the energy of the light waves bouncing around inside the silica and polymer fibres creates tiny vibrations that lead to feedback packets of sound, or acoustic waves, known as 'phonons'.

This feedback causes light to disperse, a phenomenon known as 'Brillouin scattering'.

For most of the electronics and communications industry, this scattering of light is a nuisance, reducing the power of the signal. But for an emerging group of scientists this feedback process is being adapted to develop a new generation of integrated circuits that promise to revolutionise our 5G and broadband networks, sensors, satellite communication, radar systems, defence systems and even radio astronomy.

"It's no exaggeration to say there is a research renaissance into this process under way," said Professor Ben Eggleton, Director of the University of Sydney Nano Institute and co-author of a review paper published today in Nature Photonics.

"The application of this interaction between light and sound on a chip offers the opportunity for a third-wave revolution in integrated circuits."

The microelectronics discoveries after World War II represented the first wave in integrated circuitry, which led to the ubiquity of electronic devices that rely on silicon chips, such as the mobile phone. The second wave came at the turn of this century with the development of optical electronics systems that have become the backbone of huge data centres around the world.

First electricity, then light. And now the third wave arrives with sound.

Professor Eggleton is a world-leading researcher investigating how to apply this photon-phonon interaction to solve real-world problems. His research team based at the Sydney Nanoscience Hub and the School of Physics has produced more than 70 papers on the topic.

Working with other global leaders in the field, today he has published a review article in Nature Photonics outlining the history and potential of what scientists refer to as 'Brillouin integrated photonics'. His co-authors are Professor Christopher Poulton at the University of Technology Sydney; Professor Peter Rakich from Yale University; Professor Michael Steel at Macquarie University; and Professor Gaurav Bahl from the University of Illinois at Urbana-Champaign.

Professor Bahl said: "This paper outlines the rich physics that emerges from such a fundamental interaction as that between light and sound, which is found in all states of matter.

"Not only do we see immense technological applications, but also the wealth of pure scientific investigations that are made possible. Brillouin scattering of light helps us measure material properties, transform how light and sound move through materials, cool down small objects, measure space, time and inertia, and even transport optical information."

Professor Poulton said: "The big advance here is in the simultaneous control of light and sound waves on really small scales.

"This type of control is incredibly difficult, not least because the two types of waves have extremely different speeds. The enormous advances in fabrication and theory outlined in this paper demonstrate that this problem can be solved, and that powerful interactions between light and sound such as Brillouin scattering can now be harnessed on a single chip. This opens the door to a whole host of applications that connect optics and electronics."

Professor Steel said: "One of the fascinating aspects of integrated Brillouin technology is that it spans the range from fundamental discoveries in sound-light interactions at the quantum level to very practical devices, such as flexible filters in mobile communications."

The scattering of light caused by its interaction with acoustic phonons was predicted by French physicist Léon Brillouin in 1922.

BACKGROUND INFORMATION

In the 1960s and 1970s an interesting process was discovered where you could create an enhanced feedback loop between the photons (light) and phonons (sound). This is known as stimulated Brillouin scattering (SBS).

In this SBS process, light and sound waves are 'coupled', a process enhanced by the fact that the wavelengths of the light and sound are similar, although their speeds are many orders of magnitude apart: light travels about 100,000 times faster than sound, which explains why you see lightning before you hear thunder.
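
Back-of-envelope numbers for a standard silica fibre make the point (illustrative textbook values, not figures from the paper): the speeds differ by four to five orders of magnitude, yet backward-SBS phase matching pins the acoustic wavelength to half the optical wavelength in the glass, so the two wavelengths are comparable:

```latex
% Illustrative values for silica fibre at a 1550 nm pump (n ~ 1.45).
\begin{align*}
v_{\mathrm{light}} &= \frac{c}{n} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{1.45}
  \approx 2.1\times10^{8}\ \mathrm{m/s},
&\quad v_{\mathrm{sound}} &\approx 5.9\times10^{3}\ \mathrm{m/s},\\
\lambda_{\mathrm{light}} &= \frac{1550\ \mathrm{nm}}{1.45} \approx 1.07\ \mu\mathrm{m},
&\quad \lambda_{\mathrm{sound}} &= \frac{\lambda_{\mathrm{light}}}{2} \approx 0.53\ \mu\mathrm{m},
\end{align*}
% giving a Brillouin frequency shift
% \Omega/2\pi = 2 n v_{\mathrm{sound}} / \lambda_{\mathrm{vac}} \approx 11\ \mathrm{GHz}.
```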

But why would you want to increase the power of this Brillouin feedback effect?

"Managing information on a microchip can take up a lot of power and produce a lot of heat," Professor Eggleton said.

"As our reliance on optical data has increased, the process of interaction of light with microelectronics systems has become problematic. The SBS process offers us a completely new way to integrate optical information into a chip environment using sound waves as a buffer to slow down the data without the heat that electronic systems produce.

"Further, integrated circuits using SBS offer the opportunity to replace components in flight and navigation systems that can be 100- or a 1000-times heavier. That will not be a trivial achievement."

REDUCING COMPLEXITY

How to contain the process of light-sound interaction has been the sticking point, but as Professor Eggleton and colleagues point out in Nature Photonics today, the past decade has seen tremendous advances.

In 2017, researchers Dr Birgit Stiller and Moritz Merklein from the Eggleton Group at the University of Sydney announced the world-first transfer of light to acoustic information on a chip. To emphasise the difference between the speeds of light and sound, this was described as 'storing lightning inside thunder'.

Dr Amol Choudhary built on this work in 2018, developing a chip-based information recovery technique that eliminated the need for bulky processing systems.

"It's all about reducing complexity of these systems so we can develop a general conceptual framework for a complete integrated system," Professor Eggleton said.

There is increasing interest from industry and government in the deployment of these systems.

Sydney Nano has recently signed a partnership with the Royal Australian Air Force to work with its Plan Jericho program to revolutionise RAAF's sensing capability. Companies such as Lockheed Martin and Harris Corporation are also working with the Eggleton Group.

THE CHALLENGES AHEAD

There are barriers to overcome before this chip-scale integrated system can be deployed commercially, but the payoff in terms of size, weight and power (SWAP) will be worth the effort, Professor Eggleton said.

The first challenge is to develop an architecture that integrates microwave and radio frequency processors with optical-acoustic interactions. As the Eggleton Group results show, there have been great strides towards achieving this.

Another challenge comes with reducing 'noise' (or interference) in the system caused by unwanted light scattering that deteriorates the signal-to-noise ratio. One proposition is to have chips operating at cryogenic temperatures near absolute zero. While this would have significant practical implications, it could also bring quantum processes into play, delivering greater control of the photon-phonon interaction.

There is also a live investigation into the most appropriate materials upon which to build these integrated systems. Silicon has its obvious attractions given most microelectronics are built using this cheap, abundant material.

However, when the silica used in the optic fibres is coupled with a silicon substrate, information can leak out, given the similarity of the materials.

Finding materials that are elastic and inelastic enough to contain the light and sound waves while allowing them to interact is one suggested avenue. Some research groups use chalcogenide, a soft glass substrate with a high refractive index and low stiffness that can confine the optical and elastic waves.

Co-author of the review, Professor Steel from Macquarie University, said: "At this stage, all material systems have their strengths and weaknesses, and this is still an area of fruitful research."

Professor Eggleton said: "This new paradigm in signal processing using light waves and sound waves opens new opportunities for fundamental research and technological advances."

Declaration: Professor Eggleton acknowledges support from the Australian Research Council Linkage grant (LP170100112) with Harris Corporation and the US Office of Naval Research. Professor Steel, with Professor Eggleton and Professor Poulton, acknowledges support of the ARC Discovery Project DP160101691. Professor Bahl acknowledges support from the US Office of Naval Research and the US National Science Foundation.

Credit: 
University of Sydney