
McGill researchers lay foundation for next-generation aortic grafts

A new study by researchers at McGill University has measured the dynamic physical properties of the human aorta, laying the foundation for the development of grafts capable of mimicking the native behaviour of the human body's largest artery.

Marco Amabili, a Canada Research Chair professor in McGill's Department of Mechanical Engineering, and his team used their experimental design to establish how Dacron grafts, used as vascular prostheses to replace faulty aortas, measure up to real ones. The polyester grafts, they found, are extremely rigid and don't expand when the heart pushes blood through them.

"Because the grafts don't expand at all, they induce several cardiovascular problems for patients," Amabili said. "It's the equivalent of implanting a sick aorta instead of a healthy one."

The researchers used lasers to measure the dynamic displacement of human aortas - obtained from hearts harvested for transplants - attached to a model circulatory loop designed to mimic the pulsing flow of blood generated by heartbeats.

The results, recently published in the journal Physical Review X, showed that an aorta's capacity to expand varies greatly with age: aortas of younger donors could expand by about 10% of their circumference, while those of older donors could expand by only up to 2%. The expansion lags slightly behind the pulsating pressure, which makes the blood flow more uniform; this delay diminishes with age.

"The dynamic behaviour of the human aorta was poorly understood. What we did know was obtained using invasive catheters to gather ultrasound measurements of the aorta's motion in humans while having their blood pressure measured, so the data was limited to resting states," said Amabili, who is also the study's senior author. "Our experiments were able to simulate the effects of blood pressure and flow on the aorta so as to understand how it reacts both at rest and during heavy exercise."

The study will provide crucial information about the materials needed to design a new generation of aortic prostheses with biomechanical properties similar to those of human aortas.

"This research could greatly improve patients' quality of life, especially for those who have grafts implanted at a young age because they will undergo subsequent surgery throughout their lives to replace the grafts once they start to fail," explained Isabella Bozzo, a former master's student in Amabili's lab and co-author on the paper. "These surgeries are extremely invasive and the recovery is painful, so we want to develop grafts that will give them the best chance of success, by minimizing future surgery and reproducing the hemodynamics of healthy aortas."

Expanding knowledge on the dynamics of the human aorta should also provide invaluable clues in understanding the development and progression of numerous vascular pathologies such as atherosclerotic plaque, aortic aneurysms and dissections.

Credit: 
McGill University

Hemp 'goes hot' due to genetics, not growing conditions

image: Horticulture professor Larry Smart examines industrial hemp plants growing in a greenhouse at Cornell AgriTech in Geneva, New York.

Image: 
Justin James Muir/Cornell University

ITHACA, N.Y. - As the hemp industry grows, producers face the risk of cultivating a crop that can become unusable - and illegal - if it develops too much of the psychoactive chemical THC. Cornell University researchers have determined that a hemp plant's propensity to 'go hot' - become too high in THC - is determined by genetics, not as a stress response to growing conditions, contrary to popular belief.

"[People thought] there was something about how the farmer grew the plant, something about the soil, the weather got too hot, his field was droughted, something went wrong with the growing conditions," said Larry Smart, horticulture professor and senior author of the study. "But our evidence from this paper is that fields go hot because of genetics, not because of environmental conditions."

Smart and his team conducted field trials at two sites, studying the genetics and chemistry of 217 hemp plants. They found that differences in growing conditions between the sites had no significant influence on which chemicals the plants produced. But when they compared the CBD (cannabidiol) and THC levels of each of the plants against their genomes, they found very high correlation between their genetics and the chemicals they produced.

Jacob Toth, first author of the paper and a doctoral student in Smart's lab, developed a molecular diagnostic to demonstrate that the hemp plants in the study fell into one of three genetic categories: plants with two THC-producing genes; plants with two CBD-producing genes; or plants with one gene each for CBD and THC.

To minimize the risk of plants going hot, hemp growers ideally want plants with two CBD-producing genes.
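The three genetic categories lend themselves to a simple classifier. The sketch below is illustrative only - the allele labels (`B_D` for a CBD-producing gene, `B_T` for a THC-producing gene) are shorthand for this example, not the paper's actual nomenclature:

```python
# Illustrative sketch of the three genetic categories described above.
# Allele labels B_D (CBD-producing gene) and B_T (THC-producing gene)
# are shorthand for this example, not the study's exact nomenclature.

def classify_chemotype(allele1: str, allele2: str) -> str:
    """Map a plant's two alleles at the cannabinoid locus to its
    expected chemical profile."""
    genotype = {allele1, allele2}
    if genotype == {"B_D"}:
        return "CBD-dominant: preferred for hemp, low risk of going hot"
    if genotype == {"B_T"}:
        return "THC-dominant: will go hot regardless of growing conditions"
    if genotype == {"B_D", "B_T"}:
        return "intermediate: produces both CBD and THC"
    raise ValueError(f"unknown alleles: {allele1}, {allele2}")

print(classify_chemotype("B_D", "B_D"))
print(classify_chemotype("B_D", "B_T"))
```

A marker-based screen like this can be run on a seedling of either sex, long before the plant flowers and its cannabinoid levels can be measured chemically.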

While conducting the research, the team also discovered that as many as two-thirds of the seeds they obtained of one hemp variety - all supposed to be low-THC hemp - produced plants with THC above legal limits.

The researchers hope their work will help address this problem by providing breeders with easy-to-use genetic markers that can be applied much earlier, on seedlings and on plants of both sexes.

The study was published in Global Change Biology-Bioenergy.

Credit: 
Cornell University

Immune response in brain, spinal cord could offer clues to treating neurological diseases

image: University of Alberta neuroscientist Jason Plemel was part of a team of Canadian researchers who discovered that immune cells in the brain and spinal cord behave differently from blood immune cells in their response to nerve damage.

Image: 
Ryan O'Byrne

An unexpected research finding is providing information that could lead to new treatments for certain neurological diseases and disorders, including multiple sclerosis, Alzheimer's disease and spinal cord injury.

University of Alberta medical researcher Jason Plemel and key collaborators Joanne Stratton from McGill University, and Wee Yong and Jeff Biernaskie from the University of Calgary, found that immune cells in our brain and central nervous system, called microglia, interfere with blood immune cells called macrophages.

This discovery suggests that the immune cells in our brain and central nervous system are preventing the movement of the blood immune cells.

"We expected the macrophages would be present in the area of injury, but what surprised us was that microglia actually encapsulated those macrophages and surrounded them, almost like police at a riot. It seemed like the microglia were preventing them from dispersing into areas they shouldn't be," said Plemel.

"We're not sure why this happens. More research is required to answer that question," he added.

The central nervous system contains both white and grey matter. White matter is composed of nerve fibres covered by myelin, which speeds up the signals between the cells and allows the brain to quickly send and receive messages. In various neurological diseases and disorders, the myelin becomes damaged, exposing the nerves to deterioration.

"We found that both the immune cells that protect the central nervous system, microglia, and the immune cells of the peripheral immune system, macrophages, are present early after demyelination, and microglia continue to accumulate at the expense of macrophages.

"When we removed the microglia to understand what their role was, the macrophages entered into uninjured tissue," explained Plemel, who is also a member of the Neuroscience and Mental Health Institute.

"This suggests that when there is injury, the microglia interfere with the macrophages in our central nervous system and act as a barrier preventing their movement."

An opposite effect happens when a nerve is injured elsewhere in the body. For example, when a nerve is injured in your leg, the macrophages accumulate but the other resident immune cells do not, making the microglia's response in the central nervous system unique.

While there are several differences in the operation and origin of microglia and macrophages, it has historically been impossible to tell the two types of cells apart.

The team's newfound ability to differentiate between the two may lead to an increased understanding of how each type of immune cell responds to demyelination and, as a result, to the development of new techniques and treatments that can combat and repair the damage being caused.

Using the same technique, Plemel and his collaborators also discovered there was more than one type of microglia responding to demyelination.

"The indication of at least two different populations of microglia is an exciting confirmation for us," said Plemel. "We are continuing to study these populations and hopefully, in time, we can learn what makes them unique in terms of function. The more we know, the closer we get to understanding what is going on (or wrong) when there is neurodegeneration or injury, and being able to hypothesize treatment and prevention strategies."

Credit: 
University of Alberta Faculty of Medicine & Dentistry

Research zeroing in on electronic nose for monitoring air quality, diagnosing disease

image: Depiction of a gas sensor array composed of microscale balances coated with thin films of nanoporous materials called metal-organic frameworks.

Image: 
Arni Sturluson, Melanie Huynh, OSU College of Engineering

CORVALLIS, Ore. - Research at Oregon State University has pushed science closer to developing an electronic nose for monitoring air quality, detecting safety threats and diagnosing diseases by measuring gases in a patient's breath.

Recently published research led by Cory Simon, assistant professor of chemical engineering in the OSU College of Engineering, in collaboration with chemical engineering professor Chih-Hung Chang focused on materials known as metal-organic frameworks, or MOFs.

The research took aim at a critical yet understudied hurdle in using MOFs as gas sensors: Out of the billions of possible MOFs, how do you determine the right ones for building the optimal electronic nose?

MOFs have nanosized pores and selectively adsorb gases, similar to a sponge. They are ideal for use in sensor arrays because of their tunability, enabling engineers to use a diverse set of materials that allows an array of MOF-based sensors to deliver detailed information.

Depending on which components make up a gas, different amounts of the gas will adsorb in each MOF. That means the composition of a gas can be inferred by measuring the adsorbed gas in the array of MOFs using micro-scale balances.

The challenge is that all MOFs adsorb all gases - not to the same extent, but nevertheless the absence of perfect selectivity prevents an engineer from simply saying, "let's just dedicate this MOF to carbon dioxide, that one to sulfur dioxide, and another one to nitrogen dioxide."

"Curating MOFs for gas sensor arrays is not that simple because each MOF in the array will appreciably adsorb all three of those gases," Simon said.

Human noses navigate this same problem by relying on about 400 different types of olfactory receptors. Much like the MOFs, each olfactory receptor is activated by many different odors, and each odor activates many different receptors; the brain parses the response pattern, allowing people to distinguish a multitude of different odors.

"In our research, we created a mathematical framework that allows us, based on the adsorption properties of MOFs, to decide which combination of MOFs is optimal for a gas sensor array," Simon said. "There will inevitably be some small errors in the measurements of the mass of adsorbed gas, and those errors will corrupt the prediction of the gas composition based on the sensor array response. Our model assesses how well a given combination of MOFs will prevent those small errors from corrupting the estimate of the gas composition."
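The core idea - that measurement noise is amplified when inverting the array response back into a gas composition - can be sketched in a few lines. All numbers below are made up for illustration; the paper's framework and the real adsorption data are far richer:

```python
import numpy as np

# Toy sketch of the sensing problem. Each row gives one MOF's adsorption
# response to each gas; the numbers are invented for illustration only.
A = np.array([
    [0.9, 0.4, 0.3],   # MOF 1 response to CO2, SO2, NO2
    [0.5, 0.8, 0.2],   # MOF 2
    [0.2, 0.3, 0.7],   # MOF 3 (note: every MOF adsorbs every gas)
])

true_composition = np.array([0.5, 0.3, 0.2])   # arbitrary units
measured_mass = A @ true_composition

# Small errors on the micro-scale balances...
rng = np.random.default_rng(0)
noisy_mass = measured_mass + rng.normal(0, 0.01, size=3)

# ...propagate into the inferred gas composition when we invert.
estimate = np.linalg.solve(A, noisy_mass)
print("estimated composition:", np.round(estimate, 3))

# The condition number of A bounds that error amplification; choosing a
# combination of MOFs that keeps it small is the curation problem.
print("condition number:", round(np.linalg.cond(A), 1))
```

A poorly chosen array - MOFs with nearly identical adsorption profiles - would make `A` nearly singular, so the same tiny balance errors would swamp the composition estimate.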

Though the research was primarily mathematical modeling, the scientists used experimental adsorption data in real MOFs as input, Simon said, adding that Chang is an experimentalist "who we are working with to make a real-life electronic nose to detect air pollutants."

"We are currently seeking external funding together to bring this novel concept into physical realization," Simon said. "Because of this paper, we now have a rational method to computationally design the sensory array, which encompasses simulating gas adsorption in the MOFs with molecular models and simulations to predict their adsorption properties, then using our mathematical method to screen the various combinations of MOFs for the most accurate sensor array."

This means that instead of relying on experimental trial and error to decide which MOFs to use in a sensor array, engineers can use computational power to curate the best collection of MOFs for an electronic nose.

Another exciting application of such a nose could be diagnosing disease. The volatile organic compounds humans emit, such as through our breath, are filled with biomarkers for multiple diseases, and studies have shown that dogs, which have twice as many types of olfactory receptors as humans, can detect diseases with their noses.

Marvelous though they are, however, dogs' noses aren't as practical for widespread diagnostic use as a carefully crafted and manufactured sensor array would be.

Credit: 
Oregon State University

If it takes a hike, riders won't go for bike sharing

ITHACA, N.Y. - Even a relatively short walk to find the nearest bicycle is enough to deter many potential users of bike sharing systems, new Cornell research suggests.

"If a docking station is more than two or three blocks away, they just won't go there," said Karan Girotra, professor of operations, technology and innovation at Cornell Tech and the Cornell SC Johnson College of Business. "And if they encounter a station without bikes, it's very unlikely they will go to the next station."

Girotra co-authored "Bike-Share Systems: Accessibility and Availability," published in November by Management Science, with Elena Belavina, associate professor at the School of Hotel Administration in the SC Johnson College, and Ashish Kabra, assistant professor at the University of Maryland's Robert H. Smith School of Business.

Their findings imply that, outside of a few big stations at major transit hubs, cities and bike-share operators should strive to create denser networks with many smaller stations - and keep them stocked, Girotra and Belavina said.

"It's no surprise that people want stations close to them, but it's much closer than most planners and bike-share systems thought they needed," Belavina said. "Most systems are nowhere close to their optimal density."

Bike sharing systems hold the potential to reduce traffic and pollution in dense, flat cities such as London, New York, Paris and Shanghai, the researchers noted. They encourage and enhance public transit use by providing "last mile" connections to bus and train stations.

But "their promise of urban transformation is far from being fully realized," according to the paper. Many systems were established quickly, sometimes through public-private partnerships, and with less rigorous planning than higher-cost transit systems, Girotra said.

"There was perhaps an opportunity to put a little more thought into how a bike-share system can be introduced in a city," he said.

To that end, the research team built a model to produce the first estimates of how station proximity and bike availability influence bike-share operations.

The structural demand model analyzed data from Paris' Vélib' system - the largest outside China, with roughly 17,000 bikes and 950 stations - during four months of 2013, a period that included nearly 4.4 million trips. The data provided snapshots of system usage every two minutes, showing how stations changed throughout each day.

The researchers blended that information with data about population density in different city districts, metro ridership, attendance at top tourist destinations and weather conditions. The team also logged the locations of thousands of points of interest such as transit stations, parks, libraries, hotels, grocery stores, restaurants and cafes.

"Put together," Belavina said, "that gave us some ability to disentangle what guides people's decisions in choosing bike sharing and different bike-share stations."

The model determined that someone roughly 300 meters (nearly 1,000 feet) from a docking station is 60% less likely to use the service than someone very near the station. The odds decrease slightly with every additional meter, such that someone 500 meters away - about one-third of a mile - is "highly unlikely to use the system."
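As a rough illustration of how quickly the distance effect compounds, one can interpolate the reported 60%-drop-at-300-meters figure with a simple exponential decay. This is only a sketch of the shape of the effect, not the paper's structural demand model:

```python
import math

# Illustrative only: fit an exponential decay through the reported
# "60% less likely at 300 m" figure to show the shape of the effect.
# The study itself uses a full structural demand model.
drop_at_300m = 0.60
decay_rate = -math.log(1 - drop_at_300m) / 300   # per metre

def relative_usage(distance_m: float) -> float:
    """Usage likelihood relative to someone right next to a station."""
    return math.exp(-decay_rate * distance_m)

for d in (0, 100, 300, 500):
    print(f"{d:>3} m: {relative_usage(d):.0%} of at-station usage")
```

Under this toy curve, usage at 500 meters falls to roughly a fifth of at-station usage, consistent with the paper's description of such riders as "highly unlikely to use the system."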

But a 10% increase in bike availability - the likelihood of finding a bicycle at a station - would grow ridership by roughly 12%, the study found, thanks to fewer lost sales at out-of-stock stations and improved expectations of the system.

Among the various points of interest, placing stations near grocery stores provides the most benefit, the model showed.

Generating the study's findings required methodological advances to adapt demand modeling to a bike-share context, the researchers said.

Models have long predicted shifts in usage patterns when considering new locations for transit stations, retail outlets or bank ATMs. But bike-share demand in a major city, with hundreds of stations changing inventory throughout each day, involved studying a more dynamic system with much finer resolution, Girotra said.

The team's huge volume of data might have required completing more than a quadrillion calculations to generate the best estimates, likely taking over a year, according to the paper. Instead, the researchers developed new computational techniques, Belavina said, to condense some data and make the process more manageable.

The resulting model, according to the co-authors, can be applied not only to bike-share systems but other micro-mobility services: scooters, powered bikes, local food delivery and ride-sharing. The researchers plan to look more broadly at micro-mobility in a future study partnering with London's transit agency.

Regarding bike sharing, the study's advice was clear: "Make bikes and stations more available," Girotra said. "People don't like walking to access a bike-share system."

Credit: 
Cornell University

People may lie to appear honest

WASHINGTON - People may lie to appear honest if events that turned out in their favor seem too good to be true, according to new research published by the American Psychological Association.

"Many people care greatly about their reputation and how they will be judged by others, and a concern about appearing honest may outweigh our desire to actually be honest, even in situations where it will cost us money to lie," said lead researcher Shoham Choshen-Hillel, PhD, a senior lecturer at the School of Business Administration and Center for the Study of Rationality at The Hebrew University of Jerusalem. "Our findings suggest that when people obtain extremely favorable outcomes, they anticipate other people's suspicious reactions and prefer lying and appearing honest over telling the truth and appearing as selfish liars."

The same pattern of lying to appear honest emerged in a series of experiments conducted with lawyers and college students in Israel, as well as with online participants in the United States and United Kingdom. The research was published online in the Journal of Experimental Psychology: General.

In one experiment with 115 lawyers in Israel, the participants were told to imagine a scenario where they told a client that a case would cost between 60 and 90 billable hours. The lawyer would be working in an office where the client wouldn't know how many hours were truly spent on the case. Half of the participants were told they had worked 60 hours on the case while the other half were told they worked 90 hours. Then they were asked how many hours they would bill the client. In the 60-hour group, the lawyers reported an average of 62.5 hours, with 17% of the group lying to inflate their hours. In the 90-hour group, the lawyers reported an average of 88 hours, with 18% of the group lying to report fewer hours than they had actually worked.

When asked for an explanation for the hours they billed, some lawyers in the 90-hour group said they worried that the client would think he had been cheated because the lawyer had lied about the number of billable hours.

In another experiment, 149 undergraduate students at an Israeli university played online dice-rolling and coin-flipping games in private and then reported their scores to a researcher. The participants received approximately 15 cents for each successful coin flip or dice roll they reported. The computer program was manipulated for half of the students so they received perfect scores in the games, while the other group had random outcomes based on chance. In the perfect-score group, 24% underreported their number of wins even though it cost them money, compared with 4% in the random-outcome group.

"Some participants overcame their aversion toward lying and the monetary costs involved just to appear honest to a single person who was conducting the experiment," Choshen-Hillel said.

In another online experiment with 201 adults from the United States, participants were told to imagine a scenario in which they drove on many work trips for a company that compensated mileage up to a monthly maximum of 400 miles. They were told that most employees reported 280 to 320 miles per month.

Half of the participants were told they had driven 300 miles in a month while the other half were told they drove 400 miles. When the participants were asked how many miles they would report, the 300-mile group told the truth and reported an average of 301 miles. For the 400-mile group, the participants reported an average of 384 miles, with 12% lying and underreporting their mileage. There were similar findings in another online experiment with 544 participants in the United Kingdom.

Choshen-Hillel said she believes the study findings would apply in the real world, but there could be situations where the amount of money or other high stakes would lead people to tell the truth even if they might appear dishonest.

"While our findings may seem ironic or counterintuitive, I think most people will recognize a time in their lives when they were motivated to tell a lie to appear honest," she said.

Credit: 
American Psychological Association

Biological diversity as a factor of production

image: Biodiversity and ecosystem functions rarely form a steadily rising curve. Rather, scientists under the leadership of TUM found empirical and theoretical evidence for strictly concave or strictly convex relationships between biodiversity and economic value.

Image: 
K. Baumeister / TUM

The main question addressed by the study is: Does greater biodiversity increase the economic value of managed ecosystems? "We have found that the possible relationships between economic value and biodiversity are varied," says Professor Thomas Knoke, Head of the Institute of Forest Management at the TUM School of Life Sciences Weihenstephan.

It all depends on the purpose

Even a layman can guess the main purpose of single-species timber plantations: economic benefit through the sale of wood. But forests serve a number of functions. They are home to a variety of animal and plant species, supply wood as a raw material, fulfil protective roles such as safeguarding the soil and helping combat global warming, and provide recreation as well.

It is common ecological knowledge that the more biodiverse a forest is, the higher its productivity will be. However, the researchers found that "after you have reached a certain mix of trees, adding new species no longer produces significant economic benefits to people." What counts here are the characteristics of the tree species inhabiting the forest, as not every tree has the same value.

"The different functions of an ecosystem never stand to an equal degree in positive relation to biodiversity," explains Carola Paul, University of Göttingen, who until recently was a member of Thomas Knoke's team. If you were to compile all functions of an ecosystem, you would find a mathematical maximum in terms of its value.

The team found that, "maximizing biodiversity at the level of the ecosystem does not maximize economic value in most cases." This particularly holds true if compromises have to be made between different purposes or different economic yields and risks. In such cases, applying a medium level of biological diversity proves most beneficial.

Where biodiversity pays off

The more diverse the plants in an ecosystem are, the better the situation is in terms of risk diversification. This affects the variability of the ecosystem's cash value. The research shows that risk premiums can be lowered just by making a minor change to the level of biodiversity. A risk premium is the reward that a risk-averse person requires to accept a higher risk.
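The diversification argument is the same one used for financial portfolios: as long as two species' revenue risks are not perfectly correlated, a mixed stand is less volatile than either species alone. The numbers below are invented purely to illustrate that mechanism, not taken from the study:

```python
import math

# Portfolio-style sketch of risk diversification with two tree species.
# All figures are illustrative assumptions, not values from the study.
sigma1 = sigma2 = 0.30   # revenue volatility of each species
rho = 0.2                # correlation between the species' risks
w1, w2 = 0.5, 0.5        # area shares in a 50/50 mixed stand

single_species_risk = sigma1
mixed_var = (w1 * sigma1) ** 2 + (w2 * sigma2) ** 2 \
    + 2 * w1 * w2 * rho * sigma1 * sigma2
mixed_risk = math.sqrt(mixed_var)

print(f"single-species volatility: {single_species_risk:.1%}")
print(f"50/50 mixed-stand volatility: {mixed_risk:.1%}")
# Lower volatility means a risk-averse owner demands a smaller risk premium.
```

The effect strengthens as the correlation between the species' risks falls, which is why mixing species with different pest and drought vulnerabilities pays off most.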

The researchers identified high value potential in biodiversity particularly in connection with the avoidance of social costs - costs borne by the public, such as the cost of diseases caused by air pollution. In its mathematical calculations of these social costs, the study argues that more diverse, mixed agriculture and forest management systems pay off. "Biodiverse ecosystems require fewer pesticides and less fertilizer," explains Thomas Knoke.

A medium degree of biodiversity often creates the best value

"Based on theoretical considerations and empirical evidence, we have found that ecosystems with several, but in actuality relatively few, plant species can produce more economic benefits than those with only one species as well as those with a large number of species," the scientist summarizes.

According to the research, biodiversity and ecosystem functionality rarely create a consistent upward curve. Instead, the team found empirical and theoretical evidence of strictly concave or strictly convex relationships between biodiversity and economic value.

These findings in no way indicate that mega biodiverse ecosystems are not worth protecting. Instead they show that economic arguments alone are not sufficient when talking about these biodiversity "hot spots."

What the relationships do highlight are the economic benefits that even a minor increase in biodiversity could have in the agricultural sector. When it comes to forests, the study shows that it is possible to manage a stable forest that serves a variety of functions with four to five species of trees. The relationships identified in the study can therefore be of considerable value in land use planning going forward.

Credit: 
Technical University of Munich (TUM)

Computer servers now able to retrieve data much faster

Computer scientists at the University of Waterloo have found a novel approach that significantly improves the storage efficiency and output speed of computer systems.

Current data storage systems use only one storage server to process information, making them slow to retrieve information to display for the user. A backup server only becomes active if the main storage server fails.

The new approach, called FLAIR, optimizes data storage systems by using all the servers within a given network. When a user makes a data request, if the main server is busy, another server automatically activates to fulfil the request.

"The key enabler for FLAIR is the recent introduction of programmable networks," said Samer Al-Kiswany, a professor in Waterloo's David R. Cheriton School of Computer Science and co-author of the study introducing the FLAIR technique. "Since the invention of computers, networks that connect storage servers in any system were rigid and inflexible. FLAIR leverages a new cutting-edge networking technology to build a smart network layer that can find the fastest way to fulfil information retrieval requests. Our evaluation shows that this approach can fulfil requests up to 2.5 times faster, compared to classical designs."
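The gain from routing reads away from a single main server can be illustrated with a toy model. This is not FLAIR's actual protocol - the real system must also verify which replicas hold consistent data via the programmable network - and all latencies below are made-up numbers:

```python
# Toy illustration of consistency-aware read routing (not FLAIR's
# actual protocol; all latencies are made-up numbers): a read may be
# served by any server the smart network layer knows holds the latest
# value, instead of always going to the main server.
LATENCY_MS = {"main": 10, "server_b": 4, "server_c": 6}

def read_latency(consistent_servers):
    """Route the read to the fastest server known to hold current data."""
    return min(LATENCY_MS[s] for s in consistent_servers)

classic = read_latency({"main"})                         # rigid design
routed = read_latency({"main", "server_b", "server_c"})  # FLAIR-style
print(f"classic: {classic} ms, routed: {routed} ms "
      f"({classic / routed:.1f}x faster)")
```

The hard part, which the toy model skips, is knowing at network speed which servers are up to date - that is what proving the protocol's correctness was about.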

In developing the new protocol, the researchers first had to prove its correctness and formally verify it to ensure the approach will not return bad results. They were able to test FLAIR with real workloads on campus, as Waterloo is one of the few universities that have a cluster with the new programmable network.

Al-Kiswany and his team found that FLAIR increased retrieval speeds by anywhere from 35 to 97 percent.

"This will lead to a whole range of applications as this type of system is the core building block of a wide range of applications," said Ibrahim Kettaneh, the graduate student leading the FLAIR development. "FLAIR can significantly improve the performance of databases and data processing engines, which are the backends for health systems, banking systems and financial transactions. It will also be applicable to any modern computer application hosted on the cloud, such as online documents, social networks and emails."

The study, "FLAIR: Accelerating Reads with Consistency-Aware Network Routing," authored by Al-Kiswany of Waterloo's Faculty of Mathematics and his graduate students Kettaneh, Ahmed Alquraan and Hatem Takruri, will be presented at the USENIX Symposium on Networked Systems Design and Implementation, to be held in Santa Clara, USA, from February 25-27.

Credit: 
University of Waterloo

Cells' springy coils pump bursts of RNA

image: Models by Rice University chemists calculate the chemical and mechanical energies involved in "bursty" RNA production in cells. Their models show how RNA polymerases create supercoils of DNA that allow production of RNA that goes on to produce proteins.

Image: 
Alena Klindziuk/Rice University

HOUSTON - (Jan. 30, 2020) - In your cells, it's almost always spring. Or at least springy.

Bioscientists have known for some time that chromosomes tend to express their protein products in bursts, rather than in a steady manner. A new theoretical study by Rice University scientists, detailed in the Biophysical Journal, aims to better explain the process that combines chemical reactions and mechanical forces.

Rice chemist Anatoly Kolomeisky and applied physics graduate student and lead author Alena Klindziuk built the first simplified analytical model of "bursting" to show how pressure from an RNA polymerase enzyme triggers the rush of RNA production, but only to the degree that it can push a coil of DNA.

As it compresses like a spring, that DNA "supercoil" continues to express RNA - which goes on to make the proteins themselves - until the enzyme can push no more. It isn't until another enzyme, a gyrase, comes along to release the tension that production can start anew.

"With advances in experimental techniques, people are able to measure how much RNA you are producing, and so it was a naive expectation that the speed of production was more or less constant," said Kolomeisky, a chemist by title whose group has long been interested in how biochemical reactions work to power biological mechanisms, and vice versa.

"It was surprising when we found it actually doesn't work this way," he said. "A lot of RNA is produced and then there's a period of silence. RNA is produced in a very bursty behavior, but the molecular details have been lacking."

He said that as RNA polymerase aligns with and moves along the double-helical DNA, it coils the DNA in the process. "It rotates, putting mechanical constraints on DNA," Kolomeisky said. "A spring is an excellent example. The more you push a spring, the harder it becomes to push.

"We think the RNA polymerase coils DNA to start RNA production," he said. "At the beginning of the process, you get a burst, but the process slows down as it squeezes the spring. Then gyrases come in; they untangle this supercoil so that normal production can begin again." At the same time, he said gyrases also relieve negative stress created on the other side of the polymerase.
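The burst-then-silence pattern falls out of even a crude caricature of this picture: let each transcript add supercoiling stress that suppresses further initiation, and let occasional gyrase arrivals reset the stress. The rates and functional form below are invented for illustration and are not the paper's quantitative energetic model:

```python
import math
import random

# Crude caricature of the spring picture (not the paper's energetic
# model): each transcript adds supercoiling stress that suppresses
# further initiation until a gyrase relaxes the DNA.
random.seed(1)

K_MAX = 1.0           # initiation probability per second, relaxed DNA
STRESS_PER_RNA = 0.5  # supercoiling added by each transcript
GYRASE_RATE = 0.02    # per-second chance a gyrase resets the coil

stress = 0.0
trace = []
for second in range(600):
    rate = K_MAX * math.exp(-stress)   # the "spring" gets harder to push
    made = 1 if random.random() < rate else 0
    stress += STRESS_PER_RNA * made
    if random.random() < GYRASE_RATE:
        stress = 0.0                   # gyrase releases the supercoil
    trace.append(made)

# Output clusters into bursts right after each reset, then goes quiet.
print("transcripts over 10 minutes:", sum(trace))
```

Averaging over time, production looks steady; second by second, it arrives in bursts just after each gyrase reset, which is the qualitative behavior the experiments revealed.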

"Essentially, we created the first quantitative energetic model that explains this burstiness," Kolomeisky said. "We are able to interpret the experimental data (gathered in experiments on bacteria) and found this supercoil exists."

He said the calculations show the DNA "spring" is relatively weak. "That has biological significance because it means we can more easily regulate the process by regulating gyrases," he said.

Klindziuk noted there are many other players in the process that ultimately need to be accounted for. "We could have added many effects, like transcriptional and other epigenetic factors," she said. "We want to make a model where there are multiple polymerases on DNA. In this model, we only had one, but in reality, there are many polymerases. There might be effects from polymerase traffic, like they're bumping into each other and stopping and resuming their activity."

"This is an example of advanced experimentation that led us to seek a significant theoretical solution," Kolomeisky said. "It usually happens in the opposite direction, but this time experiments were able to visualize the process, and that led us to think about and start to explain it."

Credit: 
Rice University

KU Leuven researchers discover new piece of the puzzle for Parkinson's disease

image: Professor Peter Vangheluwe (KU Leuven)

Image: 
KU Leuven - Rob Stevens

Biomedical scientists at KU Leuven have discovered that a defect in the ATP13A2 gene causes cell death by disrupting the cellular transport of polyamines. When this happens in the part of the brain that controls body movement, it can lead to Parkinson's disease.

With more than six million patients around the world, Parkinson's disease is one of the most common neurodegenerative disorders. Around twenty genetic defects have already been linked to the disease, but for several of these genes, we don't know what function they fulfil. The ATP13A2 gene used to be one of these genes, but researchers at KU Leuven have now discovered its function in the cell, explaining how a defect in the gene can cause Parkinson's disease.

"We found that ATP13A2 transports polyamines and is crucial for their uptake into the cell," explains senior author Peter Vangheluwe from the Laboratory of Cellular Transport Systems at KU Leuven. "Polyamines are essential molecules that support many cell functions and protect cells in stress conditions. But how polyamines are taken up and transported in human cells was still a mystery. Our study reveals that ATP13A2 plays a vital role in that process."

"Our experiments showed that polyamines enter the cell via lysosomes and that ATP13A2 transfers polyamines from the lysosome to the cell interior. This transport process is essential for lysosomes to function properly as the 'waste disposal system' of the cell where obsolete cell material is broken down and recycled."

"However, mutations in the ATP13A2 gene disrupt this transport process, so that polyamines build up in lysosomes. As a result, the lysosomes swell and eventually burst, causing the cells to die. When this happens in the part of the brain that controls body movement, this process may trigger the motion problems and tremors related to Parkinson's disease."

Unravelling the role of ATP13A2 is an important step forward in Parkinson's research and sheds new light on what causes the disease, but a lot of work remains to be done. Professor Peter Vangheluwe: "We now have to investigate how deficient polyamine transport is linked to other defects in Parkinson's disease such as the accumulation of plaques in the brain and malfunctioning of the mitochondria, the 'energy factories' of the cell. We need to examine how these mechanisms influence each other."

"The discovery of the polyamine transport system in animals has implications beyond Parkinson's disease as well, because polyamine transporters also play a role in other age-related conditions, including cancer, cardiovascular diseases, and several neurological disorders."

"Now that we have unravelled the role of ATP13A2, we can start searching for molecules that influence its function. Our lab is already collaborating with the Centre for Drug Design and Discovery - a tech transfer platform established by KU Leuven and the European Investment Fund - and receives support from the Michael J. Fox Foundation."

Credit: 
KU Leuven

Those who believe that the economic system is fair are less troubled by poverty, homelessness, and extreme wealth

We react less negatively to extreme manifestations of economic disparity, such as homelessness, if we think the economic system is fair and legitimate, and these differences in reactivity are even detectable at the physiological level, finds a team of psychology researchers. The research, which appears in the journal Nature Communications, offers new insights into why we have varying reactions to inequality.

"Research has shown that people generally have an aversion to unequal distributions of resources, an example of which may be a person we see sleeping on a grate or lacking access to basic necessities, healthcare, and education," explains Shahrzad Goudarzi, the paper's lead author and a doctoral candidate in New York University's Department of Psychology. "Yet many people either pay little attention to or are otherwise unbothered by rising economic disparities--responses that some may have difficulty understanding. This research begins to explain such differences: beliefs that legitimize and justify the economic system diminish our deep-seated aversion to inequality, buffering us against negative emotions in response to it."

Previous research has shown that humans, and some other primates, have developed an evolutionary aversion towards inequality in distribution of goods and resources. For instance, children as young as six years old have been found to refuse items if it meant having more than their peers. Nonetheless, public opinion data suggest that a large percentage of Americans are not bothered by economic inequality. For example, a 2018 Gallup Poll showed that one-third of Americans are satisfied with the existing distribution of income and wealth. Such acceptance, despite general preferences for greater equality, raises the question of how people manage such contradictions.

To address this, the scientists in the Nature Communications study conducted a series of six experiments. Two of these (Studies 1 and 2) were done using participants from Amazon's "Mechanical Turk" and Prolific Academic, tools in which individuals are compensated for completing small tasks and which are frequently used in running behavioral science studies. Four others (Studies 3-6) involved college undergraduates.

In Studies 1 and 2, participants were asked their views of the American economic system by registering their agreement with statements such as the following: "Economic positions are legitimate reflections of people's achievements" and "If people work hard, they almost always get what they want." A week later, some viewed a video in which a homeless interviewee described their circumstances, recounting their routines and struggles. Separate control groups viewed mundane videos, depicting interviews about fishing and producing coffee.

Those who believed the American economic system was fair, legitimate, and justified ("system justifiers"), compared with those who did not, reported feeling less negative emotion after watching videos depicting homelessness.

Studies 3-5 replicated these steps, then added a new component: participants' physiological responses were measured by gauging their skin conductance levels and subtle facial muscle movements. This method affords a deeper accounting of our responses because it captures involuntary reactions to stimuli--negative arousal and emotional distress. Here, economic system justifiers showed comparatively low levels of negative affect and arousal while viewing people experiencing homelessness. By contrast, economic system justification was not associated with emotional reactions to the control videos.

Study 6 went a step further--it was aimed at capturing emotions in the context of people's daily lives. In this study, undergraduates received four text messages a day for nine consecutive days, prompting them to complete a short survey using their smartphones. Two of the daily surveys were designed to measure reactions to inequality, with one survey asking participants to indicate whether they had encountered someone they considered very poor and another whether they had encountered someone very rich compared with themselves; the order of these surveys was randomized across days. Regardless of whether participants reported such an encounter, they were asked about their emotions--either in light of the encounter (if one was reported) or over the preceding two hours (if no encounter was reported).

Consistent with the previous studies, those identified as "system justifiers" reported less negative emotion after their everyday exposure to rich and poor people than did people who were more critical of the existing economic system.

"These results provide the strongest evidence to date that system-justifying beliefs diminish aversion to inequality in economic contexts," observes Eric Knowles, an associate professor of psychology at NYU and one of the paper's co-authors.

Credit: 
New York University

SUTD's novel approach allows 3D printing of finer, more complex microfluidic networks

image: 2D and 3D fluidic networks by modularized stereolithography.

Image: 
SUTD

First introduced in the 1980s, stereolithography (SL) is an additive manufacturing process that prints 3D objects by the selective curing of liquid polymer resin using an ultra-violet (UV) light source in a layer-by-layer fashion. The polymer employed undergoes a photochemical reaction which turns it from liquid to solid when exposed to UV illumination. Today, SL is touted as one of the most accurate forms of 3D printing that is accessible to consumers, with desktop models (e.g., liquid crystal display variants) costing as little as USD 300.

SL is an attractive option for researchers in the field of microfluidics. Not only does it have the ability to fabricate microfluidic devices in a single step from a computer-generated model, but it also allows the fabrication of truly 3D structures that would otherwise have been challenging, if not impossible, with the existing fabrication approaches.

However, when employing SL printers to print microfluidic channels, two representative problems occur. Firstly, inadvertent polymerization of uncured resin can occur in the channel void. During the print, the liquid resin is trapped within the channel void, and illumination from subsequent layers may inadvertently cure it, resulting in a channel clog.

Secondly, even when inadvertent polymerization does not occur, evacuating the trapped resin from the channel void can still be a challenge. This is because the liquid resin is viscous (i.e., with a consistency like honey), making the evacuation of narrow channels or networks with multiple branches difficult. These two challenges limit the attainable channel dimensions and complexity of fluidic networks printed by SL.

To tackle these limitations, researchers from the Singapore University of Technology and Design (SUTD), in collaboration with Assistant Professor Toh Yi-Chin's research group from the National University of Singapore, developed a design approach that can improve the attainable channel dimensions and complexity of networks with existing SL (refer to image).

"The conventional way of printing microfluidic devices with SL printers is to print the entire device as a monolithic entity. However, issues like inadvertent polymerization of the channel void and difficulty in evacuating the channel void arise from printing as a monolithic entity," explained principal investigator Assistant Professor Michinao Hashimoto from Engineering Product Development, SUTD.

Instead, the researchers took a modularization approach - where they spatially deconstructed a microfluidic channel into simpler subunits, printed them separately, and subsequently assembled them to form microfluidic networks. By applying this approach, they were able to print microfluidic networks with greater intricacy (such as hierarchical branching) and smaller channel dimensions.

"By design, each subunit is spatially deconstructed to have simple geometries that would not result in inadvertent polymerization. The simple geometries also facilitated the evacuation of uncured resin," said lead author Terry Ching, a graduate student from SUTD.

The team was able to fabricate a range of fluidic networks that were challenging to print using conventional methods. Their demonstrations include hierarchical branching networks, rectilinear lattice networks, helical networks, etc. They were also able to demonstrate the efficacy of their approach by showing a substantial improvement in channel dimensions (i.e., channel w = 75 μm and h = 90 μm) when compared to the conventional 'monolithic' printing approach.

One obvious use case is the application of this approach to fabricate fluidic networks using hydrogel to mimic native vasculature. To date, the variety of SL printable hydrogels is limited, and they often lack mechanical properties necessary for an accurate print or biocompatibility required for the incorporation of living cells. By simplifying the geometries of each subunit, the team used hydrogel to fabricate intricate fluidic networks, mimicking native vasculature.

"Simplifying the geometries of the subunits also reduces the use of additives that may be harmful to biological cells," added Ching.

All in all, this is a general design approach that can circumvent some of the biggest challenges in SL-printed microfluidics - by applying this approach, existing SL printers can now fabricate microfluidics with finer channel dimensions and more intricate branching. This research paper has been published in Advanced Engineering Materials.

Credit: 
Singapore University of Technology and Design

New insights into how the human brain solves complex decision-making problems

image: (modified from the figures of the original paper doi:10.1038/s41467-019-13632-1). Computations implemented in the inferior prefrontal cortex during meta reinforcement learning. (A) Computational model of human prefrontal meta reinforcement learning (left) and the brain areas where the neural activity patterns are explained by the latent variables of the model. (B) Examples of behavioral profiles. Shown on the left is choice bias for different goal types and on the right is choice optimality for task complexity and uncertainty. (C) Parameter recoverability analysis. Compared are the effects of task uncertainty (left) and task complexity (right) on choice optimality.

Image: 
KAIST

A new study on meta reinforcement learning algorithms helps us understand how the human brain learns to adapt to complexity and uncertainty when learning and making decisions. A research team, led by Professor Sang Wan Lee at KAIST jointly with John O'Doherty at Caltech, succeeded in discovering both a computational and neural mechanism for human meta reinforcement learning, opening up the possibility of porting key elements of human intelligence into artificial intelligence algorithms. This study provides a glimpse into how researchers might ultimately use computational models to reverse engineer human reinforcement learning.

This work was published on Dec 16, 2019 in the journal Nature Communications. The title of the paper is "Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning."

Human reinforcement learning is an inherently complex and dynamic process, involving goal setting, strategy choice, action selection, strategy modification, cognitive resource allocation, etc. This is a very challenging problem for humans to solve owing to the rapidly changing and multifaceted environment in which humans have to operate. To make matters worse, humans often need to make important decisions rapidly, even before getting the opportunity to collect much information, unlike the case when using deep learning methods to model learning and decision-making in artificial intelligence applications.

In order to solve this problem, the research team used a technique called 'reinforcement learning theory-based experiment design' to optimize the three variables of the two-stage Markov decision task - goal, task complexity, and task uncertainty. This experimental design technique allowed the team not only to control confounding factors, but also to create a situation similar to that which occurs in actual human problem solving.

Secondly, the team used a technique called 'model-based neuroimaging analysis.' Based on the acquired behavior and fMRI data, more than 100 different types of meta reinforcement learning algorithms were pitted against each other to find a computational model that can explain both behavioral and neural data. Thirdly, for the sake of a more rigorous verification, the team applied an analytical method called 'parameter recovery analysis,' which involves high-precision behavioral profiling of both human subjects and computational models.

In this way, the team was able to accurately identify a computational model of meta reinforcement learning, ensuring not only that the model's apparent behavior is similar to that of humans, but also that the model solves the problem in the same way as humans do.

The team found that people tended to increase planning-based reinforcement learning (called model-based control) in response to increasing task complexity. However, they resorted to a simpler, more resource-efficient strategy called model-free control when both uncertainty and task complexity were high. This suggests that task uncertainty and task complexity interact during the meta control of reinforcement learning. Computational fMRI analyses revealed that task complexity interacts with neural representations of the reliability of the learning strategies in the inferior prefrontal cortex.
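The trade-off described above can be caricatured in a few lines of code. This is a minimal sketch of reliability-based arbitration in general, not the paper's fitted model: the logistic form, the complexity penalty, and all parameter values are assumptions made for illustration.

```python
import math

def arbitration_weight(mb_reliability, mf_reliability, complexity, beta=4.0):
    """Toy arbitration signal: the probability of relying on model-based (MB)
    over model-free (MF) control rises with the relative reliability of MB
    predictions, but task complexity discounts MB reliability, reflecting
    the higher cognitive cost of planning."""
    effective_mb = mb_reliability / (1.0 + complexity)  # complexity penalizes planning
    diff = effective_mb - mf_reliability
    return 1.0 / (1.0 + math.exp(-beta * diff))         # logistic arbitration

# Low complexity: reliable planning dominates.
print(round(arbitration_weight(0.9, 0.5, complexity=0.0), 2))  # → 0.83
# High complexity: control shifts toward the cheaper model-free strategy.
print(round(arbitration_weight(0.9, 0.5, complexity=2.0), 2))  # → 0.31
```

The qualitative behavior matches the finding: as complexity (and with it, effective uncertainty about planning) grows, the weight on model-based control drops below that of model-free control.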

These findings significantly advance understanding of the computations implemented in the inferior prefrontal cortex during meta reinforcement learning, as well as providing insight into the more general question of how the brain resolves uncertainty and complexity in a dynamically changing environment. Identifying the key computational variables that drive prefrontal meta reinforcement learning can also inform understanding of how this process might be vulnerable to breakdown in certain psychiatric disorders such as depression and OCD. Furthermore, gaining a computational understanding of how this process can sometimes lead to increased model-free control can provide insights into how task performance might break down under conditions of high cognitive load.

Professor Lee said, "This study will be of enormous interest to researchers in both the artificial intelligence and human/computer interaction fields since this holds significant potential for applying core insights gleaned into how human intelligence works with AI algorithms."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

On the menu: Study says dining out is a recipe for unhealthy eating for most Americans

video: Dariush Mozaffarian, senior author and dean of the Friedman School of Nutrition Science and Policy at Tufts University, discusses the goals of a study, published on January 29, 2020 in The Journal of Nutrition. The study found that most of what Americans ate while dining out over a 14-year period was of poor nutritional quality.

Image: 
Tufts University

BOSTON (Jan. 29, 2020, 9:00 a.m. EST)--The typical American adult gets one of every five calories from a restaurant, but eating out is a recipe for meals of poor nutritional quality in most cases, according to a new study by researchers at the Friedman School of Nutrition Science and Policy at Tufts University.

Published today in The Journal of Nutrition, the study analyzed the dietary selections of more than 35,000 U.S. adults from 2003-2016 in the National Health and Nutrition Examination Survey (NHANES) who dined at full-service (those with wait staff) or fast-food restaurants, which included pizza shops and what has become known as fast-casual. The researchers assessed nutritional quality by evaluating specific foods and nutrients in the meals, based on the American Heart Association 2020 diet score.

At fast-food restaurants, 70 percent of the meals Americans consumed were of poor dietary quality in 2015-16, down from 75 percent in 2003-04. At full-service restaurants, about 50 percent were of poor nutritional quality, an amount that remained stable over the study period. The remainder were of intermediate nutritional quality.

Notably, the authors found that less than 0.1 percent - almost none - of all the restaurant meals consumed over the study period were of ideal quality.

"Our findings show dining out is a recipe for unhealthy eating most of the time," said Dariush Mozaffarian, senior author and dean of the Friedman School. "It should be a priority to improve the nutritional quality of both full-service and fast-food restaurant meals, while reducing disparities so that all Americans can enjoy the pleasure and convenience of a meal out that is also good for them."

The disparities documented by the study authors show some groups ate more healthfully than others while dining out. For example, the average quality of fast-food meals consumed by non-Hispanic whites and Mexican-Americans improved, but there was no change in the average quality of fast-food meals consumed by non-Hispanic blacks. Also, the proportion of poor-quality fast-food meals decreased from 74 percent to 60 percent over this period for people with college degrees, but remained high at 76 percent for people without a high school diploma.

The researchers also looked at the extent to which Americans relied on restaurants during the study period and found:

Restaurant meals accounted for 21 percent of Americans' total calorie intake.

Full-service restaurant meals represented 9 percent of total calories consumed.

Fast-food meals represented 12 percent of total calories consumed.

Fast-food breakfasts increased from just over 4 percent to nearly 8 percent of all breakfasts eaten in America.

The researchers assessed specific foods and nutrients in restaurant meals and identified priorities for improvement. "We found the largest opportunities for enhancing nutritional quality would be adding more whole grains, nuts and legumes, fish, and fruits and vegetables to meals while reducing salt," said first author Junxiu Liu, a postdoctoral scholar at the Friedman School. She noted the study findings showed no improvement in sodium levels in fast-food meals and worsening levels in full-service dishes consumed.

"Our food is the number one cause of poor health in the country, representing a tremendous opportunity to reduce diet-related illness and associated healthcare spending," Mozaffarian said. "At restaurants, two forces are at play: what's available on the menu, and what Americans are actually selecting. Efforts from the restaurant industry, consumers, advocacy groups, and governments should focus on both these areas."

NHANES participants are representative of the national population and completed at least one valid 24-hour dietary recall questionnaire from nine consecutive cycles of NHANES between 2003-2016, including types of foods and beverages consumed and the source.

The study authors used the American Heart Association (AHA) diet score to assess meal quality, which is based on the AHA 2020 Strategic Impact Goals and is a validated risk factor for cardiovascular and metabolic outcomes. The AHA diet score includes both a primary and secondary score. The primary score assesses the consumption of fruits and vegetables, fish/shellfish, whole grains, sodium, and sugar-sweetened beverages, and the secondary score assesses intake of nuts/seeds/legumes, processed meat, and saturated fat.

Researchers also evaluated individual food groups and nutrients based on the USDA Food Patterns Equivalents Database (FPED) and MyPyramid Equivalents Database (MPED) associated with chronic illnesses.

Limitations of the study include the fact that self-reported food recall data is subject to measurement error due to daily variations in food intake. Participants may also overreport or underreport healthy or unhealthy foods due to social desirability perceptions.

Credit: 
Tufts University, Health Sciences Campus

Color-changing bandages sense and treat bacterial infections

image: A bandage changed color from green to yellow, and from green to red, in the presence of increasing concentrations of drug-sensitive (DS) and drug-resistant (DR) E.coli, respectively.

Image: 
Adapted from <i>ACS Central Science</i> <b>2020</b>, DOI: 10.1021/acscentsci.9b01104

According to the World Health Organization, antibiotic resistance is one of the biggest threats to global health. Sensing and treating bacterial infections earlier could help improve patients' recovery, as well as curb the spread of antibiotic-resistant microbes. Now, researchers reporting in ACS Central Science have developed color-changing bandages that can sense drug-resistant and drug-sensitive bacteria in wounds and treat them accordingly.

Xiaogang Qu and colleagues developed a material that changes color from green to yellow when it contacts the acidic microenvironment of a bacterial infection. In response, the material, which is incorporated into a bandage, releases an antibiotic that kills drug-sensitive bacteria. If drug-resistant bacteria are present, the bandage turns red through the action of an enzyme produced by the resistant microbes. When this happens, the researchers can shine light on the bandage, causing the material to release reactive oxygen species that kill or weaken the bacteria, making them more susceptible to the antibiotic. The team showed that the bandage could speed the healing of wounds in mice that were infected with drug-sensitive or drug-resistant bacteria.

Credit: 
American Chemical Society