Culture

'Spring forward' to daylight saving time brings surge in fatal car crashes

Fatal car accidents in the United States spike by 6% during the workweek following the "spring forward" to daylight saving time, resulting in about 28 additional deaths each year, according to new University of Colorado Boulder research.

The study, published January 30 in the journal Current Biology, also found that the farther west people live in their time zone, the higher their risk of a deadly crash that week.

"Our study provides additional, rigorous evidence that the switch to daylight saving time in spring leads to negative health and safety impacts," said senior author Celine Vetter, assistant professor of integrative physiology. "These effects on fatal traffic accidents are real, and these deaths can be prevented."

The findings come at a time when numerous states, including Oregon, Washington, California and Florida, are considering doing away with the switch entirely, and mounting research is showing spikes in heart attacks, strokes, workplace injuries and other problems in the days following the time change.

For the study - the largest and most detailed to date to assess the relationship between the time change and fatal motor vehicle accidents - the researchers analyzed 732,835 accidents recorded through the U.S. Fatality Analysis Reporting System from 1996 to 2017. They excluded Arizona and Indiana, where daylight saving time was not consistently observed.

After controlling for factors like year, season and day of the week, they found a consistent rise in fatal accidents in the week following the spring time change. Notably, that spike moved in 2007, when the Energy Policy Act extended daylight saving time to begin on the second Sunday of March instead of the first Sunday in April.

"Prior to 2007, we saw the risk increase in April, and when daylight saving time moved to March, so did the risk increase," said Vetter. "That gave us even more confidence that the risk increase we observe is indeed attributable to the daylight saving time switch, and not something else."

With the arrival of daylight saving time on March 9, clocks shift forward by one hour, and many people will miss out on sleep and drive to work in darkness - both factors that can contribute to crashes.

Those on the western edge of their time zone, in places like Amarillo, Texas, and St. George, Utah, already get less sleep on average than their counterparts in the east - about 19 minutes less per day, research shows - because the sun rises and sets later but they still have to be at work when everyone else does.

"They already tend to be more misaligned and sleep-deprived, and when you transition to daylight saving time it makes things worse," said first author Josef Fritz, a postdoctoral researcher in the Department of Integrative Physiology. In such western regions, the spike in fatal accidents was more than 8%, the study found.

The increase kicks in right away, on the Sunday when the clocks spring forward, and the bulk of the additional fatal accidents that week occur in the morning.

Changes in accident patterns also occur after the "fall back" time change, the study showed, with a decline in morning accidents and a spike in the evening, when darkness comes sooner.

Because they balance each other out, there is no overall change in accidents during the "fall back" week.

In all, over the course of the 22 years of data analyzed, the spring shift to daylight saving time was associated with about 627 fatal-crash deaths, the study estimated.

Because the data include only the most severe crashes - those resulting in a death - the authors believe the results underestimate the true risk increase to drivers when time springs forward.

"Our results support the theory that abolishing time changes completely would improve public health," said Vetter. "But where do we head from here? Do we go to permanent standard time or permanent daylight saving time?"

Generally speaking, research has shown, it's better for sleep, the body clock, and overall health to have more morning light and less evening light, as is the case under standard time. Under permanent daylight saving time, mornings would stay dark later in winter all over the country, with the western parts of each time zone seeing the sun the latest, Vetter noted.

"As a circadian biologist, my clear preference is toward standard time."

Credit: 
University of Colorado at Boulder

Letting your child pick their snack may help you eat better, study suggests

Giving in to your kid's desire for an unhealthy snack may improve your own eating choices, a new University of Alberta study shows.

The research, published in Appetite, showed that parents and other adult caregivers such as babysitters tended to make better food choices for themselves if they accommodated the youngster's request for a particular snack--whether that snack was healthy or not.

It was a "striking finding" that shows the psychological impacts of decision-making, said lead researcher Utku Akkoc, a lecturer in the Alberta School of Business and a consumer behaviour expert who did the study for his PhD.

Through a series of experiments and a field study, Akkoc, along with co-author and U of A business professor Robert Fisher, measured how powerful caregivers felt and what foods they consumed after making decisions in various scenarios, such as when they packed a treat the child had asked for in a school lunch.

Caregivers who listened to their children's preferences ate fewer unhealthy foods themselves. In one experiment, participants who granted a child's snack request ate on average 2.7 fewer unhealthy snacks and 1.9 more healthy snacks than those who imposed their own preferences on the child.

The reason likely lies in how the caregivers feel about their decision, Akkoc said.

"Our theory is that moms who accommodate the child's preferences against their better judgment would end up feeling less powerful, compared to moms who successfully impose their own food choices on their children. This happens because accommodation involves a passive and less stressful willingness to yield to the child. When people feel less powerful, they make more inhibited, healthier choices like a dieter would."

By contrast, adults imposing their own choices involves "an active exercise of persuasion in trying to get the child to eat that healthy fruit salad, not a piece of chocolate cake. You feel powerful after that, because you succeeded, and you feel licensed to reward yourself with treats," Akkoc said, noting that the same was also true for caregivers who successfully imposed unhealthy food choices on their child.

The research also showed the caregivers were influenced in their personal choices if they were eating together with their child, consuming the same healthy or unhealthy food.

"We believe it's because people would feel hypocritical if they ate cake in front of a child that's made to eat fruit," Akkoc said.

The findings offer an "effective, simple recipe" in tackling the problems of poor eating and obesity, Akkoc believes.

"It shows some ways parents and other adults can increase their own healthy eating by dining together with their children after making healthy choices for them," he said.

Credit: 
University of Alberta

Improvements in care could save the lives of more acute bowel obstruction patients

A study by the National Confidential Enquiry into Patient Outcome and Death has discovered "significant opportunities" to improve patient care for those with acute bowel obstruction

Around 6.4 per cent of patients admitted each year die within 90 days

The report identified recurring delays at every stage of the treatment process

It also found room for improvement in post-discharge care

A new report has shared recommendations to improve the chance of survival for patients with acute bowel obstruction.

Delay in Transit, published by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD), reviewed 686 cases of patients aged 16 and over in an attempt to improve the high mortality rates for the condition, which currently stand at around 10 per cent in cases where surgery is needed.

Matt Lee, NIHR Clinical Lecturer in General Surgery at the University of Sheffield Medical School, and specialist in the area, led a National Audit of Small Bowel Obstruction in 2017 and was asked to use experience from this to support the new study.

He said: "There are over 22,000 admissions for bowel obstruction in England and Wales each year, of whom 6.4 per cent will die within 90 days.

"At the moment, there is considerable variation in both patient care and outcomes. This includes delays in patients experiencing a bowel obstruction being diagnosed and receiving treatment and aftercare."

The study identified the key causes of delays to treatment, finding that in almost 21 per cent of cases, there was a delay in providing a CT scan of the patient. In these cases, 61 per cent of patients were then subsequently delayed in being diagnosed.

This compared to just six per cent if there was no delay in diagnostic imaging. Following diagnosis, around 20 per cent of patients saw a delay to their surgery, in every case because either an operating theatre or an anaesthetist was not available.

There was also found to be room for improvement in clinical care, with only 55 per cent of patients being subject to an "adequate" risk assessment, and only 38 per cent having a nutrition assessment when they were discharged.

Delay in Transit makes a series of 11 new recommendations for caring for those with acute bowel obstruction, including:

Undertake a CT scan with intravenous contrast promptly to ensure timely diagnosis.

Undertake a consultant review of all patients with acute bowel obstruction as soon as clinically indicated, and within 14 hours of admission at the latest.

Measure and document the hydration status of those presenting with symptoms of acute bowel obstruction to minimise the risk of acute kidney injury.

Ensure local policies are in place for the escalation of patients requiring surgery to enable rapid access to an operating theatre.

Minimise delays to diagnosis and treatment by auditing the time taken between each step in a patient's treatment.

Mr Lee said: "These are areas where we have the ability to make changes easily, which will have a direct benefit for patient outcomes.

"Patients with a bowel obstruction cannot eat properly and our previous report highlighted areas such as the lack of clinical guidelines for monitoring the nutritional intake for this group of patients. This report builds on our research and recommends the need for clinical processes to be introduced such as frequent and repeated assessments of nutritional status, and supporting nutrition where needed.

"Acute bowel obstruction is a common condition, the most common cause of which is blockage of the bowel by scar tissue from previous surgery and close to one in five patients have bowel obstruction because of this.

"As a general surgeon, I'm passionate about improving the care of emergency conditions and this condition has outcomes that could be improved. Emergency care is under great pressure at the moment, so working to improve what we can is vitally important."

Further recommendations from the report include monitoring pain levels throughout a patient's admission, undertaking a review by a nutrition team on diagnosis, and ensuring that special measures are taken for patients with a high level of frailty.

Credit: 
University of Sheffield

McGill researchers lay foundation for next generation aortic grafts

A new study by researchers at McGill University has measured the dynamic physical properties of the human aorta, laying the foundation for the development of grafts capable of mimicking the native behaviour of the human body's largest artery.

Marco Amabili, a Canada Research Chair professor in McGill's Department of Mechanical Engineering, and his team used their experimental design to establish how Dacron grafts, used as vascular prostheses to replace faulty aortas, measure up to real ones. The polyester grafts, they found, are extremely rigid and don't expand when the heart pushes blood through them.

"Because the grafts don't expand at all, they induce several cardiovascular problems for patients," Amabili said. "It's the equivalent of implanting a sick aorta instead of a healthy one."

The researchers used lasers to measure the dynamic displacement of human aortas - obtained from hearts harvested for transplants - attached to a model circulatory loop designed to mimic the pulsing flow of blood generated by heartbeats.

The results, recently published in the journal Physical Review X, showed that the expansion capacity of an aorta varies greatly with age - aortas of younger donors could expand by about 10% of their circumference, while those of older donors could expand by only up to 2%. The expansion slightly lags the pulsating pressure, which makes the blood flow more uniform; this delay diminishes with age.

"The dynamic behaviour of the human aorta was poorly understood. What we did know was obtained using invasive catheters to gather ultrasound measurements of the aorta's motion in humans while having their blood pressure measured so the data was limited to resting states," said Amabili, who is also the study's senior author. "Our experiments were able to simulate the effects of blood pressure and flow on the aorta so as to understand how it reacts in both a resting state or during heavy exercise."

The study will provide crucial information about the materials needed to design a new generation of aortic prostheses with similar biomechanical properties to that of human aortas.

"This research could greatly improve patients' quality of life, especially for those who have grafts implanted at a young age because they will undergo subsequent surgery throughout their lives to replace the grafts once they start to fail," explained Isabella Bozzo, a former master's student in Amabili's lab and co-author on the paper. "These surgeries are extremely invasive and the recovery is painful, so we want to develop grafts that will give them the best chance of success, by minimizing future surgery and reproducing the hemodynamics of healthy aortas."

Expanding knowledge on the dynamics of the human aorta should also provide invaluable clues in understanding the development and progression of numerous vascular pathologies such as atherosclerotic plaque, aortic aneurysms and dissections.

Credit: 
McGill University

Hemp 'goes hot' due to genetics, not growing conditions

image: Horticulture professor Larry Smart examines industrial hemp plants growing in a greenhouse at Cornell AgriTech in Geneva, New York.

Image: 
Justin James Muir/Cornell University

ITHACA, N.Y. - As the hemp industry grows, producers face the risk of cultivating a crop that can become unusable - and illegal - if it develops too much of the psychoactive chemical THC. Cornell University researchers have determined that a hemp plant's propensity to 'go hot' - become too high in THC - is determined by genetics, not as a stress response to growing conditions, contrary to popular belief.

"[People thought] there was something about how the farmer grew the plant, something about the soil, the weather got too hot, his field was droughted, something went wrong with the growing conditions," said Larry Smart, horticulture professor and senior author of the study. "But our evidence from this paper is that fields go hot because of genetics, not because of environmental conditions."

Smart and his team conducted field trials at two sites, studying the genetics and chemistry of 217 hemp plants. They found that differences in growing conditions between the sites had no significant influence on which chemicals the plants produced. But when they compared the CBD (cannabidiol) and THC levels of each of the plants against their genomes, they found very high correlation between their genetics and the chemicals they produced.

Jacob Toth, first author of the paper and a doctoral student in Smart's lab, developed a molecular diagnostic to demonstrate that the hemp plants in the study fell into one of three genetic categories: plants with two THC-producing genes; plants with two CBD-producing genes; or plants with one gene each for CBD and THC.

To minimize the risk of plants going hot, hemp growers ideally want plants with two CBD-producing genes.
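
To make that screening concrete, the sketch below shows the kind of genotype-to-chemotype mapping such a diagnostic exploits. It is a minimal illustration assuming a single synthase locus with 'CBD' and 'THC' alleles; the function name and labels are invented, not the study's actual markers.

```python
# Minimal sketch of genotype-based chemotype screening (illustrative only;
# allele labels and risk wording are assumptions, not the study's markers).

def classify_chemotype(allele_1: str, allele_2: str) -> str:
    """Map two synthase alleles ('CBD' or 'THC') to a chemotype category."""
    alleles = sorted([allele_1, allele_2])
    if alleles == ["CBD", "CBD"]:
        return "CBD-dominant: lowest risk of going hot"
    if alleles == ["THC", "THC"]:
        return "THC-dominant: will exceed legal THC limits"
    return "intermediate: one allele of each, elevated risk"

for genotype in [("CBD", "CBD"), ("CBD", "THC"), ("THC", "THC")]:
    print(genotype, "->", classify_chemotype(*genotype))
```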

While conducting the research, the team also discovered that as many as two-thirds of the seeds they obtained of one hemp variety - all of which were supposed to be low-THC hemp - grew into plants producing THC above legal limits.

The researchers hope their work will help address this problem by providing breeders with easy-to-use genetic markers that can be applied much earlier, to seedlings and to plants of both sexes.

The study was published in Global Change Biology-Bioenergy.

Credit: 
Cornell University

Immune response in brain, spinal cord could offer clues to treating neurological diseases

image: University of Alberta neuroscientist Jason Plemel was part of a team of Canadian researchers who discovered that immune cells in the brain and spinal cord behave differently from blood immune cells in their response to nerve damage.

Image: 
Ryan O'Byrne

An unexpected research finding is providing new information that could lead to new treatments of certain neurological diseases and disorders, including multiple sclerosis, Alzheimer's disease and spinal cord injury.

University of Alberta medical researcher Jason Plemel and key collaborators Joanne Stratton from McGill University, and Wee Yong and Jeff Biernaskie from the University of Calgary, found that immune cells in our brain and central nervous system, called microglia, interfere with blood immune cells called macrophages.

This discovery suggests that the immune cells in our brain and central nervous system are preventing the movement of the blood immune cells.

"We expected the macrophages would be present in the area of injury, but what surprised us was that microglia actually encapsulated those macrophages and surrounded them--almost like police at a riot. It seemed like the microglia were preventing them from dispersing into areas they shouldn't be," said Plemel.

"We're not sure why this happens. More research is required to answer that question," he added.

The central nervous system contains both white and grey matter. White matter is composed of nerve fibres covered by myelin, which speeds up the signals between the cells and allows the brain to quickly send and receive messages. In various neurological diseases and disorders, the myelin becomes damaged, exposing the nerves to deterioration.

"We found that both the immune cells that protect the central nervous system, microglia, and the immune cells of the peripheral immune system, macrophages, are present early after demyelination, and microglia continue to accumulate at the expense of macrophages.

"When we removed the microglia to understand what their role was, the macrophages entered into uninjured tissue," explained Plemel, who is also a member of the Neuroscience and Mental Health Institute.

"This suggests that when there is injury, the microglia interfere with the macrophages in our central nervous system and act as a barrier preventing their movement."

An opposite effect happens when a nerve is injured elsewhere in the body. For example, when a nerve is injured in your leg, the macrophages accumulate but the other resident immune cells do not, making the microglia's response in the central nervous system unique.

While there are several differences in the operation and origin of microglia and macrophages, it has historically been impossible to tell the two types of cells apart - a distinction the technique used in this study made possible.

It is this ability to differentiate between the two that may lead to an increased understanding of how each specific type of immune cell responds to demyelination, and as a result, lead to the development of new techniques and treatments that can combat and repair the damage being caused.

Using the same technique, Plemel and his collaborators also discovered there was more than one type of microglia responding to demyelination.

"The indication of at least two different populations of microglia is an exciting confirmation for us," said Plemel. "We are continuing to study these populations and hopefully, in time, we can learn what makes them unique in terms of function. The more we know, the closer we get to understanding what is going on (or wrong) when there is neurodegeneration or injury, and being able to hypothesize treatment and prevention strategies."

Credit: 
University of Alberta Faculty of Medicine & Dentistry

Research zeroing in on electronic nose for monitoring air quality, diagnosing disease

image: Depiction of a gas sensor array composed of microscale balances coated with thin films of nanoporous materials called metal-organic frameworks.

Image: 
Arni Sturluson, Melanie Huynh, OSU College of Engineering

CORVALLIS, Ore. - Research at Oregon State University has pushed science closer to developing an electronic nose for monitoring air quality, detecting safety threats and diagnosing diseases by measuring gases in a patient's breath.

Recently published research led by Cory Simon, assistant professor of chemical engineering in the OSU College of Engineering, in collaboration with chemical engineering professor Chih-Hung Chang, focused on materials known as metal-organic frameworks, or MOFs.

The research took aim at a critical yet understudied hurdle in using MOFs as gas sensors: Out of the billions of possible MOFs, how do you determine the right ones for building the optimal electronic nose?

MOFs have nanosized pores and selectively adsorb gases, similar to a sponge. They are ideal for use in sensor arrays because of their tunability, enabling engineers to use a diverse set of materials that allows an array of MOF-based sensors to deliver detailed information.

Depending on which components make up a gas, different amounts of the gas will adsorb in each MOF. That means the composition of a gas can be inferred by measuring the adsorbed gas in the array of MOFs using micro-scale balances.

The challenge is that all MOFs adsorb all gases - not to the same extent, but nevertheless the absence of perfect selectivity prevents an engineer from simply saying, "let's just dedicate this MOF to carbon dioxide, that one to sulfur dioxide, and another one to nitrogen dioxide."

"Curating MOFs for gas sensor arrays is not that simple because each MOF in the array will appreciably adsorb all three of those gases," Simon said.

Human noses navigate this same problem by relying on about 400 different types of olfactory receptors. Much like the MOFs, each olfactory receptor is activated by many different odors, and each odor activates many different receptors; the brain parses the response pattern, allowing people to distinguish a multitude of different odors.

"In our research, we created a mathematical framework that allows us, based on the adsorption properties of MOFs, to decide which combination of MOFs is optimal for a gas sensor array," Simon said. "There will inevitably be some small errors in the measurements of the mass of adsorbed gas, and those errors will corrupt the prediction of the gas composition based on the sensor array response. Our model assesses how well a given combination of MOFs will prevent those small errors from corrupting the estimate of the gas composition."

Though the research was primarily mathematical modeling, the scientists used experimental adsorption data in real MOFs as input, Simon said, adding that Chang is an experimentalist "who we are working with to make a real-life electronic nose to detect air pollutants."

"We are currently seeking external funding together to bring this novel concept into physical realization," Simon said. "Because of this paper, we now have a rational method to computationally design the sensory array, which encompasses simulating gas adsorption in the MOFs with molecular models and simulations to predict their adsorption properties, then using our mathematical method to screen the various combinations of MOFs for the most accurate sensor array."

This means that instead of an experimental trial-and-error approach to deciding which MOFs to use in a sensor array, engineers can use computational power to curate the best collection of MOFs for an electronic nose.

Another exciting application of such a nose could be diagnosing disease. The volatile organic compounds humans emit, such as through our breath, are filled with biomarkers for multiple diseases, and studies have shown that dogs -- which have twice the number of different olfactory receptors as humans -- can detect diseases with their nose.

Marvelous though they are, however, dogs' noses aren't as practical for widespread diagnostic use as a carefully crafted and manufactured sensor array would be.

Credit: 
Oregon State University

If it takes a hike, riders won't go for bike sharing

ITHACA, N.Y. - Even a relatively short walk to find the nearest bicycle is enough to deter many potential users of bike sharing systems, new Cornell research suggests.

"If a docking station is more than two or three blocks away, they just won't go there," said Karan Girotra, professor of operations, technology and innovation at Cornell Tech and the Cornell SC Johnson College of Business. "And if they encounter a station without bikes, it's very unlikely they will go to the next station."

Girotra co-authored "Bike-Share Systems: Accessibility and Availability," published in November by Management Science, with Elena Belavina, associate professor at the School of Hotel Administration in the SC Johnson College, and Ashish Kabra, assistant professor at the University of Maryland's Robert H. Smith School of Business.

Their findings imply that, outside of a few big stations at major transit hubs, cities and bike-share operators should strive to create denser networks with many smaller stations, Girotra and Belavina said, and keep them stocked.

"It's no surprise that people want stations close to them, but it's much closer than most planners and bike-share systems thought they needed," Belavina said. "Most systems are nowhere close to their optimal density."

Bike sharing systems hold the potential to reduce traffic and pollution in dense, flat cities such as London, New York, Paris and Shanghai, the researchers noted. They encourage and enhance public transit use by providing "last mile" connections to bus and train stations.

But "their promise of urban transformation is far from being fully realized," according to the paper. Many systems were established quickly, sometimes through public-private partnerships, and with less rigorous planning than higher-cost transit systems, Girotra said.

"There was perhaps an opportunity to put a little more thought into how a bike-share system can be introduced in a city," he said.

To that end, the research team built a model to produce the first estimates of how station proximity and bike availability influence bike-share operations.

The structural demand model analyzed data from Paris' Vélib' system - the largest outside China with roughly 17,000 bikes and 950 stations - during four months of 2013, a period that included nearly 4.4 million trips. The data provided snapshots of system usage every two minutes, showing how stations changed throughout each day.

The researchers blended that information with data about population density in different city districts, metro ridership, attendance at top tourist destinations and weather conditions. The team also logged the locations of thousands of points of interest such as transit stations, parks, libraries, hotels, grocery stores, restaurants and cafes.

"Put together," Belavina said, "that gave us some ability to disentangle what guides people's decisions in choosing bike sharing and different bike-share stations."

The model determined that someone roughly 300 meters (nearly 1,000 feet) from a docking station is 60% less likely to use the service than someone very near the station. The odds decrease slightly with every additional meter, such that someone 500 meters away - about one-third of a mile - is "highly unlikely to use the system."

But a 10% increase in bike availability - the likelihood of finding a bicycle at a station - would grow ridership by roughly 12%, the study found, thanks to fewer lost sales at out-of-stock stations and improved expectations of the system.
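
As a rough illustration of how those two findings could sit together in a single demand curve, the toy model below assumes an exponential distance decay calibrated so that usage at 300 meters is 40% of usage at the station, plus a constant availability elasticity of 1.2 (so 10% more bikes yields roughly 12% more trips). Both functional forms are assumptions for illustration, not the study's structural model.

```python
import math

K = -math.log(0.4) / 300.0       # decay rate: usage at 300 m is 40% of usage at 0 m
AVAILABILITY_ELASTICITY = 1.2    # assumed: +10% availability -> ~+12% ridership

def relative_usage(distance_m: float, availability: float) -> float:
    """Expected usage relative to a rider at the station with full availability."""
    return math.exp(-K * distance_m) * availability ** AVAILABILITY_ELASTICITY

for d in (0, 100, 300, 500):
    print(f"{d:>3} m from station: {relative_usage(d, 1.0):.2f}")

boost = relative_usage(300, 1.1) / relative_usage(300, 1.0)
print(f"10% more bikes -> {100 * (boost - 1):.0f}% more expected trips")
```

Under these assumed parameters, a rider 500 meters away retains only about a fifth of the baseline usage, consistent with the paper's "highly unlikely to use the system."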

Among the various points of interest, placing stations near grocery stores provides the most benefit, the model showed.

Generating the study's findings required methodological advances to adapt demand modeling to a bike-share context, the researchers said.

Models have long predicted shifts in usage patterns when considering new locations for transit stations, retail outlets or bank ATMs. But bike-share demand in a major city, with hundreds of stations changing inventory throughout each day, involved studying a more dynamic system with much finer resolution, Girotra said.

The team's huge volume of data might have required completing more than a quadrillion calculations to generate the best estimates, likely taking over a year, according to the paper. Instead, the researchers developed new computational techniques, Belavina said, to condense some data and make the process more manageable.

The resulting model, according to the co-authors, can be applied not only to bike-share systems but other micro-mobility services: scooters, powered bikes, local food delivery and ride-sharing. The researchers plan to look more broadly at micro-mobility in a future study partnering with London's transit agency.

Regarding bike sharing, the study's advice was clear: "Make bikes and stations more available," Girotra said. "People don't like walking to access a bike-share system."

Credit: 
Cornell University

People may lie to appear honest

WASHINGTON - People may lie to appear honest if events that turned out in their favor seem too good to be true, according to new research published by the American Psychological Association.

"Many people care greatly about their reputation and how they will be judged by others, and a concern about appearing honest may outweigh our desire to actually be honest, even in situations where it will cost us money to lie," said lead researcher Shoham Choshen-Hillel, PhD, a senior lecturer at the School of Business Administration and Center for the Study of Rationality at The Hebrew University of Jerusalem. "Our findings suggest that when people obtain extremely favorable outcomes, they anticipate other people's suspicious reactions and prefer lying and appearing honest over telling the truth and appearing as selfish liars."

The study reported similar findings about lying to appear honest across a series of experiments conducted with lawyers and college students in Israel, as well as with online participants in the United States and the United Kingdom. The research was published online in the Journal of Experimental Psychology: General.

In one experiment with 115 lawyers in Israel, the participants were told to imagine a scenario where they told a client that a case would cost between 60 and 90 billable hours. The lawyer would be working in an office where the client wouldn't know how many hours were truly spent on the case. Half of the participants were told they had worked 60 hours on the case while the other half were told they worked 90 hours. Then they were asked how many hours they would bill the client. In the 60-hour group, the lawyers reported an average of 62.5 hours, with 17% of the group lying to inflate their hours. In the 90-hour group, the lawyers reported an average of 88 hours, with 18% of the group lying to report fewer hours than they had actually worked.

When asked for an explanation for the hours they billed, some lawyers in the 90-hour group said they worried that the client would think he had been cheated because the lawyer had lied about the number of billable hours.

In another experiment, 149 undergraduate students at an Israeli university played online dice-rolling and coin-flipping games in private and then reported their scores to a researcher. The participants received approximately 15 cents for each successful coin flip or dice roll they reported. The computer program was manipulated for half of the students so they received perfect scores in the games, while the other group had random outcomes based on chance. In the perfect-score group, 24% underreported their number of wins even though it cost them money, compared with 4% in the random-outcome group.

"Some participants overcame their aversion toward lying and the monetary costs involved just to appear honest to a single person who was conducting the experiment," Choshen-Hillel said.

In another online experiment with 201 adults from the United States, participants were told to imagine a scenario where they drove on many work trips for a company that compensated mileage up to a maximum of 400 miles per month. They were told that most employees reported 280 to 320 miles per month.

Half of the participants were told they had driven 300 miles in a month while the other half were told they drove 400 miles. When the participants were asked how many miles they would report, the 300-mile group told the truth and reported an average of 301 miles. For the 400-mile group, the participants reported an average of 384 miles, with 12% lying and underreporting their mileage. There were similar findings in another online experiment with 544 participants in the United Kingdom.

Choshen-Hillel said she believes the study findings would apply in the real world, but there could be situations where the amount of money or other high stakes would lead people to tell the truth even if they might appear dishonest.

"While our findings may seem ironic or counterintuitive, I think most people will recognize a time in their lives when they were motivated to tell a lie to appear honest," she said.

Credit: 
American Psychological Association

Biological diversity as a factor of production

image: Biodiversity and ecosystem functions rarely form a steadily rising curve. Rather, scientists led by TUM found empirical and theoretical evidence for strictly concave or strictly convex relationships between biodiversity and economic value.

Image: 
K. Baumeister / TUM

The main question addressed by the study is: Does greater biodiversity increase the economic value of managed ecosystems? "We have found that the possible relationships between economic value and biodiversity are varied," says Professor Thomas Knoke, Head of the Institute of Forest Management at the TUM School of Life Sciences Weihenstephan.

It all depends on the purpose

Even a layman can guess the main purpose of single-species timber plantations: economic benefit through the sale of wood. But forests have a number of functions: they serve as home to a variety of animal and plant species, act as a source of wood as a raw material, protect the soil, help combat global warming and serve recreational purposes as well.

It is common ecological knowledge that the more biodiverse a forest is, the higher its productivity will be. However, the researchers found that "after you have reached a certain mix of trees, adding new species no longer produces significant economic benefits to people." What counts here are the characteristics of the tree species inhabiting the forest, as not every tree has the same value.

"The different functions of an ecosystem never stand to an equal degree in positive relation to biodiversity," explains Carola Paul, University of Göttingen, who until recently was a member of Thomas Knoke's team. If you were to compile all functions of an ecosystem, you would find a mathematical maximum in terms of its value.

The team found that "maximizing biodiversity at the level of the ecosystem does not maximize economic value in most cases." This particularly holds true if compromises have to be made between different purposes or between different economic yields and risks. In such cases, a medium level of biological diversity proves most beneficial.

Where biodiversity pays off

The more diverse the plants in an ecosystem are, the better the situation is in terms of risk diversification, which reduces the variability of the ecosystem's cash value. The research shows that risk premiums can be lowered just by making a minor change to the level of biodiversity. The risk premium is the reward that a risk-averse person requires to accept a higher risk.

The researchers identified high value potential in biodiversity particularly in connection with the avoidance of social costs - costs borne by the public, such as those of diseases caused by air pollution. In its mathematical calculations of these social costs, the study argues that more diverse, mixed agriculture and forest management systems pay off. "Biodiverse ecosystems require fewer pesticides and less fertilizer," explains Thomas Knoke.

A medium degree of biodiversity often creates the best value

"Based on theoretical considerations and empirical evidence, we have found that ecosystems with several, but in all actuality relatively few, plant species can produce more economic benefits than those with only one species as well as those with a large number of species," the scientist summarizes.

According to the research, biodiversity and ecosystem functionality rarely create a consistent upward curve. Instead, the team found empirical and theoretical evidence of strictly concave or strictly convex relationships between biodiversity and economic value.

These findings in no way indicate that megadiverse ecosystems are not worth protecting. Instead they show that economic arguments alone are not sufficient when talking about these biodiversity "hot spots."

What the relationships do highlight are the economic benefits that even a minor increase in biodiversity could have in the agricultural sector. When it comes to forests, the study shows that it is possible to manage a stable forest that serves a variety of functions with four to five species of trees. The relationships identified in the study can therefore be of considerable value in land use planning going forward.
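
A toy calculation shows why value can peak at such an intermediate species count: production benefits saturate with each added species and the risk premium shrinks, while management costs grow roughly linearly. Every coefficient below is invented for illustration and is not taken from the study.

```python
import math

def ecosystem_value(n_species: int) -> float:
    production = 10.0 * math.log(1 + n_species)   # saturating production benefit
    risk_premium = 3.0 / math.sqrt(n_species)     # diversification shrinks the premium
    management_cost = 2.0 * n_species             # more species, more management cost
    return production - risk_premium - management_cost

best = max(range(1, 11), key=ecosystem_value)
for n in range(1, 11):
    marker = " <- peak" if n == best else ""
    print(f"{n:2d} species: value {ecosystem_value(n):5.2f}{marker}")
```

With these made-up coefficients the value peaks at four species, echoing the study's four-to-five-species finding; the qualitative shape, not the numbers, is the point.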

Credit: 
Technical University of Munich (TUM)

Computer servers now able to retrieve data much faster

Computer scientists at the University of Waterloo have found a novel approach that significantly improves the storage efficiency and output speed of computer systems.

Current data storage systems use only one storage server to process information, making them slow at retrieving information for the user. A backup server only becomes active if the main storage server fails.

The new approach, called FLAIR, optimizes data storage systems by using all the servers within a given network. When a user makes a data request and the main server is overloaded, another server automatically activates to fulfil the request.

"The key enabler for FLAIR is the recent introduction of programmable networks," said Samer Al-Kiswany, a professor in Waterloo's David R. Cheriton School of Computer Science and co-author of the study introducing the FLAIR technique. "Since the invention of computers, networks that connect storage servers in any system were rigid and inflexible. FLAIR leverages a new cutting-edge networking technology to build a smart network layer that can find the fastest way to fulfil information retrieval requests. Our evaluation shows that this approach can fulfil requests up to 2.5 times faster, compared to classical designs."

In developing the new protocol, the researchers first had to prove its correctness and formally verify it to ensure the approach will not return bad results. They were able to test FLAIR with real workloads on campus, as Waterloo is one of the few universities that have a cluster with the new programmable network.

Al-Kiswany and his team found that FLAIR increased retrieval speeds by anywhere from 35 to 97 percent.

"This will lead to a whole range of applications as this type of system is the core building block of a wide range of applications," said Ibrahim Kettaneh, the graduate student leading the FLAIR development. "FLAIR can significantly improve the performance of databases and data processing engines, which are the backends for health systems, banking systems and financial transactions. It will also be applicable to any modern computer application hosted on the cloud, such as online documents, social networks and emails."

The study, FLAIR: Accelerating Reads with Consistency-Aware Network Routing, authored by Al-Kiswany of Waterloo's Faculty of Mathematics and his graduate students Kettaneh, Ahmed Alquraan and Hatem Takruri, will be presented at the USENIX Symposium on Networked Systems Design and Implementation, to be held in Santa Clara, USA, from February 25-27.

Credit: 
University of Waterloo

Cells' springy coils pump bursts of RNA

image: Models by Rice University chemists calculate the chemical and mechanical energies involved in "bursty" RNA production in cells. Their models show how RNA polymerases create supercoils of DNA that allow production of RNA that goes on to produce proteins.

Image: 
Alena Klindziuk/Rice University

HOUSTON - (Jan. 30, 2020) - In your cells, it's almost always spring. Or at least springy.

Bioscientists have known for some time that chromosomes tend to express their protein products in bursts, rather than in a steady manner. A new theoretical study by Rice University scientists, detailed in the Biophysical Journal, aims to better explain the process that combines chemical reactions and mechanical forces.

Rice chemist Anatoly Kolomeisky and applied physics graduate student and lead author Alena Klindziuk built the first simplified analytical model of "bursting" to show how pressure from an RNA polymerase enzyme triggers the rush of RNA production, but only to the degree that it can push a coil of DNA.

As it compresses like a spring, that DNA "supercoil" continues to express RNA -- which goes on to make the proteins themselves -- until the enzyme can push no more. It isn't until another enzyme, a gyrase, comes along to release the tension that production can start anew.

"With advances in experimental techniques, people are able to measure how much RNA you are producing, and so it was a naive expectation that the speed of production was more or less constant," said Kolomeisky, a chemist by title whose group has long been interested in how biochemical reactions work to power biological mechanisms, and vice versa.

"It was surprising when we found it actually doesn't work this way," he said. "A lot of RNA is produced and then there's a period of silence. RNA is produced in a very bursty behavior, but the molecular details have been lacking."

He said that as RNA polymerase aligns with and moves along the double-helical DNA, it coils the DNA in the process. "It rotates, putting mechanical constraints on DNA," Kolomeisky said. "A spring is an excellent example. The more you push a spring, the harder it becomes to push.

"We think the RNA polymerase coils DNA to start RNA production," he said. "At the beginning of the process, you get a burst, but the process slows down as it squeezes the spring. Then gyrases come in; they untangle this supercoil so that normal production can begin again." At the same time, he said gyrases also relieve negative stress created on the other side of the polymerase.

"Essentially, we created the first quantitative energetic model that explains this burstiness," Kolomeisky said. "We are able to interpret the experimental data (gathered in experiments on bacteria) and found this supercoil exists."

He said the calculations show the DNA "spring" is relatively weak. "That has biological significance because it means we can more easily regulate the process by regulating gyrases," he said.
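
A toy simulation conveys the qualitative picture: each new transcript winds the "spring" tighter and suppresses further initiation until a gyrase arrival resets the tension, yielding bursts separated by silence. All parameters are invented for illustration and are not the paper's fitted values.

```python
import random

STIFFNESS = 0.15     # assumed: how much each transcript stiffens the supercoil
GYRASE_RATE = 0.02   # assumed per-step chance that a gyrase relaxes the DNA
BASE_RATE = 0.9      # assumed initiation probability at zero torsional stress

random.seed(1)
stress, trace = 0.0, []
for _ in range(200):
    if random.random() < GYRASE_RATE:
        stress = 0.0                                  # gyrase releases the tension
    rate = BASE_RATE * max(0.0, 1.0 - STIFFNESS * stress)
    produced = 1 if random.random() < rate else 0     # attempt one transcript
    stress += produced                                # each transcript adds stress
    trace.append(produced)

# '|' marks a transcript, '.' a silent step: production arrives in bursts.
print("".join("|" if p else "." for p in trace))
```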

Klindziuk noted there are many other players in the process that ultimately need to be accounted for. "We could have added many effects, like transcriptional and other epigenetic factors," she said. "We want to make a model where there are multiple polymerases on DNA. In this model, we only had one, but in reality, there are many polymerases. There might be effects from polymerase traffic, like they're bumping into each other and stopping and resuming their activity."

"This is an example of advanced experimentation that led us to seek a significant theoretical solution," Kolomeisky said. "It usually happens in the opposite direction, but this time experiments were able to visualize the process, and that led us to think about and start to explain it."

Credit: 
Rice University

KU Leuven researchers discover new piece of the puzzle for Parkinson's disease

image: Professor Peter Vangheluwe (KU Leuven)

Image: 
KU Leuven - Rob Stevens

Biomedical scientists at KU Leuven have discovered that a defect in the ATP13A2 gene causes cell death by disrupting the cellular transport of polyamines. When this happens in the part of the brain that controls body movement, it can lead to Parkinson's disease.

With more than six million patients around the world, Parkinson's disease is one of the most common neurodegenerative disorders. Around twenty genetic defects have already been linked to the disease, but for several of these genes, we don't know what function they fulfil. The ATP13A2 gene used to be one of these genes, but researchers at KU Leuven have now discovered its function in the cell, explaining how a defect in the gene can cause Parkinson's disease.

"We found that ATP13A2 transports polyamines and is crucial for their uptake into the cell," explains senior author Peter Vangheluwe from the Laboratory of Cellular Transport Systems at KU Leuven. "Polyamines are essential molecules that support many cell functions and protect cells in stress conditions. But how polyamines are taken up and transported in human cells was still a mystery. Our study reveals that ATP13A2 plays a vital role in that process."

"Our experiments showed that polyamines enter the cell via lysosomes and that ATP13A2 transfers polyamines from the lysosome to the cell interior. This transport process is essential for lysosomes to function properly as the 'waste disposal system' of the cell where obsolete cell material is broken down and recycled."

"However, mutations in the ATP13A2 gene disrupt this transport process, so that polyamines build up in lysosomes. As a result, the lysosomes swell and eventually burst, causing the cells to die. When this happens in the part of the brain that controls body movement, this process may trigger the motion problems and tremors related to Parkinson's disease."

Unravelling the role of ATP13A2 is an important step forward in Parkinson's research and sheds new light on what causes the disease, but a lot of work remains to be done. Professor Peter Vangheluwe: "We now have to investigate how deficient polyamine transport is linked to other defects in Parkinson's disease such as the accumulation of plaques in the brain and malfunctioning of the mitochondria, the 'energy factories' of the cell. We need to examine how these mechanisms influence each other."

"The discovery of the polyamine transport system in animals has implications beyond Parkinson's disease as well, because polyamine transporters also play a role in other age-related conditions, including cancer, cardiovascular diseases, and several neurological disorders."

"Now that we have unravelled the role of ATP13A2, we can start searching for molecules that influence its function. Our lab is already collaborating with the Centre for Drug Design and Discovery - a tech transfer platform established by KU Leuven and the European Investment Fund - and receives support from the Michael J. Fox Foundation."

Credit: 
KU Leuven

Those who believe that the economic system is fair are less troubled by poverty, homelessness, and extreme wealth

We react less negatively to extreme manifestations of economic disparity, such as homelessness, if we think the economic system is fair and legitimate, and these differences in reactivity are even detectable at the physiological level, finds a team of psychology researchers. The research, which appears in the journal Nature Communications, offers new insights into why we have varying reactions to inequality.

"Research has shown that people generally have an aversion to unequal distributions of resources, an example of which may be a person we see sleeping on a grate or lacking access to basic necessities, healthcare, and education," explains Shahrzad Goudarzi, the paper's lead author and a doctoral candidate in New York University's Department of Psychology. "Yet many people either pay little attention to or are otherwise unbothered by rising economic disparities--responses that some may have difficulty understanding. This research begins to explain such differences: beliefs that legitimize and justify the economic system diminish our deep-seated aversion to inequality, buffering us against negative emotions in response to it."

Previous research has shown that humans, and some other primates, have developed an evolutionary aversion to inequality in the distribution of goods and resources. For instance, children as young as six years old have been found to refuse items if it meant having more than their peers. Nonetheless, public opinion data suggest that a large percentage of Americans are not bothered by economic inequality. For example, a 2018 Gallup Poll showed that one-third of Americans are satisfied with the existing distribution of income and wealth. Such acceptance, despite general preferences for greater equality, raises the question of how people manage such contradictions.

To address this, the scientists in the Nature Communications study conducted a series of six experiments. Two of these (Studies 1 and 2) were done using participants from Amazon's "Mechanical Turk" and Prolific Academic, tools in which individuals are compensated for completing small tasks and which are frequently used in running behavioral science studies. Four others (Studies 3-6) involved college undergraduates.

In Studies 1 and 2, participants were asked their views of the American economic system by registering their agreement with statements such as the following: "Economic positions are legitimate reflections of people's achievements" and "If people work hard, they almost always get what they want." A week later, some viewed a video in which a homeless interviewee described their circumstances, recounting their routines and struggles. Separate control groups viewed mundane videos, depicting interviews about fishing and producing coffee.

Those who believed the American economic system was fair, legitimate, and justified ("system justifiers"), compared with those who did not, reported feeling less negative emotions after watching videos depicting homelessness.

Studies 3-5 replicated these steps, then added a new component: participants' physiological responses were measured by gauging their skin conductance levels and subtle facial muscle movements. This method affords a deeper accounting of our responses because it captures involuntary reactions to stimuli--negative arousal and emotional distress. Here, economic system justifiers showed comparatively low levels of negative affect and arousal while viewing people experiencing homelessness. By contrast, economic system justification was not associated with emotional reactions to the control videos.

Study 6 went a step further--it was aimed at capturing emotions in the context of people's daily lives. In this study, undergraduates received four text messages a day for nine consecutive days, prompting them to complete a short survey using their smartphones. Two of the daily surveys were designed to measure reactions to inequality, with one survey asking participants to indicate whether they had encountered someone they considered very poor and another whether they had encountered someone very rich compared with themselves; the order of these surveys was randomized across days. Regardless of whether participants reported such an encounter, they were asked about their emotions--either in light of the encounter (if one was reported) or over the preceding two hours (if no encounter was reported).

Consistent with the previous studies, those identified as "system justifiers" reported less negative emotion after their everyday exposure to rich and poor people than did people who were more critical of the existing economic system.

"These results provide the strongest evidence to date that system-justifying beliefs diminish aversion to inequality in economic contexts," observes Eric Knowles, an associate professor of psychology at NYU and one of the paper's co-authors.

Credit: 
New York University

SUTD's novel approach allows 3D printing of finer, more complex microfluidic networks

image: 2D and 3D fluidic networks by modularized stereolithography.

Image: 
SUTD

First introduced in the 1980s, stereolithography (SL) is an additive manufacturing process that prints 3D objects by the selective curing of liquid polymer resin using an ultra-violet (UV) light source in a layer-by-layer fashion. The polymer employed undergoes a photochemical reaction which turns it from liquid to solid when exposed to UV illumination. Today, SL is touted as one of the most accurate forms of 3D printing that is accessible to consumers, with desktop models (e.g., liquid crystal display variants) costing as little as USD 300.

SL is an attractive option for researchers in the field of microfluidics. Not only does it have the ability to fabricate microfluidic devices in a single step from a computer-generated model, but it also allows the fabrication of truly 3D structures that would otherwise have been challenging, if not impossible, with the existing fabrication approaches.

However, when employing SL printers to print microfluidic channels, two characteristic problems occur. The first is inadvertent polymerization of uncured resin in the channel void: during the print, liquid resin is trapped within the channel void, and illumination from subsequent layers may inadvertently cure it, resulting in a channel clog.

The second is that, even when inadvertent polymerization does not occur, evacuating the trapped resin from the channel void can still be a challenge. This is because the liquid resin is viscous (i.e., with a consistency like honey), making the evacuation of narrow channels or networks with multiple branches difficult. These two challenges limit the attainable channel dimensions and complexity of fluidic networks printed by SL.

To tackle these limitations, researchers from the Singapore University of Technology and Design (SUTD), in collaboration with Assistant Professor Toh Yi-Chin's research group from the National University of Singapore, developed a design approach that can improve the attainable channel dimensions and complexity of networks with existing SL (refer to image).

"The conventional way of printing microfluidic devices with SL printers is to print the entire device as a monolithic entity. However, issues like inadvertent polymerization of channel void and difficulty in evacuating channel void arises from printing as a monolithic entity," explained principal investigator Assistant Professor Michinao Hashimoto from Engineering Product Development, SUTD.

Instead, the researchers took a modularization approach - where they spatially deconstructed a microfluidic channel into simpler subunits, printed them separately, and subsequently assembled them to form microfluidic networks. By applying this approach, they were able to print microfluidic networks with greater intricacy (such as hierarchical branching) and smaller channel dimensions.

"By design, each subunit is spatially deconstructed to have simple geometries that would not result in inadvertent polymerization. The simple geometries also facilitated the evacuation of uncured resin," said lead author Terry Ching, a graduate student from SUTD.

The team was able to fabricate a range of fluidic networks that were challenging to print using conventional methods, including hierarchical branching networks, rectilinear lattice networks and helical networks. They were also able to demonstrate the efficacy of their approach by showing a substantial improvement in channel dimensions (i.e., channel w = 75 μm and h = 90 μm) compared to the conventional 'monolithic' printing approach.

One obvious use case is applying this approach to fabricate fluidic networks in hydrogel to mimic native vasculature. To date, the variety of SL-printable hydrogels is limited, and they often lack the mechanical properties necessary for an accurate print or the biocompatibility required for the incorporation of living cells. By simplifying the geometries of each subunit, the team used hydrogel to fabricate intricate fluidic networks, mimicking native vasculature.

"Simplifying the geometries of the subunits also reduces the use of additives that may be harmful to biological cells," added Ching.

All in all, this is a general design approach that can circumvent some of the biggest challenges in SL-printed microfluidics: by applying it, existing SL printers can fabricate microfluidics with finer channel dimensions and more intricate branching. This research paper has been published in Advanced Engineering Materials.

Credit: 
Singapore University of Technology and Design