
Genome sequencing paves the way for more sustainable herring fishery

An international team of Swedish, Norwegian, Danish and Irish scientists has used whole genome sequencing to characterise 53 herring populations from the Atlantic Ocean and the Baltic Sea. They have developed genetic markers that make it possible to better monitor herring populations and avoid overfishing. The study is published in the journal eLife.

"This project provides a 'toolbox' in the form of genetic markers for cost-effective screenings that can be applied to monitor herring stocks throughout their life history from the larval stage to the adult stage," concludes Professor Arild Folkvord of Bergen University, who led the GENSINC project, which this study is part of. "It will now be possible to distinguish different stocks when they are mixed on the feeding grounds, for instance, which will help set fishing quotas that harness sustainable exploitation of genetically defined stocks."

The Atlantic herring is one of the most abundant vertebrates on Earth. It has been estimated that the total breeding stock of herring in the Atlantic Ocean and adjacent waters amounts to about one trillion fish.

Herring constitute this enormous biomass because they feed on plankton. They are in turn an important food resource for other fish, seabirds and sea mammals such as the fin whale. Herring have been an important food resource since humans colonised Northern Europe. Because herring are schooling fish, many tonnes can be caught in a single haul, which makes them susceptible to overfishing; in the past, several herring stocks have collapsed for this reason.

A grand challenge for the future is to avoid overfishing and maintain viable stocks of the fish exploited by marine fisheries. Stocks of herring are defined by where and when they spawn, but until now no efficient genetic markers have been available for distinguishing different stocks.

When Professor Leif Andersson of Uppsala University, who led the genetic analysis, first started to study the Atlantic herring in the late 1970s, only a handful of genetic markers could be used. To the researchers' surprise, all the markers they analysed occurred at the same frequency across all populations of herring. The reason for this lack of genetic differences at most genes is that the population size is huge and there is gene flow between populations, making the frequencies of gene variants stable over time and space.

"In the present study we have sequenced the entire genome and studied millions of genetic variants," explains Dr Fan Han, a former PhD student at Uppsala University and first author on the article. "Now our resolution is completely different, we find very clear genetic differences for a limited number of genes that appear to distinguish all major stocks of Atlantic herring."

The researchers found gene variants for a few hundred genes that are particularly important for the genetic adaptation to factors such as differences in spawning season, salinity and water temperature at spawning. These are the gene variants that are most useful to distinguish different stocks.
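
To illustrate the general idea behind such markers (a toy sketch, not the study's actual pipeline), variants can be ranked by how strongly their allele frequencies differ among populations; the handful with the largest differentiation are the natural candidates for a cost-effective screening panel. The frequencies and the simple F_ST-like statistic below are illustrative assumptions only.

```python
import numpy as np

# Toy allele-frequency matrix: rows = SNPs, columns = herring populations.
# In a real analysis these frequencies would come from whole genome sequencing.
rng = np.random.default_rng(0)
n_snps, n_pops = 1000, 5
freqs = rng.uniform(0.05, 0.95, size=(n_snps, n_pops))

# Crude per-SNP differentiation statistic (an F_ST-like measure): variance of
# allele frequencies among populations relative to the maximum possible
# variance p_bar * (1 - p_bar) at that mean frequency.
p_bar = freqs.mean(axis=1)
differentiation = freqs.var(axis=1) / (p_bar * (1.0 - p_bar))

# The most differentiated SNPs are candidate diagnostic markers for
# assigning fish (or larvae) to their stock of origin.
candidates = np.argsort(differentiation)[::-1][:20]
print("top candidate marker indices:", candidates)
```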

"The results of this study make Atlantic herring particularly well suited for studies on the impact of global warming on fish populations," explains Leif Andersson. "Some of the detected gene variants are strongly associated with water temperature at spawning. The gene variants that occur at a very high frequency in the waters surrounding Ireland and Great Britain, which are the warmest waters where herring reproduce, are expected to become more common further north as the seawater gets warmer."

Credit: 
Uppsala University

Sights set on curbing gun crime

image: Flinders University Strategic Professor in Criminal Justice Andrew Goldsmith

Image: 
Flinders University

A community or sub-culture that encourages young men's exposure to and obsession with guns - as well as ready access to firearms and drugs - can make gun violence 'all too easy', with Flinders University experts promoting a new direction for managing the global problem.

Flinders criminologists conclude that the need to 'dematerialise' the attraction to guns has "never been greater" than "in a post-COVID-19 world in which guns have gained greater salience in many countries".

The Flinders University study, published in the international journal Criminology & Criminal Justice, argues that guns and drugs need to be more difficult to acquire and, importantly, less valued in popular culture in order to make them less attractive to criminals.

Flinders University Strategic Professor in Criminal Justice Andrew Goldsmith says broad social changes are needed to ensure guns are less attractive and less necessary for criminals seeking to instil fear and seem invincible while carrying out illegal drug-related activities.

"Attempts to reduce the harms arising from the incidence, accessibility and use of guns in serious crime could pay more attention to how we might move to 'dematerialise'.

"In essence, the importance and attractiveness of guns in everyday life needs to be reduced.

"It's generally accepted that access to guns increases the levels of violence, particularly in cases of murder, domestic violence, and suicide attempts. Aside from their role in the infliction of violence, guns more often induce a sense of fear and intimidate audiences by their sheer presence as well as being shown to elevate aggressive thinking and hostility."

As well, if there was less illicit drug trafficking, there would be less need for guns among those dealing in drugs, Professor Goldsmith adds.

The Flinders researchers analysed data from in-depth interviews with 75 offenders convicted of serious crimes involving guns to determine how possession of weapons during crimes affected their sense of power and how a long-standing affiliation with weapons supported their drug-trafficking activities.

"Most of our 75 interviewees imprisoned for serious gun-related crimes had been deeply mired in illicit drug trafficking," he adds.

The study outlines the power that weapons give criminals over their victims, which is often coupled with an ongoing social attachment to guns among some offenders, ensuring that guns remain attractive or prized assets that are often considered necessary among marginalised groups.

"An important part of reducing the appeal of crime guns also relates to tackling the sense of marginalisation and economic precarity that can promote early gun attachment for these groups," Professor Goldsmith says.

"In terms of government policy where effective gun supply restrictions already exist, gun buyback schemes may offer some possibility of reducing gun possession through the exchange of cash or other resources for the guns surrendered, whether those guns are legal or illegal in status," the experts from the Flinders Centre for Crime Policy and Research conclude.

Credit: 
Flinders University

Quantifying effects of non-pharmaceutical interventions on SARS-CoV-2 transmission with modeling

Limiting gatherings to fewer than 10 people and closing educational institutions were among the most effective nonpharmaceutical interventions at reducing transmission of SARS-CoV-2, a new modeling study finds. The results, informed by data from 34 European and several non-European countries, may guide policy decisions on nonpharmaceutical interventions (NPIs) to implement in current and future waves of infections.

As the COVID-19 pandemic continues, governments are attempting to control it with NPIs. However, the effectiveness of different NPIs is poorly understood; better understanding which are most capable of reducing transmission could help governments control the epidemic more efficiently, while minimizing the social and economic costs.

Jan Brauner and colleagues gathered chronological data on the implementation of NPIs for 41 European and non-European countries between January and the end of May 2020, each of which implemented different sets of NPIs, at different times, in different orders. Using a model that linked NPI implementation dates to national case and death counts, they estimated the effectiveness of a range of intervention types (excluding testing, tracing, and case isolation, for which they did not have data).

Following a series of sensitivity analyses that varied conditions of their modeling approach, they report that closing both schools and universities (excluding preschools and nurseries) consistently contributed considerably to reducing transmission, as did limiting gatherings to 10 people or fewer. Closing businesses was effective, but closing most nonessential businesses had limited benefit over targeted closures of high-risk businesses. "Issuing a stay-at-home order had a small effect when a country had already closed educational institutions, closed nonessential businesses, and banned gatherings," they say.

The authors note several limitations of their study, including that their results cannot be used without qualification to predict the effect of lifting NPIs. "[C]losing schools and universities in conjunction seems to have greatly reduced transmission, but this does not mean that re-opening them will necessarily cause infections to soar." They conclude, "Our work offers insights into which areas of public life are most in need of virus containment measures so that activities can continue as the pandemic develops; however, our estimates should not be taken as the final word on NPI effectiveness." Readers can interactively explore the effects of sets of NPIs with the study's online mitigation calculator: http://epidemicforecasting.org/calc.
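
As a rough illustration of how a semi-mechanistic model of this kind can link NPI timing to infection counts (a minimal sketch under hypothetical effectiveness values and dates, not the authors' model or their estimates), each active NPI can multiply the reproduction number, and daily infections can then be propagated with a discrete renewal equation:

```python
import numpy as np

days, R0 = 120, 3.0
# Hypothetical NPIs: (start day, assumed fractional reduction of R).
# These numbers are placeholders for illustration only.
npis = {
    "gatherings limited to 10": (30, 0.40),
    "schools and universities closed": (35, 0.38),
    "some businesses closed": (45, 0.20),
    "stay-at-home order": (55, 0.10),
}

# Discretised generation-interval weights (how infectious a case is t days
# after infection), here a simple bell curve normalised to sum to 1.
lags = np.arange(1, 15)
gen_interval = np.exp(-0.5 * ((lags - 5.0) / 2.0) ** 2)
gen_interval /= gen_interval.sum()

infections = np.zeros(days)
infections[0] = 100.0
for t in range(1, days):
    Rt = R0
    for start, effectiveness in npis.values():
        if t >= start:
            Rt *= 1.0 - effectiveness          # multiplicative NPI effects
    # Renewal equation: new infections = Rt * weighted sum of recent infections.
    recent = infections[max(0, t - len(lags)):t][::-1]
    infections[t] = Rt * np.sum(recent * gen_interval[:len(recent)])

print("peak daily infections in this toy scenario:", round(infections.max()))
```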

Credit: 
American Association for the Advancement of Science (AAAS)

Flexible working time as an opportunity to save costs and increase productivity

image: Professor Aaro Hazak

Image: 
TalTech

The Covid-19 pandemic has made flexible working arrangements a new reality, but differences in employees' preferences and the financial implications for companies still require unravelling.

At a time when companies are searching for competitive advantages in the global economy, cost reductions and productivity gains are becoming increasingly important - well-designed working time arrangements and efficient use of human resources can contribute significantly to both. A key challenge is how to strike a balance between the optimal division of employees' time between work and leisure and the employer's goal of profitability.

Raul Ruubel's doctoral thesis "Working Schedules and Efficient Time-Use in R&D Work", defended at the TalTech Department of Economics and Finance, shows that providing flexible working options may help achieve productivity gains and reductions in personnel costs, provided that the individual characteristics of employees are taken into account.

The supervisor of the doctoral thesis, Professor at TalTech Department of Economics and Finance Aaro Hazak says, "A key source of distortion in use of flexitime comes from individual heterogeneities in preferences for work arrangement - many employees prefer a working week which is concentrated in 3-4 days, others prefer a working week which is spread over 6-7 days and some prefer the standard five-day working week. Employees' preferences for daily working schedules are also different."

If employees are required to work during hours that they would prefer not to allocate to work, they may expect a wage premium for the inconvenience. Moreover, using labour resources in this way is inefficient because work outcomes or work commitment may suffer, which in turn may make wages suboptimal. If employees could work at times convenient for them and maximise their efficiency, they could receive higher wages. There may also be cases where the wages paid to an employee are not proportional to the employee's performance: if the employer pays for optimal performance but the employee's actual output is smaller or of lower quality, the company's profitability and competitive advantages will suffer. Consequently, work schedules can have potentially large financial implications for companies, on both the revenue side and the cost side.

An employer can achieve cost savings through lower absenteeism because of employees' better health and lower staff turnover because of higher job satisfaction. However, coordinating employees working at different times can potentially entail additional costs to the employer and inefficient use of resources. Flexible working time arrangements also require better self-management skills in planning working time.

The doctoral thesis found that employees often appreciate the possibility of flexible working schedules, even though, despite the flexibility allowed, they often work a standard nine-to-five schedule. This may be due to social norms around normal working hours, e.g. the operating hours of schools, kindergartens, shops, service facilities, public transport, etc.

One of the key general messages of the thesis is that taking the inherent and behavioural differences between individual employees into account can significantly improve the efficiency of working time arrangements. In addition, the results reveal considerable disparities between employees' actual, contractually agreed and desired working schedules, and show that the amount of overtime work depends on certain individual characteristics and work-related factors.

"It is important that employers consider the option of flexible working schedules as a possibility to support the company's competitiveness," Professor Aaro Hazak says.

However, the risks involved in flexible working arrangements include loss of control over overtime hours and being assigned work tasks at random times. To understand the different perspectives on flexible working time, a broader discussion in society is needed: work schedules are not just a matter of formality and regulation, and the potentially large financial consequences for companies warrant aligning actual work schedules with desired working time, so that both the intellectual capacity and the time of employees can be used as efficiently as possible.

Credit: 
Estonian Research Council

Molecule holds promise to reprogram white blood cells for better cancer treatment

image: Pictured L-R: Caitlin Brandle, research assistant, Dr. Gang Zhou, Timothy Kim, undergraduate student, Nada S. Aboelella, PhD, graduate student, Dr. Zhi-Chun Ding, assistant professor

Image: 
Kim Ratliff, AU photographer

Cancer immunotherapy using "designer" immune cells has revolutionized cancer treatment in recent years. In this type of therapy, T cells, a type of white blood cell, are collected from a patient's blood and subjected to genetic engineering to produce T cells carrying a synthetic molecule termed chimeric antigen receptor (CAR) that is designed to enable T cells to recognize and destroy cancer cells. These genetically modified CAR T cells are then expanded to large numbers and infused back into the patient.

CAR T cell-based immunotherapies have seen remarkable outcomes in some patients with certain types of cancer, but more work is needed to improve the persistence and function of CAR T cells so that more patients can benefit from this type of therapy. A group of scientists at the Georgia Cancer Center of Augusta University recently reported that CAR T cells can stay active longer and mediate tumor killing more effectively when STAT5, a key signaling molecule, is kept in an active form within CAR T cells.

"Our study shows that expressing an active form of STAT5 in T cells can markedly improve the therapeutic outcome of CAR T cell therapy in mouse B-cell lymphoma models," said Dr. Gang Zhou, a Professor in the Department of Medicine at the Medical College of Georgia and a faculty member of the Cancer Immunology, Inflammation and Tolerance program at the Georgia Cancer Center. "One particular highlight of our study is that it reveals how STAT5 optimizes the function of CD4+ T cells, a T cell subset that plays a critical role in orchestrating effective antitumor immune responses."

Zhou and his team published their findings in a report titled "Persistent STAT5 activation reprograms the epigenetic landscape in CD4+ T cells to drive polyfunctionality and antitumor immunity" in the journal Science Immunology in late October. The study continues Zhou's previous work on T cell responses to interleukin-7 (IL-7), a T-cell growth factor that can trigger the activation of STAT5 in T cells. Zhou and others have shown that activation of the IL-7/STAT5 signaling pathway is beneficial to cancer immunotherapy.

"When a patient receives CAR T cells, a number of obstacles may derail the therapy," Zhou said. "The CAR T cells may be blocked from entering the tumor sites, or they may be short-lived or become dysfunctional after encountering cancer cells. Our research shows that by adding an activated STAT5 molecule, these T cells can find their way to the tumor and can stay active after they kill the cancer cells. This STAT5 molecule can also help CAR T cells survive longer in the body to prevent cancer cells coming back."

While Zhou's work is currently confined to his lab inside the Georgia Cancer Center's M. Bert Storey Research Building, he hopes the application will someday make its way to the Outpatient Services clinic across the Collaborative Connector suspended above Laney Walker Boulevard. However, there is still more research to do in the laboratory before the STAT5 activation approach can be applied to humans. This includes the development of a human form of the active STAT5 molecule similar to what is being used in mouse models. Moreover, the safety and toxicity profiles of this approach need to be evaluated more rigorously.

"We are hopeful that some recent technological advances in T cell engineering, such as the inducible suicide gene system, can be used together with the STAT5-engineering strategy to ensure effective and safe application of CAR T cell immunotherapies," Zhou said.

Credit: 
Medical College of Georgia at Augusta University

Researchers reveal how our brains know when something's different

image: NIH scientists discovered how a set of high frequency brain waves may help us unconsciously set expectations of the world around us and know when something's different by comparing memories of the past with present experiences.

Image: 
Courtesy of Zaghloul lab, NIH/NINDS.

Imagine you are sitting on the couch in your living room reading. You do it almost every night. But then, suddenly, when you look up you notice this time something is different. Your favorite picture hanging on the wall is tilted ever so slightly. In a study involving epilepsy patients, National Institutes of Health scientists discovered how a set of high frequency brain waves may help us spot these kinds of differences between the past and the present.

"Our results suggest that every experience we store into memory can be used to set our expectations and predictions for the future," Kareem Zaghloul, M.D., Ph.D., principal investigator at the NIH's National Institute of Neurological Disorders and Stroke (NINDS), and senior author of the study published in Nature Communications. "This study shows how the brain uses certain neural activity patterns to compare our expectations with the present. Ultimately, we hope that these results will help us better understand how the brain portrays reality under healthy and disease conditions."

The study was led by Rafi Haque, an M.D., Ph.D. student at Emory University School of Medicine, Atlanta, who was completing his dissertation work with Dr. Zaghloul. His primary research goal was to test out whether a theory called predictive coding can be applied to how our brains remember past experiences, known as episodic memories.

"Predictive coding basically states that the brain optimizes neural activity for processing information. In other words, the theory forecasts that the brain uses more neural activity to process new information than it does for things that we are familiar with," said Dr. Haque. "Years of research has shown that over time this is how we learn to expect what common sights, like green grass, looks like or everyday noises, such as certain bird chirps, sound like. We wanted to know whether the brain uses a similar process to manage our experiences."

To test this idea, the team worked with 14 patients with drug-resistant types of epilepsy whose brains had been surgically implanted with grids of electrodes as part of an NIH Clinical Center trial aimed at diagnosing and treating their seizures.

The experiment began when the patients were shown and asked to memorize a series of four natural scenes displayed on a computer screen. For example, one of the scenes was of a brown bicycle leaning upright on a kickstand in front of a green bush. A few seconds later they were shown a new set of images and asked whether they recognized the scene or noticed something different. Some images were the same as before while others were slightly modified by adding or removing something, such as a red bird, from the scene.

On average, the patients successfully recognized 88% of the repeat scenes, 68% of scenes that were missing something, and 65% of the ones in which something was added. In each case, it took them about two and a half seconds to notice.

Further analysis of a subset of the patients showed that they successfully located 82% of additions and 70% of removals. Curiously, their eyes fixated often (83%) on additions but barely at all (34%) on areas in the scene where something was removed.

"Overall, these results suggest it takes just one moment to not only remember a new experience but also to use memories of that experience to set future expectations," said Dr. Zaghloul.

Meanwhile, electrical recordings uncovered differences in brain wave activity between the times the patients successfully remembered repeat scenes and the times they spotted changes to a scene.

In both situations, the appearance of a scene on the computer screen triggered a rise in the strength of high frequency waves of neural activity in the lateral occipital cortex, a visual processing center in the back of the brain. The surge flowed forward arriving a few milliseconds later at a memory center called the medial temporal lobe.

Also, in both situations, the patients' brains appeared to replay neural activity patterns observed when they first witnessed the scenes.

"These results support the idea that memories of visual experiences follow a certain pathway in the brain," said Dr. Haque.

The difference though was that the surge in activity was stronger when the patients recognized a change to a scene.

In addition, during these moments, a second, lower frequency wave appeared to synchronously rumble through the lateral occipital cortex and the medial temporal lobe.

"Our data supports the idea that our expectations of visual experiences are controlled by a feedback loop between the visual cortex and the medial temporal lobe," said Dr. Zaghloul. "High frequency waves of neural activity appear to carry an error message when we see something that does not match our expectations, while the lower frequency waves may be updating our memories."

Credit: 
NIH/National Institute of Neurological Disorders and Stroke

Efforts to combat COVID-19 perceived as morally right

image: With his colleagues, Fan Xuan Chen, a doctoral student in psychology at the U. of I., found that people in the U.S. and New Zealand tend to moralize COVID-19 restrictions over other efforts to safeguard public health and safety.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- According to new research, people tend to moralize COVID-19-control efforts and are more willing to endorse human costs emerging from COVID-19-related restrictions than to accept costs resulting from other restraints meant to prevent injury or death. The level of support - and resulting outrage in response to perceived violations of this moral ideal - differs between liberals and conservatives.

Reported in the Journal of Experimental Social Psychology, the study also finds that people are more tolerant of authorities who abuse their power to enforce COVID-19 health restrictions than they are of other abuses for the sake of public health and safety.

"Efforts aimed at eliminating COVID-19 have become moralized to the extent that people tend to overlook the associated costs," said Fan Xuan Chen, a doctoral student in psychology at the University of Illinois Urbana-Champaign who conducted the study with Maja Graso, a senior lecturer at the University of Otago in Dunedin, New Zealand; and Tania Reynolds, a psychology professor at the University of New Mexico.

"Because COVID-19-related health strategies are so moralized, people really tolerate a wide range of restrictions and are willing to impose penalties on those who violate those strategies or even just speak out in favor of other approaches," Chen said.

Conducted with participants in the U.S. and New Zealand, the research was designed to better understand how moralizing public health issues influences people's perceptions of human suffering. The researchers say they do not advocate for any particular COVID-19-control policy.

"However, our results suggest that in our quest to combat COVID-19, we may overlook the collateral damage from these pursuits," Chen said.

In two experiments, the researchers asked American participants to evaluate the competence of public health experts and to rate their own moral outrage and willingness to shame or punish scientists who made mistakes or advocated for or against COVID-19 protective measures in the interest of saving lives.

In another experiment, Americans evaluated the harms resulting from a police officer who abused their authority to enforce COVID-19 restrictions or to stop people from speeding in traffic.

"In both cases, the degree of human suffering or cost was held constant, such that the officer cited and detained the same number of people to reduce the same number of deaths," Chen said. In each instance, participants decided whether to demote or reduce the pay of the officer - and by how much - and to rate the severity of harm inflicted by the officer.

In a separate experiment, New Zealanders were randomly assigned to evaluate either of two research proposals, one of which asked whether COVID-19-control efforts could cause more human suffering than not trying to control the spread of the disease, and one that asked whether the opposite could be true. Participants were asked to evaluate the quality of the proposal, the societal value of the research, the scientists' prestige and other factors related to the research.

"American participants evaluated the same costs - including public shaming, deaths and illnesses, and police abuse of power - as more acceptable when they resulted from efforts to minimize COVID-19's health impacts than when they challenged such efforts," Chen said. "New Zealanders were more favorably disposed to a research proposal that supported COVID-19-elimination efforts than to one that challenged those efforts, even when the methodological information and evidence supporting both proposals were equivalent."

The willingness to punish or shame a scientist who argued against COVID-19-related restrictions, or a researcher who accidentally underestimated the severity of the pandemic, increased with a participant's own level of concern about the risks associated with COVID-19.

"This pattern suggests that those who feel most vulnerable to COVID-19 could be especially likely to overlook the collateral costs of elimination efforts," Chen said.

In general, participants' willingness to punish a police officer who abused their position to catch people speeding was significantly greater than their desire to punish an officer who violated people's rights to enforce COVID-19-related restrictions.

The researchers also asked participants to indicate whether they were high or low in political conservatism.

"When we looked at the data, we saw that, compared with liberals, people who identify as very conservative have lower moral outrage in response to arguments against COVID-19 restrictions," Chen said. Yet, conservatives also became more outraged than liberals when a scientist challenged the state's decision to keep businesses open.

"So, there's a totally opposite trend in each condition, depending on which political ideology you are affiliated with," Chen said. "In psychology, this kind of crossover effect is very rare, and such a strong crossover effect is even more rare. These patterns suggest liberals and conservatives are weighing the costs and benefits of elimination efforts quite differently."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

How can we make sure people get the second COVID-19 vaccine dose?

The light at the end of the pandemic tunnel is getting brighter. This week, the first health care workers will receive the first doses of an FDA-authorized coronavirus vaccine. Soon, so will other front-line workers in health care and beyond, and residents of long-term care facilities.

The availability of COVID-19 vaccines, however, will not necessarily result in people getting fully vaccinated.

For the first vaccines that will reach the public, everyone who gets a first dose must have a second dose within a few weeks to get full protection against severe COVID-19.

While Mark Fendrick, M.D. hails the rapid development of vaccines as a scientific breakthrough, he is anxious.

For decades, the University of Michigan primary care physician and researcher has studied what it takes to make sure that Americans get the essential preventive services that can help them stay healthy.

So what's his worry?

"There are several factors and behaviors that prevent many well-intentioned people from completing a two-step process, like that recommended for the COVID-19 vaccines," he says. "We need to provide everything necessary to support those who receive the first shot to make sure they complete their second dose."

This lack of completion has been well established for other two-dose vaccines, like those that prevent less contagious and less lethal conditions, such as shingles, human papilloma virus (HPV), and hepatitis B. Fendrick worked on studies of the latter vaccine early in his career.

"On the positive side, out-of-pocket costs - one of the most significant barriers for vaccine uptake -- has been removed for COVID-19 vaccines, thanks to federal action," says Fendrick, a general internist at Michigan Medicine and director of the Center for Value-Based Insurance Design, which has fostered research and policy initiatives to enhance access to preventive care.

"But vaccines that require more than one dose create additional behavioral and environmental challenges, including reports of side effects, false claims regarding vaccine safety, logistical barriers, and the politicization of the program, that may deter people from getting vaccinated or returning for their second dose," he says. "Studies of other high-value vaccines and medications to manage chronic conditions show that even when provided at no cost, patients take them half the time."

What might help?

Vaccination kits will include a card that health providers can distribute when giving first doses, to help educate patients about the vaccine and to encourage the pre-scheduling of second-dose appointments.

While the card will be useful, Fendrick has advocated that a smartphone-based vaccine adherence support program, built on research by his team and others, be added to optimize vaccine uptake.

"We have the technology," he points out. "Smartphone apps and wearables already succeed in getting people to take their medicine, check their blood pressure or blood sugar, or even measure their heart rhythm."

Beyond automated reminders, a quick call or email from someone at a trusted source could do wonders, he says.

"Just knowing that someone cares and is willing to reach out has worked wonders in getting more people engaged in their health care behaviors," says Fendrick. "This is particularly true for under-served groups that are at the greatest risk from COVID-19, like the elderly, and those with low incomes and without stable housing."

The tailored messaging elements of a COVID-19 vaccine support program would include:

Education about how the vaccine works

Information about vaccine side effects and their treatment

Scheduling and reminders of second dose appointment

Updates and easily understandable and accessible information to dispel misinformation and rumors that could give someone doubts about getting a second dose

Transportation options to second dose appointment

This last one is especially crucial, Fendrick notes. Lack of access to transportation has stood in the way of other preventive health services. Making sure that people know where to turn if they need help to get to their second-dose appointment, and making it possible for them to get the second dose at a convenient location, will be critical.

"Given the enormous stakes, we should aim for the same high level of customer service used the most effective online retailers, where they regularly check in after you visit their site and buy something," Fendrick says. "They know what works to make consumers feel appreciated. This same personalized approach should be used to let each person who needs to return for a second vaccine dose feel like the most important person in the world - which, in my opinion, they are."

What about cash incentives?

In addition to customized messaging, Fendrick feels that small financial rewards like a $50 gift card would further increase vaccine uptake.

While some experts have suggested paying people to get the first vaccine dose, Fendrick lands on the side of enrolling people in a no-cost adherence support program after they receive their first dose, and paying the financial reward only after they complete the two-dose regimen.

"We need to focus the rewards for those who have made the effort to get both doses and for fulfilling their broader societal role in reducing the disease's impact."

What can providers and employers do?

With the incredible hurdles of developing, testing, producing and distributing the vaccines now surmounted, hospitals, clinics, pharmacies and state and local health agencies are making a similarly massive push to get the vaccine to everyone who can take it.

Employers, who have a big stake in the ability of their employees to avoid COVID-19 and safely return to work, may also want to consider offering support programs that include monetary incentives or other perks such as a charitable donation to encourage full vaccination.

Whether or not the vaccine is mandated, this would reward employees for reducing the spread of the virus and contributing to the opening of the economy.

Those who administer the vaccine also play a key role in making sure people complete the two-shot regimen.

"The good news is that the federal government has put in place a financial incentive for those who dispense the vaccine by paying a higher reimbursement rate for the second dose than for the first," Fendrick says. "Why not also give incentives to patients?"

Fendrick has long supported the idea of aligning patient- and provider-focused tactics to increase use of essential medical care.

Published research has shown that when both the patient and the provider are given incentives to complete a specific behavior, like taking medicines to lower cholesterol, the impact is greater than that of either incentive alone. He is quite confident that aligning incentives will result in higher vaccine uptake.

"We can't let the last leg of the remarkable COVID-19 vaccine journey - the second-dose problem - stop us from completing this quest to end the worst pandemic in modern history," Fendrick says. "We've gone 98 yards down the field in record time. We have to do everything possible get over the goal line, so we can all get our lives back."

Credit: 
Michigan Medicine - University of Michigan

USC study: Young adults who identify as Republicans eschew COVID safety precautions

Young Californians who identify themselves as Republicans are less likely to follow social distancing guidelines that prevent coronavirus transmission than those who identify as Democrats or Independents, according to a new USC study published today in JAMA Internal Medicine.

The findings among 18- to 25-year-olds mirror what many have observed about America's politicized response to COVID-19, and are a source of alarm for public health experts. The United States is now averaging 207,000 new cases and 2,319 deaths per day, as of Friday.

"You might expect middle-aged or older adults to have established ideologies that affect their health behavior, but to see it in young adults who have historically been less politically inclined is unexpected," said Adam Leventhal, director of the USC Institute for Addiction Science. "Regardless of age, we would never hope to find results like this. Public health practices should not correlate with politics."

The study was conducted during the summer of 2020 via an online survey completed by 2,065 18- to 25-year-olds living predominantly in Los Angeles County. The participants were initially recruited as ninth-grade high school students as part of the USC Happiness & Health Project, which has been surveying this group about their health behaviors every six months since 2013.

Of the young adults contacted, 891 identified as Democrat, 148 as Republican, 320 as "Independent or Other," and 706 declined to answer or said they didn't know what political party they identify with.

Researchers found that 24.3% of Republican young adults said they don't frequently social distance from others, compared with just 5.2% of Democrats.

Differences in social distancing practices were also found when Republicans were compared with Independents and with young adults who did not report a political party affiliation. Researchers discovered that Republicans were more likely than the other groups to visit public indoor venues such as malls, restaurants, bars or clubs, or to attend or host parties with 10 or more people.

Throughout most of the COVID-19 pandemic, California has recommended that all residents practice social distancing and wear a mask when outside the home. Current restrictions prohibit private gatherings of any size.

Leventhal noted that when his team statistically adjusted for 21 factors that could explain the difference in social distancing across political party groups, including propensity for risk-taking behaviors, Republicans were 4 times more likely than the others to be infrequent social distancers.
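
A covariate-adjusted comparison of this kind is typically made with a regression model. The sketch below is only an illustration of the general approach (the column names, covariates and simulated data are hypothetical, and the study's own adjustment may have used a different model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data layout: one row per participant.
rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "infrequent_distancing": rng.binomial(1, 0.1, n),
    "party": rng.choice(["Democrat", "Republican", "Independent"], n),
    "risk_taking": rng.normal(size=n),        # one of many possible covariates
    "age": rng.integers(18, 26, n),
})

# Logistic regression of infrequent distancing on party affiliation,
# adjusting for the covariates; Democrats are the reference group.
model = smf.logit(
    "infrequent_distancing ~ C(party, Treatment(reference='Democrat'))"
    " + risk_taking + age",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are covariate-adjusted odds ratios.
print(np.exp(model.params).round(2))
```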

He also said that the "blue county within a blue state" setting for the study underscores that the link between political party affiliation and social distancing cannot simply be reduced to an issue of urban vs. rural differences.

Credit: 
University of Southern California

One-step method to generate mice for vaccine research

BOSTON -- To develop vaccines and investigate human immune responses, scientists rely on a variety of animal models, including mice that can produce human antibodies through genetically engineered B cell receptors, which are specialized antibodies bound to the B cell membrane. These mice, however, often take several years to develop, requiring a complicated process of genetic modification and careful breeding.

"The time it takes to generate these specialized mice has been a major factor in delaying vaccine development," says Facundo Batista, PhD, associate director of the Ragon Institute of MGH, MIT and Harvard. "With the recent advances in gene editing technology like CRISPR/Cas9, we knew there had to be a way to speed up this process significantly."

Batista's group has developed a new method for generating mouse lines for pre-clinical vaccine evaluation that dramatically shortens this timeline. In a study published recently in the journal EMBO, the team shows that this one-step method, which uses CRISPR/Cas9 technology, can produce mice with genetically engineered human B cell receptors in just a few weeks.

To test this technology, the researchers engineered mice to have human B cell receptors that are precursors to what are called broadly neutralizing HIV antibodies. These antibodies are known to be effective in combating HIV, but they are difficult to stimulate through vaccination. The precursors responded to an antigen currently being used in clinical HIV trials by generating broadly neutralizing antibody-like mutations. The ability to quickly evaluate how well different antigens activate these precursors has the potential to significantly accelerate vaccine development.

The engineered B cells were not just capable of making high-quality antibodies; some became a specialized form of B cell known as memory B cells, which are used to maintain long-lasting immunity once antibodies are produced against a pathogen. This means the mice can likely be used to quickly validate good candidate vaccines for HIV and other pathogens.

"This new technique may allow scientists studying vaccines and antibody evolution to tremendously speed up their research," says Ragon research fellow Xuesong Wang, PhD, co-first author on the paper.

Rashmi Ray, PhD, also co-first author and a Ragon research fellow, agrees: "It will allow researchers to respond much more quickly and flexibly to new developments in the field."

Credit: 
Massachusetts General Hospital

Sheets of carbon nanotubes come in a rainbow of colors

image: Esko Kauppinen is a professor of applied physics at Aalto University.

Image: 
Photo courtesy of Aalto University

HOUSTON - (Dec. 14, 2020) - Nanomaterials researchers in Finland, the United States and China have created a color atlas for 466 unique varieties of single-walled carbon nanotubes.

The nanotube color atlas is detailed in a study in Advanced Materials about a new method to predict the specific colors of thin films made by combining any of the 466 varieties. The work was conducted by scientists from Aalto University in Finland, Rice University and Peking University in China.

"Carbon, which we see as black, can appear transparent or take on any color of the rainbow," said Aalto physicist Esko Kauppinen, the corresponding author of the study. "The sheet appears black if light is completely absorbed by carbon nanotubes in the sheet. If less than about half of the light is absorbed in the nanotubes, the sheet looks transparent. When the atomic structure of the nanotubes causes only certain colors of light, or wavelengths, to be absorbed, the wavelengths that are not absorbed are reflected as visible colors."

Carbon nanotubes are long, hollow carbon molecules, similar in shape to a garden hose but with sides just one atom thick and diameters about 50,000 times smaller than a human hair. The outer walls of nanotubes are made of rolled graphene. And the wrapping angle of the graphene can vary, much like the angle of a roll of holiday gift wrap paper. If the gift wrap is rolled carefully, at zero angle, the ends of the paper will align with each side of the gift wrap tube. If the paper is wound carelessly, at an angle, the paper will overhang on one end of the tube.

The atomic structure and electronic behavior of each carbon nanotube is dictated by its wrapping angle, or chirality, and its diameter. The two traits are represented in a "(n,m)" numbering system that catalogs 466 varieties of nanotubes, each with a characteristic combination of chirality and diameter. Each (n,m) type of nanotube has a characteristic color.
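
The mapping from the (n,m) indices to diameter and wrapping (chiral) angle follows from the standard graphene roll-up relations; the short helper below illustrates them (0.246 nm is the usual textbook value for the graphene lattice constant):

```python
import math

def nanotube_geometry(n, m, a_nm=0.246):
    """Diameter (nm) and chiral angle (degrees) of an (n, m) single-walled
    carbon nanotube from the standard graphene roll-up relations."""
    diameter = a_nm * math.sqrt(n * n + n * m + m * m) / math.pi
    chiral_angle = math.degrees(math.atan(math.sqrt(3) * m / (2 * n + m)))
    return diameter, chiral_angle

d, theta = nanotube_geometry(6, 5)
print(f"(6,5) tube: diameter ~ {d:.3f} nm, chiral angle ~ {theta:.1f} degrees")
```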

Kauppinen's research group has studied carbon nanotubes and nanotube thin films for years, and it previously succeeded in mastering the fabrication of colored nanotube thin films that appeared green, brown and silver-grey.

In the new study, Kauppinen's team examined the relationship between the spectrum of absorbed light and the visual color of various thicknesses of dry nanotube films and developed a quantitative model that can unambiguously identify the coloration mechanism for nanotube films and predict the specific colors of films that combine tubes with different inherent colors and (n,m) designations.
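
In rough outline, a model of this kind combines the absorption spectra of the constituent (n,m) species according to their proportions and the film thickness (Beer-Lambert), then converts the transmitted spectrum into a perceived color. The sketch below is only a schematic illustration using invented spectra and crude single-Gaussian stand-ins for the CIE color-matching functions; it is not the Aalto model itself:

```python
import numpy as np

wl = np.linspace(380, 780, 401)  # visible wavelengths, nm

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical absorption spectra (per unit thickness) for two nanotube
# species, each dominated by one optical transition in the visible range.
absorption = {
    "(6,5)": 1.5 * gaussian(wl, 570, 25),
    "(7,6)": 1.2 * gaussian(wl, 650, 30),
}

def film_transmittance(fractions, thickness):
    """Beer-Lambert mixing: total absorbance scales with film thickness and
    the fraction of each (n,m) species in the film."""
    a = sum(frac * absorption[nm] for nm, frac in fractions.items())
    return np.exp(-thickness * a)

# Very crude Gaussian stand-ins for the CIE colour-matching functions.
xbar = 1.06 * gaussian(wl, 600, 38) + 0.36 * gaussian(wl, 442, 16)
ybar = 1.01 * gaussian(wl, 556, 47)
zbar = 1.78 * gaussian(wl, 449, 20)

def transmitted_color(T):
    """Integrate the transmitted spectrum (flat illuminant assumed) against
    the colour-matching functions, then convert XYZ to linear sRGB."""
    X, Y, Z = ((T * cmf).sum() for cmf in (xbar, ybar, zbar))
    norm = ybar.sum()
    X, Y, Z = X / norm, Y / norm, Z / norm
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    return np.clip(M @ np.array([X, Y, Z]), 0.0, 1.0)

T = film_transmittance({"(6,5)": 0.7, "(7,6)": 0.3}, thickness=1.0)
print("approximate linear sRGB of the film:", transmitted_color(T).round(3))
```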

Rice engineer and physicist Junichiro Kono, whose lab solved the mystery of colorful armchair nanotubes in 2012, provided films made solely of (6,5) nanotubes that were used to calibrate and verify the Aalto model. Researchers from Aalto and Peking universities used the model to calculate the absorption of the Rice film and its visual color. Experiments showed that the measured color of the film corresponded quite closely to the color forecast by the model.

The Aalto model shows that the thickness of a nanotube film, as well as the color of nanotubes it contains, affects the film's absorption of light. Aalto's atlas of 466 colors of nanotube films comes from combining different tubes. The research showed that the thinnest and most colorful tubes affect visible light more than those with larger diameters and faded colors.

"Esko's group did an excellent job in theoretically explaining the colors, quantitatively, which really differentiates this work from previous studies on nanotube fluorescence and coloration," Kono said.

Since 2013, Kono's lab has pioneered a method for making highly ordered 2D nanotube films. Kono said he had hoped to supply Kauppinen's team with highly ordered 2D crystalline films of nanotubes of a single chirality.

"That was the original idea, but unfortunately, we did not have appropriate single-chirality aligned films at that time," Kono said. "In the future, our collaboration plans to extend this work to study polarization-dependent colors in highly ordered 2D crystalline films."

The experimental method the Aalto researchers used to grow nanotubes for their films was the same as in their previous studies: Nanotubes grow from carbon monoxide gas and iron catalysts in a reactor that is heated to more than 850 degrees Celsius. The growth of nanotubes with different colors and (n,m) designations is regulated with the help of carbon dioxide that is added to the reactor.

"Since the previous study, we have pondered how we might explain the emergence of the colors of the nanotubes," said Nan Wei, an assistant research professor at Peking University who previously worked as a postdoctoral researcher at Aalto. "Of the allotropes of carbon, graphite and charcoal are black, and pure diamonds are colorless to the human eye. However, now we noticed that single-walled carbon nanotubes can take on any color: for example, red, blue, green or brown."

Kauppinen said colored thin films of nanotubes are pliable and ductile and could be useful in colored electronics structures and in solar cells.

"The color of a screen could be modified with the help of a tactile sensor in mobile phones, other touch screens or on top of window glass, for example," he said.

Kauppinen said the research can also provide a foundation for new kinds of environmentally friendly dyes.

Credit: 
Rice University

Artificial intelligence sets sights on the sun

image: Solar observations with image quality decreasing from left to right

Image: 
Kanzelhöhe Observatory for Solar and Environmental Research, Austria.

Scientists from the University of Graz and the Kanzelhöhe Solar Observatory (Austria) and their colleagues from the Skolkovo Institute of Science and Technology (Skoltech) developed a new method based on deep learning for stable classification and quantification of image quality in ground-based full-disk solar images. The research results were published in the journal Astronomy & Astrophysics and are available in open access.

The Sun is the only star where we can discern surface details and study plasma under extreme conditions. The solar surface and atmospheric layers are strongly influenced by the emerging magnetic field. Features such as sunspots, filaments, coronal loops, and plage regions are a direct consequence of the distribution of enhanced magnetic fields on the Sun, which challenges our current understanding of these phenomena. Solar flares and coronal mass ejections result from a sudden release of free magnetic energy stored in the strong fields associated with sunspots. They are the most energetic events in our solar system and have a direct impact on the Sun-Earth system called "space weather". Modern society strongly relies on space and ground-based technology which is highly vulnerable to hazardous space weather events. Continuous monitoring of the Sun is essential for better understanding and predicting solar phenomena and the interaction of solar eruptions with the Earth's magnetosphere and atmosphere. In recent decades, solar physics has entered the era of big data, and the large amounts of data constantly produced by ground- and space-based observatories can no longer be analyzed by human observers alone.

Ground-based telescopes are positioned around the globe to provide continuous monitoring of the Sun independently of the day-night schedule and local weather conditions. Earth's atmosphere imposes the strongest limitations on solar observations since clouds can occult the solar disk and air fluctuations can cause image blurring. In order to select the best images from multiple simultaneous observations and detect local quality degradations, objective image quality assessment is required.

"As humans, we assess the quality of a real image by comparing it to an ideal reference image of the Sun. For instance, an image with a cloud in front of the solar disk ? a major deviation from our imaginary perfect image ? would be tagged as a very low-quality image, while minor fluctuations are not that critical when it comes to quality. Conventional quality metrics struggle to provide a quality score independent of solar features and typically do not account for clouds," says Tatiana Podladchikova, an assistant professor at the Skoltech Space Center (SSC) and a research co-author.

In their recent study, the researchers used artificial intelligence (AI) to achieve quality assessment that is similar to human interpretation. They employed a neural network to learn the characteristics of high-quality images and estimate the deviation of real observations from an ideal reference.

The paper describes an approach based on Generative Adversarial Networks (GANs), which are commonly used to obtain synthetic images, for example, to generate realistic human faces or translate street maps into satellite images. This is achieved by approximating the distribution of real images and picking samples from it. The content of the generated image can be either random or defined by a conditional description of the image. The scientists used the GAN to generate high-quality images from the content description of the same image: the network first extracted the important characteristics of the high-quality image, such as the position and appearance of solar features, and then generated the original image from this compressed description. When this procedure is applied to lower-quality images, the network re-encodes the image content while omitting low-quality features in the reconstructed image. This is a consequence of the GAN's approximation of the image distribution, which covers only high-quality images. The difference between a low-quality image and the envisioned high-quality reference of the neural network provides the basis for an image quality metric and is used to identify the position of quality-degrading effects in the image.
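
In schematic terms, the resulting metric reduces to comparing an observation with the network's reconstruction of it. The sketch below is a stand-in illustration (the real method uses the trained GAN described above; here a simple blur plays the role of the "envisioned" high-quality reference so the example is self-contained):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(image):
    """Stand-in for the trained encoder/generator pair: it should return the
    network's envisioned high-quality version of the same scene. A Gaussian
    blur is used here purely so the sketch runs end to end."""
    return gaussian_filter(image, sigma=3)

def image_quality(image):
    """Deviation of the observation from its high-quality reconstruction.
    The mean deviation serves as a (lower-is-better) quality score, and the
    per-pixel error map localises degradations such as clouds."""
    error_map = (image - reconstruct(image)) ** 2
    return error_map.mean(), error_map

# Toy full-disk "observation": a smooth disk plus a crude cloud-like patch.
y, x = np.mgrid[-128:128, -128:128]
obs = np.where(x**2 + y**2 < 100**2, 1.0, 0.0)
obs[100:140, 100:140] -= 0.6                 # local degradation on the disk

score, err = image_quality(obs)
print(f"mean deviation (quality score, lower is better): {score:.4f}")
print("largest local deviation at pixel:", np.unravel_index(err.argmax(), err.shape))
```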

"In our study, we applied the method to observations from the Kanzelhöhe Observatory for Solar and Environmental Research and showed that it agrees with human observations in 98.5% of cases. From the application to unfiltered full observing days, we found that the neural network correctly identifies all strong quality degradations and allows us to select the best images, which results in a more reliable observation series. This is also important for future network telescopes, where observations from multiple sites need to be filtered and combined in real-time," says Robert Jarolim, a research scientist at the University of Graz and the first author of the study.

"In the 17th century, Galileo Galilei was the first to dare look at the Sun through his telescope, while in the 21st century, dozens of space and ground observatories continuously track the Sun, providing us with a wealth of solar data. With the launch of the Solar Dynamics Observatory (SDO) 10 years ago, the amount of solar data and images transmitted to Earth soared to 1.5 terabytes per day, which is equivalent to downloading half a million songs daily. The Daniel K. Inouye Solar Telescope, the world's largest ground-based solar telescope with a 4-meter aperture, took the first detailed images of the Sun in December 2019 and is expected to provide six petabytes of data per year. Solar data delivery is the biggest project of our times in terms of total information produced. With the recent launches of groundbreaking solar missions, Parker Solar Probe and Solar Orbiter, we will be getting ever-increasing amounts of data offering new valuable insights. There are no beaten paths in our research. With so much new information coming in daily, we simply must invent novel efficient AI-aided data processing methods to deal with the biggest challenges facing humankind. And whatever storms may rage, we wish everyone good weather in space," Podladchikova says.

The new method was developed with the support of Skoltech's high-performance cluster for the anticipated Solar Physics Research Integrated Network Group (SPRING) that will provide autonomous monitoring of the Sun using cutting-edge technology of observational solar physics. SPRING is pursued within the SOLARNET project, which is dedicated to the European Solar Telescope (EST) initiative supported by the EU research and innovation funding program Horizon 2020. Skoltech represents Russia in the SOLARNET consortium of 35 international partners.

Currently, the authors are further elaborating their image processing methods to provide a continuous data stream of the highest possible quality and developing automated detection software for continuous tracking of solar activity.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Chance played a major role in keeping Earth fit for life

A study by the University of Southampton gives a new perspective on why our planet has managed to stay habitable for billions of years - concluding it is almost certainly due, at least in part, to luck. The research suggests this may lengthen the odds of finding life on so-called 'twin-Earths' in the Universe.

The research, published in the Nature journal Communications Earth & Environment, involved conducting the first ever simulation of climate evolution on thousands of randomly generated planets.

Geological data demonstrate that Earth's climate has remained continuously habitable for more than three billion years. However, it has been precariously balanced, with the potential to rapidly deteriorate to deep-frozen or intolerably hot conditions causing planet-wide sterility.

Professor Toby Tyrrell, a specialist in Earth System Science at the University of Southampton, explains: "A continuously stable and habitable climate on Earth is quite puzzling. Our neighbours, Mars and Venus, do not have habitable temperatures, even though Mars once did. Earth not only has a habitable temperature today, but has kept this at all times across three to four billion years - an extraordinary span of geological time."

Many events can threaten the continuous stability of a planet - asteroid impacts, solar flares and major geological events, such as eruptions of supervolcanoes. Indeed, an asteroid which hit the Earth 66 million years ago caused the extinction of more than 75 per cent of all species, killing off the dinosaurs along with many other species.

Previous computer modelling work on Earth habitability has involved modelling a single planet: Earth. But, inspired by discoveries of exoplanets (those outside of our solar system) that reveal that there are billions of Earth-like planets in our galaxy alone, a Southampton scientist took a novel approach to investigating a big question: what has led Earth to remain life-sustaining for so long?

To explore this, Professor Tyrrell tapped into the power of the University of Southampton's Iridis supercomputing facility to run simulations looking at how 100,000 randomly different planets responded to random climate-altering events spread out across three billion years, until they reached a point where they lost their habitability. Each planet was simulated 100 times, with different random events each time.

Having accrued a vast set of results, he then looked to see whether habitability persistence was restricted to just a few planets which were always capable of sustaining life for three billion years, or instead was spread around many different planets, each of which only sometimes stayed habitable for this period.

The results of the simulation were very clear. Most of the planets that remained life-sustaining throughout the three-billion-year period had only a probability, not a certainty, of staying habitable. Many of the successful runs came from planets that usually failed in the simulations and only occasionally remained habitable. Out of a total population of 100,000 planets, about nine per cent (8,700) were successful at least once; of those, nearly all (about 8,000) were successful fewer than 50 times out of 100, and most (about 4,500) were successful fewer than 10 times out of 100.
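For readers who want to see the shape of such an experiment, the following is a minimal, purely illustrative Monte Carlo sketch in Python. It is not Professor Tyrrell's model: the habitable temperature band, the feedback and noise parameters, and the scaled-down planet count are all invented here for illustration; the published study uses a far more detailed climate-feedback formulation.

```python
import random

YEARS = 3_000_000_000            # span of continuous habitability being tested
STEP = 30_000_000                # simulate in 30-million-year steps (100 steps per run)
HABITABLE = (-10.0, 60.0)        # invented habitable temperature band, degrees C

def make_random_planet():
    """Each planet gets its own randomly drawn climate feedback and noise level."""
    return {
        "setpoint": random.uniform(0.0, 50.0),   # temperature the feedback pulls towards
        "feedback": random.uniform(0.0, 0.5),    # fraction of the offset corrected per step
        "noise": random.uniform(0.0, 10.0),      # typical size of random climate-altering events
    }

def stays_habitable(planet):
    """One rerun: does the planet stay inside the habitable band for the full span?"""
    temp = random.uniform(*HABITABLE)            # random initial climate
    for _ in range(YEARS // STEP):
        temp += planet["feedback"] * (planet["setpoint"] - temp)  # stabilising feedback
        temp += random.gauss(0.0, planet["noise"])                # random climate shock
        if not (HABITABLE[0] <= temp <= HABITABLE[1]):
            return False                         # froze or overheated: sterile
    return True

N_PLANETS, N_RERUNS = 1_000, 100                 # scaled down from the study's 100,000 planets
success_counts = []
for _ in range(N_PLANETS):
    planet = make_random_planet()
    success_counts.append(sum(stays_habitable(planet) for _ in range(N_RERUNS)))

ever = [c for c in success_counts if c > 0]
print(f"habitable at least once:       {len(ever)} of {N_PLANETS}")
print(f"  ... fewer than 50 times/100: {sum(c < 50 for c in ever)}")
print(f"  ... fewer than 10 times/100: {sum(c < 10 for c in ever)}")
```

Depending on the invented parameters, a toy run like this can reproduce the qualitative pattern the study reports: only a minority of planets ever succeed, and most of those succeed in only a fraction of their reruns.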

The study results suggest chance is a major factor in determining whether planets, such as Earth, can continue to nurture life over billions of years. Professor Tyrrell concludes: "We can now understand that Earth stayed suitable for life for so long due, at least in part, to luck. For instance, if a slightly larger asteroid had hit Earth, or had done so at a different time, then Earth may have lost its habitability altogether.

"To put it another way, if an intelligent observer had been present on the early Earth as life first evolved, and was able to calculate the chances of the planet staying habitable for the next several billion years, the calculation may well have revealed very poor odds."

Given these seemingly poor odds, the study speculates that elsewhere in the Universe there should be Earth-like planets which had similar initial prospects but which, due to chance events, at one point became too hot or too cold and consequently lost the life upon them. As techniques to investigate exoplanets improve, and what seem at first to be 'twin Earths' are discovered and analysed, it seems likely that most will be found to be uninhabitable.

Credit: 
University of Southampton

Create a realistic VR experience using a normal 360-degree camera

Video: Animation explaining OmniPhotos. (Image: Christian Richardt)

Scientists at the University of Bath have developed a quick and easy approach for capturing 360° VR photography without using expensive specialist cameras. The system uses a commercially available 360° camera on a rotating selfie stick to capture video footage and create an immersive VR experience.

Virtual reality headsets are becoming increasingly popular for gaming, and with the global pandemic restricting our ability to travel, this system could also be a cheap and easy way to create virtual tours for tourist destinations.

Conventional 360° photography stitches together thousands of shots taken as the camera rotates around a single spot. However, it doesn't retain depth perception, so the scene is distorted and the images look flat.

Whilst state-of-the-art VR photography, which includes depth perception, is available to professional photographers, it requires expensive equipment, as well as time to process the thousands of photos needed to create a fully immersive VR environment.

Dr Christian Richardt and his team at CAMERA, the University of Bath's motion capture research centre, have created a new type of 360° VR photography accessible to amateur photographers called OmniPhotos.

This is a fast, easy and robust system that recreates high quality motion parallax, so that as the VR user moves their head, the objects in the foreground move faster than the background.

This mimics how your eyes view the real world, creating a more immersive experience.
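As a back-of-the-envelope illustration (with numbers invented here, not taken from the Bath study), the angular shift of an object under a small sideways head movement shrinks with its distance, which is exactly the depth cue that motion parallax provides:

```python
import math

def angular_shift_deg(head_translation_m, depth_m):
    """Approximate angular displacement of an object at depth_m when the head
    moves sideways by head_translation_m (exact for a point straight ahead)."""
    return math.degrees(math.atan2(head_translation_m, depth_m))

# A 10 cm head movement: nearby objects sweep across the view, distant ones barely move.
for depth in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {depth:6.1f} m shifts by {angular_shift_deg(0.1, depth):5.2f} degrees")
```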

OmniPhotos can be captured quickly and easily using a commercially available 360° video camera on a rotating selfie stick.

Using a 360° video camera also unlocks a significantly larger range of head motions.

OmniPhotos are built on an image-based representation, with optical flow and scene-adaptive geometry reconstruction, tailored for real-time 360° VR rendering.
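The SIGGRAPH Asia paper describes the full pipeline; the sketch below is only a schematic illustration of the underlying image-based rendering idea, not the authors' code. It intersects each viewing ray from the user's head position with a proxy surface and looks up the colour in the captured 360° frame whose viewpoint best matches that ray. The fixed-radius spherical proxy, the nearest-camera selection and all names here are simplifying assumptions; OmniPhotos instead uses optical-flow-guided blending and a scene-adaptive proxy geometry.

```python
import numpy as np

# Hypothetical capture setup: 72 equirectangular frames taken on a 0.5 m circle.
N_FRAMES, CIRCLE_RADIUS, PROXY_RADIUS = 72, 0.5, 5.0
angles = np.linspace(0.0, 2.0 * np.pi, N_FRAMES, endpoint=False)
camera_positions = np.stack([CIRCLE_RADIUS * np.cos(angles),
                             CIRCLE_RADIUS * np.sin(angles),
                             np.zeros(N_FRAMES)], axis=1)

def intersect_proxy(origin, direction, radius=PROXY_RADIUS):
    """Distance along the ray origin + t * direction to a sphere of the given
    radius centred on the capture rig; direction must be a unit vector."""
    b = np.dot(origin, direction)
    c = np.dot(origin, origin) - radius ** 2
    return -b + np.sqrt(b * b - c)        # c < 0 inside the proxy, so the root is real

def sample_lookup(head_position, view_direction):
    """For one viewing ray, return (frame index, longitude, latitude) telling us which
    captured equirectangular frame to sample and where - the core of proxy-based rendering."""
    view_direction = view_direction / np.linalg.norm(view_direction)
    point = head_position + intersect_proxy(head_position, view_direction) * view_direction
    # Crude view selection: the captured camera lying most directly along the ray
    # (a stand-in for OmniPhotos' flow-based selection and blending of neighbouring views).
    frame = int(np.argmax((camera_positions - head_position) @ view_direction))
    ray = point - camera_positions[frame]        # re-project the proxy point into that camera
    ray /= np.linalg.norm(ray)
    lon = np.arctan2(ray[1], ray[0])             # longitude in the equirectangular image
    lat = np.arcsin(np.clip(ray[2], -1.0, 1.0))  # latitude
    return frame, lon, lat

# Example: the user leans 10 cm to the side and looks straight ahead.
print(sample_lookup(np.array([0.0, 0.1, 0.0]), np.array([1.0, 0.0, 0.0])))
```

A fixed-radius proxy only gives correct parallax for content near that radius, which is precisely why OmniPhotos reconstructs scene-adaptive geometry instead.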

Dr Richardt and his team presented the new system at the international SIGGRAPH Asia conference on Sunday 13th December 2020.

He said: "Until now, VR photography that uses realistic motion parallax has been the preserve of professional VR photographers, using expensive equipment and requiring complex software and computing power to process the images.

"OmniPhotos simplifies this process so that you can use it with a commercially available 360° camera that only costs a few hundred pounds.

"This opens up VR photography to a whole new set of applications, from estate agent's virtual tours of houses to immersive VR journeys at remote tourist destinations. With the pandemic stopping many people from travelling on holiday this year, this is a way of virtually visiting places that are currently inaccessible."

Credit: 
University of Bath

International research project investigates photosensitive carbon nanoparticles

An international team of researchers, including a group at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) headed by Prof. Dr. Dirk M. Guldi, has now managed to identify the fundamental problems relating to the photophysics and photochemistry of carbon nanocolloids (CNCs) and to ascertain possible approaches for research into these readily available, non-toxic and adaptable nanomaterials.

Light is not only the primary source of energy for life on Earth, it is also hugely important for a number of technical applications. Nanomaterials such as carbon nanocolloids (CNCs), which can be used to tailor light-matter interactions, will have an important role to play in the technology of the future. As a sustainable product, they will help to avoid toxic waste and excessive consumption of resources. However, their range of application has been rather limited to date, as their heterogeneity has hindered researchers in their attempts to find a uniform way of describing CNCs in an excited state. The international team, including FAU researchers headed by Prof. Dr. Dirk M. Guldi from the Chair of Physical Chemistry I, has now identified the fundamental problems relating to the photophysics and photochemistry of CNCs and ascertained possible approaches for research into these readily available, non-toxic and adaptable nanomaterials. The researchers have published their results in the journal Chem, in an article titled 'Optical processes in carbon nanocolloids'.

Carbon nanocolloids are highly heterogeneous materials: tiny carbon-based particles less than 10 nanometres in diameter. The lack of a common description of their properties in an excited state makes it difficult for them to be used in technological, ecological and biomedical applications. One of their most interesting features, however, is their photoluminescence - the emission of light after the absorption of photons - which makes them a promising candidate for technological or biomedical applications. The researchers believe that adding a solution will encourage the luminescence of CNCs after irradiation, a process also known as phosphorescence. The findings of the international team will serve as the starting point for making CNCs available for technological applications.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg