Tech

Conservative boards more likely to dismiss CEO

SMU Office of Research & Tech Transfer - When considering an organisation's long-term survival, existential threats that come to mind include external competition or technological disruption. But a potential threat lurks within the firm itself - organisational misconduct among employees can destroy billions of dollars of market value and result in lasting reputational damage.

In cases of financial misconduct, organisations have been known to take strong actions such as dismissing the CEO. Curiously, some boards prefer to take limited or no action toward the CEO, even if it results in criticism that they are deficient in their oversight role.

While past literature on corporate governance has covered board independence and loyalty to the CEO, it has not examined how the political ideology - a set of beliefs and values - held by board members can also influence the process of CEO dismissal. This gap in knowledge piqued the interest of David Gomulya, Assistant Professor of Strategic Management at the Singapore Management University (SMU) Lee Kong Chian School of Business.

"The earliest trigger for my interest in this research topic was the Great Recession that started in 2007 which affected many people," he said. "It was puzzling why not all CEOs are replaced if their firms restated their earnings downward because, at least symbolically, CEOs are usually replaced to signal that the firm is serious in correcting for its mistakes."

In an article published in Strategic Management Journal, titled "Political ideology of the board and CEO dismissal following financial misconduct", Professor Gomulya and colleagues examined US S&P 1500 firms that were charged with financial misconduct and required by the Securities and Exchange Commission to restate their earnings within the ten-year period from 2003 to 2012.

To assess a board's political ideology, the researchers used a scoring system for each board director that ranged from -1 to +1. A score of -1 indicated that all donations were made to the Democratic Party while a score of +1 indicated that all donations were made to the Republican Party.
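
To make the scoring concrete, here is a minimal sketch of one natural way to derive such a score from donation records; the formula is an illustrative assumption, not necessarily the paper's exact specification:

```python
# Illustrative sketch of a donation-based ideology score in [-1, +1].
# Assumption: the score is the (Republican - Democratic) share of a
# director's total donations; the study's exact formula may differ.

def ideology_score(dem_donations: float, rep_donations: float) -> float:
    """Returns -1.0 if all donations went to Democrats, +1.0 if all to Republicans."""
    total = dem_donations + rep_donations
    if total == 0:
        raise ValueError("director has no recorded donations")
    return (rep_donations - dem_donations) / total

# A board-level measure can then be formed by averaging director scores.
board = [ideology_score(5000, 0), ideology_score(1000, 3000)]
print(sum(board) / len(board))  # -0.25: this board leans slightly Democratic
```

A board whose directors gave exclusively to one party would score -1 or +1; mixed boards fall in between.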

According to their study, politically conservative boards were more likely than liberal boards to respond by dismissing the CEO. "Our findings suggest that the ideology of board members can influence critical actions that they take," he said.

By incorporating an important but often overlooked aspect of corporate governance, Professor Gomulya hopes that his study on political ideology and CEO dismissal will complement existing literature on organisation-level behaviours and outcomes.

"We hope this study spurs future research to better understand the interaction between the beliefs of key decision makers and their actions," he said. "Recent trends in politics and business seem to suggest that such factors, although perhaps irrational, may play a larger role than we probably give credit for."

Credit: 
Singapore Management University

Bio-inspired hydrogel can rapidly switch to rigid plastic

image: The gel is soft and transparent at 25°C and cannot support a 10 kg weight (top panels) but it quickly becomes rigid and opaque when heated to 60°C, becoming strong enough to support the weight (bottom panels). (Nonoyama T. et al., Advanced Materials, November 18, 2019)

Image: 
Nonoyama T. et al., Advanced Materials, November 18, 2019

A new material that stiffens 1,800-fold when exposed to heat could protect motorcyclists and racecar drivers during accidents.

Hokkaido University researchers have developed a hydrogel that does the opposite of what polymer-based materials, like plastic bottles, normally do: it hardens when heated and softens when cooled. Their findings, published in the journal Advanced Materials, could lead to the fabrication of protective clothing items for traffic and sports-related accidents.

Takayuki Nonoyama and Jian Ping Gong of Hokkaido University and their colleagues were inspired by how proteins remain stable inside organisms that survive within extreme-heat environments, like hot springs and deep sea thermal vents. Normally, heat "denatures" proteins, altering their structure and breaking their bonds. But the proteins within thermophiles remain stable with heat thanks to enhanced electrostatic interactions such as ionic bonds.

The team developed an inexpensive, non-toxic polyacrylic gel based on this concept. A gel composed of the polyelectrolyte poly(acrylic acid) (PAAc) was immersed in an aqueous calcium acetate solution. PAAc on its own acts like any other polymer-based material and softens when heated. But when calcium acetate is added, PAAc's side residues interact with the calcium acetate molecules, in a way similar to what happens inside thermophile proteins, causing PAAc to act very differently.

The team found that their originally uniform gel separates into a polymer-dense "phase" and a polymer-sparse one as the temperature rises. When it reaches a critical temperature, in this case around 60°C, the dense phase undergoes significant dehydration, which strengthens ionic bonds and hydrophobic interactions between polymer molecules. This causes the material to rapidly transform from a soft, transparent hydrogel to a rigid, opaque plastic.

The heated material was 1,800 times stiffer, 80 times stronger, and 20 times tougher than the original hydrogel. The soft-to-rigid switching was completely reversible by alternately heating and cooling the material. Moreover, the scientists could fine-tune the switching temperature by adjusting the concentration of the ingredients.

They then demonstrated a possible application of the material by combining it with a woven glass fabric. This new fabric was soft at room temperature, but when it was pulled against an asphalt surface for five seconds at a speed of 80 km/hour, the heat generated by the friction hardened the material with only minor abrasions forming on the contact surface.

Takayuki Nonoyama says, "Clothing made from similar fabric could be used to protect people during traffic or sports-related accidents, for example. Our material could also be used as a heat-absorbent window coating to keep indoor environments cooler."

"This polymer gel can be easily made from versatile, inexpensive and non-toxic raw materials that are commonly found in daily life. Specifically, the polyacrylic acids are used in disposable diapers and calcium acetates are used in food additives," Jian Ping Gong added. "Our study contributes to basic research on new temperature-responsive polymers, and to applied research on temperature-responsive smart materials."

Credit: 
Hokkaido University

New expert findings seek to protect national parks from invasive animal species

image: A volunteer participates in a BioBlitz at Rocky Mountain National Park. BioBlitz events (an intensive field study in an area usually over 24 hours) can be helpful for identifying the presence of invasive species in national parks.

Image: 
Photo courtesy of the National Park Service

More than half of America's national parks are facing a grave and immediate threat: the ongoing presence and spread of invasive animal species. The National Park Service has taken the first step in combatting this invasion by asking a group of experts to help chart a course that will ensure the survival of these national treasures.

The experts' findings were recently published in the journal Biological Invasions. According to lead author Ashley Dayer, assistant professor of wildlife conservation in the College of Natural Resources and Environment, "As Americans, we value national parks for the natural habitats and wildlife they protect, but because of invasive species, some of our native species are struggling or unable to survive, even with the protection of our park system."

More invaders are likely to arrive and flourish because, currently, the National Park Service has no comprehensive program to reverse or halt the trend. Coordinated action and a financial commitment by the NPS and others will be critical. According to Dayer, "If we don't take action, native species will continue to struggle due to the invasives. But taking action is no small feat; it requires the commitment and resources of the National Park Service, neighboring lands, and the public."

Dayer received the opportunity to address this complex problem when she accepted an invitation from the National Park Service to serve on a panel of experts to address the threat of invasive animal species and suggest solutions. As a conservation social scientist, her work in the Department of Fish and Wildlife Conservation focuses on understanding how to best engage people in wildlife conservation issues. Other panelists were selected for their expertise in such areas as parks management, invasive species management, emerging technologies, economics, or decision support.

As to why the agency chose this particular time to act and form the panel, Elaine Leslie, former chief of the NPS Biological Resource Management Division, said, "The NPS is very concerned about nonnative and invasive species across the landscape within and outside of national park units and their impacts on native biodiversity, especially at-risk species and their habitats. . . . Nationally and internationally, the world is losing native biodiversity at an alarming rate. Threats from invasive species play a critical part in this loss."

Dayer and the team of experts have been grappling with this complex issue for three years. Their primary finding is that the presence of invasive animals undermines the mission of the NPS. These invaders can cause the loss of park wildlife, lessen visitors' enjoyment of parks, introduce diseases, and have huge economic impacts due to the cost of control measures.

Yet invasive animal species can be found in more than half of all national parks. Of the 1,409 reported populations of 311 invasive animal species in national parks, there are management plans for 23 percent and only 11 percent are being contained. The invaders include mammals, such as rats, cats, and feral pigs; aquatic species like lake trout and the quagga mussel; and reptiles, including the Burmese python.

Everglades National Park has been well-known for its invasive animal issues since pythons were found to be thriving and reproducing there in 2000. Local and national media, as well as documentary producers, quickly found an audience in the general public for their works featuring these snakes, which can reach up to 23 feet in length. Researchers have also been attentive to what is happening in the Everglades, reporting huge declines in native mammals like raccoons and opossums.

In Virginia, the hemlock woolly adelgid has infested hemlocks along the Blue Ridge Parkway and in Shenandoah National Park. Hemlocks help maintain the cool habitats needed by other species to thrive, such as native trout. Although hemlocks can live up to 600 years, a woolly adelgid infestation can kill a tree in just three to 10 years.

The second finding of the panel is that coordinated action is required to meet the challenge of invasive species. The four additional findings carry the same mandate for collaboration: partnering is essential for success; public engagement, cooperation, and support are critical; decision support across all levels must be strategic; and emerging technologies, when appropriately used, would be beneficial.

According to Mark Schwartz, a fellow panelist and professor of conservation science at the University of California-Davis, it is the complex nature of this problem that calls for such a coordinated and widespread effort. "Our national parks face a suite of wicked management problems, with the invasive species standing out for the sheer diversity of species, the geographic spread of their impact, the magnitude of the threat, and the complexity of solutions."

Both Schwartz and Dayer, as well as the other panelists, agree not only that national coordination is the way forward, but also that it will be a major challenge, an idea expressed in their findings. Schwartz said, "In addition to national coordination on invasive animals, a better means to fully integrate managing invasive animals across the full suite of challenges facing individual parks is needed."

Organizational change is possible, Dayer believes. As an affiliate of the Global Change Center housed in Virginia Tech's Fralin Life Sciences Institute, she sees good examples of progress through cross-jurisdictional efforts like the National Invasive Species Council and the Invasive Species Advisory Committee, as well as through regional collaborations that have engaged national park units.

Schwartz also sees promise in some recent park successes: "After a false start, Yellowstone regrouped, sought broad public input, and now has an effective program to manage invasive lake trout. Working with the Everglades Cooperative Invasive Species Management Area, the NPS has coordinated with other agencies, tribes, and private parties to control the invasive sacred ibis. More such collaborative efforts are needed."

Elaine Leslie believes that a coordinated effort as well as additional funding will be critical to success. "This issue is also one of economic importance," she stressed. "If we can take national steps, as other countries have, to prevent and eradicate invasive species, we can make a difference -- but it has to be a priority and well-coordinated."

Another important group of people that is referenced in the findings and could pave the way for long-lasting change is the public. "The public can play a key role in helping the parks detect or remove invasive species, pushing for new governmental policies and funding allocations, or assisting through philanthropy efforts," Dayer said. "In order to make headway, it is critical that the people of the U.S. are engaged fully in determining and implementing the solution to this challenge."

Along with the other panelists, Dayer will continue to tackle this complex issue by making sure that the findings are disseminated, promoting action from the NPS, and encouraging people to buy into and participate in efforts to protect our national parks. All of this matters because, as she firmly states, "The national parks are not the National Park Service's parks; they belong to the U.S. public and serve as conservation models nationally and internationally."

Credit: 
Virginia Tech

Frequency of worship, not location, matters more when it comes to being good neighbors

Americans travel farther on average to their worship places than they did a decade ago. But while those who belong to a congregation in their neighborhood attend more often, "worshipping local" does not make them feel closer to their neighbors or more satisfied with the neighborhood, according to a new study by researchers at Baylor University and Calvin University.

Instead, frequent attendance -- whether "worshipping local" or traveling farther -- is associated with higher commitment to the neighborhood where the congregant lives.

"How often people worship predicts neighborliness better than where people worship," said lead author Kevin D. Dougherty, Ph.D., associate professor of sociology at Baylor University.

The study -- "Worshipping Local? Congregation Proximity, Attendance, and Neighborhood Commitment" -- is published in Review of Religious Research, the journal of the Religious Research Association.

For the study, researchers analyzed data from the 2017 Baylor Religion Survey, a national instrument administered by the Gallup Organization. A total of 1,501 respondents returned completed surveys.

Changes in residential patterns in the United States have implications for congregations, said co-author Mark T. Mulder, Ph.D., professor of sociology at Calvin University in Grand Rapids, Michigan. After World War II, many Americans moved from cities to sprawling suburbs. Commuting became a way of life. Today, there is renewed interest in urban centers and an emphasis on local environments.

"While most religious Americans can reach their place of worship within 15 minutes, a growing segment is commuting more than 30 minutes," Mulder said. The most pronounced jump in travel time occurred from 2009 to 2017, with more than one third commuting more than 16 minutes to their place of worship - up from one quarter in 2001 and 2009.

The abundance of congregations available in the United States is one explanation. Religious people engage in "church shopping" to find a place of worship that meets their preferences for preaching, music, programs or other characteristics. Another partial explanation may be the attraction of large congregations, which pull members from a broader geographic area. In the past decade, American congregants increasingly became concentrated in larger congregations, Dougherty said.

"The more specialized a congregation by theology or ethnicity, such as being the only Jewish synagogue in an area, the farther people may be willing to drive to attend it," Mulder said.

The study found that people who live more than 15 minutes from their place of worship attend religious services less frequently than religiously affiliated individuals who live within five minutes of their congregation. Getting to a place of worship that is 20 to 30 minutes away can make it difficult for busy adults to attend often.

Based on their research, Dougherty and Mulder made three recommendations:
* Religious leaders, especially those engaged in start-up congregations, should pay more attention to place. Neighbors represent a valuable pool of potential participants, since minimal travel (whether by vehicle or walking) translates into more participation.
* Because the pull of suburbia is so pervasive, religious leaders will have to work harder to involve those living farthest away.
* Congregation leaders should recognize places of worship as neighbor-making spaces with a larger reach than one block or one census tract. One way to achieve a feeling of community is through small, home-based groups that meet weekly or every other week. "These small groups can link people to others within their neighborhood, even if the physical address of the congregation is miles away," Dougherty said. "As a result, home-based small groups are one of many ways congregations can operate as hubs from which people go out to be good neighbors, wherever they live."

Credit: 
Baylor University

Social influencers: What can we learn from animals?

Research from Oxford University calls on us to reconsider how behaviours may spread through societies of wild animals, and how this might provide new insights into human social networks.

Our social connections to one another, whether online or in real life, give rise to our 'social networks'. Previously, it has often been assumed that the individuals with the most social connections are the primary 'social influencers' and the most likely to acquire, and spread, new behaviours. Behaviours were viewed as spreading simply based on the amount of exposure to others, just as contracting a contagious disease might depend on exposure to infected individuals. This viewpoint has been applied not only to humans but also to a range of different animal species.

However, a new study from Oxford University suggests our understanding of animal behaviours can be enhanced by drawing on the latest findings in human systems, which show that the most influential individuals are not necessarily the most social ones. Instead, the most important individuals often tend to be those embedded in tight-knit friendship circles. Even though these individuals may have relatively few social connections, they wield high influence within their cliques and promote the rapid spread of new behaviours.
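
To make the contrast concrete, here is a minimal sketch (an illustration of the two spreading models in general, not the study's own code) of disease-like "simple contagion", where each exposure gives an independent chance of adoption, versus "complex contagion", where adoption requires reinforcement from a fraction of one's contacts:

```python
# Toy comparison of simple vs. complex contagion on a hypothetical
# friendship network; illustrative only.
import random

# Node -> set of neighbours: a tight-knit clique (0, 1, 2) attached to a
# well-connected but loosely tied hub (3).
graph = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5, 6}, 4: {3}, 5: {3}, 6: {3},
}

def simple_contagion(graph, seeds, p=0.3, steps=10, rng=random.Random(1)):
    """Each adopted neighbour independently transmits with probability p."""
    adopted = set(seeds)
    for _ in range(steps):
        adopted |= {n for n in graph if n not in adopted
                    and any(rng.random() < p for nb in graph[n] if nb in adopted)}
    return adopted

def complex_contagion(graph, seeds, threshold=0.5, steps=10):
    """A node adopts only once at least `threshold` of its neighbours has."""
    adopted = set(seeds)
    for _ in range(steps):
        new = {n for n in graph if n not in adopted
               and sum(nb in adopted for nb in graph[n]) / len(graph[n]) >= threshold}
        if not new:
            break
        adopted |= new
    return adopted

print(simple_contagion(graph, seeds={0}))
print(complex_contagion(graph, seeds={0, 1}))  # spreads in the clique, stalls at the hub
```

Under the complex-contagion rule, the behaviour sweeps through the tight-knit 0-1-2 clique but stalls at the loosely tied hub, whereas the disease-like rule depends only on raw exposure - mirroring the distinction the study draws.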

The new study, published today in the journal Trends in Ecology and Evolution, shows how these recent insights, coming from contexts as varied as how new technologies are adopted, how political movements occur, and even how social media hashtags spread, can now be harnessed for furthering our understanding of animal societies too. The study presents examples showing how even in the most basic systems, small changes in how behaviours spread can enormously affect which animals might adopt a behaviour, and which might be important to spreading it.

The author of the study, Dr Josh Firth, said: "Just like in humans, various animal species are known to be capable of social considerations, such as when to adopt a behaviour, or who to learn from. These choices mean that behaviours don't spread like diseases."

The study also draws on recent examples that are already providing new insights into animals' social lives, and how this might inform our understanding of our own social networks. For example, fish appear to make fine-scale judgements about when to copy their shoal-mates' behaviour, and birds may 'follow the majority' when learning to acquire new food. By carefully considering how these social choices affect the spread of behaviour, animal systems may provide a new way of examining how different types of behaviours, such as foraging or mating behaviours, might spread differently, and which factors determine which individuals hold the most social influence.

Dr Firth added: "Studying wild animal populations holds exceptional advantages, such as the ability to experimentally manipulate natural social networks, and to track individuals over long time periods and many generations."

As such, animal populations now provide new ways of investigating the fundamental science behind how behaviours spread, which may be beneficial for understanding social systems generally. So, while all the intensive research on human social networks may be transforming the way we think about animals' social lives, it is likely that animal behaviour can now teach us about the workings of our own societies too.

Credit: 
University of Oxford

Concerns over regulation of oral powders or gels sold as medical devices in Europe

Oral powders or gels, sold as medical devices in the European Union (EU), aren't regulated to the same safety standards as those applied to medicines, reveals research published online in the Archives of Disease in Childhood.

As a result, these products, which look like medicines, can be marketed with very limited clinical data and accompanied by poor quality product information.

This is of particular concern for children taking them, say the researchers, who call for the regulation of these products to be revised.

To test the impact of the regulatory assessment and monitoring processes for medical devices (MedDevs) before and after launch, the researchers compared the product leaflet information for three soluble powder and barrier gel MedDevs for the treatment of digestive problems, including in children, with that provided for prescription drugs.

They scrutinised the information on product composition and ingredients; use/indications; clinical effectiveness; interactions with other drugs/foodstuffs; toxicity; and long term safety.

And they assessed the quality of the published clinical evidence available at the time and used to inform the launch of: gelatin tannate; gelatin tannate plus tyndalised probiotics (for diarrhoea); and a hyaluronic acid and chondroitin sulfate gel (for acid reflux).

They found that the product information--which is used by clinicians and patients--for these three MedDevs fell short of the quality required for medicines.

For example, there was insufficient or no information on: the derivation of the products; the ratio of the relevant constituents; toxicity; factors affecting absorption and potential interactions with other drugs; maximum safe doses; and potential long term harms.

Although no evidence of side effects associated with the three products has been published, there is no proof of safety either, note the researchers.

No age limit was specified for any of the three MedDevs, meaning that they could all be used in children from birth onwards, despite little or no published evidence of their safety and clinical effectiveness in children.

MedDev product information leaflets don't mention side effects, yet there are safety concerns associated with the active ingredients in each of the products, say the researchers.

MedDevs for use in Europe are regulated by a business arm of the EU called GROW, rather than the European Medicines Agency, and require only certification with a 'CE' (quality) kitemark before the product can be marketed.

This process doesn't require evidence of efficacy or safety from high quality clinical trials, as is the case for medicines.

It also means that these products can automatically be sold without a prescription across the EU, and actively marketed to patients and clinicians. Medicines are usually prescription only when launched, and can only be marketed directly to patients if and when they become 'over the counter' products.

And the three MedDevs reviewed could easily be mistaken for medicines, helped in no small part by references to them as "paediatric drugs" or "drug treatments," suggest the researchers.

"We believe that that these products [MedDevs] could be perceived as medicines, likely because of their indication, formulation and repeated mode of administration similar as for a medicinal drug," they write.

A tougher monitoring system for MedDevs is under development, they acknowledge, but it still isn't as stringent as that applied to medicines.

"In conclusion, this analysis indicates relevant differences in the leaflets and standards used to certify MedDevs for oral use in children in the EU, when compared with medicines," write the researchers.

"We found that oral MedDevs requiring repeated ingestion to treat a medical condition (similarly as required for medicines) are hardly or not evaluated in children. This is likely because the regulatory requirements of MedDevs differ significantly from the registration standards for medicines.

"In our opinion, MedDev regulations need revision, excluding all substances for repeated oral intake," they say.

Credit: 
BMJ Group

Researchers use genomics to discover potential new treatment for parasite disease

image: When a mosquito bites a person who has lymphatic filariasis, microscopic worms circulating in the person's blood enter and infect the mosquito. When the infected mosquito bites another person, the microscopic worms pass from the mosquito through the skin, and travel to the lymph vessels. In the lymph vessels they grow into adults. An adult worm lives for about 5-7 years. The adult worms mate and release millions of microscopic worms, called microfilariae, into the blood. People with the worms in their blood can give the infection to others through mosquitoes.

Image: 
University of Maryland School of Medicine

Using innovative RNA sequencing techniques, researchers at the University of Maryland School of Medicine (UMSOM) Institute for Genome Sciences identified a promising novel treatment for lymphatic filariasis, a disabling parasitic disease that is difficult to treat. The potential new therapy, an experimental cancer drug called JQ1, targets proteins whose genes feature prominently in the worm's genome; it appears to effectively kill the adult worms in a laboratory setting, according to the study, which was published today in the journal mSystems.

Lymphatic filariasis affects over 120 million people worldwide, mostly in the tropics of Asia, the Western Pacific and Africa, and in parts of the Caribbean and South America. Those in the U.S. who have the disease, which is spread by mosquitoes that carry the baby worms, were infected while traveling abroad. The disease leads to dysfunction of the lymph system, causing swelling in the limbs (lymphedema) and hardening of the arms and legs (elephantiasis) or swelling of the scrotum in men (hydrocele). Lymphatic filariasis is a leading cause of permanent disability worldwide, according to the Centers for Disease Control and Prevention.

While current treatments can reduce the risk of the disease being transmitted to other people, the CDC says they do not do much to alleviate symptoms. Studies suggest the antibiotic doxycycline can help manage mild to moderate lymphedema by killing the adult worms, but it must be given for four to six weeks to be effective.

"The drug JQ1 works by inhibiting bromodomain-containing proteins that are necessary for the adult worms to live," said study author Julie Dunning Hotopp, PhD, Professor of Microbiology and Immunology at the Institute of Genome Sciences at the University of Maryland School of Medicine. "Based on our observations in the lab, we believe that this drug could be more effective than standard treatments at killing adult worms and may need to be administered only once."

Working alongside Matthew Chung, PhD, a postdoctoral fellow in the Institute for Genome Sciences and co-author of the study, Dr. Dunning Hotopp used an RNA sequencing technique to identify genes in the parasitic worms that cause lymphatic filariasis. They identified an overrepresentation of genes that encode bromodomain proteins. Based on those findings, they and their colleagues at the University of Wisconsin Oshkosh and at New England Biolabs decided to test JQ1, a bromodomain inhibitor, to see whether it would kill the adult worms, and they discovered that it was effective at killing them in the laboratory.
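
For readers curious about the analysis style, gene-family overrepresentation of this kind is commonly assessed with a hypergeometric test. The sketch below is purely illustrative: the counts are invented and this is not the authors' pipeline:

```python
# Illustrative overrepresentation test (invented counts, not the study's
# data): is a gene family enriched among highly expressed genes?
from scipy.stats import hypergeom

total_genes = 11000        # assumed: annotated genes in the parasite genome
family_genes = 40          # assumed: genes encoding bromodomain proteins
highly_expressed = 800     # assumed: genes flagged as highly expressed
overlap = 12               # assumed: bromodomain genes among those

# One-sided p-value: probability of seeing >= 12 family genes in the
# highly expressed set by chance alone.
p_value = hypergeom.sf(overlap - 1, total_genes, family_genes, highly_expressed)
print(f"enrichment p-value: {p_value:.3g}")
```

A small p-value would indicate that the family appears in the highly expressed set more often than chance predicts, which is the sense in which genes can be called overrepresented.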

Their next step is to conduct preclinical studies to test JQ1 in rodents infected with the parasitic worms to see whether the drug can wipe out the infection. If successful, the drug could then be tested in human trials.

"While this research is still in its early stages, it highlights the ability of transcriptomics to identify potential new therapeutics," said UMSOM Dean E. Albert Reece, MD, PhD, MBA, University Executive Vice President for Medical Affairs and the John Z. and Akiko K. Bowers Distinguished Professor. "I am eager to see the results of future research studies on this since patients with this neglected tropical disease are in dire need of more effective treatments."

Credit: 
University of Maryland School of Medicine

How to improve water quality in Europe

image: The Danube is Europe's second largest river and is used intensively by people. It is one of six rivers that formed the focus of the EU project SOLUTIONS.

Image: 
UFZ/André Künzelmann

The EU Water Framework Directive (WFD) adopted in 2000 aims to protect Europe's water resources. By 2027, EU Member States are required to bring all water bodies into a "good ecological" and "good chemical" state. There's still a long way to go. One reason is that a few existing substances, for which there are currently no suitable means of reducing pollution, cause environmental quality standards to be exceeded across the board in Germany and Europe - and thus lead to poor water quality.

"What's more, the complex mixtures of pesticides, medicines and industrial chemicals that are released daily and pose a considerable risk for humans and the environment are not taken into account when establishing the chemical status of our water bodies," says UFZ environmental chemist Dr Werner Brack, who coordinated the SOLUTIONS project that drew to a close last year. The current WFD indicator system neither differentiates between rivers with differing pollution nor demonstrates any actual improvements in water quality as a result of implemented measures. This is why it urgently needs to be developed further. Otherwise, according to Brack, the objectives of the WFD cannot be achieved.

For the past five years, European scientists have carried out research as part of the SOLUTIONS project, which received twelve million euros from the EU. "It has been shown that the current practice of limiting the assessment of chemical pollution to a few substances defined as priorities throughout Europe and certain river-basin-specific pollutants is not sufficient for recording pollution as a whole," summarises Werner Brack. At present, the WFD lists only 45 priority pollutants that may not occur, or may occur only to a limited extent, in water bodies categorised as being of good quality. However, more than 100,000 chemical substances end up in the environment and water bodies. The indicators currently used to assess water quality cannot be used to identify pollution hotspots or initiate appropriate management measures. The SOLUTIONS project has therefore developed new concepts and tools for monitoring and reducing exposure to complex mixtures.

In a total of 15 policy briefs, SOLUTIONS researchers have set out how policy makers can implement these concepts and tools. For example, the scientists recommend that substances in toxic mixtures should also be taken into account when prioritising chemicals under the WFD. Until now, prioritising chemicals and defining EU-wide priority and river-basin-specific substances have been based only on individual chemicals. In another policy brief, they describe how users can apply the RiBaTox toolbox developed as part of the SOLUTIONS project to solve problems related to the monitoring, modelling, impact assessment and management of chemical mixtures in surface waters.

Monitoring should target the complex mixtures themselves, using effect-based methods that involve representative aquatic organisms such as algae, small crustaceans and fish embryos, as well as suitable cell systems, to demonstrate how toxic each chemical cocktail is. This would allow toxic loads to be determined even if the underlying chemicals are unknown or below the detection limit for analysis. These methods should be complemented by chemical screening techniques using high-resolution mass spectrometry to see which substances the mixtures contain, to detect emerging chemicals and to monitor pollution trends in the aquatic environment. This way, valuable information can also be collected on the occurrence of substances that are now detectable but cannot yet be identified.

To be able to use this extensive data on hundreds and thousands of substances in water to assess the risk of chemical cocktails, the authors also suggest establishing a European data infrastructure. This will help gather data and make it accessible to the world of science and the authorities so it can be evaluated and shared.

"The policy briefs are intended to make it easier for decision-makers to access the scientific information needed to protect Europe's water resources," says Werner Brack. This is an important basis for people's health across Europe and for healthy ecosystems that provide the population with key services.

Credit: 
Helmholtz Centre for Environmental Research - UFZ

Social media could be a force for good in tackling depression but for privacy concerns

image: Dr Elizabeth Ford, Senior Lecturer in Primary Care Research at BSMS and study lead.

Image: 
Brighton and Sussex Medical School

Social media has been identified by a number of studies as being a significant factor in mental health problems, especially in young people. But imagine if the power of Twitter, Facebook and Instagram could also be harnessed to identify those with depression symptoms and signpost them to support services.

By analysing social media (SM) content using machine-learning techniques, it may be possible to identify which SM users are currently experiencing low mood, and then use this to show adverts for mental health services to people who need them.
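
As a rough illustration of what such machine-learning profiling involves (a toy sketch with invented posts and labels, not the approach evaluated in the study):

```python
# Toy mood classifier: TF-IDF features plus logistic regression.
# All posts and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day out with friends",    # 0 = not low mood
    "can't stop feeling hopeless",         # 1 = low mood
    "excited about starting the new job",  # 0
    "so tired of everything lately",       # 1
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Estimated probability that an unseen post reflects low mood
print(model.predict_proba(["nothing feels worth doing"])[0][1])
```

A real system would need far more data and careful validation; as the study's respondents note below, misclassification (for example of humour or irony) is one of the central risks.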

But a new study led by researchers at Brighton and Sussex Medical School (BSMS) shows that while social media users could see the benefits of this kind of analysis for depression in principle, they did not believe benefits outweighed the risks to privacy.

More than 180 people, of whom 62% had previously experienced depression, completed a questionnaire designed to gauge their reactions and views on their SM content being profiled for depression. Respondents were uneasy with the concept and were concerned that using SM in such a way would increase stigmatisation, lead to people being "outed" as having depression, or identify people who struggle to seek help in real life.

While a majority supported the idea that analysis of Facebook content could improve the targeting of charitable mental health care services, less than half would give consent for their own SM content to be analysed, and even fewer would be comfortable with analysis carried out without their explicit consent - despite the fact that profiling of social media users' demographics and certain content already happens without explicit consent, to target advertising within news feeds and across search engines.

Of particular concern for social media users was the potential for the harvested data to be sold on to companies with untrustworthy motives. Some respondents worried the software could be over-sensitive or misread a poster's humour, potentially labelling people who are not suffering from depression with a diagnosis.

Commenting on the study, Dr Elizabeth Ford, Senior Lecturer in Primary Care Research at BSMS and study lead, said: "Some respondents to our survey felt that, since advertising on social media was targeted to users anyway, profiling users' content for a beneficial purpose such as improving access to mental health services would be a good thing. However, other users felt there were too many ways in which the profiling of users' mental health could be abused, and few trusted social media companies such as Facebook to be transparent and honest about how their data was being used.

"Another possible problem is that our respondents did not feel their SM posts truly reflected their mood when they were depressed, and many of them said they posted less often when their mood was low. So, predictive tools trying to identify depression may not be very accurate."

For teams aiming to develop this kind of technology, Dr Ford has clear advice: "Our view is that with all technology development relating to people's health, researchers and developers should work with the end users as key stakeholders, helping them design and work out the trajectory of their project. As the results suggest a low level of trust in social media platforms, developers should check with SM users at all stages of development before implementing this kind of tool."

Credit: 
University of Sussex

Towards high quality ZnO quantum dots prospective for biomedical applications

image: Researchers from IPC PAS, WUT and IRIG compared the structures of the organic layers that stabilize ZnO QDs prepared by both methods (i.e. the commonly used sol-gel method and OSSOM approach developed in Warsaw). We tried to present the essence of our research as a) chaotically arranged, differently-coloured hands - characteristic for sol-gel derived ZnO QDs, and b) hands arranged in pairs, very regularly around the core, which is characteristic for ZnO QDs prepared by the OSSOM method.

Image: 
IPC PAS, G.Krzyzewski

Nanocrystalline zinc oxide (ZnO) is currently one of the most commonly used semiconductor metal oxide nanomaterials due to its unique catalytic and electro-optical characteristics. The inherent and distinctive physicochemical properties of ZnO nanostructures are dependent on a variety of factors that are determined by the applied synthetic procedure and the character of the resulting nanocrystal-ligand interface. Thus, the preparation of stable ZnO nanostructures, especially nanoparticles with sizes below 10 nm, i.e. quantum dots (QDs), with desired physicochemical properties still remains a huge challenge for chemists. Recently, scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) and Warsaw University of Technology (WUT) in cooperation with the Interdisciplinary Research Institute of Grenoble (IRIG) used dynamic nuclear polarization (DNP)-enhanced solid state nuclear magnetic resonance (NMR) spectroscopy for detailed characterization of the organic-inorganic interfaces of ZnO QDs prepared by the traditional sol-gel process and the recently developed one-pot self-supporting organometallic (OSSOM) procedure. In parallel, investigations were carried out on the design and preparation of bio-stable ZnO QDs along with the determination of their structure-biological activity relationship. These studies were published in the high-impact journals "Angewandte Chemie" and "Scientific Reports".

"We wanted to unambiguously confirm that ZnO QDs prepared in our laboratory using the OSSOM approach are of unprecedentedly high quality," recounts co-author of both papers, Dr. Ma?gorzata Wolska-Pietkiewicz. "Up to now, ZnO QDs have been commonly produced by a sol-gel process. However, the main disadvantage of this traditional method is the low reproducibility, which likely inhibits both the uniformity of particle morphology and organic ligand shell composition. Consequently, the resulting nanostructures are essentially unstable and tend to aggregate. In my opinion, this has significantly limited potential applications of nanocrystalline ZnO in various technologies," adds Dr. Wolska-Pietkiewicz.

"An alternative to the omnipresent sol-gel method are highly promising wet-organometallic approaches. Recently developed in our laboratory, the OSSOM procedure is based on the controlled exposition of a well-defined organozinc precursor to air. The OSSOM process is thermodynamically controlled and occurs at room temperature," says Professor Janusz Lewi?ski. To highlight the superiority of the organometallic approach for the preparation of ZnO QDs, both the procedure-driven properties as well as the structures of the organic ligand shells of QDs prepared by both the OSSOM approach and the sol-gel procedure were compared. For this purpose scientists applied the DNP-NMR method that is being developed in the group of Dr Gaël De Paëpe (IRIG). "This NMR technique allows us to study nanomaterials' interfaces with atomic precision and thus to demonstrate the difference between tested materials," continues Dr. Daniel Lee and adds that the ability to determine the exact nature and structure of the interface gives a valuable insight into future designs for new and fully stable functional nanomaterials. In addition, DNP-NMR measurements are relatively fast and take only a few hours. This really isn't much, especially compared to conventional NMR spectroscopy, which (in the case of measurements with comparable resolution) would require ... about a year.

"The OSSOM method leads to the formation of ZnO QDs coated with strongly anchored and highly-ordered organic coatings. Contrastingly, on the surface of sol-gel derived ZnO nanostructures, coating ligand molecules are randomly distributed," Dr. Wolska-Pietkiewicz points out. What is more, ligands could be easily removed from the surface of QDs derived from sol-gel process, changing the properties of the resulting nanomaterial. "In our method, the surface is super-protected, and QDs are stable. As a result, the OSSOM approach affords high-quality ZnO QDs with unique physicochemical properties, which are prospective for biological applications," adds Dr. Wolska-Pietkiewicz.

Why is it so important?

"This preliminary study has only just scratched the surface (pun intended) of what can be achieved." - says Dr. Lee. "We have shown that being able to study nanomaterials' surface stability at an atomic scale enables the understanding of how to provide their stability, which is extremely important from the point of view of subsequent applications: from sensors and optical devices to targeted drug delivery and nanomedicines."

"In the near future, we could design, for example, safe and effective drug nanocarriers for cancer therapies, in which we would be able to deposit appropriately selected, active molecules within our ordered organic layer. Positioning is important especially for targeted therapies, e.g. photodynamic therapy, because it allows the drug to be released evenly in a particular environment and at the right speed. In addition, owing to the achieved ligands ordering, we are able to pack a lot of active drug particles on a small carrier" adds Professor Lewinski.

Credit: 
Institute of Physical Chemistry of the Polish Academy of Sciences

An alloy that retains its memory at high temperatures

image: Alexander Paulsen (right) and Alberto Ferrari have brought theory and practice together.

Image: 
© RUB, Marquard

Using computer simulation, Alberto Ferrari calculated a design proposal for a shape memory alloy that retains its efficiency for a long time even at high temperatures. Alexander Paulsen manufactured it and experimentally confirmed the prediction. The alloy of titanium, tantalum and scandium is more than just a new high-temperature shape memory alloy. Rather, the research team from the Interdisciplinary Centre for Advanced Materials Simulation (Icams) and the Institute for Materials at Ruhr-Universität Bochum (RUB) has also demonstrated how theoretical predictions can be used to produce new materials more quickly. The group published its report in the journal Physical Review Materials on 21 October 2019. Their work was showcased as an Editor's Suggestion.

Avoiding the unwanted phase

Shape memory alloys can re-establish their original shape after deformation when the temperature changes. This phenomenon is based on a transformation of the crystal lattice in which the atoms of the metals are arranged. Researchers refer to this as a phase transformation. "In addition to the desired phases, there are also others that form permanently and considerably weaken or even completely destroy the shape memory effect," explains Dr. Jan Frenzel from the Institute for Materials. The so-called omega phase occurs at a specific temperature, depending on the composition of the material. To date, many shape memory alloys for the high-temperature range have withstood only a few deformations before becoming unusable once the omega phase sets in.

Promising shape memory alloys for high temperature applications are based on a mixture of titanium and tantalum. By changing the proportions of these metals in the alloy, researchers can determine the temperature at which the omega phase occurs. "However, while we can move this temperature upward, the temperature of the desired phase transformation is unfortunately lowered in the process," says Jan Frenzel.

Admixture alters properties

The RUB researchers attempted to understand the mechanisms of the onset of the omega phase in detail, in order to find ways to improve the performance of shape memory alloys for the high-temperature range. To this end, Alberto Ferrari, PhD researcher at Icams, calculated the stability of the respective phases as a function of temperature for different compositions of titanium and tantalum. "He was able to use it to confirm the results of experiments," points out Dr. Jutta Rogal from Icams.

In the next step, Alberto Ferrari simulated the addition of small amounts of third elements to the shape memory alloy of titanium and tantalum. He selected the candidates according to specific criteria; for example, they should be as non-toxic as possible. It emerged that an admixture of a few percent of scandium should result in an alloy that functions for a long time even at high temperatures. "Even though scandium belongs to the rare earths and is, consequently, expensive, we only need very little of it, which is why it's worth using anyway," explains Jan Frenzel.

Prediction is accurate

Alexander Paulsen then produced the alloy calculated by Alberto Ferrari at the Institute for Materials and tested its properties in an experiment: the results confirmed the calculations. A microscopic examination of the samples later proved that even after many deformations no omega phase was found in the crystal lattice of the alloy. "We have thus expanded our basic knowledge of titanium-based shape memory alloys and developed possible new high-temperature shape memory alloys," says Jan Frenzel. "Moreover, it's great that the computer simulation predictions are so accurate." Since the production of such alloys is very complex, the implementation of computer-aided design proposals for new materials promises much faster success.

Credit: 
Ruhr-University Bochum

Diamonds in your devices: Powering the next generation of energy storage

image: In a breakthrough study, scientists from Japan use nanodiamonds to construct supercapacitors that can be widely used as a more efficient alternative to conventional energy-storage devices.

Image: 
Tokyo University of Science

Our use of battery-operated devices and appliances has been increasing steadily, bringing with it the need for safe, efficient, and high-performing power sources. To this end, a type of electrical energy storage device called the supercapacitor has recently begun to be considered as a feasible, and sometimes even better, alternative to conventional widely used energy-storage devices such as Li-ion batteries. Supercapacitors can charge and discharge much more rapidly than conventional batteries and also continue to do so for much longer. This makes them suitable for a range of applications such as regenerative braking in vehicles, wearable electronic devices, and so on. "If a high-performance supercapacitor using a non-flammable, non-toxic, and safe aqueous electrolyte can be created, it can be incorporated into wearable devices and other devices, contributing to a boom in the Internet of Things," says Dr Takeshi Kondo, the lead scientist in a recent breakthrough study in the field.

Yet, despite their potential, supercapacitors, at present, have certain drawbacks that are preventing their widespread use. One major issue is that they have low energy density; that is, they store insufficient energy per unit area of their space. Scientists first attempted to solve this problem by using organic solvents as the electrolyte--the conducting medium--inside supercapacitors to raise the generated voltage (note that in energy storage devices, energy density is directly proportional to the square of the voltage). But organic solvents are costly and have low conductivity. So, perhaps, an aqueous electrolyte would be better, the scientists thought.
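
The voltage-squared relationship follows from the textbook expression for the energy stored in a capacitor, included here for clarity rather than quoted from the study: for a capacitance $C$ charged to a voltage $V$,

$$E = \tfrac{1}{2} C V^{2},$$

so doubling the operating voltage quadruples the stored energy for the same capacitance.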

Thus, the development of supercapacitor components that would be effective with aqueous electrolytes became a central research topic in the field.

In the aforementioned recent study, published in Scientific Reports, Dr Kondo and his group from the Tokyo University of Science and Daicel Corporation in Japan explored the possibility of using a novel material, the boron-doped nanodiamond, as an electrode in supercapacitors--electrodes are the conducting materials in a battery or capacitor that connect the electrolyte with external wires to transport current out of the system. The research group's choice of electrode material was based on the knowledge that boron-doped diamonds have a wide potential window, a feature that enables a high-energy storage device to remain stable over time. "We thought that water-based supercapacitors producing a large voltage could be realized if conductive diamond is used as an electrode material," Dr Kondo says.

The scientists used a technique called microwave plasma-assisted chemical vapor deposition (MPCVD) to manufacture these electrodes and examined their performance by testing their properties. They found that in a basic two-electrode system with an aqueous sulfuric acid electrolyte, these electrodes produced a much higher voltage than conventional cells did, resulting in much higher energy and power densities for the supercapacitor. Further, they saw that even after 10,000 cycles of charging and discharging, the electrode remained very stable. The boron-doped nanodiamond had proven its worth.

Armed with this success, the scientists then ventured to explore whether this electrode material would show the same results if the electrolyte were changed to saturated sodium perchlorate solution, which is known to enable production of a higher voltage than what is possible with conventional sulfuric acid electrolyte. Indeed, the already high voltage generated expanded considerably in this setup.

Thus, as Dr Kondo has said, "the boron-doped nanodiamond electrodes are useful for aqueous supercapacitors, which function as high-energy storage devices suitable for high-speed charging and discharging."

Looks like diamonds could be driving our electronic and physical lives in the near future!

Credit: 
Tokyo University of Science

Co-combustion of wood and oil-shale reduces carbon emissions

image: Head of Laboratory of Fuel and Air Emission Analysis in TalTech, Professor Alar Konist.

Image: 
TalTech

The utilization of fossil fuels, an increasing environmental risk, can be made more environmentally friendly by adding wood, conclude thermal engineers at Tallinn University of Technology based on the preliminary results of a year-long study. In the search for less polluting ways to produce energy, increasing the share of biomass as a source of raw materials offers a good way to keep using fossil fuels while reducing emissions.

The Head of TalTech's Laboratory of Fuel and Air Emission Analysis, Professor Alar Konist who leads the research says, "We used thermogravimetric analysis in our research. In modern laboratory conditions, the use of a high-speed furnace for thermogravimetric analysis allows determining reactivity of wood at different temperatures and mass percentages. Our goal was to study the kinetics of combustion of biomass - in this case wood and oil shale, with the aim to maximise the amount of biomass."
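
For readers unfamiliar with the method, here is a minimal sketch of how thermogravimetric readings are typically turned into a reactivity measure; the numbers are invented and this is not TalTech's analysis code:

```python
# Illustrative sketch (invented data, not TalTech's analysis): converting
# thermogravimetric (TGA) mass readings into conversion and a crude rate.
import numpy as np

temps_C = np.array([200.0, 300.0, 400.0, 500.0])  # assumed furnace temperatures
mass_mg = np.array([10.0, 9.1, 6.5, 4.0])         # assumed sample masses
m0, m_ash = mass_mg[0], 3.5                       # assumed initial and final (ash) mass

alpha = (m0 - mass_mg) / (m0 - m_ash)             # fraction of fuel converted
rate = np.gradient(alpha, temps_C)                # d(alpha)/dT, a simple reactivity proxy
print(alpha)
print(rate)
```

Comparing such conversion curves for different wood and oil-shale ratios is one standard way to assess how reactivity changes with blend composition.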

One of the possibilities to reduce notorious carbon dioxide (CO2) emissions in thermal engineering is to reduce the share of oil shale fuel and replace it with renewable raw materials. In their study, the researchers of Tallinn University of Technology used one of our most common renewable natural resources - wood - to analyse the co-combustion of wood and oil shale. Mixtures containing up to 40 mass percent of wood were analysed.

"Today we can say that use of the mixture of fossil fuel and biomass in the Estonian CFB boilers is the least damaging to the environment. For example, the efficiency of our most well-known modern green electricity producer, the Auvere Power Plant is 40%, while in other cogeneration plants that produce electricity in addition to heat the efficiency still remains below 30%," Professor Konist says.

Unlike the burning of solid biofuels, which are part of the biogeochemical cycle, burning fossil fuels adds carbon dioxide to the biosphere's circulation. There are two main environmental concerns related to fossil fuels: first, flue gases contain various pollutants that are emitted into the atmosphere in addition to carbon dioxide; and secondly, ash is produced during combustion.

Professor Alar Konist says, "The results of our research show that the emission concentration of pollutants in the flue gas can be controlled at the lowest optimal combustion temperatures of 700-800°C. The formation of the other harmful factor - ash - can be reduced by almost 50% by adding wood to the oil shale. Such ash has added value: owing to the added wood, the quality of the ash is suitable, for example, for use (or, more accurately, re-use) as a raw material for the production of green cement."

Oil shale pyrolysis, gasification, carbon capture and utilization (CCUS technologies) - these are topics of the future, which will be tackled by heat and power engineers in the short term. Since climate targets are constantly becoming stricter, Europe is facing circumstances where the consumption volumes have increased, but the production volumes have decreased substantially. This means that Europe is forced to import energy, regardless of its strict environmental regulations. This, in turn, leads to the conclusion that cheaper energy produced in compliance with lower environmental requirements can be sold at a lower price. This means that the energy produced by cleaner technologies is unfortunately not competitive in Europe in current market circumstances.

"I am convinced that as long as our energy production continues to be market-based, we cannot unfortunately rely on major solutions that value the environment, such as implementation of carbon capture technologies, etc." Professor Konist says.

Credit: 
Estonian Research Council

Study sheds light on the peculiar 'normal' phase of high-temperature superconductors

image: An illustration shows how the normal state of a superconducting cuprate abruptly changes when the density of free-flowing electrons is tweaked in a process known as doping. Particle-like excitations that are characteristic of a conventional metal (right) disappear as the 'strange metallic' state (left) takes over.

Image: 
Greg Stewart/SLAC National Accelerator Laboratory

Every character has a back story, and so do high-temperature superconductors, which conduct electricity with no loss at much higher temperatures than scientists once thought possible. To figure out how they work, researchers need to understand their "normal" state, which gives rise to superconductivity when the material is cooled below a critical transition temperature and the density of free-flowing electrons is tweaked in a process known as "doping."

Even in their normal state, these materials are pretty peculiar. Now, an experiment at the Department of Energy's SLAC National Accelerator Laboratory has probed the normal state more accurately than ever before, and discovered an abrupt shift in the behavior of electrons in which they suddenly give up their individuality and behave like an electron soup.

A research team from SLAC and Stanford University described the results in Science.

"The abnormality of this normal state is suspected to be the reason why these superconductors are such good superconductors," says Dirk Van Der Marel, a researcher at the University of Geneva who was not involved in the study.

"This study has essentially overthrown a very popular and hotly debated theory, called quantum critical point theory, that is thought to underlie superconductivity not only in this material, but in other materials as well. This is a disruptive finding, but it's a step forward, because it frees our minds to explore other ideas."

Exploring a well-known cuprate

The study was carried out on a compound called Bi2212, one of the most thoroughly studied high-temperature superconductors. As a copper oxide, or cuprate, it's part of a family of compounds where high-temperature superconductivity was first discovered more than 30 years ago.

Scientists across the world have been working ever since to understand how these materials function, with a goal of finding superconductors that operate at close to room temperature for applications like perfectly efficient power lines.

One of the most important tools for studying these materials is angle-resolved photoemission spectroscopy (ARPES). It uses light - in this case a beam of ultraviolet light from SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) - to kick electrons out of the material and measure their energy and momentum. This reveals how the electrons inside the material behave, which in turn determines its properties.
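As a rough sketch of how such a measurement maps onto electron properties, the snippet below applies the standard photoemission kinematics: binding energy follows from energy conservation, and in-plane momentum from the emission angle. The photon energy and work function values are illustrative assumptions, not SSRL's actual beamline parameters.

import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def arpes_kinematics(kinetic_energy_eV, angle_deg,
                     photon_energy_eV=21.2, work_function_eV=4.5):
    # Energy conservation: E_binding = h*nu - work function - E_kinetic
    binding_energy_eV = photon_energy_eV - work_function_eV - kinetic_energy_eV
    # In-plane momentum is conserved across the surface:
    # k_par = sqrt(2 * m * E_k) / hbar * sin(theta)
    e_k = kinetic_energy_eV * EV
    k_par = np.sqrt(2 * M_E * e_k) / HBAR * np.sin(np.radians(angle_deg))
    return binding_energy_eV, k_par * 1e-10  # momentum in inverse angstroms

# Example: a 16.5 eV photoelectron detected 15 degrees off normal
print(arpes_kinematics(16.5, 15.0))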

In superconductivity, for instance, electrons overcome their mutual repulsion and form a sort of collective soup in which they can pair up and flow past obstacles without losing any of their energy.

Frustrated electrons

Earlier generations of so-called conventional superconductors, which operate only at extremely low temperatures, are conventional metals in their normal state, where their electrons act independently, as they do in most materials.

But in cuprates the picture is very different. Even in their normal, non-superconducting state, electrons seem to recognize each other and act collectively, as if they were dragging each other around, in what's known as "strange metal" and even "incoherent strange metal" behavior.

"In a way you can think about these electrons as being frustrated," said Zhi-Xun Shen, a professor at Stanford and SLAC and investigator with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC who led the study. "In other words, the electrons have sort of lost their individual identity and become part of the soup. This is a really interesting, challenging state to describe in theoretical ways."

It's been hard to explore these fascinating normal states at the warm temperatures where they occur, said Su-Di Chen, a Stanford graduate student who performed the experiments with SLAC postdoctoral researcher Yu He, Stanford postdoc Jun-Feng He and SSRL scientist Makoto Hashimoto. The theoretical part of the study at SLAC was led by SIMES Director Thomas Devereaux.

A surprisingly sharp boundary

In ARPES experiments, samples are usually placed in a cold environment inside a vacuum chamber to minimize contamination of the surface, Chen said: "But even if you put them in an ultra-high vacuum, residual gas molecules can still attach to the sample surface and affect the quality of our measurement. This problem gets worse when you warm the environment around the sample to the temperatures where the normal states exist."

To get around this, Hashimoto said, the team found a way to warm the sample, which is about the size of the tip of a ballpoint pen, by warming just the part of the setup that holds it while keeping everything else cold. This allowed them to examine the electrons' behavior across a range of temperatures and doping levels.

"What we saw was that as you increase the level of doping, there's a very sharp boundary," Hashimoto said. "On one side the electrons are jammed, or frustrated. Then, as more electrons are added, they suddenly start moving smoothly, an indication that the material is now a conventional metal. This transition was known to happen, but the fact that it was so sharp was a real surprise."

A challenge for theory

The results pose a challenge for theorists who still struggle to explain how high-temperature superconductors work, said paper co-author Jan Zaanen, a theoretical physicist at Leiden University in the Netherlands.

Current theory predicts that because changes in the nature of Bi2212 are gradual at very low, superconducting temperatures, they should also be gradual at the higher temperatures where the material is in a normal state, he said. Instead the high-temperature changes are abrupt, like what happens when a pot of water starts to boil: You can see either water or bubbles of steam in the roiling pot, but nothing in between.

"There are quite a number of reasons to believe that the strange metal in the normal state may be an example of densely entangled matter," Zaanen said. "Entanglement is the property of the quantum world that sharply distinguishes it from anything classical. We have no theoretical machines, be it classical computers or the available mathematics, that can describe it!

"But quantum computers are designed to handle such densely entangled stuff," he said. "My dream is that these results will eventually land on the top of the list of benchmark problems for the quantum computing community to solve."

Credit: 
DOE/SLAC National Accelerator Laboratory

Neuro interface adds tactile dimension to screen images

Researchers from Duke University and HSE University have succeeded in creating artificial tactile perception in monkeys through direct brain stimulation. This breakthrough can be used to create upper-limb neuroprostheses, capable of delivering a tactile sensation. The study's results were recently published in the Proceedings of the National Academy of Sciences.

Most of today's prosthetics exchange information with the remaining nerves in an amputated limb rather than directly with the brain. Neuroprostheses connect to the brain directly and can help restore limb function even if the peripheral nervous system is completely damaged, such as from a spinal cord injury or paralysis. In addition, when prosthesis users get tactile feedback, they no longer have to rely on vision alone to control its movements. This increases the precision of movement and makes control more natural and easier, since in everyday life we don't usually monitor our hand movements visually.

Electric stimulation of sections of the somatosensory cortex can produce percepts that mimic somatic sensation in the body parts connected to those parts of the cortex. However, tactile perception includes a wide range of sensations, such as the ability to distinguish an object's temperature, weight, pressure or texture. To imitate tactile perception completely, each of these sensations must be studied.

Researchers from Duke University and HSE University set out to find out whether somatosensory cortex stimulation can mimic the sensation of a surface during active tactile exploration.

Two rhesus monkeys were implanted with electrodes in parts of their somatosensory cortex. According to Mikhail Lebedev, Academic Supervisor of the HSE Centre for Bioelectric Interfaces, one monkey had the electrode implanted to stimulate the area responsible for tactile perception in its finger, and the other in its toe.

The animals were seated before displays and given joysticks, which they used to control a cursor that looked like a realistic upper-limb avatar. The display showed two grey rectangles with a 'tactile' texture - vertical ridges that were invisible but could be 'felt' with the cursor. When the cursor crossed a ridge, the monkey's somatosensory cortex was stimulated with electrodes.
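To make the task logic concrete, here is a minimal sketch of the ridge-crossing rule described above: whenever the cursor moves from one ridge interval to the next inside a rectangle, a stimulation pulse would be triggered. The ridge spacing, rectangle bounds and trajectory are invented for illustration; the actual task parameters are in the paper.

import numpy as np

RIDGE_SPACING = 0.05            # spacing between ridges, normalized units (assumed)
RECT_LEFT, RECT_RIGHT = 0.2, 0.6  # rectangle bounds, normalized units (assumed)

def ridge_index(x):
    # Which ridge interval the cursor is in; None when outside the rectangle
    if not RECT_LEFT <= x <= RECT_RIGHT:
        return None
    return int((x - RECT_LEFT) / RIDGE_SPACING)

def crossings(trajectory):
    # Count ridge crossings along a cursor trajectory and record pulse times
    pulses = []
    prev = ridge_index(trajectory[0])
    for t, x in enumerate(trajectory[1:], start=1):
        cur = ridge_index(x)
        if cur is not None and prev is not None and cur != prev:
            pulses.append(t)    # here the real system would stimulate cortex
        prev = cur
    return pulses

# Simulated sweep of the cursor across the screen
traj = np.linspace(0.0, 1.0, 200)
print(len(crossings(traj)), "stimulation pulses delivered")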

At first, the monkeys used the joystick to move the cursor; at the next stage, the cursor-joystick connection was disabled and the subjects were connected to the virtual finger via a 'brain-computer-brain' interface: the signals controlling the cursor were decoded directly from their brains. The monkeys were rewarded each time they chose the most 'rugged' rectangle.

The researchers were particularly interested in whether the monkeys would maintain their ability to compare the textures of surfaces at different speeds of exploration: this would mean that their movement control is 'synchronized' with the feedback received from the cursor.

Both monkeys, even after the first experimental session, performed the task better than simply guessing the correct rectangle. The speed of their exploration (i.e., the speed of their virtual hand movement over quasi-textured objects) did not affect their overall performance. This means that they were really able to feel the texture of different rectangles.
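A standard way to quantify "better than simply guessing" in a two-choice task like this is a one-sided binomial test against 50% chance, as in the sketch below. The trial counts here are made up for illustration; the paper reports the actual session statistics.

from scipy.stats import binomtest

# With two rectangles, guessing succeeds 50% of the time on average.
# Illustrative numbers: 130 correct choices out of 200 trials.
n_trials, n_correct = 200, 130
result = binomtest(n_correct, n_trials, p=0.5, alternative='greater')
print(f"{n_correct}/{n_trials} correct, p = {result.pvalue:.4f}")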

The researchers are currently carrying out their next experiment with the participation of volunteers. As Mr Lebedev explains, 'Volunteers are now involved in the same experiment as the monkeys were, but now the electrode is placed on their finger and stimulates the finger directly. People can already tell us exactly what they are feeling.'

Credit: 
National Research University Higher School of Economics