
Perovskite solar cells developed by NTU Singapore scientists record highest power conversion efficiency

Image: Perovskites can be used to create lightweight, flexible and semi-transparent solar cells ideal for applications in buildings and a variety of urban spaces. (Credit: NTU Singapore)

A team of researchers at the Nanyang Technological University, Singapore (NTU Singapore) has created a perovskite solar mini module that has recorded the highest power conversion efficiency of any perovskite-based device larger than 10 cm2.

Perovskites are new materials that have emerged as promising alternatives to silicon in solar cell applications. The material offers power conversion efficiencies similar to those of silicon solar cells but can also be used to create lightweight, flexible and semi-transparent cells ideal for applications in buildings and a variety of urban spaces. Perovskite technologies are progressing rapidly towards industrialisation, with stability and scalability to larger sizes seen by researchers as the last hurdles to overcome.

Now, NTU researchers report that they have adopted a common industrial coating technique called 'thermal co-evaporation' and found that it can be used to fabricate 21 cm2 solar cell modules with a record power conversion efficiency of 18.1 per cent. This is the highest value reported for scalable perovskite solar cells.

Thermal evaporation is an established coating technique currently used to produce electronics including Organic Light Emitting Diode (OLED) TVs.

Dr Annalisa Bruno, lead author of the research findings published on the cover page of the scientific journal Joule, and Senior Scientist at the Energy Research Institute @ NTU (ERI@N), explained the roadblock in the large-scale adoption of perovskite solar modules.

"The best-performing perovskite solar cells have so far been realised in the laboratory at sizes much smaller than 1 cm2, using a solution-based technique, called 'spin-coating'. However, when used on a large surface, the method results in perovskite solar cells with lower power conversion efficiencies. This is due to the intrinsic limitations that include defects and lack of uniformity over large areas, making it challenging for industrial fabrication methods" she said.

"By using thermal evaporation to form the perovskite layer, our team successfully developed perovskite solar cells with the highest recorded power conversion efficiency reported for modules larger than 10 cm2.

"Our work demonstrates the compatibility of perovskite technology with industrial processes, and its potential for market entry. This is good news for Singapore, which is looking to ramp up the use of solar energy for its power needs."

First author and research fellow at ERI@N, Dr Li Jia said, "We have demonstrated the excellent scalability of co-evaporated perovskite solar cells for the first time. This step will accelerate the transition of this technology from laboratory to industry."

More surface area to harness sunlight with coloured perovskite solar cells

Utilising the same technique, the researchers then fabricated coloured semi-transparent versions of the perovskite solar cells and mini modules, which achieved similar measures of power conversion efficiency across a whole range of different colours.

These results demonstrate the versatility of the thermal evaporation method in producing a range of perovskite-based solar devices for a variety of optoelectronic applications.

NTU Associate Vice President (Strategy & Partnerships), Professor Subodh Mhaisalkar, who is the co-lead author of the paper, said the findings open doors for Singapore and urban environments in other countries to harness the power of sunlight more efficiently than ever before.

"The solar mini modules can be used on facades and windows in skyscrapers, which is not possible with current silicon solar panels as they are opaque and block light. Building owners will be able to incorporate semi-transparent coloured solar cells in the architectural designs to harvest even more solar energy without compromising the aesthetic qualities of their buildings" said Prof Mhaisalkar who is also Executive Director of the Energy Research Institute @ NTU (ERI@N).

Associate Professor Nripan Mathews, who is co-lead author and from the School of Materials Science & Engineering at NTU, said, "This work highlights the breadth and depth of perovskite research at NTU. There is no other team in the world that pursues the various possibilities that perovskites provide under one roof. From large-area solar cells for buildings, high-efficiency perovskite-silicon tandem devices, to light-emitting diodes - our team is inspired to tackle the key challenges involved to expedite technological deployment."

Providing an independent view, Professor Armin Aberle, Chief Executive Officer of the Solar Energy Research Institute of Singapore (SERIS) at the National University of Singapore (NUS) said, "This work represents the first demonstration of highly efficient large-area perovskite solar cells fabricated by an industrially compatible process. We are working closely with NTU in the future development of 30% efficient perovskite-on-silicon tandem solar cells in Singapore."

The NTU team is now looking at integrating perovskite and silicon solar cells to create a tandem solar cell. Such a configuration fabricated using cost-effective and scalable processes can substantially increase the solar electricity production per unit area while keeping production costs low.

Credit: 
Nanyang Technological University

A study on maternal mortality in Mozambique detects a major diagnostic error in a high percentage of deaths

Barcelona, July 14, 2020. An analysis of a series of maternal deaths in Maputo's central hospital, in Mozambique, reveals a major diagnostic error in almost 40% of the deaths. The results, published in The Lancet Global Health, show there has been little improvement over the last ten years. The study was led by the Barcelona Institute for Global Health (ISGlobal), an institution supported by the "la Caixa" Foundation, in collaboration with the Manhiça Health Research Centre (CISM) in Mozambique.

Although the number of women in low- and middle-income countries who give birth in health centres has increased in recent years, maternal mortality remains extremely high in these countries: more than 300,000 women die during pregnancy, delivery or puerperium every year worldwide, and 99% of these deaths occur in poor countries.

"It is not only about increasing access to health services, but also increasing the quality of the care provided," says Clara Menéndez, director of the Maternal, Child and Reproductive Health Initiative at ISGlobal and first author of the study. "A key - and often neglected- element for improving healthcare quality is the correct diagnosis of diseases that can lead to death in pregnant women," she adds.

In this study, the team led by Jaume Ordi, ISGlobal researcher and pathologist at the Hospital Clinic, retrospectively analysed a series of deaths that occurred in Maputo's central hospital, in southern Mozambique, between November 2013 and March 2015. They compared the clinical diagnoses with diagnoses obtained by complete autopsy and observed that in almost 40% of the deaths there was a major diagnostic error; in these cases, had a correct diagnosis been made, the death could possibly have been avoided.

"If we compare these results with a similar study done ten years earlier, we can see that the diagnostic capacity has barely improved and has even worsened for some pathologies such as puerperal infections," says Ordi.

These results highlight the need to improve diagnostic capacities through access to better diagnostic tests and by strengthening clinical skills among healthcare workers. "The practice of autopsies and the joint analysis of diagnostic discrepancies by clinicians and pathologists could be of great help for the medical staff providing care for pregnant women," concludes Menéndez.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

Loss of a co-twin linked to heightened psychiatric risk

The death of a twin, especially earlier in life, can increase the risk of their surviving twin being diagnosed with a psychiatric disorder, finds a new study published today in eLife.

Losing a loved one is always difficult but losing a twin may be particularly so. By virtue of being the same age, twins share many common experiences and may have strong emotional bonds. The new study suggests those who lose a co-twin may require extra support in both the short and longer term.

"Losing a co-twin by death may be a particularly devastating life stressor with considerable health implications for surviving twins, yet there have been few studies on this type of bereavement," says lead author Huan Song, a senior researcher at West China Hospital, Sichuan University, China, and also at the University of Iceland and Karolinska Institute, Sweden.

Using the Swedish health registers and the Swedish Twin Registry, Song and colleagues identified all Swedish twins who experienced the death of a co-twin between 1973 and 2013. They then compared the rates of psychiatric diagnoses in these bereaved twins with their non-twin siblings, and with 22,640 twins whose co-twin was still alive.

"We showed that the risk of being diagnosed with a psychiatric disorder increased by 55% to 65% after the death of a co-twin," Song says. This risk was highest in cases where a co-twin had died during childhood or young adulthood.

Surviving twins were most likely to receive a new psychiatric diagnosis in the first month after the death, when their risk of such a diagnosis was sevenfold higher than that of non-bereaved twins. But they continued to have a higher risk for more than 10 years after the loss.

The findings also revealed that the risk of being diagnosed with a psychiatric disorder after a co-twin's death was particularly high for identical twins, who share all the same genes. These individuals had about a 2.5-times higher risk compared to their non-twin siblings. Surviving fraternal twins, who are as genetically similar to their twin as their non-twin siblings, had about a 30% higher risk of a psychiatric diagnosis after the death of their twin than their non-twin siblings.

Senior author Unnur Valdimarsdóttir, Professor of Epidemiology at the University of Iceland, explains that because of their genetic similarities and shared experiences, twins often develop a sense of shared identity, which may compound their grief after the loss of their co-twin.

"Our results suggest that both genetic similarity and early-life attachment may contribute to the subsequent risk of psychiatric disorders among surviving twins after the death of their co-twin," Valdimarsdóttir concludes.

Credit: 
eLife

Towards prosperous public goods with freedom of choice

Image: Without the possibility of prioritizing their contributions (left), say, between a local park, a library, or an environmental initiative, people are less likely to participate in public-goods provision. Low participation rates, in turn, threaten the feasibility of public goods; the park may get overgrown with weeds, library construction may get canceled, and the environment may get polluted. Ultimately, there is less benefit from public goods relative to the situation when prioritizing is possible (right). (Credit: Tokyo Tech)

From climate and biodiversity to public health and law enforcement, public goods benefit all. They are produced or maintained through widespread participation, yet public-goods provision is vulnerable to low participation rates. Avoiding this vulnerability has spurred a continuing search for better ways to promote participation.

Now, a study by an international group of researchers shows that the ability to freely choose preferred public goods adds to their value by increasing participation rates. The findings offer surprising insights into human decision making, while also suggesting that societies may profit from bottom-up approaches to public-goods provision.

Decades of experiments on human behavior and public-goods games have consistently confirmed that initial participation rates hover around 50% but then decrease due to free riding (the act of piggybacking on the goodwill of others). Recent theoretical research suggests that social networks are instrumental in offsetting free riding, but so far large-scale experiments have failed to support these theoretical predictions.

To investigate factors affecting public-goods provision, a research team coordinated by Marko Jusup from Tokyo Institute of Technology (Tokyo Tech) in Japan and Zhen Wang from Northwestern Polytechnical University in China conducted a social-dilemma experiment designed specifically to reveal what drives participation rates. Is it global characteristics of social networks or local circumstances of each individual?

The team organized a game experiment played by 596 students who were equally distributed across three social-network configurations and two experimental conditions. Under control conditions, players could only decide whether or not to participate in public-goods provision. A decision to participate implied contributing one unit of wealth to each public good within their reach. The total contribution would then be multiplied by an interest rate and divided equally not only among actual contributors but also among free riders who could have contributed but chose not to. Free riders could thus piggyback on the effort of contributors to gain benefits without sharing costs. Players under treatment conditions could additionally decide how much to contribute to each of the public goods within their reach.

A player with access to five different public goods would, by opting to participate under control conditions, contribute one unit of wealth to each of the public goods for a total contribution of five units. The same player under treatment conditions would also contribute a total of five units of wealth, but with the caveat that how much goes to each of the five public goods is up to the player.
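
As an illustration only, the payoff rule described above can be sketched in a few lines of Python; the interest rate of 1.6 and the player labels below are assumptions made for the example, not values reported by the researchers.

    # Minimal sketch (not the authors' experimental software) of how a single
    # public good pays out. Free riders pay nothing but still receive an
    # equal share of the multiplied pot.
    def payoffs(contributions, interest_rate=1.6):
        pot = sum(contributions.values()) * interest_rate
        share = pot / len(contributions)
        # Net payoff = equal share of the pot minus what the member paid in.
        return {member: share - paid for member, paid in contributions.items()}

    # Control condition: a participating player pays exactly 1 unit into this good.
    print(payoffs({"A": 1, "B": 1, "C": 0}))   # C free rides on A and B

    # Treatment condition: player A may prioritise this good with, say, 3 of the
    # 5 units in their overall budget, while still contributing 5 units in total.
    print(payoffs({"A": 3, "B": 1, "C": 0}))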

The study found that local circumstances are more important than the global characteristics of social networks. Changing the network configuration does not appear to affect player decisions in any significant way, whereas letting players distribute their wealth freely increases participation in public goods provision, motivates better provisioning, and thus adds value to public goods.

Remarkably, treatment conditions jump-started participation from the very beginning to form a cooperative milieu that is independent of social-network characteristics. Jusup comments: "This is surprising! We expected initial participation to be similar under both control and treatment conditions. Only later in the game did we expect gradual learning and optimization from players who could choose freely. We observed that increased participation in public-goods provision happens from the very first round of the game, as if the players could feel that extra freedoms weaken the underlying dilemma of whether to participate or not. Over time, more participation leads to more wealth, generating something akin to a free lunch for players under treatment conditions."

The study identified three behavioral types that account for the results: prosocial, antisocial, and conditional cooperators. Prosocial players participate almost unconditionally, antisocial players mostly forgo participation, and conditional cooperators refuse to participate when there are no other participants around. Notably, freedom of choice seems to foster conditional cooperation, as evidenced by the fact that conditional cooperators are mostly absent under control conditions but predominate under treatment conditions. This occurs because in the latter case, players receive much clearer signals from their surroundings, and can then better gauge the overall cooperativeness of their neighbors.

There are many interesting implications for socio-economic settings. "Policymakers, for instance, could facilitate raising residential taxes by offering a portfolio of public goods for taxpayers to choose from," write the researchers in their study published in Proceedings of the National Academy of Sciences.

"And by doing so, voters could decouple long-term, life-improving, public-goods projects from the whims and fancies of political election cycles," adds Ivan Romi?, a co-author of the study. "Going beyond the politics, private companies might be able to motivate customers to pay premium product prices if the premium could be directed toward a public good of customers' choosing, thus stepping up the corporate social responsibility while remaining profitable."

Credit: 
Tokyo Institute of Technology

Scientists find new link between delirium and brain energy disruption

Scientists from Trinity College Dublin have discovered a new link between impaired brain energy metabolism and delirium - a disorienting and distressing disorder particularly common in the elderly and one that is currently occurring in a large proportion of patients hospitalised with COVID-19.

While much of the research was conducted in mice, additional work suggests overlapping mechanisms are at play in humans because cerebrospinal fluid (CSF) collected from patients suffering from delirium also contained tell-tale markers of altered brain glucose metabolism.

Collectively, the research, which has just been published in the Journal of Neuroscience, suggests that therapies focusing on brain energy metabolism may offer new routes to mitigating delirium.

Delirium

When the body experiences high levels of inflammation - such as during bacterial or viral infections - the way our brains function changes, which in turn affects our mood and motivation. In older patients such acute inflammation can produce a profound disturbance of brain function known as delirium. Despite the disorder being relatively common, the mechanisms by which it arises are poorly understood.

In the new research the scientists found that artificially inducing peripheral inflammation in mice triggered sudden onset cognitive dysfunction, and that this is mediated by a disturbance to energy metabolism.

In these experiments, inflammation left the mice with lower levels of blood sugar (glucose), which the brain requires for maintaining normal function. When the animals were supplemented with glucose, their cognitive performance returned towards normal, despite the continued inflammation.

Professor Colm Cunningham, who leads the Trinity Biomedical Science Institute lab where the work was performed, said: "An important feature of these experiments was that mice with early stages of pre-existing neurodegenerative disease were far more susceptible to dysfunction when these metabolic changes occurred.

"Our collaborators in Oslo also detected evidence of altered brain glucose metabolism in cerebrospinal fluid taken from people experiencing delirium, which argues for overlapping mechanisms in humans and mice. In other words, the signs are that similar processes are at work in people."

Dr Wes Ely, a critical care physician from Vanderbilt University, who wasn't involved with the study, added:

"The finding that the neurodegenerative animals are less resilient to this disturbance of energy metabolism really resonates with what we see in our intensive care unit patients with delirium."

Given the frequency of delirium among hospitalised members of the elderly population, and given that these episodes can accelerate the progression of underlying dementia, treatments are desperately needed.

Professor Cunningham added: "Simply providing glucose to patients is not likely to treat delirium in most cases but collectively our data emphasise that an appropriate supply of both oxygen and glucose to the brain becomes especially important in older patients and in those with existing dementia. Therefore, we believe that focusing on brain energy metabolism may offer routes to mitigating delirium."

Credit: 
Trinity College Dublin

Links between video games and gambling run deeper than previously thought, study reveals

A range of video game practices have potentially dangerous links to problem gambling, a study has revealed.

Building on previous research by the same author, which exposed a link between problem gambling and video game loot boxes, the new study suggests that a number of other practices in video games, such as token wagering, real-money gaming, and social casino spending, are also significantly linked to problem gambling.

The research provides evidence that players who engage in these practices are also more likely to suffer from disordered gaming - a condition where persistent and repeated engagement with video games causes an individual significant impairment or distress.

Author of the study, Dr David Zendle from the Department of Computer Science at the University of York, said: "These findings suggest that the relationship between gaming and problem gambling is more complex than many people think."

"When we go beyond loot boxes, we can see that there are multiple novel practices in gaming that incorporate elements of gambling. All of them are linked to problem gambling, and all seem prevalent. This may pose an important public health risk. Further research is urgently needed"

For the study, just under 1,100 participants were quota-sampled to represent the UK population in terms of age, gender, and ethnicity. They were then asked about their gaming and gambling habits.

The study revealed that a significant proportion (18.5%) of the participants had engaged in some behaviour that related to both gaming and gambling, such as playing a social casino game or spending money on a loot box.

Dr Zendle added: "There are currently loopholes that mean some gambling related elements of video games avoid regulation. For example social casinos are 'video games' that are basically a simulation of gambling: you can spend real money in them, and the only thing that stops them being regulated as proper gambling is that winnings cannot be converted into cash.

"We need to have regulations in place that address all of the similarities between gambling and video games. Loot boxes aren't the only element of video games that overlaps with gambling: They're just a tiny symptom of this broader convergence"

Last year, University of York academics, including Dr David Zendle, contributed to a House of Commons select committee inquiry whose report called for video game loot boxes to be regulated under gambling law and for their sale to children to be banned. Dr Zendle also provided key evidence to the recent House of Lords select committee inquiry that likewise produced a report recommending the regulation of loot boxes as gambling.

Credit: 
University of York

Particulate plutonium released from the Fukushima Daiichi meltdowns

Image: (a) Electron imaging of a CsMP with elemental maps. (b) Synchrotron micro-focus X-ray fluorescence (μXRF) elemental maps. (c) Image of a uranium dioxide inclusion in the CsMP. (d) Uranium L3-edge X-ray absorption near-edge structure (XANES) of a discrete point, indicated by the red arrow in (b), plotted alongside U(IV) and U(VI) oxide standards. (e) Discrete-area Pu L3-edge XANES collected from the point indicated by the red arrow. (Credit: Kyushu University)

Small amounts of plutonium (Pu) were released from the damaged Fukushima Daiichi Nuclear Power Plant (FDNPP) reactors into the environment during the site's 2011 nuclear disaster. However, the physical, chemical, and isotopic form of the released Pu has remained unknown.

Now, recent work published in the journal Science of the Total Environment has shown that Pu was included inside cesium-rich microparticles (CsMPs) that were emitted from the site. CsMPs are microscopic radioactive particles that formed inside the Fukushima reactors when the melting nuclear fuel interacted with the reactor's structural concrete. Due to loss of containment in the reactors, the particles were released into the atmosphere; many were then deposited across Japan.

Studies have shown that the CsMPs are incredibly radioactive and that they are primarily composed of glass (with silica from the concrete) and radio-cesium (a volatile fission product formed in the reactors). Whilst the environmental impact and distribution of the CsMPs are still an active subject of debate, learning about their chemical composition has been shown to offer much-needed insight into the nature and extent of the FDNPP meltdowns.

The study published in Science of the Total Environment, involving scientists from Japan, Finland, France, Switzerland, the UK, and USA, was led by Dr. Satoshi Utsunomiya and graduate student Eitaro Kurihara (Department of Chemistry, Kyushu University). The team used a combination of advanced analytical techniques (synchrotron-based micro-X-ray analysis, secondary ion mass spectrometry, and high-resolution transmission electron microscopy) to find and characterize the Pu that was present in the CsMP samples.

The researchers initially discovered incredibly small uranium-dioxide inclusions, of less than 10 nanometers in diameter, inside the CsMPs; this indicated possible inclusion of nuclear fuel inside the particles. Detailed analysis then revealed, for the first time, that Pu-oxide concentrates were associated with the uranium, and that the isotopic composition of the U and Pu matched that calculated for the FDNPP irradiated fuel inventory.

Dr Utsunomiya stated "these results strongly suggest that the nano-scale heterogeneity that is common in normal nuclear fuels is still present in the fuel debris that remains inside the site's damaged reactors. This is important information as it tells us about the extent / severity of the melt-down. Further, this is important information for the eventual decommissioning of the damaged reactors and the long-term management of their wastes."

With regard to the environmental impact, Dr Utsunomiya states that "as we already know that the CsMPs were distributed over a wide region in Japan (up to 230 km from the FDNPP), small amounts of Pu were likely dispersed in the same way."

Professor Gareth Law, a co-author on the paper from the University of Helsinki, indicated that the team "will continue to characterize and experiment with the CsMPs, in an effort to better understand their long-term behavior and environmental impact. It is clear that CsMPs are an important vector of radioactive contamination from nuclear accidents."

Professor Bernd Grambow, a co-author from Nantes, France, states that "while the Pu released from the damaged reactors is low compared to that of Cs, the investigation provides crucial information for studying the associated health impact."

Professor Rod Ewing at Stanford University emphasized that "the study used an extraordinary array of analytical techniques in order to complete the description of the particles at the atomic-scale. This is the type of information required to describe the mobility of plutonium in the environment."

Utsunomiya concluded "It took a long time to publish results on particulate Pu from Fukushima. I would like to emphasize that this is a great achievement of international collaboration. It's been almost ten years since the nuclear disaster at Fukushima," he continued "but research on Fukushima's environmental impact and its decommissioning are a long way from being over."

Credit: 
University of Helsinki

Messages sent by osteoblasts to osteoclasts are enclosed in an extracellular vesicle

Image: Osteoclasts and osteoblasts in zebrafish scales were fluorescently labeled with GFP (green) and mCherry (red), respectively. A nucleus is shown in gray. Osteoclasts receive messages from osteoblasts via extracellular vesicles (blue arrows) and fuse to form a multinucleated giant cell in the bone tissue. (Credit: Kanazawa University)

Kanazawa, Japan - In a study published recently in Communications Biology, researchers led by Kanazawa University explain how a tiny fish could help end painful metabolic and genetic bone diseases.

While pain, swelling, and immobility are the most obvious aftereffects of a broken bone, there is also a flurry of activity going on at the cellular level in an effort to repair the damage. And because bones are usually hidden away under layers of muscle, fat, and skin, it has been difficult for researchers to study in real time how bones regenerate.

"To provide insight into the process of fracture repair, we needed bone that was easily accessible and a model species in which bone growth mimicked that of humans," says lead author Jingjing Kobayashi-Sun. "With transgenic and mutant lines already available, zebrafish scales ticked all the boxes."

Zebrafish scales, although much simpler than mammalian bones, respond and develop in almost the same way as their more complex counterparts, and are conveniently located on the outside of the body. Most importantly though, they contain both osteoclasts and osteoblasts, the cells that respectively clear away broken bone fragments and generate new bone after a fracture.

Explains Kobayashi-Sun, "Recent studies have suggested that molecular packages, called extracellular vesicles (EVs), derived from osteoblasts may deliver signaling molecules to immature osteoclasts, triggering their differentiation into mature, active cells. However, given the difficulty of live imaging in bone, no one has been able to prove this process actually occurs in vivo."

To visualize osteoblasts and osteoclasts in zebrafish scales, the researchers generated a double transgenic line in which the two cell types expressed different-colored fluorescent labels. After making small cuts in the scales of anesthetized fish, the researchers could identify the different cell types in the healing bone using fluorescence microscopy or flow cytometry.

"We observed that a large number of the green fluorescent osteoclasts at the fracture site contained red fluorescent osteoblast-derived particles in the cytoplasm, confirming the uptake of EVs by immature osteoclasts," says senior author Isao Kobayashi. "In addition, mature osteoclasts were abundant in the damaged scales, indicating that EV uptake triggers the differentiation of osteoclasts."

The EVs contained high levels of a cellular differentiation-associated signaling molecule called RANKL. Knocking out the corresponding gene resulted in a significant reduction in the number of osteoclasts, suggesting that osteoblasts control osteoclast formation via Rankl signaling.

Given that many bone disorders result from a breakdown in communication between osteoblasts and osteoclasts, these findings provide valuable information for the development of novel drug therapies.

Credit: 
Kanazawa University

More than one cognition: A call for change in the field of comparative psychology

Image: Reviewing 40 years of research, a new paper challenges hypotheses and calls for a more biocentric understanding of cognitive evolution. (Pictured, clockwise from top left: Simone Pika, Juliane Braeuer, and Natalie Uomini.)

What makes a species "smart" and how do strategies for processing information evolve? What goes on in the minds of non-human animals and which cognitive skills can we claim as hallmarks of our species? These are some of the questions addressed by the field of comparative psychology, but a recent review in the Journal of Intelligence joins a growing body of literature that argues that studies of cognition are hampered by anthropocentrism and missing the bigger picture of cognitive evolution.

Based on 40 years of scientific literature and case studies of three non-human animals, the current paper identifies two main problems hindering research in comparative psychology.

The first is the assumption that human cognition is the standard by which animal cognition should be measured. Human cognition is generally believed to be the most flexible, adaptable form of intelligence, with the abilities of other species evaluated according to the extent to which they match human cognitive skills. Such an approach tends to overrate human-like cognitive skills and may overlook cognitive skills that play only a small part, or no part at all, in human psychology.

"This approach, whether implicit or explicit, can only produce a restrictive, anthropocentric view of cognitive evolution that ignores the incredible diversity of cognitive skills present in the world," says Juliane Bräuer, leader of the DogLab at the Max Planck Institute for the Science of Human History. Instead, research into the evolution of cognition should take a biocentric approach, considering each species investigated in its own right.

"Applying Darwinian thinking to comparative psychology and removing the 'benchmark' of human intelligence allows us to reveal the evolutionary, developmental and environmental conditions that foster the growth of certain unique abilities and the convergence of skills shared among a species," adds Natalie Uomini, the main co-author of the paper.

To further address this anthropocentric view, the authors also argue for increased focus on cognitive abilities in which animals outperform humans and discuss cases in which various species demonstrate better-than-human abilities in delayed gratification, navigation, communication, pattern recognition and statistical reasoning.

The second problem addressed is the assumption that cognition evolves as a package of skills similar to those apparent in humans, skills which taken together constitute "one cognition." The authors survey various major hypotheses from psychology, including the Social Intelligence Hypothesis, the Domestication Hypothesis and the Cooperative Breeding Hypothesis, and argue that while each has evidence to support its claims, none accounts for the whole picture of cognition.

Instead of a cluster of linked skills originating from a single evolutionary pressure, the paper provides a framework for understanding cognitive arrays as the result of species-typical adaptations to the entire ecological and social environment.

"If we want to account for the fascinating variety of animal minds, comparative scientists should focus on skills that are ecologically relevant for a given species," say Bräuer and Uomini.

The paper discusses three distantly related species - chimpanzees, dogs and New Caledonian crows - that are highly sophisticated in one cognitive domain yet perform poorly in others generally believed to be linked.

The paper also lays out recommendations to make future experiments in comparative psychology ecologically relevant to the target species, including differentiating tasks for each species and accounting for diverse senses of perception, such as smell in the case of dogs.

In Germany, where the authors of the paper are based, comparative psychology is a relatively unknown field. The authors hope to stimulate interest and growth in the subject with future research dedicated to the study of each species' cognitive skills for their own sake, leading to a more relevant and holistic perspective on animals' cognitive skills and the recognition that there is not only "one cognition."

Credit: 
Max Planck Institute of Geoanthropology

Tech sector job interviews assess anxiety, not software skills

A new study from North Carolina State University and Microsoft finds that the technical interviews currently used in hiring for many software engineering positions test whether a job candidate has performance anxiety rather than whether the candidate is competent at coding. The interviews may also be used to exclude groups or favor specific job candidates.

"Technical interviews are feared and hated in the industry, and it turns out that these interview techniques may also be hurting the industry's ability to find and hire skilled software engineers," says Chris Parnin, an assistant professor of computer science at NC State and co-author of a paper on the work. "Our study suggests that a lot of well-qualified job candidates are being eliminated because they're not used to working on a whiteboard in front of an audience."

Technical interviews in the software engineering sector generally take the form of giving a job candidate a problem to solve, then requiring the candidate to write out a solution in code on a whiteboard - explaining each step of the process to an interviewer.

Previous research found that many developers in the software engineering community felt the technical interview process was deeply flawed. So the researchers decided to run a study aimed at assessing the effect of the interview process on aspiring software engineers.

For this study, researchers conducted technical interviews of 48 computer science undergraduates and graduate students. Half of the study participants were given a conventional technical interview, with an interviewer looking on. The other half of the participants were asked to solve their problem on a whiteboard in a private room. The private interviews did not require study participants to explain their solutions aloud, and had no interviewers looking over their shoulders.

Researchers measured each study participant's interview performance by assessing the accuracy and efficiency of each solution. In other words, they wanted to know whether the code participants wrote would work, and how much computing power it would need to run.

"People who took the traditional interview performed half as well as people that were able to interview in private," Parnin says. "In short, the findings suggest that companies are missing out on really good programmers because those programmers aren't good at writing on a whiteboard and explaining their work out loud while coding."

The researchers also note that the current format of technical interviews may also be used to exclude certain job candidates.

"For example, interviewers may give easier problems to candidates they prefer," Parnin says. "But the format may also serve as a barrier to entire classes of candidates. For example, in our study, all of the women who took the public interview failed, while all of the women who took the private interview passed. Our study was limited, and a larger sample size would be needed to draw firm conclusions, but the idea that the very design of the interview process may effectively exclude an entire class of job candidates is troubling."

What's more, the specific nature of the technical interview process means that many job candidates spend weeks or months training specifically for the technical interview, rather than for the actual job they'd be doing.

"The technical interview process gives people with industry connections an advantage," says Mahnaz Behroozi, first author of study and a Ph.D. student at NC State. "But it gives a particularly large advantage to people who can afford to take the time to focus solely on preparing for an interview process that has very little to do with the nature of the work itself.

"And the problems this study highlights are in addition to a suite of other problems associated with the hiring process in the tech sector, which we presented at ICSE-SES [the International Conference on Software Engineering, Software Engineering In Society]," adds Behroozi. "If the tech sector can address all of these challenges in a meaningful way, it will make significant progress in becoming more fair and inclusive. More to the point, the sector will be drawing from a larger and more diverse talent pool, which would contribute to better work."

The study on technical interviews, "Does Stress Impact Technical Interview Performance?," will be presented at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, being held virtually from Nov. 8-13. The study was co-authored by Shivani Shirolkar, a Ph.D. student at NC State who worked on the project while an undergraduate; and by Titus Barik, a researcher at Microsoft and former Ph.D. student at NC State.

Credit: 
North Carolina State University

Machine learning accurately predicts who's who in the health care workforce

Until recently, economists, policy makers and workforce experts have relied on outdated and inaccurate snapshots of the U.S. physician workforce, making it especially difficult to predict the need for and availability of health care services across the country. Data about each physician's area of specialty is collected at the beginning of their career and is rarely updated, increasing the potential for outdated information about who is providing care for our nation's population. In this study, Wingrove et al. examine how machine learning algorithms may allow for more real-time, accurate descriptions of the medical workforce, including professions that do not formally collect specialty data, such as physician assistants and nurse practitioners. Algorithms can also identify physicians in new and evolving interdisciplinary positions.

One such learning model, from the Robert Graham Center and the University of Pittsburgh, was trained to identify a majority of medical specialties with 95 percent accuracy. The model was fed data from clinical encounters in the form of procedures and prescriptions billed to Medicare from 2014 to 2016. The model was less accurate at predicting some specialties, such as neurosurgery and physical medicine and rehabilitation. Overall, however, its predictions for 70 percent of physician practice types were within five percentage points of their actual counts, including primary care and specialties such as emergency medicine, cardiology, gastroenterology and radiology.
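
As a purely illustrative sketch of the kind of model described, the snippet below trains a classifier to predict specialty from counts of billed procedure and prescription codes; the file name, column layout and choice of a random-forest classifier are assumptions made for the example, since the summary does not describe the study's actual pipeline.

    # Hypothetical example: predict a clinician's specialty from billing data.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Assumed layout: one row per clinician, with counts of billed procedure and
    # prescription codes as columns plus the specialty recorded for that clinician.
    claims = pd.read_csv("medicare_claims_by_clinician.csv")   # hypothetical file
    X = claims.drop(columns=["npi", "specialty"])
    y = claims["specialty"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)
    print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))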

Credit: 
American Academy of Family Physicians

COVID-19 makes clear the need to address social determinants of health

University of Michigan public health experts Julia Wolfson and Cindy Leung argue that the COVID-19 pandemic has made glaringly apparent the structural conditions that underlie inequities in our nation's health. Race and ethnicity, housing, income, occupation and chronic health conditions are all key factors that influence one's ability to safely weather highly infectious disease pandemics like COVID-19. Unlike the novel coronavirus strain, these social, economic and structural factors are not new. The authors argue, "An opportunity exists to use the unfolding crisis to advocate for structural changes to a system that has long perpetuated disparities." Wolfson and Leung draw together four articles in the July-August 2020 issue of the Annals of Family Medicine that emphasize social determinants of health and highlight the calls to action for primary care.

Credit: 
American Academy of Family Physicians

NREL research points to strategies for recycling of solar panels

Researchers at the National Renewable Energy Laboratory (NREL) have conducted the first global assessment into the most promising approaches to end-of-life management for solar photovoltaic (PV) modules.

PV modules have a 30-year lifespan, and there is currently no plan for how to manage them at the end of that lifespan. The volume of modules no longer needed could total 80 million metric tons by 2050. In addition to quantity, the nature of the waste also poses challenges. PV modules are made of valuable, precious, critical, and toxic materials. There is currently no standard for how to recycle the valuable ones and mitigate the toxic ones.

Numerous articles review individual options for PV recycling but, until now, no one has done a global assessment of all PV recycling efforts to identify the most promising approaches.

"PV is a major part of the energy transition," said Garvin Heath, a senior scientist at NREL who specializes in sustainability science. "We must be good stewards of these materials and develop a circular economy for PV modules."

Heath is lead author of "Research and development priorities for silicon photovoltaic module recycling supporting a circular economy," which appears in the journal Nature Energy. His co-authors from NREL are Timothy Silverman, Michael Kempe, Michael Deceglie, and Teresa Barnes; and former NREL colleagues Tim Remo and Hao Cui. The team also collaborated with outside experts, particularly in solar manufacturing.

"It provides a succinct, in-depth synthesis of where we should and should not steer our focus as researchers, investors, and policymakers," Heath said.

The authors focused on the recycling of crystalline silicon, a material used in more than 90% of installed PV systems in a very pure form. It accounts for about half of the energy, carbon footprint, and cost to produce PV modules, but only a small portion of their mass. Silicon's value is determined by its purity.

"It takes a lot of investment to make silicon pure," said Silverman, PV hardware expert. "For a PV module, you take these silicon cells, seal them up in a weatherproof package where they're touching other materials, and wait 20 to 30 years--all the while, PV technology is improving. How can we get back that energy and material investment in the best way for the environment?"

The authors found some countries have PV recycling regulations in place, while others are just beginning to consider solutions. Currently, only one crystalline silicon PV-dedicated recycling facility exists in the world due to the limited amount of waste being produced today.

Based on their findings, the authors recommend research and development to reduce recycling costs and environmental impacts, while maximizing material recovery. They suggest focusing on high-value silicon versus intact silicon wafers. The latter has been touted as achievable, but silicon wafers often crack and would not likely meet today's exacting standards to enable direct reuse. To recover high-value silicon, the authors highlight the need for research and development of silicon purification processes.

The authors also emphasize that the environmental and economic impacts of recycling practices should be explored using techno-economic analyses and life-cycle assessments.

Finally, the authors note that finding ways to avoid waste to begin with is an important part of the equation, including how to make solar panels last longer, use materials more effectively, and produce electricity more efficiently.

"We need research and development because the accumulation of waste will sneak up on us," Silverman said. "Much like the exponential growth of PV installations, it will seem to move slowly and then rapidly accelerate. By the time there's enough waste to open a PV-dedicated facility, we need to have already studied the proper process."

If successful, these findings could contribute one piece of a PV circular economy.

Credit: 
DOE/National Renewable Energy Laboratory

New French prescribing rules result in decreased prescribing of certain sleep drugs

In France, the implementation of new prescribing rules for the sedative-hypnotic drug zolpidem led to a substantial and immediate decrease in use. This decline was partially compensated for by a rise in the use of a related nonbenzodiazepine drug called zopiclone. In 2017, French health authorities made it mandatory to use a secure prescribing form for zolpidem, which is a popular insomnia drug in France and one of the drugs most involved in falsified prescribing and diversion. A time-series analysis of national prescription drug reimbursement records from 2015 to 2018 shows the positive impact of France's regulations, with prescribing of zolpidem cut in half when comparing rates before and after the policy change. Nearly equal and opposite increases in zopiclone prescribing were seen over the same period. The change in prescribing policies thus resulted in a shift to the alternative drug zopiclone, a pattern that has been observed with other prescribing restriction efforts, including restrictions on benzodiazepines.
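
For readers unfamiliar with this type of analysis, the sketch below shows what an interrupted time-series (segmented regression) model of monthly reimbursement counts around a policy change might look like; the file name, column names, enforcement month and use of ordinary least squares are assumptions for illustration, not the study's actual code or data.

    # Illustrative interrupted time-series sketch for monthly claim counts.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("monthly_zolpidem_claims.csv", parse_dates=["month"])  # hypothetical data
    df = df.sort_values("month").reset_index(drop=True)

    policy_start = pd.Timestamp("2017-04-01")           # assumed enforcement month
    df["time"] = np.arange(len(df))                      # months since series start
    df["post"] = (df["month"] >= policy_start).astype(int)
    # Months elapsed since the policy took effect (0 before the change).
    df["months_post"] = (df["time"] - df.loc[df["post"] == 1, "time"].min()).clip(lower=0)

    # `post` captures the immediate level change; `months_post` captures any
    # change in the underlying trend after the new prescribing rules.
    model = smf.ols("claims ~ time + post + months_post", data=df).fit()
    print(model.summary())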

Credit: 
American Academy of Family Physicians

Umbilical cord blood successfully treats rare genetic disorders in largest study to date

Image: Division director, bone marrow transplantation and cellular therapies, UPMC Children's Hospital. (Credit: UPMC)

PITTSBURGH, July 14, 2020 - Researchers at UPMC Children's Hospital of Pittsburgh found that infusing umbilical cord blood -- a readily available source of stem cells -- safely and effectively treated 44 children born with various non-cancerous genetic disorders, including sickle cell, thalassemia, Hunter syndrome, Krabbe disease, metachromatic leukodystrophy (MLD) and an array of immune deficiencies. This is the largest trial of its kind to date.

The idea was to create a fairly universal treatment, rather than chasing individual therapies for all of these rare diseases, and to do so with minimal risk to the patients. The results were published in today's print edition of Blood Advances.

"There has been a lot of emphasis placed on cool new technologies that might address these diseases, but -- even if they prove effective -- those aren't available to most centers," said study senior author Paul Szabolcs, M.D., division director of bone marrow transplantation and cellular therapies at UPMC Children's Hospital. "The regimen we developed is more robust, readily applicable and will remain significantly less expensive."

For this study, the participants received intravenous injections of banked cord blood, which was donated from the umbilical cords and placentas of healthy babies just after birth and frozen until needed.

To make room in their bone marrow for donor stem cells to take root and prevent them from being rejected, study participants received a low dose of chemotherapy and immunosuppressant drugs in a careful sequence. Once the cells integrated into the patients' bodies, these drugs were tapered off. To kick the immune system back into gear, the researchers reserved a small fraction of the cord blood and gave it to participants a few weeks after the initial infusion.

It's important to note that this procedure doesn't require the donor and recipient to have matching immune profiles.

"That's huge for ethnic minorities," Szabolcs said. "The probability of a perfect match is very low, but with a cord blood graft, we have a chance to overcome this discrepancy over the course of a couple months and then taper immunosuppressants away."

Overall, post-infusion complications were relatively mild. None of the participants experienced severe chronic graft-versus-host disease, and the mortality rate from viral infection due to immune suppression was 5%, which is much lower than in prior studies.

For the 30 children in the trial with metabolic disorders -- in which improper enzyme function causes the buildup of harmful toxins in the body -- all but one exhibited progressive symptoms of neurodevelopmental delays before the start of the trial. Within a year of receiving cord blood, all of them had normal enzyme levels, and all showed a halting of neurological decline. Some even began to gain new skills.

The most common metabolic disorders in this study were leukodystrophies, which typically are fatal within a few years of symptom onset. Even with cord blood treatment, a large retrospective analysis reported a three-year survival rate of about 60%. With the protocol used in this study, more than 90% of symptomatic leukodystrophy patients were still alive three years after their cord blood treatment.

No previous studies using stem cells to treat metabolic, immune or blood disorders have shown such high levels of safety, efficacy or broad applicability.

"There has been a stagnation of outcomes in this field, just changing one chemotherapy agent for another, not a true evolution," Szabolcs said. "We designed an approach now proven to be efficacious for at least 20 diseases. And we believe it might be effective for many, many more."

Since this paper was submitted, the researchers already have had success with using this technique to treat additional diseases, including in adults.

Credit: 
University of Pittsburgh