
A comprehensive review of biosynthesis of inorganic nanomaterials using microorganisms and bacteriophages

image: Single- and two-element map of inorganic nanomaterials biosynthesized using microbial cells and bacteriophages. Fifty-one elements (excluding H, C, N and O) have been used in inorganic nanomaterial synthesis using microbial cells and bacteriophages. White spaces indicate that biosynthesis of inorganic nanomaterials comprising the corresponding elements has not yet been reported. Red denotes unary or binary metal/non-metal nanomaterials that have been biosynthesized. Dark blue denotes metal/non-metal oxides that have been biosynthesized. Light blue indicates biosynthesized metal hydroxides. Light purple indicates that metal/non-metal phosphates have been biosynthesized. Orange indicates that metal carbonates have been biosynthesized. All inorganic nanomaterials biosynthesized using microbial cells and bacteriophages are listed in the paper.

Image: 
KAIST

There are diverse methods for producing inorganic nanomaterials, each involving many experimental variables. Among the numerous possible combinations of material and method, finding the pair that allows environmentally friendly synthesis has been a longstanding challenge for researchers and industry.

A KAIST bioprocess engineering research team led by Distinguished Professor Sang Yup Lee compiled a review of 146 biosynthesized single- and multi-element inorganic nanomaterials covering 55 elements of the periodic table, synthesized using wild-type and genetically engineered microorganisms. Their research highlights the diverse applications of biogenic nanomaterials and presents strategies for improving the biosynthesis of nanomaterials in terms of producibility, crystallinity, size, and shape.

The research team described a 10-step flow chart for developing the biosynthesis of inorganic nanomaterials using microorganisms and bacteriophages. The research was published in Nature Reviews Chemistry as a cover and hero paper on December 3.

"We suggest general strategies for microbial nanomaterial biosynthesis via a step-by-step flow chart and give our perspectives on the future of nanomaterial biosynthesis and applications. This flow chart will serve as a general guide for those wishing to prepare biosynthetic inorganic nanomaterials using microbial cells," explained Dr.Yoojin Choi, a co-author of this research.

Most inorganic nanomaterials are still produced using physical and chemical methods, but these conventional processes have drawbacks, including high energy consumption and environmentally unfriendly conditions. Biological synthesis has therefore been gaining attention: microorganisms such as microalgae, yeasts, fungi, and bacteria, and even viruses, can be utilized as biofactories to produce single- and multi-element inorganic nanomaterials under mild conditions.

After conducting a massive survey, the research team concluded that the development of genetically engineered microorganisms with increased inorganic-ion-binding affinity, inorganic-ion-reduction ability, and nanomaterial biosynthetic efficiency has enabled the synthesis of many inorganic nanomaterials.

Among these strategies, the team introduced their analysis of Pourbaix diagrams for controlling the size and morphology of a product. The research team said this Pourbaix diagram analysis can be widely employed for biosynthesizing new nanomaterials with industrial applications.
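A Pourbaix diagram maps which chemical species is thermodynamically stable at each combination of solution pH and electrode potential, which is why it can suggest the culture conditions that favor one product phase or morphology over another. As a minimal illustration of the underlying arithmetic (a sketch, not code from the review; the example couple and its parameters are placeholders), the pH-dependent boundaries of such a diagram follow the Nernst equation:

```python
# Illustrative sketch only: a Pourbaix diagram plots electrode potential
# E against pH. For a half-reaction consuming m protons and n electrons,
# the Nernst equation gives the 25 degC boundary as E = E0 - 0.0592*(m/n)*pH.
import numpy as np
import matplotlib.pyplot as plt

def nernst_boundary(E0, m, n, pH):
    """Boundary potential (V vs SHE) of a pH-dependent redox couple at 25 degC."""
    return E0 - 0.0592 * (m / n) * pH

pH = np.linspace(0, 14, 100)

# Hypothetical oxide/metal boundary; E0, m and n are placeholders.
plt.plot(pH, nernst_boundary(E0=0.80, m=2, n=2, pH=pH), label="oxide/metal (example)")

# Standard water-stability lines that frame every Pourbaix diagram:
plt.plot(pH, 1.229 - 0.0592 * pH, "--", label="O2/H2O")
plt.plot(pH, -0.0592 * pH, "--", label="H+/H2")

plt.xlabel("pH")
plt.ylabel("E (V vs SHE)")
plt.legend()
plt.show()
```

Reading off where the desired species is stable then indicates the pH and redox environment a microbial culture would need to maintain.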

Professor Sang Yup Lee added, "This research provides extensive information and perspectives on the biosynthesis of diverse inorganic nanomaterials using microorganisms and bacteriophages and their applications. We expect that biosynthetic inorganic nanomaterials will find more diverse and innovative applications across diverse fields of science and technology."

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

What makes COVID misinformation so tough to stop on social media

A recent study highlights two of the reasons that misinformation about COVID-19 is so difficult to tackle on social media: most people think they're above average at spotting misinformation; and misinformation often triggers negative emotions that resonate with people. The findings may help communicators share accurate information more effectively.

"This study gives us more insight into how users respond to misinformation about the pandemic on social media platforms," says Yang Cheng, first author of the study and an assistant professor of communication at North Carolina State University. "It also gives us information we can use to share accurate information more effectively."

For this study, researchers conducted a survey of 1,793 U.S. adults. The survey asked a range of questions designed to address four issues: the extent to which study participants felt they and others were affected by COVID misinformation online; the extent to which misinformation triggered negative emotions; their support for government restrictions on social media and misinformation; and their support for media literacy training and other corrective actions.

One of the most powerful findings was that study participants overwhelmingly thought that other people were more vulnerable to misinformation. This phenomenon is known as the "third-person effect," which predicts that people perceive media messages as having a greater effect on others than on themselves.
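In survey research, the third-person effect is typically quantified as the gap between the effect a respondent perceives on others and the effect they perceive on themselves. A minimal sketch of that scoring, using hypothetical ratings and column names rather than the study's data:

```python
# Minimal third-person-effect scoring; ratings and column names are
# hypothetical, not data from the study.
import pandas as pd

responses = pd.DataFrame({
    "effect_on_self":   [2, 3, 1, 2, 4],  # e.g., 1-7 Likert ratings
    "effect_on_others": [5, 6, 4, 5, 6],
})

# The third-person gap: perceived effect on others minus effect on self.
responses["third_person_gap"] = (
    responses["effect_on_others"] - responses["effect_on_self"]
)

# A positive mean gap means respondents judge others as more vulnerable
# to misinformation than themselves.
print(responses["third_person_gap"].mean())  # 2.8 for these toy numbers
```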

"This makes it harder to get people to participate in media literacy education or training efforts, because it suggests that most people think everyone else needs the training more than they do," Cheng says.

The researchers also found that content containing misinformation was likely to evoke negative emotions such as fear, worry and disgust. That's troubling for two reasons.

"First, people are likely to act on content that evokes negative emotions, and that includes sharing information on social media," Cheng says. "Second, messages that are focused on emotions are more easily transmitted on social media than content that is neutral - such as abstract scientific information."

However, Cheng also notes that science communicators could make use of this information.

"Since fear, worry, or other negative emotions can facilitate information seeking, or encourage people to avoid specific behaviors during a crisis, communicators may want to consider using these emotional messages to convey accurate information about COVID-19 and public health."

The researchers also found that the better an individual thought he or she was at detecting misinformation in relation to everyone else, the more likely that individual was to support both government restrictions on misinformation and corrective actions, such as media literacy education. Participants who experienced negative emotions were also more likely to support government restrictions.

Credit: 
North Carolina State University

Research brief: Researchers develop unique process for producing light-matter mixture

image: Annular holes in a thin gold film filled with silicon dioxide enable ultrastrong coupling between light and atomic vibrations. This structure provides opportunities to probe molecules interacting with quantum vacuum fluctuations and develop novel optoelectronic devices.

Image: 
Oh Group, University of Minnesota

MINNEAPOLIS/ST. PAUL (12/07/2020) -- In groundbreaking new research, an international team of researchers led by the University of Minnesota Twin Cities has developed a unique process for producing a quantum state that is part light and part matter.

The discovery provides fundamental new insights for more efficiently developing the next generation of quantum-based optical and electronic devices. The research could also have an impact on increasing efficiency of nanoscale chemical reactions.

The research is published in Nature Photonics, a high-impact, peer-reviewed scientific journal published by Springer Nature.

Quantum science studies natural phenomena of light and matter at the smallest scales. In this study, the researchers developed a unique process in which they achieved "ultrastrong coupling" between infrared light (photons) and matter (atomic vibrations) by trapping light in tiny, annular holes in a thin layer of gold. These holes were as small as two nanometers, or approximately 25,000 times smaller than the width of a human hair.

These nanocavities, similar to a highly scaled-down version of the coaxial cables that are used to send electrical signals (like the cable that comes into your TV), were filled with silicon dioxide, which is essentially the same as window glass. Unique fabrication methods, based on techniques developed in the computer-chip industry, make it possible to produce millions of these cavities simultaneously, all of them exhibiting this ultrastrong photon-vibration coupling.

"Others have studied strong coupling of light and matter, but with this new process to engineer nanometer-sized version of coaxial cables, we are pushing the frontiers of ultrastrong coupling, which means we are discovering new quantum states where matter and light can have very different properties and unusual things start to happen," said Sang-Hyun Oh, a University of Minnesota professor of electrical and computer engineering and the senior author of the study. "This ultrastrong coupling of light and atomic vibrations opens up all kinds of possibilities for developing new quantum-based devices or modifying chemical reactions."

The interaction between light and matter is central to life on earth--it allows plants to convert sunlight into energy and it allows us to see objects around us. Infrared light, with wavelengths much longer than what we can see with our eyes, interacts with the vibrations of atoms in materials. For example, when an object is heated, the atoms that make up the object start vibrating faster, giving off more infrared radiation, enabling thermal-imaging or night-vision cameras.

Conversely, the wavelengths of infrared radiation that are absorbed by materials depend on what kinds of atoms make up the materials and how they are arranged, so that chemists can use infrared absorption as a "fingerprint" to identify different chemicals.

These and other applications can be improved by increasing how strongly infrared light interacts with atomic vibrations in materials. This, in turn, can be accomplished by trapping the light into a small volume that contains the materials. Trapping light can be as simple as making it reflect back and forth between a pair of mirrors, but much stronger interactions can be realized if nanometer-scale metallic structures, or "nanocavities," are used to confine the light on ultra-small length scales.

When this happens, the interactions can be strong enough that the quantum-mechanical nature of the light and the vibrations comes into play. Under such conditions, the absorbed energy is transferred back and forth between the light (photons) in the nanocavities and the atomic vibrations (phonons) in the material at a rate fast enough such that the light photon and matter phonon can no longer be distinguished. Under such conditions, these strongly coupled modes result in new quantum-mechanical objects that are part light and part vibration at the same time, known as "polaritons."
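A common way to see how such hybrid modes arise is the textbook picture of two coupled oscillators: a photon mode and a vibrational mode that exchange energy at a rate g split into lower and upper polariton branches. The sketch below uses hypothetical numbers, not the values measured in the paper:

```python
# Textbook two-mode sketch (hypothetical numbers, not the paper's data):
# a photon mode and a vibrational mode coupled at rate g hybridize into
# lower and upper polariton branches.
import numpy as np

def polariton_branches(w_photon, w_vibration, g):
    """Eigenfrequencies of the 2x2 coupled-mode Hamiltonian (same units as inputs)."""
    H = np.array([[w_photon, g],
                  [g, w_vibration]])
    return np.linalg.eigvalsh(H)  # ascending order

# Placeholder values in wavenumbers (cm^-1), chosen at resonance, where
# the branches are split by exactly 2*g (the vacuum Rabi splitting).
lower, upper = polariton_branches(w_photon=1100.0, w_vibration=1100.0, g=150.0)
print(lower, upper)  # 950.0 1250.0

# Normalized coupling g/w; ratios beyond roughly 0.1 are conventionally
# called "ultrastrong".
print(((upper - lower) / 2) / 1100.0)  # ~0.14
```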

The stronger the interaction becomes, the stranger the quantum-mechanical effects that can occur. If the interaction becomes strong enough, it may be possible to create photons out of the vacuum, or to make chemical reactions proceed in ways that are otherwise impossible.

"It is fascinating that, in this coupling regime, vacuum is not empty. Instead, it contains photons with wavelengths determined by the molecular vibrations. Moreover, these photons are extremely confined and are shared by a minute number of molecules," said Professor Luis Martin-Moreno at the Instituto de Nanociencia y Materiales de Aragón (INMA) in Spain, another author of the paper.

"Normally, we think of vacuum as basically nothing, but it turns out this vacuum fluctuation always exists," Oh said. "This is an important step to actually harness this so-called zero energy fluctuation to do something useful."

Credit: 
University of Minnesota

Researchers use genomics to identify diabetic retinopathy factors

In a search to discover the genetic factors underlying diabetic retinopathy, University of Illinois Chicago researchers also have identified a new approach that can be used as a template to study other diseases.

In the paper, "Integration of genomics and transcriptomics predicts diabetic retinopathy susceptibility genes," published in eLife, researchers identified genes that respond differently to high glucose in individuals with and without diabetic retinopathy.

Dr. Michael Grassi, associate professor of ophthalmology at UIC's College of Medicine, his collaborator, Dr. Barbara Stranger of Northwestern University, and their teams set out to identify genes that cause diabetic retinopathy, a diabetes complication caused by damage to the light-sensitive tissue at the back of the eye -- the retina -- resulting in vision loss.

Grassi has been interested in diabetic retinopathy since he began his clinical training as a retina specialist.

"I encountered two individuals with disparate outcomes, a 19-year-old who had well-controlled diabetes for five years and went blind, and a Vietnam veteran, who had poorly controlled diabetes for over 30 years but had no vision problems," Grassi said.

For 10 years, Grassi has been looking at the genetic underpinnings of diabetic retinopathy. After several attempts, he finally landed on a method that resulted in identifying genes that increase the risk of developing retinopathy.

Grassi and his team combined several different methods to identify the gene, known as folliculin, or FLCN, that increases the risk of developing retinopathy. They began by comparing levels of gene activity in individuals with and without retinopathy, identifying a set of genes unique to those with retinopathy. Next, they took the genetic markers for this set of genes and found that many were associated with the development of diabetic retinopathy. Finally, they tested whether changes in the levels of some of these genes could cause retinopathy and discovered that increased amounts of FLCN increased the retinopathy risk.

The research team examined glucose-induced changes in gene expression in cell lines from people with type 1 diabetes, both with and without retinopathy, an approach that provided new insights into the disease. They identified single nucleotide polymorphisms, or SNPs, associated with these expression changes -- known as expression quantitative trait loci, or eQTLs -- and validated them in independent cohorts. Confirming FLCN as a mediator of diabetic retinopathy using Mendelian randomization further solidified the approach.
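For readers unfamiliar with Mendelian randomization, its simplest estimator is a one-line calculation: the Wald ratio, which divides a genetic variant's effect on the outcome by its effect on the exposure. A hedged sketch with placeholder coefficients (not results from the paper):

```python
# Hypothetical sketch of the simplest Mendelian randomization estimator,
# the Wald ratio; the coefficients below are placeholders.

def wald_ratio(beta_exposure, beta_outcome):
    """Causal-effect estimate: variant's effect on outcome / effect on exposure."""
    return beta_outcome / beta_exposure

# e.g., a SNP that raises FLCN expression (the exposure) and is also
# associated with retinopathy risk (the outcome):
print(wald_ratio(beta_exposure=0.30, beta_outcome=0.12))  # 0.4
```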

"It has been a challenge to study diabetic retinopathy because it is so heterogeneous. There are so many genetic factors that can contribute," Grassi said.

For this study, the researchers used cell lines generated from blood samples collected in the Diabetes Control and Complications Trial, or DCCT, a large clinical study of diabetic retinopathy. Because the DCCT generated cell lines for every participant, it allowed detailed characterization of retinopathy severity in each individual.

Understanding the genetic factors behind diabetic retinopathy can potentially lead to developing new treatment and prevention strategies for retinopathy. The current standard of care involves laser surgery to preserve the center part of vision, or injections into the eye every four weeks.

Credit: 
University of Illinois Chicago

Synthetic llama antibodies rescue doomed proteins inside cells

Columbia researchers have created a new technology using synthetic llama antibodies to prevent specific proteins from being destroyed inside cells. The approach could be used to treat dozens of diseases, including cystic fibrosis, that arise from the destruction of imperfect but still perfectly functional proteins.

In many genetic diseases, including cystic fibrosis, mutated proteins are capable of performing their jobs but are tagged for destruction by the cell's quality control mechanisms.

"The situation is analogous to ugly fruit," says Henry Colecraft, PhD, the John C. Dalton Professor of Physiology & Cellular Biophysics, who led the research. "Shoppers reject fruit that doesn't look perfect, even though ugly fruit is just as nutritious. If mutated proteins in cystic fibrosis can escape the cell's quality control mechanisms, they work pretty well."

In the cell, proteins destined for destruction are marked with a small peptide called ubiquitin. Deubiquitinase enzymes (DUBs) can remove these tags, but simply increasing DUB activity would indiscriminately rescue all proteins in a cell marked for destruction, which would be harmful.

"A lot of proteins are destroyed by the cell for good reason," Colecraft says, "so a therapy needs to be selective."

That's when Colecraft and his graduate student, Scott Kanner, realized they could develop a solution that takes advantage of nanobodies--small antibodies, discovered nearly 30 years ago, that are produced naturally by llamas, camels, and alpacas. Unlike regular antibodies, nanobodies bind their targets with exquisite specificity and retain this property inside cells.

The new technology--called engineered deubiquitinases or enDUBs for short--combines a synthetic nanobody that recognizes a specific protein with an enzyme that can rescue proteins tagged for destruction.

In a new paper in Nature Methods, the researchers tested two different enDUBs, one designed to rescue a protein mutated in cystic fibrosis and another designed to rescue a protein mutated in long QT syndrome, an inherited heart disease that can cause arrhythmia and sudden death. 

To build each enDUB, the researchers first had to find a nanobody that only recognizes and binds the target protein. Until recently, researchers had to inject their target proteins into llamas, camels, or alpacas and wait for the animal to generate such nanobodies. The Columbia researchers instead fished out binders from a synthetic yeast nanobody display library containing millions of unique nanobodies.

Once created, each enDUB was tested in cells that produced the mutated proteins. 

In both cases, enDUBs prevented the destruction of the proteins, and the proteins migrated to their normal locations in the cell membrane where they performed their normal functions.

"In the case of one of the cystic fibrosis proteins we tested, we get a remarkable rescue, restoring protein levels in the cell membrane to about 50% of normal," Colecraft says. "If that happened in a patient, it would be transformative."

Though both diseases investigated in the study are caused by mutations in ion channel proteins, "the approach can be applied to any protein in the cell, not just membrane proteins or proteins altered by genetic mutations," Colecraft says. 

"It could be applicable to any disease where protein degradation is a factor, including cancer and epilepsy."


Credit: 
Columbia University Irving Medical Center

Paper-based electrochemical sensor can detect COVID-19 in less than five minutes

image: COVID-19 electrochemical sensing platform

Image: 
University of Illinois

As the COVID-19 pandemic continues to spread across the world, testing remains a key strategy for tracking and containing the virus. Bioengineering graduate student Maha Alafeef has co-developed a rapid, ultrasensitive test using a paper-based electrochemical sensor that can detect the presence of the virus in less than five minutes. The team, led by professor Dipanjan Pan, reported their findings in ACS Nano.

"Currently, we are experiencing a once-in-a-century life-changing event," said Alafeef. "We are responding to this global need from a holistic approach by developing multidisciplinary tools for early detection and diagnosis and treatment for SARS-CoV-2."

There are two broad categories of COVID-19 tests on the market. The first category uses reverse transcriptase real-time polymerase chain reaction (RT-PCR) and nucleic acid hybridization strategies to identify viral RNA. Current FDA-approved diagnostic tests use this technique. Some drawbacks include the amount of time it takes to complete the test, the need for specialized personnel and the availability of equipment and reagents.

The second category of tests focuses on the detection of antibodies. However, there could be a delay of a few days to a few weeks after a person has been exposed to the virus for them to produce detectable antibodies.

In recent years, researchers have had some success with creating point-of-care biosensors using 2D nanomaterials such as graphene to detect diseases. The main advantages of graphene-based biosensors are their sensitivity, low cost of production and rapid detection turnaround. "The discovery of graphene opened up a new era of sensor development due to its properties. Graphene exhibits unique mechanical and electrochemical properties that make it ideal for the development of sensitive electrochemical sensors," said Alafeef. The team created a graphene-based electrochemical biosensor with an electrical read-out setup to selectively detect the presence of SARS-CoV-2 genetic material.

There are two components to this biosensor: a platform to measure an electrical read-out and probes to detect the presence of viral RNA. To create the platform, researchers first coated filter paper with a layer of graphene nanoplatelets to create a conductive film. Then, they placed a gold electrode with a predefined design on top of the graphene as a contact pad for electrical readout. Both gold and graphene have high sensitivity and conductivity, which makes this platform ultrasensitive to changes in electrical signals.

Current RNA-based COVID-19 tests screen for the presence of the N-gene (nucleocapsid phosphoprotein gene) of the SARS-CoV-2 virus. In this research, the team designed antisense oligonucleotide (ASO) probes to target two regions of the N-gene. Targeting two regions ensures the reliability of the sensor in case one region undergoes a mutation. Furthermore, gold nanoparticles (AuNPs) capped with these single-stranded nucleic acids (ssDNA) serve as an ultrasensitive sensing probe for the SARS-CoV-2 RNA.
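For illustration only, an antisense probe is simply the reverse complement of its DNA target; the toy sequence below is a placeholder, not the actual N-gene region used in the study:

```python
# Illustration only: the target sequence is a placeholder, NOT the
# SARS-CoV-2 N-gene region targeted by the study's probes.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def antisense_probe(target: str) -> str:
    """Reverse complement of a DNA target region."""
    return target.translate(COMPLEMENT)[::-1]

print(antisense_probe("ATGTCTGATAATGGACCC"))  # GGGTCCATTATCAGACAT
```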

The researchers previously demonstrated the sensitivity of these sensing probes in earlier work published in ACS Nano. Hybridization of the viral RNA with the probes changes the sensor's electrical response. The AuNP caps accelerate electron transfer and, when spread over the sensing platform, increase the output signal, indicating the presence of the virus.

The team tested the performance of this sensor by using COVID-19 positive and negative samples. The sensor showed a significant increase in the voltage of positive samples compared to the negative ones and confirmed the presence of viral genetic material in less than five minutes. Furthermore, the sensor was able to differentiate viral RNA loads in these samples. Viral load is an important quantitative indicator of the progress of infection and a challenge to measure using existing diagnostic methods.

This platform has far-reaching applications due to its portability and low cost. The sensor, when integrated with microcontrollers and LED screens or with a smartphone via Bluetooth or wifi, could be used at the point of care in a doctor's office or even at home. Beyond COVID-19, the research team also foresees the system being adaptable for the detection of many other diseases.

"The unlimited potential of bioengineering has always sparked my utmost interest with its innovative translational applications," Alafeef said. "I am happy to see my research project has an impact on solving a real-world problem. Finally, I would like to thank my Ph.D. advisor professor Dipanjan Pan for his endless support, research scientist Dr. Parikshit Moitra, and research assistant Ketan Dighe for their help and contribution toward the success of this study."

Credit: 
University of Illinois Grainger College of Engineering

Career thoughts and parental relationships in adolescents with ADHD

A new study published in The Career Development Quarterly looked for potential links between negative or dysfunctional career thoughts and the quality of parental relationships in high school students with attention-deficit/hyperactivity disorder (ADHD).

In the study of 102 adolescents (76 boys, 26 girls) with ADHD, male participants' dysfunctional career thoughts were related to their relationships with their mothers. Those who had positive relationships with their mothers exhibited less external conflict about their career choices.

The findings may be useful for career counselors as they consider the influence of family relationships on the career choices of adolescents with ADHD. Future research should incorporate the influence of gender and race/ethnicity on such relationships and also focus on paternal relationships.

"My collaborators and I are very excited about the findings and the implications of this study. We hope they spur more research to understand how dysfunctional career thoughts develop in relational contexts and aid in evidence-based, contextually driven relational interventions," said lead author Abiola Dipeolu, PhD, of Texas A&M University, Kingsville.

Credit: 
Wiley

CRISPR-edited CAR T cells enhance fight against blood cancers

PHILADELPHIA-- Knocking out a protein known to stifle T cell activation on CAR T cells using the CRISPR/Cas9 technology enhanced the engineered T cells' ability to eliminate blood cancers, according to new preclinical data from researchers in the Perelman School of Medicine at the University of Pennsylvania and Penn's Abramson Cancer Center.

The findings will be presented as an oral presentation at the 62nd American Society of Hematology Annual Meeting & Exposition on Dec. 7 (abstract #554).

The team knocked out the CD5 gene -- which encodes the CD5 protein on the surface of T cells and can inhibit their activation -- on CAR T cells using CRISPR-Cas9 and infused them back into mice with T- and B-cell leukemia or lymphoma. Mice infused with the CD5-deleted CAR T cells showed higher levels of T cell proliferation in the peripheral blood, as well as a significant reduction in tumor size and better survival outcomes compared to mice infused with non-edited CAR T cells.

CRISPR technology gives scientists the ability to locate and edit any unwanted gene. For cancer, it works by deleting a specific gene only in T cells to help better fight tumors. The approach is closely related to CAR T cell therapy, in which researchers collect a patient's own T cells and engineer them to express a new receptor that seeks out and attacks cancer cells.

"We've shown, for the first time, that we can successfully use CRISPR-Cas9 to knock out CD5 on a CAR T cells and enhance their ability to attack cancer," said Marco Ruella, MD, an assistant professor of Medicine in the division of Hematology-Oncology in Penn's Perelman School of Medicine and scientific director of the Lymphoma Program, who will present the results. "The difference between edited and non-edited CAR T cells was striking in several cancer models."

The authors first tested the approach in a T-cell leukemia model. Anti-CD5 CAR T cells were genetically engineered to seek out CD5 on malignant T cells and attack them. Since CD5 is also expressed on normal T cells, the authors removed it from CAR T cells, both to avoid the possible killing of other CAR T cells, and potentially to unleash CAR T cell activation that would otherwise be inhibited by the presence of CD5 on these cells.

Indeed, CD5-deleted anti-CD5 CAR T cells were significantly more potent than CAR T cells without the deletion (wild-type, CD5+) in both in vitro and in vivo experiments, with more than 50 percent of mice cured long term.

To test the hypothesis that deleting CD5 could increase the anti-tumor effect of CAR T cells targeting antigens other than CD5, the results were then validated with CTL019 CAR T cells against CD19+ B-cell leukemia. Notably, in this model as well, CD5 knockout led to significantly enhanced CTL019 CAR T cell anti-tumor efficacy, with prolonged complete remissions in the majority of mice.

In a separate analysis to be shared the day of the presentation, the team reviewed a genomic database of more than 8,000 patients' tumor biopsies to study their levels of CD5 and found a correlation with outcomes. "Basically, in most cancer types, the less CD5 expressed in T cells, the better the outcome," Ruella said. "The level of CD5 in your T cells matters."

The findings are an important step forward that may set up future clinical trials to explore how combining CAR T cell therapy and CRISPR-Cas9 gene editing could improve upon existing and new cell therapies.

Therapies such as the Penn-developed CAR T cell therapy, Kymriah (Novartis), for pediatric and adult blood cancer patients can induce dramatic responses in relapsed or refractory B-cell acute lymphoblastic leukemia and non-Hodgkin lymphomas. However, many patients do not respond or eventually relapse. What's more, CAR T cell therapy has not yet been proven effective in several hematological malignancies, such as T cell lymphoma and leukemia -- blood cancers that frequently express CD5. CD5 is also expressed in the vast majority of chronic lymphocytic leukemia and mantle cell lymphoma patients and also in about 20 percent of acute myeloid leukemia cases.

Today, many of the approaches to enhance CAR T therapy involve combination therapies that address T cell exhaustion, particularly the PD-L1/PD-1 axis. This team's strategy is different in that it aims to intervene during early activation of T cells, which could open up opportunities to increase T cell function in the tumor microenvironment.

"Looking at the long term, this could represent a more universal strategy to enhance the anti-tumor effects of CAR T cells," said Carl June, MD, the Richard W. Vague Professor in Immunotherapy and director of the Center for Cellular Immunotherapies in the Abramson Cancer Center and director of the Parker Institute for Cancer Immunotherapy at the Perelman School of Medicine at the University of Pennsylvania and one of the study's authors. "We look forward to building upon these encouraging findings in the next phase of our work."

A phase I clinical trial investigating this CD5-deleted CAR T cell approach could begin as early as 2021, the researchers said.

Credit: 
University of Pennsylvania School of Medicine

Molecules convert visible light into ultraviolet light with record efficiency

image: A newly developed molecular system in the glass tube on the right efficiently upconverts visible light even from typical LEDs into ultraviolet light through triplet-triplet annihilation. Developed by researchers at Kyushu University, the system achieves an upconversion efficiency of 20% under high-intensity light, doubling previous records, while also being relatively efficient even under weak light.

Image: 
Nobuhiro Yanai, Kyushu University

Light-powered processes from hydrogen production to air purification could see a boost in performance under ambient light thanks to a new material system that can directly convert visible light into ultraviolet light with an efficiency that doubles previous records.

Developed by researchers at Kyushu University, the system achieves a light upconversion efficiency of 20% at high intensities and maintains relatively high performance even under weak light, making it promising for harnessing visible light already around us to drive applications requiring high-energy ultraviolet light.

While people often try to avoid ultraviolet light because of the damage it can do to skin, Nobuhiro Yanai, associate professor of Kyushu University's Faculty of Engineering, has been searching for ways to increase the number of these high-energy rays to power photocatalysts that enable a variety of useful reactions from producing hydrogen for use in fuel-cell vehicles to purifying indoor environments.

"Although dedicated light sources such as ultraviolet LEDs can be used to drive these reactions, they consume energy and increase complexity," explains Yanai. "Instead, a much more elegant solution is to harvest the sunlight and indoor ambient light that is already all around us."

However, these ambient light sources generally have a large portion of their energy in the lower-energy visible region and only a fraction of it in the ultraviolet, so researchers have been searching for ways to directly convert visible light with wavelengths longer than 400 nm into higher-energy ultraviolet light.

To do this, the research team led by Yanai and Nobuo Kimizuka has been focusing on a process called triplet-triplet annihilation. In this process, energetic states called triplets are formed on molecules following absorption of visible light. These "donor" molecules then give their triplets to "acceptor" molecules that can combine two triplets to create a single, higher-energy state that is released as ultraviolet light.

Until recently, the maximum reported efficiency of conventional upconversion from visible to ultraviolet light using triplet-triplet annihilation was about 10% and could only be achieved with visible light 1,000 times more intense than sunlight.
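The intensity dependence arises because an excited triplet must survive long enough to meet a partner: well below a threshold intensity, upconverted emission grows quadratically with excitation power, while well above it the growth turns linear and the efficiency saturates. A phenomenological sketch of this crossover (an assumption-laden toy model, not the authors' kinetics):

```python
# Phenomenological sketch, not the authors' kinetic model: a simple
# interpolation that is quadratic in intensity well below the threshold
# I_th (triplets decay before meeting a partner) and linear well above
# it (annihilation dominates, so efficiency saturates).
import numpy as np

def uc_emission(I, I_th=1.0):
    """Relative upconverted emission versus excitation intensity."""
    return I**2 / (I + I_th)

I = np.logspace(-2, 2, 5)  # intensities in units of I_th
print(uc_emission(I))      # log-log slope ~2 at low I, ~1 at high I
```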

Yanai and his group now report in the journal Angewandte Chemie International Edition that they have smashed this record while also achieving greatly improved efficiencies under weak visible light from the sun and indoor LEDs.

"We have been trying to improve the efficiency of this process for more than five years, but we had been stuck at around 5%," says Yanai. "We finally were able to make a major leap through a new molecular design, which gave us the right molecules for excellent performance."

Poor efficiency of triplet-triplet annihilation by the ultraviolet-emitting acceptor molecules and quenching of the generated ultraviolet emission by the triplet-creating donor molecules have been two key issues limiting performance.

To overcome these problems, the researchers developed a novel acceptor molecule, named TIPS-naphthalene, which has a high triplet-triplet-annihilation efficiency and a triplet energy low enough to easily accept triplets from a molecule called Ir(C6)2(acac), a superior donor they previously found that does not strongly absorb the upconverted ultraviolet emission.

The combination of TIPS-naphthalene and Ir(C6)2(acac) successfully achieved the highest upconversion efficiency of 20.5% under high-intensity light.

Furthermore, the system significantly lowers the intensity of the excitation light required compared to conventional systems, achieving upconversion efficiencies of about 10% even at intensities similar to those of sunlight.

"This system can efficiently convert very low-intensity visible light to ultraviolet light. I was very surprised that we were able to obtain ultraviolet light even with the LEDs that I usually use at my office desk," comments Yanai.

The researchers attribute this performance to two factors: rigid bonding of the TIPS groups to the naphthalene core suppresses the internal molecular motion that leads to energy losses, and the TIPS groups themselves finely tune the molecule's triplet energy while keeping the emission in the ultraviolet.

In addition to finding ways to continue to improve efficiency, the researchers are also exploring how to get the system to perform just as well out of solution to further simplify its application to a variety of light-driven processes.

Credit: 
Kyushu University

Tapping overlooked marketing data to drive business growth

Researchers from University of Houston, Columbia University, Emory University, and University of Connecticut published a new paper in the Journal of Marketing that reviews factors that contribute to the disconnect between the data companies create and the productive use of that data.

The study, forthcoming in the Journal of Marketing, is titled "Capturing Marketing Information to Fuel Growth" and is authored by Rex Du, Oded Netzer, David Schweidel, and Debanjan Mitra.

Digital home assistants and wearables have become more popular than ever, collecting detailed information from consumers. In addition to the data explosion, the public offerings of Palantir and Snowflake highlight the rise of companies focused on big data analytics. Yet, despite enterprise leaders' and researchers' optimism in the potential that data holds, there is still a disconnect between the volume of data created and the ability of organizations to harness that potential to drive growth. A new article in the Journal of Marketing reviews factors that contribute to this disconnect, drawing attention to organizations' tendency to focus on data that is easier to access and measure and highlighting overlooked data sources that offer considerable opportunity to support growth.

To examine how marketing data can be leveraged to drive organizational growth, the researchers look at the different ways value can be created for the organization. Drawing on the customer equity framework, they review how marketing data may support growth in customer acquisition, customer relationship development, and customer retention.

With regards to customer acquisition, the study probes the potential for organizations to make use of biometric data to support acquisition efforts, such as identifying the ideal time and means of engaging prospects. It also identifies opportunities to use social network data to make acquisition efforts more efficient and effective by leveraging existing social ties that may facilitate the spread of marketing messages. In developing customer relationships, the researchers discuss what can be gained from identifying and predicting trends so that organizations can stay ahead of the curve. They also highlight how customer-level competitive intelligence can be gathered and used to grow existing customer relationships. To support customer retention, they illustrate the potential to take advantage of unstructured data such as call center logs and videos of service interactions to support firm representatives by providing them with real-time feedback. They also discuss the value that can be derived from data that supports causal inference and how this may be used to support proactive churn mitigation efforts.

Du elaborates that "While we see tremendous potential in tying marketing data to firm growth, we cannot ignore the challenges to implementing a data-driven growth strategy. Specifically, how does an enterprise move from obtaining control over data and deriving relevant insights to implementing a data strategy? For marketing data to drive organizational growth, marketers must consider data as a component of a strategy problem. That is, how can emerging sources of data be brought into alignment with an organization's growth strategy? To do so, we call for not only quantifying the value of marketing data, but also recognizing the full cost associated with leveraging data." The latter point has been brought to the forefront in recent documentaries like The Great Hack and The Social Dilemma, highlighting the potential use and misuse of consumer data and forcing us to question what we consider appropriate uses of data. Netzer adds "With regulators taking action to protect consumers, from the General Data Protection Regulation (GDPR) in the European Union to the California Consumer Protection Act (CCPA) and the proposed California Privacy Rights and Enforcement Act, marketers will need to grapple with balancing growth through data with consumers' rights."

Credit: 
American Marketing Association

Iron deficiency can be managed better

image: Iron deficiency is a major cause of anaemia, a deficiency in oxygen-carrying red blood cells or haemoglobin in the blood. New guidance published by Australian and European researchers outlines the best practice for managing iron deficiency.

Image: 
WEHI, Australia

Australian and European researchers have released updated, evidence-based guidance for managing iron deficiency, a serious worldwide health problem.

Iron deficiency is a major cause of anaemia, a lack of oxygen-carrying red blood cells or haemoglobin, which is experienced by two billion people worldwide - including almost one in 20 Australian adults. Iron deficiency and anaemia can have serious long-term health consequences, particularly for young children. They can also be a sign of other serious health conditions that should be treated.

Recent research has led to significant updates in the best practice for clinicians to diagnose and manage iron deficiency, and implementing these would lead to significant long-term health benefits both in Australia and around the world.

The new guidance has been published today in a review in The Lancet, by WEHI clinician scientists Associate Professor Sant-Rayn Pasricha and Associate Professor Jason Tye-Din, who are both also physicians at The Royal Melbourne Hospital, together with Professor Martina Muckenthaler from University of Heidelberg, Germany, and Professor Dorine Swinkels from Radboud University Medical Center, the Netherlands.

At a glance

- Iron deficiency is a very common health condition both in Australia and around the world, and is a common cause of anaemia in people of all ages.

- Anaemia and iron deficiency have serious long-term health consequences, and it is important that they are appropriately diagnosed and managed.

- A new review in The Lancet has outlined cutting-edge, evidence-based guidance for how iron deficiency should be detected and managed, to ensure long-term health benefits.

A serious health concern

Iron deficiency is a common problem worldwide, including in Australia, where it impacts all ages from young children through to the elderly: twelve per cent of Australian women are currently iron deficient, and one in 10 Australians has been iron deficient at some point in their lives.

Iron deficiency can cause a range of health problems, including heart problems, and when pregnant women or young children are iron deficient, the child is at risk of developmental problems, said Associate Professor Pasricha, who is also a haematologist at The Royal Melbourne Hospital.

"Being able to diagnose iron deficiency, and to understand and manage the causes of that anaemia, can provide a critical boost to the health of people of all ages," he said.

"Our review has provided clear guidelines for how to test for iron deficiency, and the best approaches to treat it both in Australia and internationally."

Associate Professor Pasricha leads the World Health Organisation (WHO) Collaborating Centre for Anaemia Detection and Control at WEHI, which provides up-to-date, evidence-based advice to the WHO. His research has included leading large-scale clinical trials of iron supplementation in low-income countries.

"We recently discovered that approaches to treating iron deficiency should be tailored to different countries," he said.

"In Australia, there have been many advances in how iron deficiency is managed in the last two decades, but as a haematologist I can see that some people are still not getting the best care. For example, some people whom might benefit from intravenous iron are not being offered this, despite clear evidence it can quickly restore iron levels.

"We hope this review will provide clear information for doctors in Australia and around the world, improving the management of iron deficiency - which will have widespread benefits on people's health."

More than a dietary problem

While iron deficiency is often caused by a lack of iron in the diet, it can also be a sign of serious health problems including bowel cancer or coeliac disease, said Associate Professor Tye-Din, who is a gastroenterologist at the Royal Melbourne Hospital.

"It's really important that the cause of iron deficiency is properly investigated, rather than patients just being instructed to take iron supplements," he said. "If doctors don't take iron deficiency seriously and investigate why it is happening, serious health problems could be overlooked. In some cases these can be potentially life-threatening. This is something we've really highlighted in the review."

Credit: 
Walter and Eliza Hall Institute

Project 5-100 universities see a dramatic increase in publications in leading journals

image: Junior Research Fellow, HSE Center for Institutional Studies

Image: 
Nataliya Matveeva (HSE University)

A team of HSE researchers--Nataliya Matveeva, Ivan Sterligov, and Maria Yudkevich--have analyzed the research activity of universities participating in Russia's Academic Excellence Project 5-100. Overall, the quality of publications of these universities has improved. Collectively, participating universities have tripled their number of publications in reputable journals in the past three years, and researchers have begun to collaborate with each other more frequently. The study was published in the Journal of Informetrics (https://www.sciencedirect.com/science/article/pii/S1751157720306271?dgcid=author).

HSE researchers assessed the impact of the Project on the publication activity of the participating universities. Particular attention was paid to changes in the number and quality of publications, as well as in the frequency with which universities engage in scholarly collaboration.

Launched in 2013 and scheduled to conclude in 2020, Project 5-100 is a federal initiative to support Russia's leading universities. The project began with 15 universities in its first year and now includes 21 universities. It aims to strengthen the positions of the leading Russian universities in the global academic market through the development of educational programmes, research support, and other activities.

Statistical analysis was used to assess publication data for 2010-2016. The researchers compared the publication activity of 14 universities that had entered the Project from its inception (the experimental group) and 13 universities that did not participate in the Project (the control group). The universities of the control and experimental groups had a comparable number of publications in research journals before the start of the Project.

The analysis showed that the implementation of Project 5-100 had a significant impact on the volume of scientific and scholarly research participating universities produced. Universities of the experimental group showed a steady increase in publications: their number of publications in highly cited journals increased from 100 to 300 articles per year from 2012 to 2016, while universities of the control group maintained their previous rate of about 50 publications per year.

The Project has also influenced the number of co-authored articles. Participating universities increased their number of publications written by more than ten researchers by a factor of 4.5, up to an average of 80 per year. For universities in the control group, this value remained at its previous level of about 10 publications per year.
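Taken together, the figures quoted above amount to a simple difference-in-differences comparison; the sketch below reproduces that back-of-envelope arithmetic (it is not the paper's full statistical model):

```python
# Back-of-envelope difference-in-differences using the figures quoted
# above; a sketch, not the paper's full statistical model.
treated_before, treated_after = 100, 300  # Project universities, articles/yr
control_before, control_after = 50, 50    # control universities, articles/yr

did = (treated_after - treated_before) - (control_after - control_before)
print(did)  # 200 extra articles/yr, under the usual parallel-trends assumption
```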

'The participating universities have begun to collaborate more often with Russian and international organizations,' comments Nataliya Matveeva (https://www.hse.ru/en/org/persons/101534205), Junior Research Fellow at the Center for Institutional Studies. 'The results of the study demonstrate the complex nature of Project 5-100. The project not only boosts the publication activity of the participants, but also changes the way research is organized in the country's universities in general.'

Credit: 
National Research University Higher School of Economics

Safe space: improving "clean" methanol fuel cells using a protective carbon shell

image: If successfully commercialized, direct methanol fuel cells could replace the ubiquitous lithium-ion batteries in portable electronics thanks to their higher energy density

Image: 
Unsplash

Because of the many environmental problems caused by the use of fossil fuels, many scientists worldwide are focused on finding efficient alternatives. Though high hopes have been placed on hydrogen fuel cells, the reality is that transporting, storing, and using pure hydrogen comes with a huge added cost, making this process challenging with current technology. In contrast, methanol (CH3OH), a type of alcohol, does not require cold storage, has a higher energy density, and is easier and safer to transport. Thus, a transition into a methanol-based economy is a more realistic goal.

However, producing electricity from methanol at room temperature requires a direct methanol fuel cell (DMFC), a device that, so far, offers subpar performance. One of the main problems in DMFCs is the undesired methanol oxidation reaction that occurs during "methanol crossover," that is, when methanol passes from the anode to the cathode. This reaction results in the degradation of the platinum (Pt) catalyst that is essential for the cell's operation. Although certain strategies to mitigate this problem have been proposed, so far none has been good enough owing to cost or stability issues.

Fortunately, in a recent study published in ACS Applied Materials & Interfaces, a team of scientists from Korea has come up with a creative and effective solution. They fabricated--through a relatively simple procedure--a catalyst made of Pt nanoparticles encapsulated within a carbon shell. This shell forms an almost impenetrable carbon network with small openings caused by nitrogen defects. While oxygen, one of the main reactants in DMFCs, can reach the Pt catalyst through these "holes," methanol molecules are too big to pass through. "The carbon shell acts as a molecular sieve and provides selectivity toward the desired reactants, which can actually reach the catalyst sites. This prevents the undesirable reaction of the Pt cores," explains Professor Oh Joong Kwon from Incheon National University, Korea, who led the study.

The scientists conducted various types of experiments to characterize the overall structure and composition of the prepared catalyst and proved that oxygen could make it through the carbon shell and methanol could not. They also found a straightforward way to tune the number of defects in the shell by simply changing the temperature during a heat treatment step. In subsequent experimental comparisons, their novel shelled catalyst outperformed commercial Pt catalysts and also offered much higher stability.

Prof Kwon has been working on improving fuel cell catalysts for the past 10 years, motivated by the many ways in which this technology could find its way into our daily lives. "DMFCs have a higher energy density than lithium-ion batteries and could therefore become alternative power sources for portable devices, such as laptops and smartphones," he remarks.

With the future of our planet on the line, switching to alternative fuels should be one of humanity's top goals, and this study is a remarkable step in the right direction.

Credit: 
Incheon National University

Coasts drown as coral reefs collapse under warming and acidification

image: Reefs will struggle to keep up with the current trajectory of warming and ocean acidification. The impacts by the end of the century include 'insidious and accelerated loss of coastal protection under unmitigated CO2 emissions'.

Image: 
Kristen Brown.

A new study shows the coastal protection coral reefs currently provide will start eroding by the end of the century, as the world continues to warm and the oceans acidify.

A team of researchers led by Associate Professor Sophie Dove from the ARC Centre of Excellence for Coral Reef Studies at The University of Queensland (Coral CoE at UQ) investigated the ability of coral reef ecosystems to retain deposits of calcium carbonate under current projections of warming and ocean acidification.

Calcium carbonate is what skeletons are made of--and it dissolves under hot, acidic conditions. Marine animals that need calcium carbonate for their skeletons or shells are called 'calcifiers'. Hard corals have skeletons, which is what gives reefs much of their three-dimensional (3D) structure. It's this structure that helps protect coasts--and those living on the coasts--from the brunt of waves, floods and storms. Without coral reefs the coasts 'drown'.

A/Prof Dove says the amount of calcium carbonate within a coral reef ecosystem depends on the biomass of hard corals. But it also depends on the combined impact of warming and acidification on previously deposited calcium carbonate frameworks. She says the results of the study indicate the rate of erosion will overtake the rate of accretion on the majority of present-day reefs.

"Today's Great Barrier Reef has a 30% calcifier cover," A/Prof Dove said.

"If CO2 emissions aren't curbed, by the end-of-century a 50% calcifier cover is required to counter the physical erosion they face from storms and wave impacts," she said.

"In addition, more than 110% calcifier cover is needed to keep up with the minimal levels of sea-level rise."

However, A/Prof Dove says both of these scenarios are unlikely because large numbers of hard corals perish in intense underwater heatwaves. Previous studies show marine heatwaves will become chronic in the warmer months of an average year under unmitigated CO2 emissions.
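The logic behind the quoted cover percentages can be expressed as a simple carbonate-budget balance: gross carbonate production scales with calcifier cover, and the budget breaks even only when production matches erosion and dissolution losses. An illustrative sketch with placeholder rates (not the study's measurements):

```python
# Illustrative arithmetic only; the rates are placeholders, not the
# study's measurements. Net budget = cover * production_per_cover - losses,
# so the break-even cover is losses / production_per_cover.
def required_cover(losses, production_per_cover):
    """Calcifier cover (fraction of reef area) needed for the budget to break even."""
    return losses / production_per_cover

# If projected erosion plus dissolution outpace what even full calcifier
# cover could deposit, the break-even cover exceeds 1.0 (100%):
print(required_cover(losses=11.0, production_per_cover=10.0))  # 1.1 -> 110%
```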

The study was published today in Communications Earth & Environment, just after the IUCN World Heritage Outlook 3 rated the Great Barrier Reef as 'critical'.

A/Prof Dove and her team built experimental reefs closely resembling those of shallow reef slopes at Heron Island on the southern Great Barrier Reef. For 18 months, they studied the effects of future climate scenarios on the ecosystem.

"What we saw was the insidious and accelerated loss of coastal protection under unmitigated CO2 emissions," said co-author Professor Ove Hoegh-Guldberg, also from Coral CoE at UQ.

"Under current projections, reefs will not simply adapt. Chronic exposure to the combined impacts of ocean warming and acidification will weaken reefs. They won't be able to re-build after disturbances such as cyclones, nor will they keep up with sea-level rise--possibly for thousands of years," said co-author Dr Kristen Brown, also from Coral CoE at UQ.

This means many coastal areas currently protected by calcareous coral reefs will no longer be so, impacting coastal infrastructure and communities.

"The combined impact of warming with the acidification of our oceans will see more than the collapse of ecosystems," A/Prof Dove said.

Credit: 
ARC Centre of Excellence for Coral Reef Studies

Proverbial wolf can't blow down modern timber high-rises, says UBCO researcher

image: UBCO Engineering Professor Solomon Tesfamariam (centre) examines wood used in mass-timber buildings.

Image: 
UBC Okanagan

With an increasing demand for a more sustainable alternative for high-rise construction, new research from UBC Okanagan, in collaboration with Western University and FPInnovations, points to timber as a sustainable and effective way to make tall, high-density, and renewable buildings.

"Many people have trouble imagining a timber high-rise of up to 40 storeys when we're so used to seeing concrete and steel being the norm in today's construction," explains Matiyas Bezabeh, a doctoral candidate at the UBCO School of Engineering. "But we're starting to demonstrate that the proverbial wolf can't knock over the pig's wooden building when they're built using modern techniques."

Bezabeh and his supervisors, Professors Solomon Tesfamariam from UBC Okanagan and Girma Bitsuamlak from Western University, conducted extensive wind testing on tall mass-timber buildings ranging from 10 to 40 storeys at Western University's Boundary Layer Wind Tunnel Laboratory.

"We found that the studied buildings up to 20-storeys, using today's building codes, can withstand high-wind events," says Bezabeh. "However, in the cases we studied, once we get up to 30 and 40 storeys, aerodynamic and structural improvements would be needed to address excessive wind-induced motion--something that would impact the comfort of those inside.

In 2020, the National Building Code of Canada doubled the height allowance of timber buildings from six storeys to twelve. The 2021 edition of the International Building Code (IBC) will include provisions to allow mass-timber buildings up to 18 storeys.

"What's exciting about our findings is that while additional engineering is required for these taller timber buildings, the problems are absolutely solvable, which opens the door to new architectural possibilities," adds Tesfamariam. "And with a shift towards sustainable urbanization across North America and Europe, the use of timber as a structural material addresses both the issues of sustainability and renewability of resources."

Tesfamariam, an engineering professor at UBCO, also sits on the Systems Design and Connections Subcommittee of the Canadian Wood Council, which is responsible for setting building code and engineering standards nationally.

According to Bezabeh, there is a growing acceptance of using mass-timber products such as cross-laminated timber because of its higher strength-to-weight ratio, aesthetics, and construction efficiency.

"We hope our research will continue the design and structural innovation in this area and perhaps one day soon many of us will be living in mass-timber high-rise apartments."

The School of Engineering offers a new course in advanced design of timber structures, led by Tesfamariam, geared toward students and industry professionals interested in understanding timber products, the design of timber structural elements, the fundamentals of structural dynamics for timber buildings, and the design of low-, mid- and high-rise timber and timber-hybrid buildings.

Credit: 
University of British Columbia Okanagan campus