Tech

Microcrystal electron diffraction supports a new drug development pipeline

image: The growth of unique structure depositions by year in the Protein Data Bank (PDB) and Cambridge Structural Database (CSD) determined by MicroED or related 3D electron diffraction techniques. Depositions into the PDB are broken up into proteins and peptides. Proteins were defined as having more than 50 amino acids.

Image: 
Bruhn, J. et al. Front. Mol. Biosci., 2021 8, 354.

CAMBRIDGE July 13, 2021 - To date, solving structures of potential therapeutics using X-ray diffraction (XRD) has been an assumed, pivotal step in the drug development process. But a recent paper by a team of researchers led by NanoImaging Services shows how microcrystal electron diffraction (MicroED) is increasingly being used to obtain the structures of potential pharmaceuticals.

Three-dimensional crystal structures that show the relative positions of atoms, bonds and intramolecular interactions are needed to understand stability, reactivity, solubility and, ultimately, suitability for pharmaceutical use. Pharmaceutical researchers usually use X-ray diffraction - with single-crystal XRD preferred - to solve crystal structures. But XRD usually requires large (100 μm or larger), well-ordered crystals, and several thousand known active pharmaceutical ingredients (APIs) are available only as crystalline powders that do not readily form large crystals.

"Growing large crystals is a huge bottleneck for those interested in determining crystal structures," said author Dr. Jessica Bruhn, Scientific Group Leader - MicroED at NanoImaging Services. "MicroED can work with crystals of almost any size as it is generally fairly straightforward to break large crystals into a size suitable for MicroED."

Developments in automated data collection and data processing have led to increased interest in electron diffraction as an XRD alternative. Electron diffraction is like XRD, except that it uses a beam of electrons rather than X-rays to obtain structures. Because electrons interact with matter much more strongly than X-rays do, MicroED can solve high-resolution crystal structures from sub-micron-sized crystals. The technique is especially exciting for small-molecule drugs, many of which readily form microcrystals, and it is well suited to the drug discovery phase, when sample quantities are extremely limited. In the development phase, researchers can use it to determine structures of reaction products and by-products, which can help guide synthesis strategies and inform production decisions.

"Single-crystal X-ray diffraction is faster, cheaper and easier to access compared to electron diffraction today," said Bruhn. "However, I do expect to see electron diffraction determining more and more structures inaccessible to X-rays, such as those of transient polymorphs, helping to expand the breadth of crystal structures that can be determined."

Polymorphs are crystalline structures with the same chemical composition but different molecular packings, and therefore different lattice properties - diamond and graphite are a familiar example. Most active pharmaceutical ingredients are thought to exist in more than one polymorphic form, which can give rise to drastically different drug properties. Successful drug formulation requires selecting the optimal polymorph, and that in turn requires being able to solve structures readily. In addition, most drug substances in development today exhibit poor solubility. Determining the structures of the many different forms an API can adopt (including polymorphs, co-crystals, hydrates and solvates) helps researchers engineer optimal crystal forms with good pharmacokinetic properties.

In developing their MicroED pipeline, the researchers explored the prevalence of available MicroED data, including data stored in the Cambridge Structural Database (CSD). The CSD houses small-molecule organic and metal-organic experimental crystal structures, with entries enriched and annotated by experts. About 98% of the structures in the CSD come from laboratory X-ray diffractometers, but the database holds a growing number of 3D electron diffraction datasets, including those solved by MicroED - currently over 100 unique structures in the CSD's June 2021 web and desktop offerings. The number of electron diffraction structures in the CSD has been increasing rapidly over the past three years, and the Cambridge Crystallographic Data Centre (CCDC) is committed to supporting scientists worldwide who deposit and share their MicroED data.

Suzanna Ward is the Head of Database at the CCDC.

"Electron diffraction is truly one of the most exciting and rapidly evolving areas of structural science," Ward said. "Recent publications already show how it could help to speed up the development of new drugs, and we are eagerly anticipating how it might impact the volume and breadth of data we are able to share through the CSD. I think we have an interesting journey ahead of us, and it will be intriguing to see how 3D electron diffraction will be utilized in both industry and academia in the coming years."

Credit: 
CCDC - Cambridge Crystallographic Data Centre

Impairments found in neurons derived from people with schizophrenia and genetic mutation

image: Researchers created neurons from cell specimens donated by schizophrenia patients with a rare genetic mutation.

Image: 
UMass Amherst

A scientific team has shown that the release of neurotransmitters in the brain is impaired in patients with schizophrenia who have a rare, single-gene mutation known to predispose people to a range of neurodevelopmental disorders.

Significantly, the results from the research with human-derived neurons validated previous and new experiments that found the same major decrease in neurotransmitter release and synaptic signaling in genetically engineered human neurons with the same genetic variant - the deletion of neurexin 1 (NRXN1). NRXN1 is a protein-coding gene whose product acts at the synapse, the cellular junction through which two nerve cells communicate.

The research with both patient-derived and engineered human neurons also found an increase in the levels of CASK, an NRXN1-binding protein, which was associated with changes in gene expression.

"Losing one copy of this neurexin 1 gene somehow contributes to the etiology or the disease mechanism in these schizophrenia patients," says molecular neuroscientist ChangHui Pak, assistant professor of biochemistry and molecular biology at the University of Massachusetts Amherst and lead author of the research published in the Proceedings of the National Academy of Sciences. "It causes a deficit in neural communication."

Pak is quick to add that although this single-gene mutation puts people at risk for schizophrenia, autism, Tourette syndrome and other neuropsychiatric disorders, "at the end of the day, we don't know what causes schizophrenia. This variant gives us insight into what cellular pathways would be perturbed among people with schizophrenia and a lead to study this biology."

When she conducted most of the research, Pak was working in the Stanford University lab of Thomas Südhof, a neuroscientist who shared the 2013 Nobel Prize in Physiology or Medicine for helping to lay the molecular basis for brain chemistry, including neurotransmitter release.

The research team obtained cell specimens from schizophrenia patients with an NRXN1 deletion who donated samples to a national biorepository for genetic studies of psychiatric disorders. Pak and colleagues converted the participants' specimens into stem cells and then turned them into functional neurons to study. "We're rewinding these cells back, almost like a time machine - what did these patients' brains look like early on," Pak explains.

Labs at Stanford, Rutgers University and FUJIFILM Cellular Dynamics were independently involved in the generation and analysis of neurons. For comparison with the human-derived neurons, Pak and team also created human neurons from embryonic stem cells, engineering them to carry one fewer copy of the NRXN1 gene. They had previously noted the neurotransmitter impairment in engineered human neurons and wanted to know whether patient-derived neurons would show the same findings.

"It was good to see the consistent biological finding that indeed the neurexin 1 deletion in these patients actually does mess up their neuronal synaptic communication, and secondly that this is reproducible across different sites whoever does the experiment," Pak says.

Notably, the researchers did not see the same decrease in neurotransmitter release and other effects in engineered mouse neurons with analogous NRXN1 deletion. "What this suggests is there is a human-specific component to this phenotype. The human neurons are particularly vulnerable to this genetic insult, compared to other organisms, adding to the value of studying human mutations in human cellular systems," Pak says.

Being able to reproduce the results is key to the development of drugs that can better treat schizophrenia. "Everything was done blindly and at different sites. We wanted to not only learn about the biology but also be at the top of our game to ensure rigor and reproducibility of these findings," Pak says. "We showed the field how this can be done."

Pak and her team are now continuing the research in the Pak Lab, supported by a five-year, $2.25 million grant from the National Institute of Mental Health. The scientists are using the latest stem cell and neuroscience methodologies to explore the molecular basis of synaptic dysfunction in schizophrenia and other neuropsychiatric disorders.

Credit: 
University of Massachusetts Amherst

Opening the gate to the next generation of information processing

image: New method for information processing: The coherent information exchange (black undulating lines) between magnons (shaded red area) and microwave photons (shaded blue area) is controlled by turning an electric pulse on and off (square wave at bottom).

Image: 
(Image by Xufeng Zhang, Argonne National Laboratory.)

Many of us swing through gates every day — points of entry and exit to a space like a garden, park or subway. Electronics have gates too. These control the flow of information from one place to another by means of an electrical signal. Unlike a garden gate, these gates require control of their opening and closing many times faster than the blink of an eye.

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago’s Pritzker School of Molecular Engineering have devised a unique means of achieving effective gate operation with a form of information processing called electromagnonics. Their pivotal discovery allows real-time control of information transfer between microwave photons and magnons. And it could result in a new generation of classical electronic and quantum signal devices that can be used in various applications such as signal switching, low-power computing and quantum networking.

Microwave photons are elementary particles forming the electromagnetic waves employed in, for example, wireless communications. Magnons are the particle-like quanta of “spin waves” - wave-like disturbances in an ordered array of microscopically aligned spins that occur in certain magnetic materials.

“Many research groups are combining different types of information carriers for information processing,” said Xufeng Zhang, assistant scientist in the Center for Nanoscale Materials, a DOE Office of Science User Facility at Argonne. “Such hybrid systems would enable practical applications that are not possible with information carriers of a single type.”

“Signal processing that couples spin waves and microwaves is a high-wire act,” added Zhang. “The signal must remain coherent despite energy dissipations and other outside effects threatening to throw the system into incoherence.”

Coherent gate operation (control over the on/off state and duration of the magnon-photon interaction) has been a long-sought goal in hybrid magnonic systems. In principle, it can be achieved by rapidly tuning the energy levels of the photon and magnon into and out of resonance. However, such tuning has depended on changing the geometric configuration of the device, which typically takes much longer than the magnon lifetime of roughly 100 nanoseconds (one hundred billionths of a second). This lack of a rapid tuning mechanism for interacting magnons and photons has made real-time gating control impossible.

Using a novel method involving energy-level tuning, the team was able to rapidly switch between magnonic and photonic states over a period shorter than the magnon or photon lifetimes. This period is a mere 10 to 100 nanoseconds.

“We start by tuning the photon and magnon with an electric pulse so that they have the same energy level,” said Zhang.  “Then, the information exchange starts between them and continues until the electric pulse is turned off, which shifts the energy level of the magnon away from that of the photon.”

By this mechanism, Zhang said, the team can control the flow of information so that it is all in the photon, all in the magnon or somewhere in between. This is made possible by a novel device design that allows nanosecond tuning of the magnetic field that controls the magnon energy level. This tunability enables the desired coherent gate operation.
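
The gating idea can be illustrated with a textbook two-mode coupled-oscillator model: an excitation swaps between the photon and magnon modes when they are resonant, and the exchange is suppressed when a detuning is applied. This is a toy sketch, not the device's actual Hamiltonian; the coupling strength `g` and detuning values are invented for the example.

```python
import numpy as np

def magnon_population(g, delta, t):
    """Probability that an excitation starting in the photon mode has
    transferred to the magnon mode after time t, for coupling g and
    photon-magnon detuning delta (units where hbar = 1)."""
    H = np.array([[0.0, g], [g, delta]])        # single-excitation Hamiltonian
    w, v = np.linalg.eigh(H)                    # exact diagonalization
    psi0 = np.array([1.0, 0.0], dtype=complex)  # excitation starts in the photon
    psi_t = v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))
    return abs(psi_t[1]) ** 2

g = 1.0
# Gate "open": zero detuning lets the excitation swap completely into the magnon.
swap = magnon_population(g, 0.0, np.pi / (2 * g))
# Gate "closed": a large detuning (pulse off) suppresses the exchange.
blocked = magnon_population(g, 20.0 * g, np.pi / (2 * g))
```

On resonance, `swap` comes out as essentially 1 (a complete exchange after a time set by the coupling strength), while `blocked` stays near zero - the same on/off behavior the electric pulse provides in the experiment.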

This research points to a new direction for electromagnonics. Most importantly, the demonstrated mechanism not only works in the classical electronics regime, but can also be readily applied for manipulating magnonic states in the quantum regime. This opens opportunities for electromagnonics-based signal processing in quantum computing, communications and sensing.

Credit: 
DOE/Argonne National Laboratory

Researchers build the fastest real-time quantum random number generator

Prof. PAN Jianwei and Prof. ZHANG Jun from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences, collaborating with Prof. CHU Tao's group from Zhejiang University, realized the fastest miniaturized real-time quantum random number generator (QRNG) to date, with a record-breaking output rate of 18.8 Gbps, by combining a state-of-the-art photonic integrated chip with optimized real-time post-processing. The study was published in Applied Physics Letters on June 29.

Random numbers are needed in many fields, such as the information security and cryptography industries. Unlike other random number generators, a QRNG, a key component of quantum communication systems, offers unpredictability, irreproducibility, and unbiasedness.

Integration density and real-time generation rate are the two key indicators for a QRNG, and high integration density is difficult to achieve with existing methods.

Prof. PAN and Prof. ZHANG refined a high-speed quantum random number generation scheme that extracts randomness from vacuum states and verified it experimentally.

Collaborating with the researchers from Zhejiang University, they produced the photonic integrated chips required by this vacuum-state scheme. They butterfly-packaged a photonic integrated chip, including an InGaAs homodyne detector and a high-bandwidth transimpedance amplifier (TIA), in a package measuring 15.6 mm × 18 mm.

Both the real-time generation rate and the integration density of the QRNG were improved through an optimized real-time post-processing algorithm implemented in hardware on a field-programmable gate array (FPGA).
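
The paper does not spell out its post-processing here, but a common extractor in vacuum-noise QRNGs is Toeplitz hashing, which compresses partially random raw bits into nearly uniform output bits. The sketch below is a software illustration of that idea only; the function name, block sizes, and compression ratio are invented for the example.

```python
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, n_out):
    """Compress raw bits into n_out nearly uniform bits by multiplying with a
    seeded binary Toeplitz matrix over GF(2) (Toeplitz-hashing extractor)."""
    raw = np.asarray(raw_bits, dtype=np.int64)
    d = np.asarray(seed_bits, dtype=np.int64)
    n_in = raw.size
    assert d.size == n_in + n_out - 1, "Toeplitz seed needs n_in + n_out - 1 bits"
    # T[i, j] = d[i - j + n_in - 1]: entries are constant along each diagonal,
    # so the whole matrix is defined by the one-time seed vector d.
    idx = np.arange(n_out)[:, None] - np.arange(n_in)[None, :] + n_in - 1
    T = d[idx]
    return (T @ raw) % 2  # matrix-vector product over GF(2)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=1024)             # stand-in for digitized vacuum noise
seed = rng.integers(0, 2, size=1024 + 512 - 1)  # one-time random seed
out = toeplitz_extract(raw, seed, 512)          # 2:1 compression to 512 output bits
```

In a real device the compression ratio is set by the measured min-entropy of the noise source, and the matrix multiplication is pipelined in FPGA logic so the extractor keeps up with the sampling rate.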

The finished QRNG system achieved the world-beating rate of 18.8 Gbps after passing the transmission test.

Credit: 
University of Science and Technology of China

NTU Singapore converts tamarind shells into an energy source for vehicles

image: A close up of tamarind pods, along with pieces of their shell.

Image: 
Credit to NTU Singapore

Shells of tamarind, a tropical fruit consumed worldwide, are discarded during food production. Because they are bulky, tamarind shells take up a considerable amount of space in landfills, where they are disposed of as agricultural waste.

However, a team of international scientists led by Nanyang Technological University, Singapore (NTU Singapore) has found a way to deal with the problem. By processing the tamarind shells which are rich in carbon, the scientists converted the waste material into carbon nanosheets, which are a key component of supercapacitors - energy storage devices that are used in automobiles, buses, electric vehicles, trains, and elevators.

The study reflects NTU's commitment to address humanity's grand challenges on sustainability as part of its 2025 strategic plan, which seeks to accelerate the translation of research discoveries into innovations that mitigate our impact on the environment.

The team, made up of researchers from NTU Singapore, the Western Norway University of Applied Sciences in Norway, and Alagappa University in India, believes that these nanosheets, when scaled up, could be an eco-friendly alternative to their industrially produced counterparts, and cut down on waste at the same time.

Assistant Professor (Steve) Cuong Dang, from NTU's School of Electrical and Electronic Engineering, who led the study, said: "Through a series of analyses, we found that the performance of our tamarind shell-derived nanosheets was comparable to their industrially made counterparts in terms of porous structure and electrochemical properties. The process to make the nanosheets is also the standard method to produce active carbon nanosheets."

Professor G. Ravi, Head of the Department of Physics at Alagappa University, who co-authored the study with Assistant Professor R. Yuvakkumar, also of Alagappa University, said: "The use of tamarind shells may reduce the amount of space required for landfills, especially in regions in Asia such as India, one of the world's largest producers of tamarind, which is also grappling with waste disposal issues."

The study was published in the peer-reviewed scientific journal Chemosphere in June.

The step-by-step recipe for carbon nanosheets

To manufacture the carbon nanosheets, the researchers first washed tamarind fruit shells and dried them at 100°C for around six hours, before grinding them into powder.

The scientists then baked the powder in a furnace for 150 minutes at 700-900°C in the absence of oxygen, converting it into ultrathin sheets of carbon known as nanosheets.

Tamarind shells are rich in carbon and porous in nature, making them an ideal material from which to manufacture carbon nanosheets.

A common material used to produce carbon nanosheets is industrial hemp fibre. However, hemp fibres must first be heated at over 180°C for 24 hours - four times longer than the drying step for tamarind shells, and at a higher temperature - before being subjected to intense heat to convert them into carbon nanosheets.

Professor Dhayalan Velauthapillai, Head of the research group for Advanced Nanomaterials for Clean Energy and Health Applications at Western Norway University of Applied Sciences, who participated in the study, said: "Carbon nanosheets comprise layers of carbon atoms arranged in interconnecting hexagons, like a honeycomb. The secret behind their energy-storing capabilities lies in their porous structure, which leads to a large surface area that helps the material store large amounts of electric charge."

The tamarind shell-derived nanosheets also showed good thermal stability and electric conductivity, making them promising options for energy storage.

The researchers hope to explore larger scale production of the carbon nanosheets with agricultural partners. They are also working on reducing the energy needed for the production process, making it more environmentally friendly, and are seeking to improve the electrochemical properties of the nanosheets.

The team also hopes to explore the possibility of using different types of fruit skins or shells to produce carbon nanosheets.

Credit: 
Nanyang Technological University

Oregon State researchers begin to unravel the mysteries of kombucha fermentation

image: Keisha Harrison, a doctoral student at Oregon State University, with a kombucha SCOBY.

Image: 
Oregon State University

CORVALLIS, Ore. - Oregon State University scientists are beginning to unravel the key microorganisms that contribute to the fermentation of kombucha, research that is already aiding large-scale kombucha producers in the fast-growing industry.

Kombucha is a fermented tea drink that has been homebrewed around the world for centuries, but in recent years has become widely popular with a global market size expected to grow from $1.3 billion in 2019 to $8.1 billion by 2027, according to an industry report. Several large producers, including Humm and Brew Dr., are based in Oregon.

Kombucha is produced by fermenting sugared tea using a symbiotic culture of bacteria and yeast, commonly referred to as SCOBY, and adding flavorings to enhance the taste. But little is known about what microorganisms in the SCOBY contribute to fermentation, which presents a challenge to kombucha brewers, especially those working on a commercial scale.

"Without having a baseline of which organisms are commonly most important, there are too many variables to try and think about when producing kombucha," said Chris Curtin, an assistant professor of fermentation microbiology at Oregon State. "Now with this research we can say there are four main types of SCOBY. If we want to understand what contributes to differences in kombucha flavors we can narrow that variable to four types as opposed to, say, hundreds of types."

Curtin and Keisha Harrison, a doctoral student in Curtin's lab, recently published a paper in the journal Microorganisms about their kombucha microorganism research, which began in 2017 when Harrison joined the lab.

Harrison gave a presentation about the research at KombuchaKon, an annual technical meeting for the kombucha brewing industry. Her talk caught the attention of representatives from Sierra Nevada Brewing Co., who were looking to launch a line of hard kombucha, Curtin said.

Sierra Nevada launched that line, known as Strainge Beast, in 2020, and recently expanded it with three new flavors. The line uses a proprietary SCOBY culture developed with the Oregon State team, drawing upon results from the recently published paper.

Curtin and Harrison's research follows work by other scientists who have uncovered the microbial communities that contribute to fermentation in other foods and beverages, such as wine, cheese and some types of beer. Past efforts to understand the microbial composition of kombucha have yielded inconclusive results.

In the recently published paper, Curtin and Harrison begin to change that. They used high-throughput DNA sequencing approaches to evaluate the microorganisms in 103 SCOBY used by kombucha brewers, primarily in North America. Only a few studies have applied these techniques with kombucha, and none at this scale.

The major finding was that there are essentially only four main types of SCOBY. Interestingly, each type consisted of very different combinations of yeast and bacteria working together. This contrasts with other fermented beverages, where a single organism consistently becomes dominant, as is the case for beer, wine and cider.

"This is the first comprehensive picture of SCOBY microbial community ecology," Harrison said. "Further research is necessary to relate the microbial community composition of kombucha SCOBY to acidity, flavor and aroma of finished products. That work can now draw upon what we have discovered with the results in this paper."

Credit: 
Oregon State University

Just 25 mega-cities produce 52% of the world's urban greenhouse gas emissions

In 2015, 170 countries worldwide adopted the Paris Agreement, with the goal of limiting the average global temperature increase to 1.5°C. Following the agreement, many countries and cities proposed targets for greenhouse gas mitigation. However, the UNEP Emissions Gap Report 2020 shows that, without drastic and strict actions to mitigate the climate crisis, we are still heading for a temperature increase of more than 3°C by the end of the 21st century.

A new study published in the journal Frontiers in Sustainable Cities presents the first global balance sheet of greenhouse gases (GHGs) emitted by major cities around the world. The aim was to assess and monitor the effectiveness of historical GHG reduction policies implemented by 167 globally distributed cities at different developmental stages.

While only covering 2% of the Earth's surface, cities are big contributors to the climate crisis. But current urban GHG mitigation targets are not sufficient to achieve global climate change targets by the end of this century. "Nowadays, more than 50% of the global population resides in cities. Cities are reported to be responsible for more than 70% of GHG emissions, and they share a big responsibility for the decarbonization of the global economy. Current inventory methods used by cities vary globally, making it hard to assess and compare the progress of emission mitigation over time and space," says co-author Dr Shaoqing Chen, of Sun Yat-sen University, China.

Key findings

1. The top 25 cities accounted for 52% of the total urban GHG emissions.

2. Cities in Europe, Australia, and the US had significantly higher per capita emissions than cities in developing areas.

3. Stationary energy and transportation were the two main sources of emissions.

4. Of the 42 cities with traceable time-series data, 30 decreased their annual GHG emissions over the study period, though several cities showed an increase.

5. Of the 167 cities, 113 set varying types of GHG emission reduction targets, while 40 set carbon neutrality goals.

The biggest polluters

First, the authors conducted sector-level GHG emission inventories of the 167 cities - from metropolitan areas such as Durban, South Africa, to cities such as Milan, Italy. Then, they analyzed and compared the carbon reduction progress of the cities based on emission inventories recorded in different years (from 2012 to 2016). Lastly, they assessed the cities' short-, mid-, and long-term carbon mitigation goals. The cities were chosen from 53 countries (in North and South America, Europe, Asia, Africa, and Oceania) and were selected for representativeness in urban size and regional distribution. The degree of development was distinguished by whether the cities belonged to developed or developing countries according to the UN classification criteria.

The results showed that both developed and developing countries have cities with high total GHG emissions, but that megacities in Asia (such as Shanghai in China and Tokyo in Japan) were especially important emitters. The inventory of per capita emissions showed that cities in Europe, the US, and Australia had significantly higher emissions than most cities in developing countries. China, classified here as a developing country, also had several cities where per capita emissions matched those of developed countries. It is important to note that many developed countries outsource high carbon production chains to China, which increases export-related emissions for the latter.

The researchers also identified some of the most important sources of greenhouse gas emissions. "Breaking down the emissions by sector can inform us what actions should be prioritized to reduce emissions from buildings, transportation, industrial processes and other sources," says Chen. Stationary energy - which includes emissions from fuel combustion and electricity use in residential and institutional buildings, commercial buildings, and industrial buildings - contributed between 60 and 80% of total emissions in North American and European cities. In one third of the cities, more than 30% of total GHG emissions were from on-road transportation. Meanwhile, less than 15% of total emissions came from railways, waterways, and aviation.
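
As a rough illustration of the sector-level accounting described above, the snippet below aggregates a hypothetical city inventory into a total, a per-capita figure, and sector shares. The sector labels and every number here are invented for the example, not taken from the study.

```python
def summarize_inventory(sectors_mt, population):
    """Return total emissions (Mt CO2-eq), per-capita emissions
    (t CO2-eq per person), and each sector's share of the total."""
    total = sum(sectors_mt.values())
    per_capita = total * 1e6 / population       # convert Mt to t per person
    shares = {s: v / total for s, v in sectors_mt.items()}
    return total, per_capita, shares

# Hypothetical sector-level inventory for one city (all numbers illustrative).
city = {"stationary_energy": 120.0, "on_road_transport": 45.0,
        "rail_water_aviation": 8.0, "waste": 6.0}
total, per_capita, shares = summarize_inventory(city, population=8.5e6)
```

With these made-up numbers, stationary energy accounts for about two-thirds of the total, inside the 60-80% range the study reports for North American and European cities.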

Lastly, the findings show that the levels of emissions increase and decrease varied between the cities over the study period. For 30 cities, there was a clear emission decrease between 2012 and 2016. The top four cities with the largest per capita reduction were Oslo, Houston, Seattle, and Bogotá. The top four cities with the largest per capita emissions increase were Rio de Janeiro, Curitiba, Johannesburg, and Venice.

Policy recommendations

Of the 167 cities, 113 have set varying types of GHG emission reduction targets, while 40 have set carbon neutrality goals. But this study joins many other reports and research that show that we are a long way off achieving the goals set by the Paris Agreement.

Chen and colleagues make three key policy recommendations. First: "Key emitting sectors should be identified and targeted for more effective mitigation strategies. For example, the differences in the roles that stationary energy use, transportation, household energy use, and waste treatments play for cities should be assessed."

Second, development of methodologically consistent global GHG emission inventories is also needed to track the effectiveness of urban GHG reduction policies. Lastly: "Cities should set more ambitious and easily traceable mitigation goals. At a certain stage, carbon intensity is a useful indicator of the decarbonization of the economy and provides better flexibility for cities with fast economic growth and rising emissions. But in the long run, switching from intensity mitigation targets to absolute mitigation targets is essential to achieve global carbon neutrality by 2050."
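
The trade-off behind that last recommendation comes down to a few lines of arithmetic: if GDP grows faster than carbon intensity falls, absolute emissions keep rising even while the intensity target is met. The growth and decline rates below are invented purely for illustration.

```python
def project_emissions(e0, gdp_growth, intensity_decline, years):
    """Absolute emissions after `years`, given annual GDP growth and an
    annual decline in carbon intensity (emissions per unit of GDP)."""
    annual_factor = (1 + gdp_growth) * (1 - intensity_decline)
    return e0 * annual_factor ** years

# Intensity falls 4% a year, but GDP grows 6% a year:
e10 = project_emissions(100.0, gdp_growth=0.06, intensity_decline=0.04, years=10)
# e10 exceeds the starting 100.0: emissions still rise despite the intensity cut.
```

This is why an intensity target can be met for a decade while absolute emissions grow, and why the authors argue for switching to absolute targets in the long run.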

Credit: 
Frontiers

Addressing social needs may help mitigate distress and improve the health of women with cancer

A new study published by Wiley early online in CANCER, a peer-reviewed journal of the American Cancer Society, has identified unmet social needs in women with gynecologic cancer that could be addressed to improve care for patients and lessen disparities. For example, identifying patients who reported needing help with reading hospital materials resulted in the use of a cancer care navigator who provided patient education and support, facilitating physician-patient communication and adherence to care recommendations.

The prospective survey-based study conducted at Olive View-UCLA Medical Center, a public safety net hospital near Los Angeles, included 135 women, many of whom were immigrants and living below the federal poverty level. Nearly two-thirds (65.2%) of patients had at least one unmet social need (the lack of a basic resource), and 37.8% of patients screened positive for psychological distress. Help with reading hospital materials was the most frequently reported need (30.4%). Needing someone to talk to, social isolation, housing instability, financial toxicity, food insecurity, and transportation difficulties were also prevalent.

"While it is not within the power of individual healthcare systems or providers to modify social determinants of health, these data offer hope that we can implement programs to reduce healthcare disparities by addressing unmet social needs," said senior author Abdulrahman K. Sinno, MD, of the Sylvester Comprehensive Cancer Center at the University of Miami Miller School of Medicine. "It's important that we focus on addressing social needs regardless of the social, economic, and political inequities that precede them because these needs are ultimately downstream mediators of poor health outcomes."

In addition to using cancer care navigators to help with reading hospital materials, other social needs such as food insecurity, housing instability, and lack of transportation were addressed by connecting patients with available resources. These resources include Meals on Wheels America, Project Angel Food, county-sponsored housing programs, and transportation assistance programs. Furthermore, for patients who screened positive for distress, a social worker with mental health specialization and a psychiatry team were embedded into the clinic to remove barriers to mental healthcare.

"In the future, we plan to demonstrate the utility and cost effectiveness of identified social need intervention algorithms not only for improving quality of life and health outcomes, but also for reducing healthcare disparities," said Dr. Sinno.

Credit: 
Wiley

Oncotarget: Inhibitory effects of Tomivosertib in acute myeloid leukemia

image: LC-MS/MS analysis identifies putative MNK1/2 targets and interactors. (A) FLAG-MNK2 or FLAG-MNK1 was overexpressed in 293T cells. MNK1/2 was immunoprecipitated from 293T cell lysates using anti-FLAG-M2 agarose conjugated beads. An empty vector was used as a negative control. Immunoprecipitated proteins were resolved by SDS-PAGE, and were prepared using standard techniques and then analyzed via LC-MS/MS. A Venn diagram was created depicting the number of proteins that interact with MNK1/2. (B) The results from (A) were annotated using Metascape. The heat map shows the most significant pathways and the overlap between the MNK1 IP and MNK2 IP. (C) FLAG-MNK2 or FLAG-MNK1 was immunoprecipitated as described in (A). Proteins were resolved by SDS-PAGE and immunoblotted with the indicated antibodies. (D) FLAG-MNK2 was overexpressed in 293T cells. Cells were treated with either DMSO (vehicle-control) or Tomivosertib at the indicated doses and time points, lysed and MNK2 was immunoprecipitated with anti-FLAG-M2 agarose conjugated beads. An empty vector (EV) was used as a negative control. Proteins were resolved by SDS-PAGE and immunoblotted with the indicated antibodies. (E) HA-RAPTOR was overexpressed in 293T cells. Cells were treated with either DMSO (vehicle-control) or Tomivosertib at the indicated doses and time points, lysed and RAPTOR was immunoprecipitated with anti-HA Sepharose conjugated beads. An empty vector (EV) was used as negative control. Proteins were resolved by SDS-PAGE and immunoblotted with the indicated antibodies. (C-E) Total cell lysates for each experimental condition were run in parallel with the immunoprecipitated proteins (Input).

Image: 
Correspondence to - Leonidas C. Platanias - l-platanias@northwestern.edu

Oncotarget published "Inhibitory effects of Tomivosertib in acute myeloid leukemia," in which the authors evaluated the therapeutic potential of the highly selective MNK1/2 inhibitor Tomivosertib on AML cells.

Tomivosertib was highly effective at blocking eIF4E phosphorylation on serine 209 in AML cells.

Moreover, combination of Tomivosertib and Venetoclax resulted in synergistic anti-leukemic responses in AML cell lines.

Mass spectrometry studies identified novel putative MNK1/2 interactors, while parallel studies demonstrated that MNK2 - RAPTOR - mTOR complexes are not disrupted by Tomivosertib.

Overall, these Oncotarget findings demonstrate that Tomivosertib exhibits potent anti-leukemic properties on AML cells and support the development of clinical translational efforts involving the use of this drug, alone or in combination with other therapies for the treatment of AML.

Dr. Leonidas C. Platanias from Northwestern University and the Jesse Brown Veterans Affairs Medical Center said, "Acute myeloid leukemia (AML) is the second most common form of leukemia in adults, and has a very poor overall survival rate."

Therefore, there continues to be a need for new therapeutic modalities, including approaches targeting negative-feedback signaling pathways that may be activated in response to antileukemic treatments, leading to resistance.

The pro-neoplastic activity of eIF4E is associated with its phosphorylation/activation by MNK1/2 on serine 209 and correlates with enhanced mRNA translation, as well as nuclear export of mRNAs involved in tumorigenesis and cell cycle control.

Several studies have shown that pharmacological targeting of MNK1/2 results in inhibitory activity against AML cells in pre-clinical models.

Nonetheless, the full therapeutic potential of MNK1/2 inhibition for the treatment of AML has yet to be assessed.

The authors demonstrate that Tomivosertib suppresses eIF4E phosphorylation in AML cells and decreases leukemic cell survival and proliferation.

The Platanias Research Team concluded in their Oncotarget Research Output, "Viewed altogether, these studies indicate that MNK1/2 inhibition would most likely be a successful strategy in only a subset of AML patients. In future studies it will be crucial to ascertain what pathways are responsible for sensitivity to MNK inhibitors. These studies will help to identify potential regulatory programs through which MNK1/2 modulates cell signaling pathways critical for leukemic cell survival and may lead to the development of novel therapeutic interventions for AML."

Credit: 
Impact Journals LLC

A Trojan horse could help get drugs past our brain's tough border patrol

Multiple sclerosis, Parkinson's disease, Alzheimer's disease and epilepsy are but a few of the many central nervous system disorders. They are also very difficult to treat, since the brain is protected by the blood-brain barrier.

The blood-brain barrier works as a border wall between the blood and the brain, allowing only certain molecules to enter. Water and oxygen can get through, as can substances such as alcohol and caffeine, but the barrier blocks more than 99 percent of potentially neuroprotective compounds from reaching their targets in the brain.

Now, in a study conducted in living, and in some cases awake, mice, a team of researchers from the University of Copenhagen provides direct insight into how to trick the blood-brain barrier's impermeable walls into allowing drug delivery to the brain.

They investigated so-called liposome nanoparticle drug carriers, delivering them past the blood-brain barrier while tracking and monitoring them all the way through the system.

"Before this study, the community had no insight into what was happening in the blood-brain barrier in the living brain, and why some nanoparticles crossed and others wouldn't. In this regard, the blood-brain barrier was a 'black box' where the events between drug administration and detection in the brain remained obscure. It was even doubted whether nanoparticle entry to the brain was possible at all. With our paper, we now provide direct proof of nanoparticle entry to the brain and describe why, when, and where it happens," explains Assistant Professor Krzysztof Kucharz from the Department of Neuroscience.

The researchers, aided by colleagues at the Technical University of Denmark and Aalborg University, used two-photon imaging to deconstruct the blood-brain barrier in order to understand how the nanoparticle drug carriers travel past the blood-brain barrier in a living organism.

"We monitored the nanoparticles' entry to the brain at each step of the process, providing valuable knowledge for future drug design. Specifically, we show which vascular segments are the most efficient to target with nanoparticles to allow their entry to the brain. And because we were able to monitor the drug carriers at the level of a single nanoparticle, we now provide a novel platform to develop more efficient and safer therapeutic approaches," says Krzysztof Kucharz.

The study, published in Nature Communications, shows that nanoparticles targeted to the brain are picked up in the capillaries and venules by endothelial cells, which are the cells in the blood-brain barrier that allow or reject access of molecules to our brain tissue.

"Analogously to the mythical Trojan horse, they are recognized by the endothelium and transported across the blood-brain barrier to the brain. These nanoparticles have a cargo space that can be loaded with various neuroprotective drugs to treat many neurodegenerative diseases. This approach is currently being tested in many clinical and preclinical trials in brain cancer, stroke, Alzheimer's and Parkinson's disease. However, the levels of nanoparticle transport into the brain are still low and need to improve to reach clinical significance. Therefore, there is a great need to optimize nanoparticle drug delivery, and to do so, it is crucial to understand how nanoparticles interact with the blood-brain barrier. This is where we came into play," says Krzysztof Kucharz.

The two-photon imaging approach the researchers used to study nanoparticles allowed them to open the blood-brain barrier "black box" and get a full picture of the nanoparticles' route across it. They tagged the particles with fluorescent molecules, which allowed microscopy of the nanocarriers in the living, intact brain at the resolution of a single nanoparticle.

Now, they could observe how nanoparticles circulate in the bloodstream, how they associate over time with the endothelium, how many are taken up by the endothelium, how many are left behind, what happens to them once inside the blood-brain barrier and where the nanoparticles exit to the brain. They also observed that brain vessels handle the nanoparticles differently, allowing or rejecting access to the brain tissue depending on the vessel type.

"Although the anatomy and function of the endothelium differ between different vessel types, this principal feature of the brain had so far been overlooked in drug delivery studies, and whether or how it impacted drug delivery had been unknown," says Krzysztof Kucharz.

They show that nanoparticles enter the brain mainly at large vessels, i.e. venules, which are surrounded by a so-called perivascular space, and not, as previously believed, at the small and numerous capillaries. The perivascular space around venules makes it easier for nanoparticles to exit the endothelium and progress further into the brain, but it is absent in capillaries.

"Our results challenge the prevailing view that capillaries constitute the main locus for nanoparticle transport to the brain. Instead, venules should be targeted for efficient nanoparticle drug delivery to the brain," says Krzysztof Kucharz.

The methodological platform developed by the authors may help fine-tune nanoparticle formulations for increased transport to the brain and provide valuable information for the design of novel drug delivery systems. This will hopefully be a great leap forward in treating brain disorders efficiently.

Credit: 
University of Copenhagen - The Faculty of Health and Medical Sciences

First actionable clock that predicts immunological health and chronic diseases of aging

image: The first actionable clock that highlights the critical role of the immune system in the aging process

Image: 
James O'Brien for the Buck Institute

Researchers from the Buck Institute and Stanford University have created an inflammatory clock of aging (iAge) which measures inflammatory load and predicts multimorbidity, frailty, immune health and cardiovascular aging, and is also associated with exceptional longevity in centenarians. Utilizing deep learning, a form of AI, in studies of the blood immunome of 1,001 people, researchers also identified a modifiable chemokine associated with cardiac aging which can be used for early detection of age-related pathology and provides a target for interventions. Results are published in Nature Aging.

"Standard immune metrics which can be used to identify individuals most at risk for developing single or even multiple chronic diseases of aging have been sorely lacking," said David Furman, PhD, Buck Institute Associate Professor, Director of the 1001 Immunomes Project at Stanford University School of Medicine and senior author of the study. "Bringing biology to our completely unbiased approach allowed us to identify a number of metrics, including a small immune protein which is involved in age-related systemic chronic inflammation and cardiac aging. We now have means of detecting dysfunction and a pathway to intervention before full-blown pathology occurs."

According to first author Nazish Sayed, MD, PhD, Assistant Professor of Vascular Surgery at Stanford Medicine, the study identified the soluble chemokine CXCL9 as the strongest contributor to iAge. Furman described it as a small immune protein that is usually called into action to attract lymphocytes to the site of an infection. "But in this case we showed that CXCL9 upregulates multiple genes implicated in inflammation and is involved in cellular senescence, vascular aging and adverse cardiac remodeling," he said, adding that silencing CXCL9 reversed loss of function in aging endothelial cells in both humans and mice.

Larger implications for iAge

Results from the initial analysis (which also included information from comprehensive clinical health assessments of 902 individuals) were validated in an independent cohort of centenarians and against all-cause mortality in the Framingham Heart Study. Furman says when it comes to health and longevity, the "age" of one's immune system most certainly trumps the chronological information that can be derived from a driver's license. "On average, centenarians have an immune age that is 40 years younger than what is considered 'normal,' and we have one outlier, a super-healthy 105-year-old man (who lives in Italy) who has the immune system of a 25-year-old," he said.

Study results involving cardiac health were also validated in a separate group of 97 extremely healthy adults (ages 25 to 90) recruited from Palo Alto, California. Furman says researchers found a correlation between CXCL9 and results from pulse wave velocity testing, a measure of vascular stiffness. "These people are all healthy according to all available lab tests and clinical assessments, but by using iAge we were able to predict who is likely to suffer from left ventricular hypertrophy (an enlargement and thickening of the walls of the heart's main pumping chamber) and vascular dysfunction."

Furman says the tool can be used to track someone's risk of developing multiple chronic diseases by assessing the cumulative physiological damage to their immune system. For example, age-related frailty can be predicted by comparing biological immune metrics with information about how long it takes someone to stand up from a chair and walk a certain distance as well as their degree of autonomy and independence. "Using iAge it's possible to predict seven years in advance who is going to become frail," he said. "That leaves us lots of room for interventions."

Highlighting the connection between immune health and aging

In 2013 a group of researchers studying aging identified nine "hallmarks" of the aging process. Age-related immune system dysfunction was not part of the mix. "It's becoming clear that we have to pay more attention to the immune system with age, given that almost every age-related malady has inflammation as part of its etiology," said Furman. "If you're chronically inflamed, you will have genomic instability as well as mitochondrial dysfunction and issues with protein stability. Systemic chronic inflammation triggers telomere attrition, as well as epigenetic alterations. It's clear that all of these nine hallmarks are, by and large, triggered by having systemic chronic inflammation in your body. I think of inflammation as the 10th hallmark."

Credit: 
Buck Institute for Research on Aging

Coastal ecosystems worldwide: Billion-dollar carbon reservoirs

According to the study, Australia, Indonesia and the USA provide the largest carbon storage potential with their coastal ecosystems. The team also calculated which countries benefit most from the coastal CO2 uptake worldwide. The different ways in which countries are affected by climate change are quantified by using the so-called social costs of carbon.

"If we take into account the differences in marginal climate damages that occur in each country, we find that Australia and Indonesia are clearly the largest donors in terms of globally avoided climate damages originating from coastal CO2 uptake, as they themselves derive comparatively little benefit from the high storage potential of their coasts," says Wilfried Rickels, who heads the Global Commons and Climate Policy Research Center at the Kiel Institute. "The U.S., on the other hand, also stores a lot of carbon in its coastal ecosystems, but at the same time benefits the most from natural sinks after India and China. In monetary terms, the three countries realize annual welfare gains of about 26.4 billion US dollars (India), 16.6 billion US dollars (China) and 14.7 billion US dollars (U.S.) thanks to global coastal ecosystems and the resulting lower climate impact costs."

The basis for the monetary calculations is the so-called social cost of carbon, which allows assessing the contribution of coastal carbon uptake within the "inclusive wealth" concept. 'Inclusive wealth' is defined as the totality of all natural and man-made capital stocks, valued with so-called shadow prices, i.e. their contributions to social welfare. Among other factors, the absolute scarcity of resources plays an important role for shadow prices. Atmospheric CO2 has a negative impact on welfare primarily through climate change. However, countries are affected differently by climate change, and accordingly country-specific shadow prices are used in the study.
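In this framework, a country's annual welfare gain from global coastal CO2 uptake is, roughly, the total CO2 taken up worldwide multiplied by that country's own social cost of carbon, since an avoided tonne anywhere reduces climate damages everywhere. A minimal sketch of that arithmetic, with purely hypothetical uptake and shadow-price numbers (the study's actual values differ):

```python
# Hypothetical global coastal CO2 uptake (tonnes per year) and
# country-specific social costs of carbon (USD per tonne CO2).
# All figures are illustrative, not from the study.
global_uptake_tco2 = 200e6

scc_usd_per_t = {  # marginal climate damage avoided, per country
    "CountryA": 86.0,
    "CountryB": 40.0,
    "CountryC": 12.0,
}

# Each country's annual welfare gain = global uptake x its own SCC.
welfare_gain = {c: global_uptake_tco2 * p for c, p in scc_usd_per_t.items()}

for country, gain in sorted(welfare_gain.items()):
    print(f"{country}: {gain / 1e9:.1f} billion USD/yr")
```

This also shows why countries with high marginal damages (high SCC) gain the most even if their own coasts store little carbon, mirroring the India/China/U.S. ranking described above.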

The analysis does not include other carbon sinks or emissions from energy and industry. When carbon emissions from energy and industry are also considered, only Guinea-Bissau, Belize, Vanuatu, Sierra Leone, Solomon Islands, Guinea, Comoros, Samoa, Madagascar, and Papua New Guinea make a net positive contribution through their coastal ecosystems, since they store more CO2 in coastal ecosystems than they emit in total.

The study also emphasizes that carbon storage is only a small part of positive impacts of coastal ecosystems for humans. "Coastal ecosystems are an essential component of marine ecosystems and are therefore particularly important for marine biodiversity and for fisheries. At the same time, they contribute to flood and coastal protection and are therefore important for adaptation to climate change," emphasizes Martin Quaas, who heads the Biodiversity Economics research group at iDiv and UL.

In any case, there is currently still a very strong focus on afforestation on land when it comes to the challenges of achieving the Paris climate goals. "Marine CO2 uptake as well as its enhancement requires more attention in the debate on net-zero greenhouse gas emissions and net-negative CO2 emissions targets," Rickels points out. Especially a possible weakening of the marine carbon sinks would require even more significant mitigation and carbon dioxide removal efforts. "The coasts, with their numerous different user groups as well as possible conflicts of use, have a special role to play here."

The natural capital approach used in the study is suitable for assessing the redistribution resulting from CO2 emissions and CO2 sinks, which, unlike existing market-based assessments, is not influenced by the stringency of the underlying climate policy. The researchers plan to explore this further in future studies.

Credit: 
German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig

Immune system 'clock' predicts illness and mortality

You're as old as your immune system.

Investigators at the Stanford University School of Medicine and the Buck Institute for Research on Aging have built an inflammatory-aging clock that's more accurate than the number of candles on your birthday cake in predicting how strong your immune system is, how soon you'll become frail or whether you have unseen cardiovascular problems that could become clinical headaches a few years down the road.

In the process, the scientists fingered a bloodborne substance whose abundance may accelerate cardiovascular aging.

The story of the clock's creation will be published July 12 in Nature Aging.

"Every year, the calendar tells us we're a year older," said David Furman, PhD, the study's senior author. "But not all humans age biologically at the same rate. You see this in the clinic -- some older people are extremely disease-prone, while others are the picture of health."

This divergence, Furman said, traces in large part to differing rates at which people's immune systems decline. The immune system -- a carefully coordinated collection of cells, substances and strategies with which evolution has equipped us to deal with threats such as injuries or invasions by microbial pathogens -- excels at mounting a quick, intense, localized, short-term, resist-and-repair response called acute inflammation. This "good inflammation" typically does its job, then wanes within days. (An example is that red, swollen finger you see when you have a splinter, and the rapid healing that follows.)

As we grow older, a low-grade, constant, bodywide "bad inflammation" begins to kick in. This systemic and chronic inflammation causes organ damage and promotes vulnerability to a who's who of diseases spanning virtually every organ system in the body and including cancer, heart attacks, strokes, neurodegeneration and autoimmunity.

To date, there have been no metrics for accurately assessing individuals' inflammatory status in a way that could predict these clinical problems and point to ways of addressing them or staving them off, Furman said. But now, he said, the study has produced a single-number quantitative measure that appears to do just that.

Furman directs the Stanford 1000 Immunomes Project and is a visiting scholar at Stanford's Institute for Immunity, Transplantation and Infection. In addition, he's an associate professor at the Novato, California-based Buck Institute for Research on Aging and director of the Artificial Intelligence Platform at the same institute.

Lead authors of the study are Nazish Sayed, MD, PhD, assistant professor of vascular surgery at Stanford, and Yingxiang Huang, PhD, senior data scientist at the Buck Institute.

1,001 blood samples

For the 1000 Immunomes Project, blood samples were drawn from 1,001 healthy people ages 8-96 between 2009 and 2016. The samples were subjected to a barrage of analytical procedures determining levels of immune-signaling proteins called cytokines, the activation status of numerous immune-cell types in responses to various stimuli, and the overall activity levels of thousands of genes in each of those cells.

The new study employed artificial intelligence to boil all this data down to a composite the researchers refer to as an inflammatory clock. The strongest predictors of inflammatory age, they found, were a set of about 50 immune-signaling proteins called cytokines. Levels of those, massaged by a complex algorithm, were sufficient to generate a single-number inflammatory score that tracked well with a person's immunological response and the likelihood of incurring any of a variety of aging-related diseases.
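The paper's actual model is a deep-learning pipeline over roughly 50 cytokines, but the general idea of collapsing marker levels into a single-number score can be sketched as a weighted sum of standardized levels. All marker names, reference statistics, and weights below are hypothetical, for illustration only:

```python
# Hypothetical cytokine panel for one person, with made-up
# population means/standard deviations and learned weights.
levels  = {"CXCL9": 310.0, "IL6": 2.4, "TNF": 9.1}
means   = {"CXCL9": 250.0, "IL6": 2.0, "TNF": 8.0}
stdevs  = {"CXCL9": 60.0,  "IL6": 0.8, "TNF": 2.5}
weights = {"CXCL9": 0.9,   "IL6": 0.4, "TNF": 0.3}

def inflammatory_score(levels):
    """Weighted sum of z-scored marker levels -> one composite number."""
    return sum(
        weights[m] * (levels[m] - means[m]) / stdevs[m]
        for m in levels
    )

score = inflammatory_score(levels)  # higher = more inflammatory load
```

The real iAge model is nonlinear and learned from data; this linear sketch only conveys how many noisy measurements can be "massaged" into one score that can then be compared against chronological age.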

In 2017, the scientists assessed nearly 30 participants of the 1000 Immunomes Project, ages 65 or older, whose blood had been drawn in 2010. They measured the participants' speed at getting up from a chair and walking a fixed distance and, through a questionnaire, their ability to live independently ("Can you walk by yourself?" "Do you need help getting dressed?"). Inflammatory age proved superior to chronological age in predicting frailty seven years later.

Next, Furman and his colleagues obtained blood samples from an ongoing study of exceptionally long-lived people in Bologna, Italy, and compared the inflammatory ages of 29 such people (all but one a centenarian) with those of 18 50- to 79-year-olds. The older people had inflammatory ages averaging 40 years less than their calendar age. One, a 105-year-old man, had an inflammatory age of 25, Furman said.

To further assess inflammatory age's effect on mortality, Furman's team turned to the Framingham Study, which has been tracking health outcomes in thousands of individuals since 1948. The Framingham study lacked sufficient data on bloodborne-protein levels, but the genes whose activity levels largely dictate the production of the inflammatory clock's cytokines are well known. The researchers measured those cytokine-encoding genes' activity levels in Framingham subjects' cells. This proxy for cytokine levels significantly correlated with all-cause mortality among the Framingham participants.

A key substance

The scientists observed that blood levels of one substance, CXCL9, contributed more powerfully than any other clock component to the inflammatory-age score. They found that levels of CXCL9, a cytokine secreted by certain immune cells to attract other immune cells to a site of an infection, begin to rise precipitously after age 60, on average.

Among a new cohort of 97 25- to 90-year-old individuals selected from the 1000 Immunomes Project for their apparently excellent health, with no signs of any disease, the investigators looked for subtle signs of cardiovascular deterioration. Using a sensitive test of arterial stiffness, which conveys heightened risk for strokes, heart attacks and kidney failure, they tied high inflammatory-age scores -- and high CXCL9 levels -- to unexpected arterial stiffness and another portent of untoward cardiac consequences: excessive thickness of the wall of the heart's main pumping station, the left ventricle.

CXCL9 has been implicated in cardiovascular disease. A series of experiments in laboratory dishware showed that CXCL9 is secreted not only by immune cells but by endothelial cells -- the main components of blood-vessel walls. The researchers showed that advanced age both correlates with a significant increase in endothelial cells' CXCL9 levels and diminishes endothelial cells' ability to form microvascular networks, to dilate and to contract.

But in laboratory experiments conducted on tissue from mice and on human cells, reducing CXCL9 levels restored youthful endothelial-cell function, suggesting that CXCL9 directly contributes to those cells' dysfunction and that inhibiting it could prove effective in reducing susceptible individuals' risk of cardiovascular disease.

"Our inflammatory aging clock's ability to detect subclinical accelerated cardiovascular aging hints at its potential clinical impact," Furman said. "All disorders are treated best when they're treated early."

Credit: 
Stanford Medicine

AAN issues ethical guidance for dementia diagnosis and care

MINNEAPOLIS - The American Academy of Neurology (AAN), the world's largest association of neurologists with more than 36,000 members, is issuing ethical guidance for neurologists and neuroscience professionals who care for people with Alzheimer's disease and other dementias. The new position statement is published in the July 12, 2021 online issue of Neurology®, the medical journal of the American Academy of Neurology. This update to the 1996 AAN position statement was developed by the Ethics, Law, and Humanities Committee, a joint committee of the American Academy of Neurology, American Neurological Association and Child Neurology Society.

"Dementia care and scientific understanding have advanced considerably, including greater recognition of non-Alzheimer's dementias and advances in genetics, brain imaging and biomarker testing," said position statement author Winston Chiong, MD, PhD, of the University of California San Francisco and a member of the AAN's Ethics, Law, and Humanities Committee. "This American Academy of Neurology position statement focuses on day-to-day ethical problems faced by clinicians, patients and families in the course of dementia care."

The position statement was developed before FDA approval of the new medication aducanumab and does not address that drug.

The AAN position statement notes that communicating a dementia diagnosis can be ethically challenging. Some families may request withholding a diagnosis from their loved one, but that may deprive the person of important opportunities to plan for future needs. In most cases, the statement says family members' fears about potential emotional harm can be lessened by compassionate disclosure and so it recommends ways to communicate serious information.

"Caring for people with dementia requires respecting their autonomy and involving them in their care preferences as early as possible so their wishes can be known, while acknowledging their diminishing ability to make decisions," said Orly Avitzur, MD, MBA, FAAN, President of the American Academy of Neurology. "This position statement offers guidance in accordance with core ethical principles, supporting the American Academy of Neurology's mission to promote the highest quality patient-centered neurologic care."

The position statement notes that Alzheimer's disease is only one form of dementia and symptoms can vary depending on which form of dementia a person has. Some forms begin with behavior disturbances that may be misinterpreted as a psychiatric rather than neurologic disorder.

The statement distinguishes between genetic or biomarker testing in people with symptoms of dementia and testing in people who do not have symptoms but are believed to be at risk of future dementia. Genetic and biomarker testing in people without symptoms of dementia is not recommended except in a research context. The statement recommends that anyone undergoing genetic testing should receive genetic counseling before and after receiving results.

Ethnic and racial disparities are persistent in dementia and dementia care. The statement notes that Black and Latino people are at higher risk for developing dementia compared with white and Asian people, likely due to social and economic differences earlier in life, and often experience delays in receiving a diagnosis of dementia due to poorer access, unequal care by the medical establishment, and the subsequent mistrust that this unequal care generates. It says doctors should be aware that those with ethnic or cultural backgrounds different from their own may have different perceptions of illness and priorities for care than they do.

For decision-making, planning in the early stages of dementia is crucial. The statement says people with dementia should be encouraged to discuss their overall goals with their families and doctors, create advance health care directives, and engage in other financial and legal planning as a guide for when they are no longer able to make decisions. In moderate stages, people may still be able to participate in decision-making by relaying their values to guide care decisions. When a person can no longer make decisions, their representatives should give priority to preferences the person previously expressed.

For daily activities such as driving, cooking and managing finances, the position statement recommends that doctors and family members remain alert to ways of monitoring a person's activities to lessen risks while preserving their independence and dignity when possible.

The AAN also recognizes the potential for abuse and says doctors should look for and document physical signs of abuse, isolation of the person from trusted family or friends, failure to pay for care needs and malnutrition.

The position statement recognizes that for some patients in advanced stages, there are ways to maintain care in the home. But it also suggests that doctors recommend reassessing whether in-home care remains feasible when caregivers experience burnout.

While some may request physician-hastened death, which is legal in some states, the statement points out that such laws generally do not apply to people with dementia. These laws require that someone have an estimated survival of six months or less yet still be able to make decisions on their own. People with such advanced dementia typically are not able to make these decisions.

Finally, the position statement notes that families often bear significant financial strain associated with dementia care and says new ways of providing and financing long-term care are needed.

Credit: 
American Academy of Neurology

Giving a "tandem" boost to solar-powered water splitting

image: The use of a semitransparent TiO2 photoanode allows the SiC photocathode to make use of the transmitted light. Using photocatalysts with different energy gaps results in increased conversion efficiency.

Image: 
Image courtesy: Masashi Kato from Nagoya Institute of Technology.

Turning away from fossil fuels is necessary if we are to avert an environmental crisis due to global warming. Both industry and academia have been focusing heavily on hydrogen as a feasible clean alternative. Hydrogen is practically inexhaustible and, when used to generate energy, produces only water vapor. However, to realize a truly eco-friendly hydrogen society, we need to be able to mass-produce hydrogen cleanly in the first place.

One way to do that is by splitting water via "artificial photosynthesis," a process in which materials called "photocatalysts" leverage solar energy to produce oxygen and hydrogen from water. However, the available photocatalysts are not yet where they need to be to make solar-powered water splitting economically feasible and scalable. To get them there, two main problems should be solved: the low solar-to-hydrogen (STH) conversion efficiency and the insufficient durability of photoelectrochemical water splitting cells.

At Nagoya Institute of Technology, Japan, Professor Masashi Kato and his colleagues have been working hard to take photocatalysts to the next level by exploring new materials and their combinations and gaining insight into the physicochemical mechanisms that underlie their performances. In their latest study published in Solar Energy Materials and Solar Cells, Dr. Kato and his team have now managed to do just that by combining titanium oxide (TiO2) and p-type cubic SiC (3C-SiC), two promising photocatalyst materials, into a tandem structure that makes for a highly durable and efficient water splitting cell (see Figure).

The tandem structure explored by the team in their study places both photocatalyst materials in series, with a semi-transparent TiO2 operating as a photoanode and 3C-SiC as a photocathode. Since each material absorbs solar energy in a different wavelength band, the tandem structure can markedly increase the conversion efficiency of the water splitting cell by allowing more of the incoming light to excite charge carriers and generate the necessary currents.

The team measured the effects of applied external voltage and pH on the photocurrents generated in the cell and then conducted water splitting experiments under different light intensities. They also measured the amounts of oxygen and hydrogen generated. The results were highly encouraging, as Dr. Kato remarks: "The maximum applied-bias photon-to-current efficiency measured was 0.74%. This value, coupled with the observed durability of about 100 days, puts our water splitting system among the best currently available." Moreover, the findings of this study hinted at some of the potential mechanisms behind the observed performance of the proposed tandem structure.
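For readers unfamiliar with the metric quoted above: the press release does not spell out how applied-bias photon-to-current efficiency (ABPE) is computed, but the conventional definition used in photoelectrochemistry (not taken from this paper) is

```latex
\mathrm{ABPE}\,(\%) \;=\; \frac{\lvert J_{\mathrm{ph}} \rvert \,[\mathrm{mA/cm^2}] \times \bigl(1.23\,\mathrm{V} - \lvert V_{\mathrm{bias}} \rvert\bigr)}{P_{\mathrm{light}}\,[\mathrm{mW/cm^2}]} \times 100
```

where \(J_{\mathrm{ph}}\) is the photocurrent density, \(V_{\mathrm{bias}}\) is the externally applied bias, \(P_{\mathrm{light}}\) is the incident light power density, and 1.23 V is the thermodynamic potential required to split water. The 0.74% figure reported by the team is the maximum of this quantity over the bias voltages they tested.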

Further research is needed to continue improving photoelectrochemical water splitting systems until they become widely applicable. Still, this study is clearly a step towards a clean future. "Our contributions shall accelerate the development of artificial photosynthesis technologies, which will generate energy resources directly from solar light. Thus, our findings may assist in the realization of sustainable societies," says Dr. Kato, speaking of his vision.

We certainly hope the future he envisions is not too far away!

Credit: 
Nagoya Institute of Technology