Tech

One of Darwin's evolution theories finally proved by Cambridge researcher

image: Laura van Holstein in the Old Library at St John's College, Cambridge, with a first edition of Charles Darwin's seminal book On the Origin of Species

Image: 
Nordin Catic

Scientists have proved one of Charles Darwin's theories of evolution for the first time - nearly 140 years after his death.

Laura van Holstein, a PhD student in Biological Anthropology at St John's College, University of Cambridge, and lead author of the research published today (March 18) in Proceedings of the Royal Society, discovered mammal subspecies play a more important role in evolution than previously thought.

Her research could now be used to predict which species conservationists should focus on protecting to stop them becoming endangered or extinct.

A species is a group of animals that can interbreed freely amongst themselves. Some species contain subspecies - populations within a species that differ from each other by having different physical traits and their own breeding ranges. Northern giraffes have three subspecies that usually live in different locations to each other and red foxes have the most subspecies - 45 known varieties - spread all over the world. Humans have no subspecies.

van Holstein said: "We are standing on the shoulders of giants. In Chapter 3 of On the Origin of Species Darwin said animal lineages with more species should also contain more 'varieties'. Subspecies is the modern definition. My research investigating the relationship between species and the variety of subspecies proves that sub-species play a critical role in long-term evolutionary dynamics and in future evolution of species. And they always have, which is what Darwin suspected when he was defining what a species actually was."

The anthropologist confirmed Darwin's hypothesis by looking at data gathered by naturalists over hundreds of years - long before Darwin famously visited the Galapagos Islands on board HMS Beagle. On the Origin of Species by Means of Natural Selection was first published in 1859, after Darwin returned home from a five-year voyage of discovery. In the seminal book, Darwin argued that organisms gradually evolved through a process called 'natural selection' - often known as survival of the fittest. His pioneering work was considered highly controversial because it contradicted the Bible's account of creation.

van Holstein's research also proved that evolution happens differently in land mammals (terrestrial) and sea mammals and bats (non-terrestrial) because of differences in their habitats and differences in their ability to roam freely.

van Holstein said: "We found the evolutionary relationship between mammalian species and subspecies differs depending on their habitat. Subspecies form, diversify and increase in number in a different way in non-terrestrial and terrestrial habitats, and this in turn affects how subspecies may eventually become species. For example, if a natural barrier like a mountain range gets in the way, it can separate animal groups and send them off on their own evolutionary journeys. Flying and marine mammals - such as bats and dolphins - have fewer physical barriers in their environment."

The research explored whether subspecies could be considered an early stage of speciation - the formation of a new species. van Holstein said: "The answer was yes. But evolution isn't determined by the same factors in all groups and for the first time we know why because we've looked at the strength of the relationship between species richness and subspecies richness."
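To give a sense of the kind of relationship being tested, here is a minimal sketch in Python that regresses subspecies richness on species richness across a handful of clades. The counts are invented for illustration, and the published analysis uses phylogenetically informed models rather than a simple regression.

# Hypothetical illustration (not the study's data or model): testing whether
# clades with more species also tend to have more subspecies, the pattern
# Darwin predicted in Chapter 3 of On the Origin of Species.
import numpy as np
from scipy import stats

# Made-up counts of (species, subspecies) for a handful of mammalian clades.
species_richness    = np.array([ 4, 12, 25,  7, 40, 18,  3, 30])
subspecies_richness = np.array([ 6, 30, 70, 10, 95, 40,  2, 55])

slope, intercept, r, p, se = stats.linregress(species_richness, subspecies_richness)
print(f"slope={slope:.2f}, r={r:.2f}, p={p:.3g}")
# A strong positive slope would be consistent with subspecies acting as
# "incipient species", i.e. raw material for future speciation.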

The research acts as another scientific warning that the human impact on the habitat of animals will not only affect them now, but will affect their evolution in the future. This information could be used by conservationists to help them determine where to focus their efforts.

van Holstein explained: "Evolutionary models could now use these findings to anticipate how human activity like logging and deforestation will affect evolution in the future by disrupting the habitat of species. The impact on animals will vary depending on how their ability to roam, or range, is affected. Animal subspecies tend to be ignored, but they play a pivotal role in longer term future evolution dynamics."

van Holstein is now going to look at how her findings can be used to predict the rate of speciation from endangered species and non-endangered species.

Notes to editors: What Darwin said on page 55 in 'On the Origin of Species': "From looking at species as only strongly-marked and well-defined varieties, I was led to anticipate that the species of the larger genera in each country would oftener present varieties, than the species of the smaller genera; for wherever many closely related species (i.e. species of the same genus) have been formed, many varieties or incipient species ought, as a general rule, to be now forming. Where many large trees grow, we expect to find saplings."

Datasets: Most of the data is from Wilson and Reeder's Mammal Species of the World, a global collated database of mammalian taxonomy. The database contains hundreds of years' worth of work by taxonomists from all over the world. The current way of "doing" taxonomy goes all the way back to botanist Carl Linnaeus (1735), so the accumulation of knowledge is the combined work of all taxonomists since then.

Credit: 
St. John's College, University of Cambridge

Kaiser Permanente finds weight loss lasts long after surgery

Weight loss was maintained better after gastric bypass than after sleeve gastrectomy in large study of diverse patients in Washington and California.

SEATTLE -- People with severe obesity who underwent bariatric surgery maintained significantly more weight loss at 5 years than those who did not have surgery, according to a Kaiser Permanente study published March 16 in Annals of Surgery. Although some weight regain was common after surgery, regain to within 5% of baseline was rare, especially in patients who had gastric bypass instead of sleeve gastrectomy.
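As a hedged illustration of what "regain to within 5% of baseline" means in practice, the following toy calculation uses invented numbers rather than study data.

# Hypothetical worked example of the "regain to within 5% of baseline" metric
# (numbers invented for illustration, not drawn from the study).
baseline_kg = 150.0                      # pre-surgical weight
threshold   = 0.95 * baseline_kg         # within 5% of baseline = above 142.5 kg
year5_kg    = baseline_kg * (1 - 0.22)   # average 22% loss after gastric bypass -> 117 kg
print(threshold, year5_kg)               # 142.5 117.0
# A patient counts as having regained to within 5% of baseline only if their
# later weight climbs back above 142.5 kg.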

"Earlier research has shown that bariatric surgery is the most effective weight-loss treatment for patients with severe obesity," said first author David Arterburn, MD, MPH, a senior investigator at Kaiser Permanente Washington Health Research Institute and internal medicine physician at Kaiser Permanente in Washington.

"Our new results could help ease concerns about long-term weight regain, which have contributed to a low rate of bariatric surgery -- only about 1 in 100 eligible patients choose to have these procedures each year," he added.

The study found that:

At 5 years after gastric bypass:

People had lost, on average, 22% of their initial body weight.

25% had lost 30% or more of their total body weight.

Only 4% had regained weight to within 5% of their pre-surgical weight.

At 5 years after sleeve gastrectomy:

People had lost 16% of their initial body weight.

8% had lost 30% or more of their total body weight.

About 10% had regained weight to within 5% of their pre-surgical weight.

At 10 years:

People who had gastric bypass maintained 20% weight loss compared to 5% weight loss among those with usual medical care.

Longer-term results were not available for sleeve gastrectomy because it is a newer procedure.

This weight-maintenance information is important because sleeve gastrectomy, which is simpler to perform than gastric bypass, now accounts for more than 2 in 3 bariatric surgery procedures. However, earlier research from the same team showed fewer reoperations and interventions to address problems or complications after sleeve gastrectomy than after gastric bypass, over a 5-year follow-up period.

"It's important to monitor patients closely for early signs of weight regain -- and to intervene early with a detailed nutritional and medical evaluation to look for behavioral and surgical explanations for weight regain," Dr. Arterburn said. People in the study who stopped losing weight early -- within the first year, not the second -- tended to have a greater risk of weight regain by 5 years.

This is one of a few large, long-term studies comparing the weight outcomes of bariatric procedures to nonsurgical treatment. It included more than 129,000 diverse patients at Kaiser Permanente in Washington and Northern and Southern California. More than 31,000 patients had bariatric surgery -- more than 17,000 bypass and nearly 14,000 sleeve. And nearly 88,000 control patients had similar characteristics but received usual medical care for their weight loss instead of bariatric surgery. The study demonstrates the value of real-world evidence because prior randomized trials did not find differences in weight loss between the 2 types of bariatric surgery. Differences between this and prior studies might be attributable to the characteristics of patients and surgeons involved in the studies.

Previously, the same research team showed that bariatric surgery was associated with half the risk of microvascular complications (nephropathy, neuropathy, and retinopathy) and of heart attacks and strokes compared to patients with type 2 diabetes and severe obesity undergoing usual medical care.

"Providers should engage all patients with severe obesity, especially those who also have type 2 diabetes, in a shared-decision-making conversation to discuss the benefits and risks of different bariatric procedures," Dr. Arterburn said. "And more 10-year follow-up studies of bariatric surgery, particularly sleeve gastrectomy, are needed."

Credit: 
Kaiser Permanente

Perovskite solution aging: Scientists find solution

image: The clean stabilizer triethyl borate is used to inhibit side reactions in the perovskite solution.

Image: 
WANG Xiao

Perovskite solar cells have developed quickly in the past decade. But like silicon solar cells, the efficiency of perovskite solar cells is highly dependent upon the quality of the perovskite layer, which is related to its crystallinity.

Unfortunately, the aging process of the perovskite solution used to fabricate solar cells makes the solution unstable, thus leading to poor efficiency and poor reproducibility of the devices. Reactants and preparation conditions also contribute to poor quality.

To combat these problems, a research team from the Qingdao Institute of Bioenergy and Bioprocess Technology (QIBEBT) of the Chinese Academy of Sciences (CAS) has proposed a new understanding of the aging process of perovskite solution and also found a way to avoid side reactions. The study was published in Chem on Mar. 17, with the title of "Perovskite solution aging: What happened and how to inhibit?".

Prof. PANG Shuping, corresponding author of the paper, said "an in-depth understanding of fundamental solution chemistry" had not kept up with rapid efficiency improvements in perovskite solar cells, even though such cells have been studied for 10 years.

"Normally, we need high temperature and a long time to fully dissolve the reactants, but some side reactions can happen simultaneously," said Prof. PANG. "Fortunately, we have found a way to inhibit them."

Achieving a highly stable perovskite solution is especially important in commercializing perovskite solar cells, since it will be easier to make devices with high consistency, said Prof. PANG.

WANG Xiao, an associate professor at QIBEBT and the first author of the paper, said side condensation reactions happen when methylammonium iodide and formamidinium iodide coexist in the solution. They represent the main side reactions in aging perovskite solution, although other side reactions between solute and solvent can occur at very high temperature.

FAN Yingping, a graduate student at the Qingdao University of Science and Technology (QUST) and the co-first author of the paper, studied many methods for stopping unwanted side reactions, but finally found that the stabilizer triethyl borate, which has a low boiling point, was very effective. FAN also noted that it is a "clean" stabilizer, because it can be fully removed from the film during the subsequent thermal annealing treatment.

With this new stabilizer, the reproducibility of the perovskite solar cells has improved greatly. "Now, we don't need to make fresh solutions every time before we make devices," said Prof. CUI Guanglei from QIBEBT, who noted that the finding is "very important" for the fabrication of perovskite modules.

Credit: 
Chinese Academy of Sciences Headquarters

Composing new proteins with artificial intelligence

image: Using musical scores to code the structure and folding of proteins composed of amino acids, each of which vibrates with a unique sound

Image: 
Markus J. Buehler

WASHINGTON, March 17, 2020 -- Proteins are the building blocks of life, and consequently, scientists have long studied how they can improve proteins and design completely new proteins that perform new functions and processes.

Traditionally, new proteins are created by either mimicking existing proteins or manually editing the amino acids that make up the proteins. This process, however, is time-consuming, and it is difficult to predict the impact of changing any one of the amino acid components of a given protein.

In this week's APL Bioengineering, from AIP Publishing, researchers in the United States and Taiwan explore how to create new proteins by using machine learning to translate protein structures into musical scores, presenting an unusual way to translate physics concepts across disparate domains.

Each of the 20 amino acids that make up proteins has a unique vibrational frequency. The chemical structure of entire proteins can consequently be mapped to audible representations, using known concepts from music theory such as note volume, melody, chords and rhythm. The specific sounds generated, determined by the way a protein folds, can be used to train deep learning neural networks.
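A minimal sketch of the general sonification idea follows; the residue-to-frequency mapping here is an arbitrary chromatic scale chosen for illustration, not the molecular vibrational frequencies used in the study.

# Minimal sonification sketch: assign each of the 20 amino acids an audible
# frequency and render a sequence as a waveform. The mapping below is an
# arbitrary chromatic scale, NOT the actual vibrational frequencies used
# in the paper.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
freqs = {aa: 220.0 * 2 ** (i / 12) for i, aa in enumerate(AMINO_ACIDS)}  # from A3 upward

def sonify(sequence, note_sec=0.25, rate=44100):
    """Concatenate one sine tone per residue into a single waveform."""
    t = np.linspace(0, note_sec, int(rate * note_sec), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * freqs[aa] * t) for aa in sequence])

wave = sonify("MKTAYIAKQR")   # toy peptide
print(wave.shape)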

"These networks learn to understand the complex language folded proteins speak at multiple time scales," said Markus J. Buehler, from the Massachusetts Institute of Technology. "And once the computer has been given a seed of a sequence, it can extrapolate and design entirely new proteins by improvising from this initial idea, while considering various levels of musical variations -- controlled through a temperature parameter -- during the generation."

The team compared the new proteins against a large database of all known proteins, equilibrated them with molecular dynamics, and characterized them using normal mode analysis. Through these steps, the researchers demonstrated the method could design proteins that nature had not yet invented. The new proteins appear to be stable, folded designs, and the scientists' algorithm effectively materializes music, turning sound waves into matter.

"This paves the way for making entirely new biomaterials," said Buehler. "Or perhaps you find an enzyme in nature and want to improve how it catalyzes or come up with new variations of proteins altogether."

By adjusting the temperature, the number of variations the algorithm creates can be increased. The new mutations can be measured to see which are most effective as enzymes, for example.
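The snippet below sketches generic temperature-controlled sampling from a next-token distribution, the mechanism being described; the candidate scores are invented, and the trained network itself is not reproduced here.

# Sketch of temperature-controlled sampling, the generic mechanism the authors
# describe for dialing the amount of "improvisation" up or down. The scores
# below are invented; in the real system they would come from the trained
# neural network's prediction for the next residue/note.
import numpy as np

def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
    """Lower temperature -> conservative choices; higher -> more variation."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2, -1.0]                 # hypothetical scores for 4 candidates
print(sample_with_temperature(logits, 0.5))    # nearly always picks index 0
print(sample_with_temperature(logits, 2.0))    # explores alternatives more often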

The "protein music" (https://soundcloud.com/user-275864738) the researchers uncovered could also help create new compositional techniques in classical music by illuminating the rhythms and tones of proteins, a method Buehler refers to as materiomusic.

"In the evolution of proteins over thousands of years, nature also gives us new ideas for how sounds be combined and merged," said Buehler.

Credit: 
American Institute of Physics

Emissions of several ozone-depleting chemicals are larger than expected

In 2016, scientists at MIT and elsewhere observed the first signs of healing in the Antarctic ozone layer. This environmental milestone was the result of decades of concerted effort by nearly every country in the world, which collectively signed on to the Montreal Protocol. These countries pledged to protect the ozone layer by phasing out production of ozone-depleting chlorofluorocarbons, which are also potent greenhouse gases.

While the ozone layer is on a recovery path, scientists have found unexpectedly high emissions of CFC-11 and CFC-12, raising the possibility of production of the banned chemicals that could be in violation of the landmark global treaty. Emissions of CFC-11 even showed an uptick around 2013, which has been traced mainly to a source in eastern China. New data suggest that China has now tamped down on illegal production of the chemical, but emissions of CFC-11 and CFC-12 are still larger than expected.

Now MIT researchers have found that much of the current emission of these gases likely stems from large CFC "banks" -- old equipment such as refrigerators, cooling systems, and building insulation foam that was manufactured before the global phaseout of CFCs and is still leaking the gases into the atmosphere. Based on earlier analyses, scientists concluded that CFC banks would be too small to contribute very much to ozone depletion, and so policymakers allowed the banks to remain.

It turns out there are oversized banks of both CFC-11 and CFC-12. The banks slowly leak these chemicals at concentrations that, if left unchecked, would delay the recovery of the ozone hole by six years and add the equivalent of 9 billion metric tons of carbon dioxide to the atmosphere -- an amount that is similar to the current European Union pledge under the UN Paris Agreement to reduce climate change.

"Wherever these CFC banks reside, we should consider recovering and destroying them as responsibly as we can," says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT, who is a co-author of the study. "Some banks are easier to destroy than others. For instance, before you tear a building down, you can take careful measures to recover the insulation foam and bury it in a landfill, helping the ozone layer recover faster and perhaps taking off a chunk of global warming as a gift to the planet."

The team also identified an unexpected and sizable source of another ozone-depleting chemical, CFC-113. This chemical was traditionally used as a cleaning solvent, and its production was banned, except for in one particular use, as a feedstock for the manufacturing of other chemical substances. It was thought that chemical plants would use the CFC-113 without allowing much leakage, and so the chemical's use as a feedstock was allowed to continue.

However, the researchers found that CFC-113 is being emitted into the atmosphere, at a rate of 7 billion grams per year -- nearly as large as the spike in CFC-11, which amounted to about 10 billion grams per year.

"A few years ago, the world got very upset over 10 gigagrams of CFC-11 that wasn't supposed to be there, and now we're seeing 7 gigagrams of CFC-113 that wasn't supposed to be there," says lead author of the study and MIT graduate student Megan Lickley. "The two gases are similar in terms of their ozone depletion and global warming potential. So this is a significant issue."

The study appears in Nature Communications. Co-authors with Lickley and Solomon are Sarah Fletcher, and Kane Stone of MIT, along with Guus Velders of Utrecht University, John Daniel and Stephen Montzka of the National Oceanic and Atmospheric Administration, Matthew Rigby of the University of Bristol, and Lambert Kuijpers of A/gent Ltd. Consultancy, in the Netherlands.

From top to bottom

The new results are based on an analysis the team developed that combines two common methods for estimating the size of CFC banks around the world.

The first method is a top-down approach, which looks at CFCs produced around the world, based on country-by-country reporting, and then compares these numbers to actual concentrations of the gases and how long they persist in the atmosphere. After accounting for atmospheric destruction, the difference between a chemical's production and its atmospheric concentrations gives scientists an estimate of the size of CFC banks around the world.
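The toy calculation below sketches this top-down bookkeeping under simplified assumptions (a single assumed lifetime, invented production and burden curves); the real assessments work with detailed year-by-year data and uncertainty ranges.

# Schematic top-down bank estimate (all numbers invented for illustration).
# Idea: infer emissions from the observed atmospheric burden B(t) and an
# assumed lifetime tau via E(t) ~ B(t) - B(t-1) + B(t-1)/tau, then take the
# bank as cumulative reported production minus cumulative emissions.
import numpy as np

tau = 52.0                                          # assumed CFC-11 lifetime, years
years = np.arange(1980, 2021)
production = np.where(years < 1996, 300.0, 0.0)     # reported production, Gg/yr (made up)
burden = 4000.0 + 150.0 * np.tanh((years - 1995) / 10.0)   # observed burden, Gg (made up)

emissions = np.diff(burden) + burden[:-1] / tau     # inferred emissions, Gg/yr
bank = np.cumsum(production[1:]) - np.cumsum(emissions)
print(f"implied bank in {years[-1]}: {bank[-1]:.0f} Gg")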

Based on recent international assessments that use this top-down approach, there should be no CFC banks left in the world.

"But those values are subject to large uncertainties: Small differences in production values or lifetimes or concentrations can lead to large differences in the bank size," Lickley notes.

The second method is a bottom-up approach, which uses industry-reported values of CFC production and sales in a variety of applications such as refrigeration or foams, and estimates of how quickly each equipment type is depleting over time.

The team combined the best of both methods in a Bayesian probabilistic model -- a hybrid approach that calculates the global size of CFC banks based on both atmospheric data, and country and industry-level reporting of CFC production and sales in various uses.
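As a rough illustration of the idea of combining two uncertain estimates, the sketch below uses simple Gaussian (inverse-variance) weighting with invented numbers; the study's actual Bayesian model is far richer, treating production, lifetimes, and release rates probabilistically.

# Toy version of combining a top-down and a bottom-up bank estimate under
# Gaussian assumptions (inverse-variance weighting). The numbers are invented.
top_down_mean, top_down_sd   = 500.0, 400.0    # Gg, very uncertain
bottom_up_mean, bottom_up_sd = 1800.0, 300.0   # Gg, industry-based

w1, w2 = 1 / top_down_sd**2, 1 / bottom_up_sd**2
posterior_mean = (w1 * top_down_mean + w2 * bottom_up_mean) / (w1 + w2)
posterior_sd = (w1 + w2) ** -0.5
print(f"combined estimate: {posterior_mean:.0f} +/- {posterior_sd:.0f} Gg")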

"We also allow there to be some uncertainties, because there could be reporting errors from different countries, which wouldn't be surprising at all," Solomon says. "So it's a much better quantification of the size of the bank."

Chasing a lost opportunity

The CFC banks, and the sheer quantity of old equipment storing these chemicals around the world, seem to be larger than any previous estimates. The team found the amount of CFC-11 and CFC-12 stored up in banks is about 2.1 million metric tons -- an amount that would delay ozone recovery by six years if released to the atmosphere. This CFC bank is also equivalent to about 9 billion metric tons of carbon dioxide in terms of its effect on climate change.

Interestingly, the amount of both CFC-11 and CFC-12 that is being emitted from these banks is enough to account for the recently observed emissions of both gases.

"It really looks like, other than the extra amount being produced in China that seems to have stopped now, the rest of what we're seeing is no mystery: It's just what's coming out of the banks. That's good news," Solomon says. "It means there doesn't seem to be any further cheating going on. If there is, it's very small. And we wanted to know, if you were to recover and destroy these building foams, and replace old cooling systems and such, in a more responsible way, what more could that do for climate change?"

To answer that, the team explored several theoretical policy scenarios and their potential effect on the emissions produced by CFC banks.

An "opportunity lost" scenario considers what would have happened if all banks were destroyed back in 2000 -- the year that many developed countries agreed to phase out CFC production. If this scenario had played out, the measure would have saved the equivalent of 25 billion metric tons of carbon dioxide between 2000 and 2020, and there would be no CFC emissions lingering now from these banks.

A second scenario predicts CFC emissions in the atmosphere if all banks are recovered and destroyed in 2020. This scenario would save the equivalent of 9 billion metric tons of carbon dioxide emitted to the atmosphere. If these banks were destroyed today, it would also help the ozone layer recover six years faster.

"We lost an opportunity in 2000, which is really sad," Solomon says. "So let's not miss it again."

Credit: 
Massachusetts Institute of Technology

Long-distance fiber link poised to create powerful networks of optical clocks

image: Researchers connected three laboratories in a 100-kilometer region with an optical telecommunications fiber network stable enough to connect optical atomic clocks.

Image: 
Tomoya Akatsuka, Nippon Telegraph and Telephone Corporation

WASHINGTON -- An academic-industrial team in Japan has connected three laboratories in a 100-kilometer region with an optical telecommunications fiber network stable enough to remotely interrogate optical atomic clocks. This type of fiber link is poised to expand the use of these extremely precise timekeepers by creating an infrastructure that could be used in a wide range of applications such as communication and navigation systems.

"The laser system used for optical clocks is extremely complex and thus not practical to build at multiple locations," said Tomoya Akatsuka, a member of the research team from telecommunications company Nippon Telegraph and Telephone Corporation (NTT). "With our network scheme, a shared laser would enable an optical clock to operate remote clocks with much simpler laser systems."

In The Optical Society (OSA) journal Optics Express, researchers from NTT, the University of Tokyo, RIKEN, and NTT East Corporation (NTT East), all in Japan, report the new low-noise fiber link.

"Optical clocks and optical fiber links have reached the stage where they can be put into practical use," said Akatsuka. "Our system is compatible with existing optical communication systems and will help accelerate practical applications. For example, because optical clocks are sensitive to gravitational potential, linked clocks could be used for highly sensitive detection of early signs of earthquakes."

Dealing with noise

Because of optical clocks' extremely high precision, noise is a critical issue when linking optical clocks over a long fiber link. Even small vibrations or temperature variations can introduce noise into the network that skews the laser signal enough that it no longer reflects what originally came from the optical clock.

"Although optical clock networks that simply connect distant clocks have been demonstrated in Europe, our scheme is more challenging because operating remote clocks with the delivered light requires a more stable fiber link," said Akatsuka. "In addition, the country's urban environments tend to contribute more noise to fiber networks in Japan. To cope with that noise, we used a cascaded link that divides a long fiber into shorter spans connected by ultralow-noise laser repeater stations that incorporate planar lightwave circuits (PLCs)."

Optical interferometers fabricated on a small PLC chip were key for enabling a fiber link with extremely low noise. These interferometers were used in laser repeater stations that copy the optical phase of the received light to a repeater laser that is sent to the next station with fiber noise compensation. Applying noise compensation for each short span makes the laser signal less susceptible to noise and thus more stable.
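A back-of-the-envelope sketch of why cascading helps is shown below. It assumes the commonly cited scaling in which the residual noise of a compensated span grows roughly as the cube of its length because the correction is limited by the round-trip delay; this is a generic argument, not a figure from the paper.

# Scaling argument (an assumption, not a result from the paper): if the
# uncompensated residual of an actively stabilized span grows ~ length^3,
# then splitting one long link into N compensated spans wins roughly N^2.
def residual_noise(total_km, n_spans, noise_per_km=1.0):
    span = total_km / n_spans
    return n_spans * noise_per_km * span**3   # arbitrary units

single = residual_noise(240, 1)
cascaded = residual_noise(240, 3)
print(f"improvement from cascading into 3 spans: {single / cascaded:.0f}x")   # ~9x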

"Optical interferometers fabricated on a PLC chip have an unprecedented stability and provide a compact, robust and ultralow-noise optical system," said Akatsuka. "This is very advantageous when constructing cascaded fiber links in noisy environments such as those found in Japan."

Connecting the laboratories

To demonstrate the system, the researchers sent laser light at a wavelength of 1397 nanometers through an optical fiber from RIKEN to the University of Tokyo and NTT. Using another fiber link, they measured a beat signal between the shared lasers at the University of Tokyo and NTT to evaluate the link stability for a 240-kilometer-long fiber loop. As expected, the results showed that the cascaded link was better than a non-cascaded link.

The laser's 1397-nanometer wavelength is twice that of the laser used to create the most stable type of optical clock known as a strontium optical lattice clock. This means that the fiber network could be used to operate many distant strontium optical lattice clocks via a shared laser.

The researchers are now preparing optical lattice clocks to demonstrate a clock network using this fiber link and are working to make electrical components of the system more practical.

Credit: 
Optica

Finding the Achilles' heel of cancer cells

image: In both p53-deficient and p53-positive cancer cells (HeLa [Human Cervix Epithelioid Carcinoma] and U2OS [Human Osteosarcoma], respectively), Cdc7 depletion results in cell death, whereas the same treatment does not affect the growth of non-cancer cells (NHDF; Normal Human Dermal Fibroblast)

Image: 
Tokyo Metropolitan Institute of Medical Science

The key to effective cures for cancers is to find weak points of cancer cells that are not found in non-cancer cells. Researchers at the Tokyo Metropolitan Institute of Medical Science found that cancerous and non-cancerous cells depend on different factors for survival when their DNA replication is blocked. Drugs that inhibit the survival factor required by cancer cells could therefore selectively make them more vulnerable to replication inhibition.

Copying the 3 billion base-pairs of the human genome (DNA replication) takes 6-8 hrs. During this time, cells can encounter many problems that interfere with the copying process. DNA consists of long chains of nucleotide bases, and obstacles such as DNA interacting compounds, damaged bases, cross-linked DNA strands, reductions in nucleotide precursors, blockage by DNA binding proteins, unusual secondary structures of template DNAs, and collision with transcription, can all contribute to blocking replication. It is essential for growing cells to overcome these problems to ensure that the entire genome is copied accurately. If cells cannot cope with these crises, the genome is likely to undergo changes that could cause cancer cells to develop.

Cells have evolved elaborate mechanisms to protect against damage from faulty DNA replication. When DNA replication machinery encounters an obstacle preventing replication, it activates a safety mechanism known as replication checkpoint. Replication checkpoint stops DNA replication and activates repair pathways. A critical protein in this process is called Claspin. When replication is blocked, a phosphate is covalently attached to Claspin, in a process known as phosphorylation. Phosphorylation of Claspin is a critical first step in activating replication checkpoint.

Recently, Chi-Chun Yang and his colleagues at the Tokyo Metropolitan Institute of Medical Science determined how Claspin is phosphorylated. Their findings were reported in the December 2019 issue of eLife. Using genetically modified animals and cells as well as biochemical analyses with purified proteins, they found that when DNA replication is blocked in cancer cells, a protein called Cdc7 predominantly phosphorylates Claspin to activate the replication checkpoint. In contrast, in non-cancer cells, a different protein, CK1γ1, phosphorylates Claspin when DNA replication is blocked. The reason for this difference is that cancer cells have high amounts of Cdc7, while non-cancerous cells have low amounts of Cdc7 and relatively high amounts of CK1γ1. The results illustrate how cancer and non-cancer cells use different strategies to deal with the same crisis.

Cancer cells are generally more sensitive to replication block than non-cancer cells, since cellular protection mechanisms are often compromised in cancers. Understanding how cells respond to cellular crises such as DNA replication block is crucial to developing new strategies for cancer treatments. Hisao Masai, Ph.D., the senior author of the eLife article, says, "We can take advantage of the different mechanisms utilized by cancer and non-cancer cells to devise a way to specifically target cancer cells. Our findings suggest that inhibiting Cdc7 will efficiently inactivate the safeguard system against replication block selectively in cancer cells, leading to their clearance by cell death. Indeed, we and others have developed low molecular-weight compounds that specifically inhibit Cdc7 as candidate anti-cancer agents."

Credit: 
Tokyo Metropolitan Institute of Medical Science

Novel approach to enhance performance of graphitic carbon nitride

image: A schematic illustration of g-C3N4 modified by nitrogen vacancies, and the reaction mechanism. The nitrogen vacancies not only increased the specific surface area and exposed more active sites, but also inhibited the recombination of photogenerated carriers on the surface of the photocatalysts, resulting in enhanced photocatalytic and electrocatalytic performance.

Image: 
Kai Yang

In a report published in NANO, scientists from the Jiangxi University of Science and Technology, Guangdong University of Petrochemical Technology, Gannan Medical University and Nanchang Hangkong University in China underline the importance of defect engineering for promoting catalytic performance. They provide a simple and efficient way to modify and optimize the metal-free semiconductor photocatalyst graphitic carbon nitride (g-C3N4), with the aim of addressing the dual problems of environmental pollution and dwindling fossil resources.

With the rapid growth of industrialization and population, environmental pollution and shortage of fossil resources have become two major challenges for sustainable social development in the 21st century. Hence, the development of green treatment technology is an imperative.

Semiconductor photocatalysis has become one of the most promising strategies because it uses solar energy and is green, nontoxic and efficient. Recently, graphitic carbon nitride (g-C3N4), a new nonmetallic polymer semiconductor photocatalyst, has attracted wide attention in the photocatalytic field due to its good stability and optical properties. However, the photocatalytic activity of bare g-C3N4 is unsatisfactory because of its small surface area and the rapid recombination of photogenerated carriers under visible light irradiation.

In this work, urea was used to introduce nitrogen vacancies into bare g-C3N4 prepared by the calcination of melamine. This led to a great improvement in photocatalytic performance for the degradation of organic dyes in water, such as rhodamine B (RhB), acid orange II, methyl orange (MO) and methyl blue (MB), under visible light irradiation (λ > 420 nm). Enhanced electrocatalytic performance for hydrogen evolution was also achieved, owing to the broader light response and the efficient generation and migration of electron/hole charge carriers.

It is hoped that this research will inform the innovative design, synthesis and fabrication of modified g-C3N4 and other N-based photocatalysts. There is potential for applying this catalyst to the treatment of environmental pollutants and the production of new energy.

Credit: 
World Scientific

A protein that controls inflammation

A study by the research team of Prof. Geert van Loo (VIB-UGent Center for Inflammation Research) has unraveled a critical molecular mechanism behind autoimmune and inflammatory diseases such as rheumatoid arthritis, Crohn's disease, and psoriasis. They discovered how the protein A20 prevents inflammation and autoimmunity, not through its enzymatic activities as has been proposed, but through a non-enzymatic mechanism. These findings open up new possibilities for the treatment of inflammatory diseases. The results of the study are published in the leading journal Nature Immunology.

Inflammatory autoimmune diseases

Inflammatory autoimmune diseases such as rheumatoid arthritis, inflammatory bowel diseases, psoriasis, and multiple sclerosis are amongst the most common diseases. Their prevalence has been rapidly expanding over the last few decades, which means that currently millions of patients are under constant treatment with anti-inflammatory drugs to keep their condition under control.

As an example, about 1% of the Western population is affected by rheumatoid arthritis (RA), a chronic and progressive inflammatory disease of the joints that severely affects the patients' quality of life. The molecular mechanisms that cause diseases such as RA need to be better understood in order to be able to develop new therapies to treat patients suffering from inflammatory pathologies.

An anti-inflammatory domain

Arne Martens in the research group of Prof. Geert van Loo investigated the molecular signaling mechanism by which the protein A20 controls inflammatory reactions. The current study builds further upon earlier work at the VIB-UGent Center for Inflammation Research which demonstrated that A20 acts as a strong anti-inflammatory mediator in many models of inflammatory disease.

In this study, the researchers show that the anti-inflammatory activity of A20 depends on the presence of a specific domain within the protein that is able to bind to ubiquitin, an important modification on other proteins. This allows A20 to interfere with signaling pathways within the cell and as such prevent the activation of a cellular response that would normally result in inflammation and disease development.

New therapies for the treatment of inflammation

Prof. van Loo explains the importance of their findings: "Our results are important from a scientific point of view since they help us understand how A20 prevents inflammatory reactions in the body's cells. However, this knowledge also has therapeutic implications, and suggests that drugs based on that specific A20 domain could have strong anti-inflammatory activities. Therefore, such drugs could be effective in the treatment of RA and various other inflammatory and autoimmune diseases."

Credit: 
VIB (the Flanders Institute for Biotechnology)

Dams in the upper Mekong River modify nutrient bioavailability downstream

image: Dams stimulate phytoplankton production and modify nutrient export downstream in the Lancang-Mekong River

Image: 
©Science China Press

The number of hydropower dams has increased dramatically in the last 100 years for energy supply, climate change mitigation, and economic development. However, recent studies have overwhelmingly stressed the negative consequences of dam construction. Notably, it is commonly assumed that reservoirs retain nutrients, and this nutrient reduction significantly reduces primary productivity, fishery catches and food security downstream. This perception largely hampers electricity supply and even sustainable socio-economic development in many developing regions, such as the Congo and lower Mekong basins.

However, solid scientific support for the widespread belief that dams retain nutrients is usually lacking, because monitoring programs gathering data to establish how nutrient fluxes and phytoplankton production have changed after dam construction are rare. A new article by Qiuwen Chen and his research group at Nanjing Hydraulic Research Institute, China, together with Prof. Jef Huisman from the University of Amsterdam and Prof. Stephen C Maberly from UK Centre for Ecology & Hydrology now provides extensive monitoring data for the upper Mekong River. Their data reveal some surprising new insights.

Contrary to expectation, their study shows that a cascade of reservoirs along the upper Mekong River increased downstream bioavailability of nitrogen and phosphorus. The core mechanism is the synergistic effect of increased hydraulic residence time and the development of hypoxic conditions due to stratification and organic matter accumulation. The lack of oxygen results in release of nutrients from the sediment and subsequent accumulation of ammonium and phosphorus in the deeper water layers of the reservoir, which enhances the concentration of dissolved nutrients released downstream from the base of the reservoirs.

Moreover, the longer residence time in the reservoirs strongly increased phytoplankton production, with a shift in species composition from diatoms upstream to green algae in the downstream reservoirs.

Upstream dams are regularly blamed for nutrient retention and, consequently, for the collapse of primary productivity, fisheries and even subsistence livelihoods in the lower Mekong River. This work implies that the fishery decline in the lower Mekong River might instead be caused by other factors, such as over-fishing, habitat modification, disruption of fish migration by dam construction or water quality deterioration from local sources, rather than by a reduction in nutrient availability or primary productivity induced by the cascade of dams upstream.

This novel perspective on the globally important issue emphasizes the need for dedicated monitoring of the environmental impacts of hydropower dams on nutrient cycling and primary production. The findings are of great significance not only for science, but also for sustainable social-economic development along the Mekong River and other transboundary rivers worldwide.

Credit: 
Science China Press

Passport tagging for express cargo transportation in cells

image: (A) The passport sequence built in coagulation factor VIII. (B) Secretion enhancement of biopharmaceuticals by tagging them with the passport sequence

Image: 
© Koichi Kato

Many proteins produced by cells are decorated with sugars and delivered out of the cells. In this secretory pathway, MCFD2 sorts and transports blood coagulation factors V and VIII as special cargos. The impairment of MCFD2 function results in a deficiency of these coagulation factors.

Collaborative groups, including researchers at the Graduate School of Pharmaceutical Sciences of Nagoya City University, the Exploratory Research Center on Life and Living Systems (ExCELLS), the Institute for Molecular Science (IMS), and the National Institute for Basic Biology (NIBB) of the National Institutes of Natural Sciences, elucidated the molecular mechanisms behind MCFD2-mediated cargo transportation. The groups found that a 10-amino acid sequence is built into these coagulation factors as a "passport", which is recognized by MCFD2, and that elimination of this sequence attenuates their cellular secretion. Moreover, they discovered that the intracellular transportation and consequent secretion of recombinant erythropoietin, a glycoprotein used to treat anemia, are significantly enhanced simply by tagging it with the factor VIII-derived passport sequence.

Presently, most biopharmaceuticals are produced using mammalian cell cultures. The collaborative groups' findings provide the molecular basis for the intracellular trafficking of blood coagulation factors and an explanation for the genetic deficiency, and they offer this "passport sequence" as a potentially useful tool for improving production yields of recombinant glycoproteins of biopharmaceutical interest.

Credit: 
Nagoya City University

Seeing with electrons: Scientists pave the way to more affordable and accessible cryo-EM

image: Researchers from the Quantum Wave Microscopy Unit at OIST, Dr. Hidehito Adaniya (left) and Dr. Martin Cheung (right) showcase the new cryo-electron microscope.

Image: 
OIST

Visualizing the structure of viruses, proteins and other small biomolecules can help scientists gain deeper insights into how these molecules function, potentially leading to new treatments for disease. In recent years, a powerful technology called cryogenic electron microscopy (cryo-EM), where flash-frozen samples are embedded in glass-like ice and probed by an electron beam, has revolutionized biomolecule imaging. However, the microscopes that the technique relies upon are prohibitively expensive and complicated to use, making them inaccessible to many researchers.

Now, scientists from the Okinawa Institute of Science and Technology Graduate University (OIST) have developed a cheaper and more user-friendly cryo-electron microscope, which could ultimately put cryo-EM in reach of thousands of labs.

In a six-year construction process, the team built the microscope by adding a new imaging function to a scanning electron microscope. They used the hybrid microscope to image three different biomolecules: two distinctly shaped viruses and an earthworm protein.

"Building this microscope was a long and challenging process, so we are thrilled about its results so far," said Dr. Hidehito Adaniya, a researcher in the Quantum Wave Microscopy (QWM) Unit and co-first author of the study, published in Ultramicroscopy. "As well as being cheaper and simpler to use, our microscope utilizes low-energy electrons, which could potentially improve the contrast of the images."

Currently, cryo-EM works by firing high-energy electrons at a biological specimen. The electrons interact with atoms in the biomolecule and scatter, changing their direction. The scattered electrons then hit detectors, and the specific scatter pattern is used to build up an image of the sample.

But at high energies, only a relatively small number of these scattering events occur because the electrons interact very weakly with the atoms in the sample as they speed past. "Biomolecules are predominantly composed of elements with a low atomic mass, such as carbon, nitrogen, hydrogen and oxygen," explained co-author and researcher, Dr. Martin Cheung. "These lighter elements are practically invisible to high-speed electrons."

In contrast, low-energy electrons travel slower and interact more strongly with the lighter elements, creating more frequent scattering events.

This strong interaction between low-energy electrons and lighter elements is challenging to harness, however, because the layer of ice surrounding the specimen also scatters electrons, creating background noise that masks the biomolecules. To overcome this issue, the scientists adapted the microscope so it could switch to a different imaging technique: cryo-electron holography.

Forming the hologram

In holographic mode, an electron gun fires a beam of low-energy electrons towards the specimen so that part of the electron beam passes through the ice and specimen, forming an object wave, while the other part of the electron beam only passes through the ice, forming a reference wave. The two parts of the electron beam then interact with each other, like colliding ripples in a pond, creating a distinct pattern of interference - the hologram.

Based on the hologram's interference pattern, the detectors can distinguish scattering by the specimen from scattering by the ice film. Scientists can also compare the two parts of the beam to gain extra information from the electrons that is difficult to detect using conventional cryo-EM.

"Electron holography provides us with two different kinds of information - amplitude and phase - whereas conventional cryo-electron microscopy techniques can only detect phase," said Dr. Adaniya. This added information could allow scientists to gain more knowledge about the structure of the specimen, he explained.

A breakthrough in thin ice

In addition to building the hybrid microscope, the scientists also had to optimize the sample preparation. Since low-energy electrons are more prone to being scattered by the ice than high-energy electrons, the ice film enveloping the sample had to be as thin as possible to maximize the signal. The scientists used flakes of hydrated graphene oxide to hold the biomolecules in place, allowing thinner films of ice to form.

The scientists also had to take special steps to prevent the formation of crystalline ice, which is "bad news for cryo-EM imaging", Cheung said.

With the current setup and optimized samples, the microscope produced images with a resolution of up to a few nanometers, which the researchers acknowledge is far lower than the near-atomic resolution achieved by conventional cryo-EM.

But even with the current resolution, the microscope still fills an important niche as a pre-screening microscope. "Because the low-energy electrons interact so strongly with the ice, our cheaper and user-friendly microscope can help researchers gauge their ice quality before spending valuable time and money using conventional cryo-EM microscopes," said Dr. Adaniya.

The whole process is quick and simple, the researchers say. The SEM/STEM mode helps scientists locate the best spot for imaging, followed by a seamless transition into the holographic mode. What's more, because this mode-switching technology could be implemented in other commercial scanning electron microscopes, the imaging method could be widely adopted.

In the future, the team hopes to improve image resolution further, by changing the electron gun to one that creates a higher quality electron beam. "That will be the next step forward," they said.

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Semiconductors can behave like metals and even like superconductors

image: Left - Shape of nanostructures made of lead sulphide, computer-reconstructed from a series of transmission electron microscopy images. The straight stripe on the left behaves like a semiconductor and the zigzag nanowire on the right behaves like a metal.

Right - Electrical device consisting of two gold electrodes contacting a nanowire (in red) on a silicon chip (in blue).

Image: 
Hungria/Universidad de Cádiz, Ramin/DESY, Klinke/University of Rostock and Swansea University.

The crystal structure at the surface of semiconductor materials can make them behave like metals and even like superconductors, a joint Swansea/Rostock research team has shown. The discovery potentially opens the door to advances like more energy-efficient electronic devices.

Semiconductors are the active parts of transistors, integrated circuits, sensors, and LEDs. These materials, mostly based on silicon, are at the heart of today's electronics industry.

We use their products almost continuously: in modern TV sets, in computers, as illumination elements, and of course in mobile phones.

Metals, on the other hand, wire the active electronic components and are the framework for the devices.

The research team, led by Professor Christian Klinke of Swansea University's chemistry department and the University of Rostock in Germany, analysed the crystals at the surface of semiconductor materials.

Applying a method called colloidal synthesis to lead sulphide nanowires, the team showed that the lead and sulphur atoms making up the crystals could be arranged in different ways. Crucially, they saw that this affected the material's properties.

In most configurations the two types of atoms are mixed and the whole structure shows semiconducting behaviour as expected.

However, the team found that one particular "cut" through the crystal, with the so-called {111} facets on the surface, which contain only lead atoms, shows metallic character.

This means that the nanowires carry much higher currents, their transistor behaviour is suppressed, they do not respond to illumination, as semiconductors would, and they show inverse temperature dependency, typical for metals.
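For context, the contrast can be illustrated with textbook resistance-versus-temperature models, shown in the sketch below; these are generic formulas with assumed parameters, not the team's measured nanowire data.

# Textbook contrast between metallic and semiconducting resistance vs.
# temperature (illustrative models only, not the measured nanowire data).
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K

def r_metal(T, r0=1.0, alpha=4e-3, T0=300.0):
    return r0 * (1 + alpha * (T - T0))           # resistance rises with temperature

def r_semiconductor(T, r0=1.0, e_gap=0.4):
    return r0 * np.exp(e_gap / (2 * k_B * T))    # resistance falls as carriers are activated

for T in (200.0, 300.0, 400.0):
    print(T, round(r_metal(T), 2), round(r_semiconductor(T), 2))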

Dr. Mehdi Ramin, one of the researchers from the Swansea/Rostock team, said:

"After we discovered that we can synthesize lead sulphide nanowires with different facets, which makes them look like straight or zigzag wires, we thought that this must have interesting consequences for their electronic properties.

But these two behaviours were quite a surprise to us. Thus, we started to investigate the consequences of the shape in more detail."

The team then made a second discovery: at low temperatures the skin of the nanostructures even behaves like a superconductor. This means that the electrons are transported through the structures with significantly lower resistance.

Professor Christian Klinke of Swansea University and Rostock University, who led the research, said:

"This behaviour is astonishing and certainly needs to be further studied in much more detail.

But it already gives new exciting insights into how the same material can possess different fundamental physical properties depending on its structure and what might be possible in the future.

One potential application is lossless energy transport, which means that no energy is wasted.

Through further optimization and transfer of the principle to other materials, significant advances can be made, which might lead to new efficient electronic devices.

The results presented in the article are merely a first step in what will surely be a long and fruitful journey towards new thrilling chemistry and physics of materials."

Credit: 
Swansea University

Scientists can see the bias in your brain

image: The strength of alpha brain waves (left) reveals biased decisions (right).

Image: 
Grabot and Kayser, JNeurosci 2020

The strength of alpha brain waves reveals if you are about to make a biased decision, according to research recently published in JNeurosci.

Everyone has bias, and neuroscientists can see what happens inside your brain as you succumb to it. The clue comes from alpha brain waves -- a pattern of activity when the neurons in the front of your brain fire in rhythm together. Alpha brain waves pop up when people make decisions, but it remains unclear what their role is.

Grabot and Kayser used electroencephalography to monitor the brain activity of adults while they made a decision. The participants saw a picture and heard a sound milliseconds apart and then decided which one came first. Prior to the experiment, the researchers determined if the participants possessed a bias for choosing the picture or sound. Before the first stimulus appeared, the strength of the alpha waves revealed how the participants would decide. Weaker alpha waves meant resisting the bias; stronger alpha waves indicated succumbing to the bias.
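For readers unfamiliar with the measure, the sketch below shows a generic way alpha-band (roughly 8-12 Hz) power can be extracted from a pre-stimulus EEG window; the signal is synthetic and this is not the authors' analysis pipeline.

# Minimal sketch of measuring alpha-band (~8-12 Hz) power in a pre-stimulus
# EEG window. The signal below is synthetic; this only illustrates the
# generic notion of "alpha wave strength".
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)                 # 1 s pre-stimulus window
eeg = 10 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(0).normal(0, 5, t.size)

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)
alpha_power = np.mean(alpha**2)
print(f"alpha power: {alpha_power:.1f} (stronger values predicted bias-consistent choices)")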

Credit: 
Society for Neuroscience

Study shows widely used machine learning methods don't work as claimed

Models and algorithms for analyzing complex networks are widely used in research and affect society at large through their applications in online social networks, search engines, and recommender systems. According to a new study, however, one widely used algorithmic approach for modeling these networks is fundamentally flawed, failing to capture important properties of real-world complex networks.

"It's not that these techniques are giving you absolute garbage. They probably have some information in them, but not as much information as many people believe," said C. "Sesh" Seshadhri, associate professor of computer science and engineering in the Baskin School of Engineering at UC Santa Cruz.

Seshadhri is first author of a paper on the new findings published March 2 in Proceedings of the National Academy of Sciences. The study evaluated techniques known as "low-dimensional embeddings," which are commonly used as input to machine learning models. This is an active area of research, with new embedding methods being developed at a rapid pace. But Seshadhri and his coauthors say all these methods share the same shortcomings.

To explain why, Seshadhri used the example of a social network, a familiar type of complex network. Many companies apply machine learning to social network data to generate predictions about people's behavior, recommendations for users, and so on. Embedding techniques essentially convert a person's position in a social network into a set of coordinates for a point in a geometric space, yielding a list of numbers for each person that can be plugged into an algorithm.

"That's important because something abstract like a persons 'position in a social network' can be converted to a concrete list of numbers. Another important thing is that you want to convert this into a low-dimensional space, so that the list of numbers representing each person is relatively small," Seshadhri explained.

Once this conversion has been done, the system ignores the actual social network and makes predictions based on the relationships between points in space. For example, if a lot of people close to you in that space are buying a particular product, the system might predict that you are likely to buy the same product.
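The sketch below illustrates this general workflow on a small example graph, using a spectral embedding as a stand-in for the many embedding methods the paper analyzes; the graph and the two-dimensional choice are for illustration only.

# Toy illustration of the workflow being described: embed each person in a
# small social graph as a low-dimensional point, then make recommendations
# from geometric proximity. Spectral embedding stands in for the many
# embedding methods the paper analyzes.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                       # stand-in "social network"
A = nx.to_numpy_array(G)
eigvals, eigvecs = np.linalg.eigh(A)
coords = eigvecs[:, -2:]                         # 2-D embedding from the top eigenvectors

def nearest_neighbors(node, k=3):
    d = np.linalg.norm(coords - coords[node], axis=1)
    return np.argsort(d)[1:k + 1]                # skip the node itself

print(nearest_neighbors(0))                      # "people like you" for person 0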

Seshadhri and his coauthors demonstrated mathematically that significant structural aspects of complex networks are lost in this embedding process. They also confirmed this result empirically by testing various embedding techniques on different kinds of complex networks.

"We're not saying that certain specific methods fail. We're saying that any embedding method that gives you a small list of numbers is fundamentally going to fail, because a low-dimensional geometry is just not expressive enough for social networks and other complex networks," Seshadhri said.

A crucial feature of real-world social networks is the density of triangles, or connections between three people.

"Where you have lots of triangles, it means there is a lot of community structure in that part of a social network," Seshadhri said. "Moreover, these triangles are even more significant when you're looking at people who have limited social networks. In a typical social network, some people have tons of connections, but most people don't have a lot of connections."

In their analysis of embedding techniques, the researchers observed that a lot of the social triangles representing community structure are lost in the embedding process. "All of this information seems to disappear, so it's almost like the very thing you wanted to find has been lost when you construct these geometric representations," Seshadhri said.
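One rough way to see the effect is to compare triangle counts in a graph with those in a graph rebuilt from a low-rank approximation of its adjacency matrix, as in the hypothetical check below; this illustrates the diagnostic, not the paper's exact procedure.

# Rough sketch of the diagnostic idea: compare triangle counts in the original
# graph with those in a graph re-generated from a low-dimensional (low-rank)
# approximation of its adjacency matrix.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

# Rank-2 approximation of the adjacency matrix (a stand-in for an embedding).
eigvals, eigvecs = np.linalg.eigh(A)
A2 = eigvecs[:, -2:] @ np.diag(eigvals[-2:]) @ eigvecs[:, -2:].T
np.fill_diagonal(A2, 0)                              # drop self-loops
G2 = nx.from_numpy_array((A2 > 0.5).astype(int))     # threshold back to edges

def triangle_count(g):
    return sum(nx.triangles(g).values()) // 3

print("triangles, original vs. rank-2 reconstruction:", triangle_count(G), triangle_count(G2))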

Low-dimensional embeddings are by no means the only methods being used to generate predictions and recommendations. They are typically just one of many inputs into a very large and complex machine learning model.

"This model is a huge black box, and a lot of the positive results being reported say that if you include these low-dimensional embeddings, your performance goes up, maybe you get a slight bump. But if you used it by itself, it seems you would be missing a lot," Seshadhri said.

He also noted that new embedding methods are mostly being compared to other embedding methods. Recent empirical work by other researchers, however, shows that different techniques can give better results for specific tasks.

"Let's say you want to predict who's a Republican and who's a Democrat. There are techniques developed specifically for that task which work better than embeddings," he said. "The claim is that these embedding techniques work for many different tasks, and that's why a lot of people have adopted them. It's also very easy to plug them into an existing machine learning system. But for any particular task, it turns out there is always something better you can do."

Given the growing influence of machine learning in our society, Seshadhri said it is important to investigate whether the underlying assumptions behind the models are valid.

"We have all these complicated machines doing things that affect our lives significantly. Our message is just that we need to be more careful about evaluating these techniques," he said. "Especially in this day and age when machine learning is getting more and more complicated, it's important to have some understanding of what can and cannot be done."

Credit: 
University of California - Santa Cruz