Tech

A method for calculating optimal parameters of liquid crystal displays developed at RUDN University

image: A professor from RUDN University together with his colleagues from Saratov Chernyshevsky State University and D. Mendeleev University of Chemical Technology of Russia developed a method for calculating the parameters of diffraction optical elements used in LCDs. In particular, the new technology can be used to expand the angle of view while preserving high resolution and color rendition.

Image: 
RUDN University

A professor from RUDN University together with his colleagues from Saratov Chernyshevsky State University and D. Mendeleev University of Chemical Technology of Russia developed a method for calculating the parameters of diffraction optical elements used in LCDs. In particular, the new technology can be used to expand the angle of view while preserving high resolution and color rendition. The results of the study were published in the Journal of the Society for Information Display.

Each pixel on a display corresponds to a group of three light sources: red, green, and blue. When the brightness of all three diodes is the same, white light is produced; by changing the share of each respective light, one can achieve different shades of color. Modern displays use liquid crystals to adjust the brightness of the light sources. When energized, the crystals turn, and their transparency changes, muting some of the colors. This way, the required shade is produced. However, if one looks at a display from an angle, the image may become darker and color rendition may be distorted. A professor from RUDN University and his colleagues developed a method of calculating the parameters of LCD substrates to achieve intended qualities, for example, a wider angle of view.
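As a toy illustration of that additive mixing (a sketch only, not tied to any particular display hardware), scaling the shares of the three channels yields the different shades described above:

```python
# Toy additive color mixing: a pixel's color is a weighted mix of its
# red, green and blue light sources. The shares below are illustrative.
def mix(red, green, blue):
    """Brightness shares in [0, 1] -> an 8-bit RGB color."""
    return tuple(round(255 * share) for share in (red, green, blue))

print(mix(1.0, 1.0, 1.0))  # equal brightness of all three -> white (255, 255, 255)
print(mix(1.0, 0.5, 0.0))  # blue muted, green halved -> orange (255, 128, 0)
print(mix(0.2, 0.2, 0.2))  # equal shares, all dimmed -> dark gray (51, 51, 51)
```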

To reduce the impact of the viewing angle on image quality, the light in displays undergoes additional processing: only the beams oriented in a certain plane are selected from a disorderly flux of light. To do so, diffraction optical elements (DOEs) are used. DOEs are substrates with a surface microrelief, and it is this relief that determines their optical properties, in particular the intensity of light that passes through them. The researchers compared three surface reliefs and developed an algorithm for calculating their optical parameters. Previously, similar results had been obtained only for crystals with positive optical anisotropy (a parameter that describes how the propagation of light inside a crystal depends on direction). In this work, the professor from RUDN University studied DOEs made of so-called discotic liquid crystals (DLCs), which have negative optical anisotropy.

"Our goal was to calculate diffraction in DOE with negative optical anisotropy. Such elements can be based on discotic liquid crystals. DOE of this type may be used to expand the angles of view of liquid crystal displays," said Prof. Viktor Belyaev, a Ph.D. in Technical Sciences from the Department of Mechanics and Mechatronics of RUDN University.

The team used DOEs with undulated and rectangular periodic profiles. In all elements, the period of the waves or rectangular ridges was 2.52 µm, and the width of the cuts in the rectangular DOEs was either 0.63 or 1.25 µm. The team chose these values because they are multiples of the wavelength of the incident light (0.63 µm for red light). The height of the relief varied from 0.063 to 1.89 µm. Using these parameters, the team calculated how the intensity of light depended on the ratio of the period to the height of the relief in all three types of DOE. The new algorithm can be used to calculate the parameters of DOEs based on specific display requirements.
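The article does not reproduce the algorithm itself, but the flavor of such a calculation can be sketched for the simplest case, a binary (rectangular) phase grating: the relief height and the crystal's optical anisotropy set a phase depth, and the Fourier coefficients of the resulting transmission profile give the fraction of light sent into each diffraction order. The duty cycle and anisotropy magnitude below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def binary_grating_efficiencies(phase_depth, duty_cycle, orders=range(-2, 3)):
    """Fraction of incident light diffracted into each order by a binary
    phase grating that transmits exp(i*phase_depth) over a fraction
    `duty_cycle` of every period and 1 elsewhere."""
    t = np.exp(1j * phase_depth)
    efficiencies = {}
    for m in orders:
        if m == 0:
            c = duty_cycle * t + (1.0 - duty_cycle)
        else:
            # Fourier coefficient of the two-level transmission profile.
            c = (t - 1.0) * (1.0 - np.exp(-2j * np.pi * m * duty_cycle)) / (2j * np.pi * m)
        efficiencies[m] = abs(c) ** 2
    return efficiencies

# Illustrative numbers: a 0.63 um cut in a 2.52 um period (duty cycle 0.25)
# and a relief height of 0.63 um, with an assumed anisotropy |dn| = 0.1,
# giving a phase depth of 2*pi*|dn|*h/lambda for red light (0.63 um).
wavelength_um, height_um, dn = 0.63, 0.63, 0.1
phase = 2 * np.pi * abs(dn) * height_um / wavelength_um
for order, eff in binary_grating_efficiencies(phase, duty_cycle=0.25).items():
    print(f"order {order:+d}: {eff:.3f}")
```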

"We have calculated the effect of the periodic profile of microrelief on diffraction parameters. The method suggested by our team helps calculate relief parameters to achieve the required properties of a spatially inhomogeneous anisotropic structure," added Prof. Viktor Belyaev from RUDN University.

Credit: 
RUDN University

No more needles for diagnostic tests?

image: Engineers at the McKelvey School of Engineering at Washington University in St. Louis have developed a microneedle patch that can be applied to the skin, capture a biomarker of interest from interstitial fluid and, thanks to its unprecedented sensitivity, allow clinicians to detect its presence.

Image: 
Sisi Cao

Blood draws are no fun.

They hurt. Veins can burst, or even roll -- like they're trying to avoid the needle, too.

Oftentimes, doctors use blood samples to check for biomarkers of disease: antibodies that signal a viral or bacterial infection, such as SARS-CoV-2, the virus responsible for COVID-19; or cytokines indicative of inflammation seen in conditions such as rheumatoid arthritis and sepsis.

These biomarkers aren't just in blood, though. They can also be found in the dense liquid medium that surrounds our cells, but in such low abundance that they are difficult to detect.

Until now.

Engineers at the McKelvey School of Engineering at Washington University in St. Louis have developed a microneedle patch that can be applied to the skin, capture a biomarker of interest and, thanks to its unprecedented sensitivity, allow clinicians to detect its presence.

The technology is low cost, easy for a clinician or patients themselves to use, and could eliminate the need for a trip to the hospital just for a blood draw.

The research, from the lab of Srikanth Singamaneni, the Lilyan & E. Lisle Hughes Professor in the Department of Mechanical Engineering & Materials Science, was published online Jan. 22 in the journal Nature Biomedical Engineering.

In addition to the low cost and ease of use, these microneedle patches have another advantage over blood draws, perhaps the most important feature for some: "They are entirely pain-free," Singamaneni said.

Finding a biomarker using these microneedle patches is similar to blood testing. But instead of using a solution to find and quantify the biomarker in blood, the microneedles directly capture it from the liquid that surrounds our cells in skin, which is called dermal interstitial fluid (ISF). Once the biomarkers have been captured, they're detected in the same way -- using fluorescence to indicate their presence and quantity.

ISF is a rich source of biomolecules, densely packed with everything from neurotransmitters to cellular waste. However, analyzing biomarkers in ISF conventionally requires extracting the fluid from the skin. This extraction is difficult, and the amount of ISF that can be obtained is usually not sufficient for analysis. That has been a major hurdle for developing microneedle-based biosensing technology.

Another method involves direct capture of the biomarker in ISF without having to extract ISF. Like showing up to a packed concert and trying to make your way up front, the biomarker has to maneuver through a crowded, dynamic soup of ISF before reaching the microneedle in the skin tissue. Under such conditions, being able to capture enough of the biomarker to see using the traditional assay isn't easy.

But the team has a secret weapon of sorts: "plasmonic-fluors," an ultrabright fluorescent nanolabel. Compared with traditional fluorescent labels, when an assay was run on a microneedle patch using plasmonic-fluors, the signal from target protein biomarkers shone about 1,400 times as bright, becoming detectable even at low concentrations.

"Previously, concentrations of a biomarker had to be on the order of a few micrograms per milliliter of fluid," Zheyu (Ryan) Wang, a graduate student in the Singamaneni lab and one of the lead authors of the paper, said. That's far beyond the real-world physiological range. But using plasmonic-fluor, the research team was able to detect biomarkers on the order of picograms per milliliter.

"That's orders of magnitude more sensitive," Ryan said.

These patches have a host of qualities that can make a real impact on medicine, patient care and research.

They would allow providers to monitor biomarkers over time, particularly important when it comes to understanding how immunity plays out in new diseases.

For example, researchers working on COVID-19 vaccines need to know if people are producing the right antibodies and for how long. "Let's put a patch on," Singamaneni said, "and let's see whether the person has antibodies against COVID-19 and at what level."

Or, in an emergency, "When someone complains of chest pain and they are being taken to the hospital in an ambulance, we're hoping right then and there, the patch can be applied," Jingyi Luan, a student who recently graduated from the Singamaneni lab and one of the lead authors of the paper, said. Instead of having to get to the hospital and have blood drawn, EMTs could use a microneedle patch to test for troponin, the biomarker that indicates myocardial infarction.

For people with chronic conditions that require regular monitoring, microneedle patches could eliminate unnecessary trips to the hospital, saving money, time and discomfort -- a lot of discomfort.

The patches are almost pain-free. "They go about 400 microns deep into the dermal tissue," Singamaneni said. "They don't even touch sensory nerves."

In the lab, using this technology could limit the number of animals needed for research. Sometimes research necessitates a lot of measurements in succession to capture the ebb and flow of biomarkers -- for example, to monitor the progression of sepsis. Sometimes, that means a lot of small animals.

"We could significantly lower the number of animals required for such studies," Singamaneni said.

The implications are vast -- and Singamaneni's lab wants to make sure they are all explored.

There is a lot of work to do, he said: "We'll have to determine clinical cutoffs," that is, the range of biomarker in ISF that corresponds to a normal vs. abnormal level. "We'll have to determine what levels of biomarker are normal, what levels are pathological." And his research group is working on delivery methods for long distances and harsh conditions, providing options for improving rural healthcare.

"But we don't have to do all of this ourselves," Singamaneni said. Instead, the technology will be available to experts in different areas of medicine.

"We have created a platform technology that anyone can use," he said. "And they can use it to find their own biomarker of interest."


Singamaneni and Erica L. Scheller, assistant professor of Medicine in the Division of Bone and Mineral Disease at the School of Medicine, worked together to investigate the concentration of biomarkers in local tissues.

Current approaches for such evaluation require the isolation of local tissues and do not allow successive, continuous inspection. Singamaneni and Scheller are developing a better platform to achieve long-term monitoring of local biomarker concentrations.

Working together

Srikanth Singamaneni, the Lilyan & E. Lisle Hughes Professor in the Department of Mechanical Engineering & Materials Science, and Jai S. Rudra, assistant professor in the Department of Biomedical Engineering, worked together to look at cocaine vaccines, which work by blocking cocaine's ability to enter the brain.

Current candidates for such a vaccine don't confer long-lasting results; they require frequent boosting. Singamaneni and Rudra wanted a better way to determine when the effects of the vaccine had waned. "We've shown that we can use the patches to understand whether a person is still producing the necessary antibodies," Singamaneni said. "No blood draw necessary."

Credit: 
Washington University in St. Louis

A professor from RUDN University developed new liquid crystals

image: A professor from RUDN University together with his Indian colleagues synthesized and studied new dibenzophenazine-based liquid crystals that could potentially be used in optoelectronics and solar panels.

Image: 
RUDN University

A professor from RUDN University together with his Indian colleagues synthesized and studied new dibenzophenazine-based liquid crystals that could potentially be used in optoelectronics and solar panels. The results of the study were published in the Journal of Molecular Liquids.

Liquid crystals are an intermediate phase between a liquid and a solid. They are ordered like regular crystals but at the same time flow like liquids. It is this duality that allows them to be used in organic LEDs and LCDs. Unlike other liquid crystals, discotic ones (DLCs) are capable of self-assembly into ordered structures. This makes them a promising material for industrial electronics, namely, for the production of displays. A professor from RUDN University together with his Indian colleagues synthesized and described new dibenzophenazine-based DLCs.

"Discotic liquid crystals are interesting because of their ability to form self-assembled ordered columnar structures. In such structures, an electric charge can move along the column, which makes them useful for optoelectronic devices such as organic LEDs, organic field-effect transistors (OFET), photoelectric solar elements, and sensors," said Prof. Viktor Belyaev, a Ph.D. in Technical Sciences from the Department of Mechanics and Mechatronics at RUDN University.

DLCs consist of disc-shaped molecules aligned in columns. In the center of each disc, there is an aromatic ring (a cyclic organic fragment) surrounded by chains of other organic fragments. Due to this aromatic center, a DLC can transfer a charge along the axis of a column. Prof. Belyaev developed discotic liquid crystals with an aromatic compound called dibenzophenazine in the center. As for the chains that surround it, the team tried three different types of fragments. The molecular structure of the new DLCs was studied using spectral, X-ray diffraction, and elemental analysis. Then, the team tested the three groups of DLCs in a set of experiments.

The experiments showed that alkoxy thiol chains increased the polarity of the molecules in the liquid crystals, thus improving the internal structure of the columns and making them more even. All new DLCs were able to withstand temperatures up to 330 °C. However, the crystals that consisted of smaller molecules (i.e. the ones with their aromatic center surrounded by alkyl thiols) lost their intermediate state and transitioned from the liquid crystal to the liquid form at a lower temperature (55.1 °C) than the crystals from the other two groups. This is due to the size of the molecules in the columns: the bigger they are, the more stable the liquid crystal state.

"The new discotic liquid crystals could play an important role in organic optoelectronic devices and solar panels," added Prof. Viktor Belyaev from RUDN University.

Credit: 
RUDN University

New technique builds super-hard metals from nanoparticles

image: This gold "coin" was made from nanoparticle building blocks, thanks to a new technique developed by Brown University researchers. Making bulk metals this way allows for precise control of the metal's microstructure, which enhances its mechanical properties.

Image: 
Chen Lab / Brown University

PROVIDENCE, R.I. [Brown University] -- Metallurgists have all kinds of ways to make a chunk of metal harder. They can bend it, twist it, run it between two rollers or pound it with a hammer. These methods work by breaking up the metal's grain structure -- the microscopic crystalline domains that form a bulk piece of metal. Smaller grains make for harder metals.
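The grain-size effect the researchers exploit is commonly summarized by the empirical Hall-Petch relation, in which yield strength grows as the inverse square root of the grain diameter. Here is a minimal sketch with placeholder constants for a generic metal (not values from the Brown study):

```python
import math

def hall_petch(grain_size_m, sigma_0_mpa=25.0, k_mpa_sqrt_m=0.15):
    """Hall-Petch estimate: sigma_y = sigma_0 + k / sqrt(d).

    sigma_0 (friction stress) and k (strengthening coefficient) are
    placeholder values for a generic metal, purely for illustration.
    """
    return sigma_0_mpa + k_mpa_sqrt_m / math.sqrt(grain_size_m)

# Shrinking grains from 10 um down to 10 nm raises the estimated yield
# strength; note the relation is known to break down for grains below
# roughly 10-20 nm ("inverse Hall-Petch" regime).
for d in (10e-6, 1e-6, 100e-9, 10e-9):
    print(f"grain size {d:.0e} m -> ~{hall_petch(d):.0f} MPa")
```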

Now, a group of Brown University researchers has found a way to customize metallic grain structures from the bottom up. In a paper published in the journal Chem, the researchers show a method for smashing individual metal nanoclusters together to form macro-scale hunks of solid metal. Mechanical testing of the metals manufactured using the technique showed that they were up to four times harder than naturally occurring metal structures.

"Hammering and other hardening methods are all top-down ways of altering grain structure, and it's very hard to control the grain size you end up with," said Ou Chen, an assistant professor of chemistry at Brown and corresponding author of the new research. "What we've done is create nanoparticle building blocks that fuse together when you squeeze them. This way we can have uniform grain sizes that can be precisely tuned for enhanced properties."

For this study, the researchers made centimeter-scale "coins" using nanoparticles of gold, silver, palladium and other metals. Items of this size could be useful for making high-performance coating materials, electrodes or thermoelectric generators (devices that convert heat fluxes into electricity). But the researchers think the process could easily be scaled up to make super-hard metal coatings or larger industrial components.

The key to the process, Chen says, is the chemical treatment given to the nanoparticle building blocks. Metal nanoparticles are typically covered with organic molecules called ligands, which generally prevent the formation of metal-metal bonds between particles. Chen and his team found a way to strip those ligands away chemically, allowing the clusters to fuse together with just a bit of pressure.

The metal coins made with the technique were substantially harder than standard metal, the research showed. The gold coins, for example, were two to four times harder than normal. Other properties like electrical conduction and light reflectance were virtually identical to standard metals, the researchers found.

The optical properties of the gold coins were fascinating, Chen says, as there was a dramatic color change when the nanoparticles were compressed into bulk metal.

"Because of what's known as the plasmonic effect, gold nanoparticles are actually purplish-black in color," Chen said. "But when we applied pressure, we see these purplish clusters suddenly turn to a bright gold color. That's one of the ways we knew we had actually formed bulk gold."

In theory, Chen says, the technique could be used to make any kind of metal. In fact, Chen and his team showed that they could make an exotic form of metal known as a metallic glass. Metallic glasses are amorphous, meaning they lack the regularly repeating crystalline structure of normal metals. That gives rise to remarkable properties. Metallic glasses are more easily molded than traditional metals, can be much stronger and more crack-resistant, and exhibit superconductivity at low temperatures.

"Making metallic glass from a single component is notoriously hard to do, so most metallic glasses are alloys," Chen said. "But we were able to start with amorphous palladium nanoparticles and use our technique to make a palladium metallic glass."

Chen says he's hopeful that the technique could one day be widely used for commercial products. The chemical treatment used on the nanoclusters is fairly simple, and the pressures used to squeeze them together are well within the range of standard industrial equipment. Chen has patented the technique and hopes to continue studying it.

"We think there's a lot of potential here, both for industry and for the scientific research community," Chen said.

Credit: 
Brown University

Risk factors for intraoperative pressure injury in aortic surgery

In a new publication from Cardiovascular Innovations and Applications (DOI: https://doi.org/10.15212/CVIA.2019.1263), Yao Dong, Jun-E Liu and Ling Song from the Capital Medical University, Beijing, China consider risk factors for intraoperative pressure injury in aortic surgery.

Intraoperative pressure injuries are some of the most significant health problems in clinical practice. Patients undergoing aortic surgery are at high risk of developing an intraoperative pressure injury, with an incidence much higher than that associated with other types of cardiac surgery.

In this article, the authors identify factors associated with an increased risk of intraoperative pressure injury in patients undergoing aortic surgery, concluding that nurses should thoroughly assess this risk and implement appropriate preventative interventions, particularly for high-risk patients.

Credit: 
Compuscript Ltd

NIH study compares low-fat, plant-based diet to low-carb, animal-based diet

image: Examples of dinners given to study participants: low-carb, animal-based diet (left) and low-fat, plant-based diet (right)

Image: 
Amber Courville and Paule Joseph, National Institutes of Health

People on a low-fat, plant-based diet ate fewer daily calories but had higher insulin and blood glucose levels, compared to when they ate a low-carbohydrate, animal-based diet, according to a small but highly controlled study at the National Institutes of Health. Led by researchers at the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the study compared the effects of the two diets on calorie intake, hormone levels, body weight, and more. The findings, published in Nature Medicine, broaden understanding of how restricting dietary carbohydrates or fats may impact health.

"High-fat foods have been thought to result in excess calorie intake because they have many calories per bite. Alternatively, high-carb foods can cause large swings in blood glucose and insulin that may increase hunger and lead to overeating," said NIDDK Senior Investigator Kevin Hall, Ph.D., the study's lead author. "Our study was designed to determine whether high-carb or high-fat diets result in greater calorie intake."

The researchers housed 20 adults without diabetes for four continuous weeks in the NIH Clinical Center's Metabolic Clinical Research Unit. The participants, 11 men and nine women, received either a plant-based, low-fat diet or an animal-based, low-carbohydrate diet for two weeks, immediately followed by two weeks on the alternate diet. The low-fat diet was high in carbohydrates. The low-carbohydrate diet was high in fats. Both diets were minimally processed and had equivalent amounts of non-starchy vegetables. The participants were given three meals a day, plus snacks, and could eat as much as desired.

The main results showed that people on the low-fat diet ate 550 to 700 fewer calories per day than when they ate the low-carb diet. Despite the large differences in calorie intake, participants reported no differences in hunger, enjoyment of meals, or fullness between the two diets. Participants lost weight on both diets, but only the low-fat diet led to a significant loss of body fat.

"Despite eating food with an abundance of high glycemic carbohydrates that resulted in pronounced swings in blood glucose and insulin, people eating the plant-based, low-fat diet showed a significant reduction in calorie intake and loss of body fat, which challenges the idea that high-carb diets per se lead people to overeat. On the other hand, the animal-based, low-carb diet did not result in weight gain despite being high in fat," said Hall.

These findings suggest that the factors that result in overeating and weight gain are more complex than the amount of carbs or fat in one's diet. For example, Hall's laboratory showed last year that a diet high in ultra-processed food led to overeating and weight gain in comparison to a minimally processed diet matched for carbs and fat.

The plant-based, low-fat diet contained 10.3% fat and 75.2% carbohydrate, while the animal-based, low-carb diet was 10% carbohydrate and 75.8% fat. Both diets contained about 14% protein and were matched for total calories presented to the subjects, although the low-carb diet had twice as many calories per gram of food as the low-fat diet. On the low-fat menu, dinner might consist of a baked sweet potato, chickpeas, broccoli and oranges, while a low-carb dinner might be beef stir fry with cauliflower rice. Subjects could eat whatever, and however much, they chose of the meals they were given.

"Interestingly, our findings suggest benefits to both diets, at least in the short-term. While the low-fat, plant-based diet helps curb appetite, the animal-based, low-carb diet resulted in lower and more steady insulin and glucose levels," Hall said. "We don't yet know if these differences would be sustained over the long term."

The researchers note that the study was not designed to make diet recommendations for weight loss, and results may have been different if participants were actively trying to lose weight. Further, all meals were prepared and provided for participants in an inpatient setting, which may make results difficult to repeat outside the lab, where factors such as food costs, food availability, and meal preparation constraints can make adherence to diets challenging. The tightly controlled clinical environment, however, ensured objective measurement of food intake and accuracy of data.

"To help us achieve good nutrition, rigorous science is critical ? and of particular importance now, in light of the COVID-19 pandemic, as we aim to identify strategies to help us stay healthy," said NIDDK Director Griffin P. Rodgers, M.D. "This study brings us closer to answering long-sought questions about how what we eat affects our health."

Credit: 
NIH/National Institute of Diabetes and Digestive and Kidney Diseases

Stanford: forecasting coastal water quality

image: Stanford researcher Ryan Searcy collects water samples from a tide pool at the Fitzgerald Marine Reserve, in Moss Beach, California.

Image: 
Meghan Shea

Less than two days of water quality sampling at local beaches may be all that's needed to reduce the illnesses that contaminated water causes among millions of beachgoers every year, according to new Stanford research. The study, published in Environmental Science & Technology, presents a modeling framework that dependably predicts water quality at beaches after only a day or two of frequent water sampling. The approach, tested in California, could be used to keep tabs on otherwise unmonitored coastal areas, which is key to protecting the well-being of beachgoers and thriving ocean economies worldwide.

"This work combines knowledge of microbiology, coastal processes and data science to produce a tool to effectively manage one of our most precious resources and protect human health," said senior author Alexandria Boehm, a Stanford professor of civil and environmental engineering.

Measuring concentrations of fecal indicator bacteria (FIB) - which denote the presence of fecal matter and can lead to unsafe water conditions - at beaches ensures the health and safety of the public. While all ocean water contains some degree of pathogens, such as bacteria or viruses, they're typically diluted to harmless concentrations. However, changes in rainfall, water temperature, wind, runoff, boating waste, storm sewer overflow, proximity to waste treatment plants, animals and waterfowl can lead to an influx of water contamination. Exposure to these contaminants can cause many ailments, including respiratory diseases and gastrointestinal illnesses, along with skin, eye and ear infections in swimmers.

Protecting coastal waters and the people that use them remains essential for much of California's 840 miles of coastline. Over 150 million people swim, surf, dive and play at one of the state's 450 beaches annually, generating over $10 billion in revenue. According to the California State Water Resources Control Board, health agencies across 17 counties, publicly owned sewage treatment plants, environmental groups and several citizen-science groups perform water sampling across the state. However, not all waters are routinely checked due to accessibility issues, budget resource constraints or the season, despite their use by the public.

Another obstacle to safeguarding public health lies in the lag time between sampling and results - up to two days - leading beach managers to make decisions based on past water quality conditions. When monitored waters contain high levels of bacteria and pose a health risk, beach managers post warning signs or close beaches. The delay in current testing methods could unknowingly expose swimmers to unhealthy waters.

To overcome these limitations, the researchers combined water sampling and environmental data with machine learning methods to accurately forecast water quality. While predictive water quality models aren't new, they have generally required historical data spanning several years to be developed.

The team used water samples collected at 10-minute intervals over a relatively brief timeframe of one to two days at beaches in Santa Cruz, Monterey and Huntington Beach. Among the three sites, 244 samples were measured for FIB concentrations and marked as above or below the acceptable level deemed safe by the state. The researchers then collected meteorological data such as air temperature, solar radiation and wind speed along with oceanographic data including tide level, wave heights and water temperature (all factors influencing FIB concentrations) over the same timeframe.

Using the high-frequency water quality data and machine learning methods, they trained computer models to accurately predict FIB concentrations at all three beaches. The researchers found hourly water sampling for 24 hours straight - capturing an entire tidal and solar cycle - proved enough for reliable results. Feeding the framework meteorological and tidal data from longer time periods resulted in future water quality predictions that were dependable for at least an entire season.
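The study's exact pipeline is not reproduced here, but a minimal sketch of this kind of workflow, assuming scikit-learn as the modeling library and hypothetical feature names standing in for the meteorological and tidal inputs (with synthetic stand-in data where the real measurements would go), might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for roughly one day of high-frequency beach sampling;
# feature names and value ranges are illustrative assumptions, not study data.
rng = np.random.default_rng(0)
n = 244  # about the number of samples measured across the three beaches
features = pd.DataFrame({
    "tide_level_m": rng.uniform(-0.5, 2.0, n),
    "wave_height_m": rng.uniform(0.1, 3.0, n),
    "water_temp_c": rng.uniform(10.0, 22.0, n),
    "air_temp_c": rng.uniform(8.0, 30.0, n),
    "solar_rad_w_m2": rng.uniform(0.0, 1000.0, n),
    "wind_speed_m_s": rng.uniform(0.0, 12.0, n),
})
# Label: 1 if the FIB concentration exceeded the state safety threshold.
exceeds_threshold = rng.integers(0, 2, n)  # stand-in for measured labels

X_train, X_test, y_train, y_test = train_test_split(
    features, exceeds_threshold, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

With real measurements in place of the random stand-ins, the trained model could then be fed forecast meteorological and tidal data to predict exceedances ahead of time, as the study describes.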

"These results are really empowering for communities who want to know what's going on with water quality at their beach," Searcy said. "With some resources to get started and a day of sampling, these communities could collect the data needed to initiate their own water quality modeling systems."

The framework code, which is publicly accessible, could also be developed for accurate predictions of other contaminants such as harmful algae, metals and nutrients known to wreak havoc on local waters. The researchers point out that more analysis is needed to better determine the exact timeframe these models remain accurate and note that continually assessing and retraining the models remains a best practice for accurate predictions.

Credit: 
Stanford University

Defects may help scientists understand the exotic physics of topology

image: Photo of a metamaterial composed of a pattern of resonators. The defect appears as a pentagon in an otherwise regular array of circuit elements.

Image: 
Kitt Peterson

Real-world materials are usually messier than the idealized scenarios found in textbooks. Imperfections can add complications and even limit a material's usefulness. To get around this, scientists routinely strive to remove defects and dirt entirely, pushing materials closer to perfection. Now, researchers at the University of Illinois at Urbana-Champaign have turned this problem around and shown that for some materials defects could act as a probe for interesting physics, rather than a nuisance.

The team, led by professors Gaurav Bahl and Taylor Hughes, studied artificial materials, or metamaterials, which they engineered to include defects. The team used these customizable circuits as a proxy for studying exotic topological crystals, which are often imperfect, difficult to synthesize, and notoriously tricky to probe directly. In a new study, published in the January 20th issue of Nature, the researchers showed that defects and structural deformations can provide insights into a real material's hidden topological features.

"Most studies in this field have focused on materials with perfect internal structure. Our team wanted to see what happens when we account for imperfections. We were surprised to discover that we could actually use defects to our advantage," said Bahl, an associate professor in the Department of Mechanical Science and Engineering. With that unexpected assist, the team has created a practical and systematic approach for exploring the topology of unconventional materials.

Topology is a way of mathematically classifying objects according to their overall shape, rather than every small detail of their structure. One common illustration of this is a coffee mug and a bagel, which have the same topology because both objects have only one hole that you can wrap your fingers through.

Materials can also have topological features related to the classification of their atomic structure and energy levels. These features lead to unusual, yet possibly useful, electron behaviors. But verifying and harnessing topological effects can be tricky, especially if a material is new or unknown. In recent years, scientists have used metamaterials to study topology with a level of control that is nearly impossible to achieve with real materials.

"Our group developed a toolkit for being able to probe and confirm topology without having any preconceived notions about a material." says Hughes, who is a professor in the Department of Physics. "This has given us a new window into understanding the topology of materials, and how we should measure it and confirm it experimentally."

In an earlier study published in Science, the team established a novel technique for identifying insulators with topological features. Their findings were based on translating experimental measurements made on metamaterials into the language of electronic charge. In this new work, the team went a step further - they used an imperfection in the material's structure to trap a feature that is equivalent to fractional charges in real materials.

A single electron by itself cannot carry half a charge or some other fractional amount. But, fragmented charges can show up within crystals, where many electrons dance together in a ballroom of atoms. This choreography of interactions induces odd electronic behaviors that are otherwise disallowed. Fractional charges have not been measured in either naturally occurring or custom-grown crystals, but this team showed that analogous quantities can be measured in a metamaterial.

The team assembled arrays of centimeter-scale microwave resonators onto a chip. "Each of these resonators plays the role of an atom in a crystal and, similar to an atom's energy levels, has a specific frequency where it easily absorbs energy - in this case the frequency is similar to that of a conventional microwave oven," said lead author Kitt Peterson, a former graduate student in Bahl's group.

The resonators are arranged into squares, repeating across the metamaterial. The team included defects by disrupting this square pattern - either by removing one resonator to make a triangle or adding one to create a pentagon. Since all the resonators are connected together, these singular disclination defects ripple out, warping the overall shape of the material and its topology.

The team injected microwaves into each resonator of the array and recorded the amount of absorption. Then, they mathematically translated their measurements to predict how electrons act in an equivalent material. From this, they concluded that fractional charges would be trapped on disclination defects in such a crystal. With further analysis, the team also demonstrated that trapped fractional charge signals the presence of certain kinds of topology.

"In these crystals, fractional charge turns out to be the most fundamental observable signature of interesting underlying topological features" said Tianhe Li, a theoretical physics graduate student in Hughes' research group and a co-author on the study.

Observing fractional charges directly remains a challenge, but metamaterials offer an alternative way to test theories and learn about manipulating topological forms of matter. According to the researchers, reliable probes for topology are also critical for developing future applications for topological quantum materials.

The connection between the topology of a material and its imperfect geometry is also broadly interesting for theoretical physics. "Engineering a perfect material does not necessarily reveal much about real materials," says Hughes. "Thus, studying the connection between defects, like the ones in this study, and topological matter may increase our understanding of realistic materials, with all of their inherent complexities."

Credit: 
University of Illinois Grainger College of Engineering

The Lancet and The Lancet Oncology: Global demand for cancer surgery set to grow by almost 5 million procedures within 20 years, with greatest burden in low-income countries

A modelling study suggests that demand for cancer surgery will rise by 52% - equal to 4.7 million procedures - between 2018 and 2040, with the greatest relative increase in low-income countries, which already have substantially lower staffing levels than high-income countries.

A separate observational study comparing global cancer surgery outcomes also suggests that patients in low- and middle-income countries (LMICs) are four times more likely to die from colorectal or gastric cancer (odds of 4.59 and 3.72, respectively) than those in high-income countries (HICs) currently, and that poor provision of care to manage post-operative complications (which includes staffing, ward space and access to facilities) explains a significant proportion of the disproportionate deaths in LMICs.

Demand for cancer surgery is expected to increase from 9.1 million to 13.8 million procedures over the next twenty years, requiring a huge increase in the workforce, including nearly 200,000 additional surgeons and 87,000 anaesthetists globally. With access to post-operative care strongly linked to lower mortality, improving care systems worldwide must be a priority in order to reduce the disproportionate number of deaths following complications.

The findings of the two studies, published in The Lancet and The Lancet Oncology, highlight an urgent need to improve cancer surgery provision in low- and middle-income countries, while also scaling up their workforces in order to cope with increasing demand. Until now, a lack of data examining outcomes of cancer surgery in different income settings, and an absence of evidence-based estimates of future demand, had limited efforts to improve cancer care globally.

Cancer is a leading cause of death and disability globally, and exerts substantial economic impacts, with recent evidence suggesting a disproportionate burden of disease in LMICs. With more than half of cancer patients predicted to require surgery at some stage, it is a pivotal component of multidisciplinary care globally and plays a key role in preventing deaths. A 2015 study estimated that US$6.2 trillion in global GDP could be lost by 2030 if surgical cancer systems are not improved. [1]

While the new studies did not assess impacts of COVID-19, the authors acknowledge that the delivery of high-quality post-operative care is more challenging during the pandemic.

Increasing future demand

The Article in The Lancet Oncology journal is a modelling study of global demand for cancer surgery and estimated surgical and anaesthesia workforce requirements between 2018 and 2040.

Using best-practice guidelines, patient characteristics and cancer stage data, the authors calculated the proportion of newly diagnosed cancer cases requiring surgery in 183 countries. To predict future surgery demand, they applied these rates to GLOBOCAN cancer incidence predictions from 2018 to 2040.

The team's analysis estimates that the number of cancer cases requiring surgery globally each year will rise from 9.1 million to 13.8 million (52%, an increase of 4.7 million) from 2018 to 2040. The greatest relative increase will occur in 34 low-income countries, where the number of cases requiring surgery is expected to more than double by 2040 (314,355 cases to 650,164, 107%).

Current and future surgical and anaesthesia workforces needed for the optimal delivery of cancer surgery services were also predicted using staffing estimates based on optimal surgical use in high-income countries as a benchmark for global requirements. To evaluate staffing gaps, the optimal estimated workforce (median workforce of 44 high-income countries) was compared with numbers of surgeons and anaesthetists in each country.

The authors estimate there is currently a global shortage of 199,000 (56%) surgeons and 87,000 (51%) anaesthetists (current workforce of 766,000 surgeons and 372,000 anaesthetists, compared with 965,000 and 459,000 optimal workforce, respectively, estimated by the team's model). The gap is estimated to be greatest in low-income countries, where the current surgeon availability is 22,000 fewer than the model estimated optimal number of 28,000 surgeons. The current number of anaesthetists in low-income countries falls 11,000 below the model estimated demand of 13,000 anaesthetists.

In recognition of the rising global demand for cancer surgery, estimates were calculated for the optimal surgical and anaesthesia workforces needed in 2040. Extrapolating 2018 data, taking account of predicted future cancer incidence burden in each country, revealed that the surgical workforce will need to increase from 965,000 in 2018 to 1,416,000 (47% increase) in 2040. The anaesthetist workforce would need to rise from 459,000 in 2018 to 674,000 (47% increase) in 2040.

The greatest relative increase in optimal workforce requirements from 2018 to 2040 will occur in low-income countries, where surgeon numbers are required to rise from 28,210 to 58,219 by 2040 (106%). Anaesthetist numbers will also need to increase from 13,000 to 28,000 by 2040 (115%).

However, to match the current benchmark of high-income countries, the actual number of surgeons in low-income countries would need to increase by almost 400% of its baseline value (from 6,000 to 28,000), and the number of anaesthetists by nearly 550% (from 2,000 to 13,000). This is because the current workforce in these countries is already substantially smaller than in high-income countries.
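As a quick check, those percentages follow directly from the counts stated in this article:

```python
# Reproduce the relative increases quoted above from the article's figures.
def pct_increase(start, end):
    return 100 * (end - start) / start

print(f"optimal surgeons, 2018 -> 2040: {pct_increase(965_000, 1_416_000):.0f}%")     # ~47%
print(f"optimal anaesthetists, 2018 -> 2040: {pct_increase(459_000, 674_000):.0f}%")  # ~47%
print(f"low-income surgeons vs benchmark: {pct_increase(6_000, 28_000):.0f}%")        # ~367%, 'almost 400%'
print(f"low-income anaesthetists vs benchmark: {pct_increase(2_000, 13_000):.0f}%")   # 550%
```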

Dr Sathira Perera, from the University of New South Wales, Australia, said: "Our analysis has revealed that, in relative terms, low-income countries will bear the brunt of increased future demand for cancer surgery, bringing with it a need to substantially increase numbers of surgeons and anaesthetists. These findings highlight a need to act quickly to ensure that increasing workforce requirements in low-income countries are adequately planned for. There needs to be an increased focus on the application of cost-effective models of care, along with government endorsement of scientific evidence to mobilise resources for expanding services." [2]

Estimates in the study relied on several assumptions. Predictions of future cancer rates were based on 2018 estimates, however, country-level changes - such as economic developments or altered capacity to screen for early diagnosis - could alter cancer incidence and therefore surgical demand and workforce requirements. Observed gaps in the workforce could also be narrower than the actual gaps in practice, as predictions were conservative because they only considered initial surgical encounters and did not account for any follow-up interactions.

Cancer surgery outcomes

The Article in The Lancet is an observational study exploring global variation in post-operative complications and deaths following surgery for three common cancers.

Deaths among gastric cancer patients were nearly four times higher in low/lower middle-income countries (33 deaths among 326 patients, 3.72 odds of death) than high-income countries (27 deaths among 702 patients).

Patients with colorectal cancer in low/lower middle-income countries were also more than four times more likely to die (63 deaths among 905 patients, 4.59 odds of death), compared with those in high-income countries (94 deaths among 4,142 patients). Those in upper middle-income countries were two times as likely to die (47 deaths among 1,102 patients, 2.06 odds of death) as patients in high-income countries.

No difference in 30-day mortality was seen following breast cancer surgery.

Similar rates of complications were observed in patients across all income groups; however, those in low/lower middle-income countries were six times more likely to die within 30 days of a major complication (96 deaths among 133 patients, 6.15 odds of death), compared with patients in high-income countries (121 deaths among 693 patients). Patients in upper middle-income countries were almost four times as likely to die (58 deaths among 151 patients, 3.89 odds of death) as those in high-income countries.

Patients in upper middle-income and low/lower middle-income countries tended to present with more advanced disease compared with those in high-income countries, however researchers found that cancer stage alone explained little of the variation in mortality or post-operative complications.

Between April 2018 and January 2019, researchers enrolled 15,958 patients from 428 hospitals in 82 countries undergoing surgery for breast, colorectal or gastric cancer. 57% of patients were from high-income countries (9,106 patients), with 17% from upper middle-income countries (2,721 patients), and 26% from low/lower middle-income countries (4,131 patients). 53% (8,406) of patients underwent surgery for breast cancer, 39% (6,215) for colorectal cancer, and 8% (1,337) for gastric cancer.

Assessing hospital facilities and practices across the different income groups revealed that hospitals in upper middle-income and low/lower middle-income countries were less likely to have post-operative care infrastructure (such as designated post-operative recovery areas and consistently available critical care facilities) and cancer care pathways (such as oncology services). Further analysis revealed that the absence of post-operative care infrastructure was associated with more deaths in low/lower middle-income countries (7 to 10 more deaths per 100 major complications) and upper middle-income countries (5 to 8 more deaths per 100 major complications).

Professor Ewen Harrison, of the University of Edinburgh, UK, said: "Our study is the first to provide in-depth data globally on complications and deaths in patients within 30 days of cancer surgery. The association between having post-operative care and lower mortality rates following major complications indicates a need to improve care systems to detect and intervene when complications occur. Increasing this capacity to rescue patients from complications could help reduce deaths following cancer surgery in low- and middle-income countries.

"High quality all-round surgical care requires appropriate recovery and ward space, a sufficient number of well-trained staff, the use of early warning systems, and ready access to imaging, operating theatre space, and critical care facilities. While in this study it wasn't possible to assess cancer patients' full healthcare journey, we did identify several parts of the surgical health system, as well as patient-level risk factors, which could warrant further study and intervention." [2]

The authors acknowledge some limitations to their study. Researchers only looked at early outcomes following surgery, but, in future, they will study longer-term outcomes and other cancers. Outcomes can be poorly captured and understood in settings with limited resources, which will have affected the team's findings on the effectiveness of surgery. Further detailed analysis is needed to provide more robust evidence regarding associations between patient outcomes and hospital facilities.

Credit: 
The Lancet

Much of Earth's nitrogen was locally sourced

image: An artist's conception shows a protoplanetary disk of dust and gas around a young star. New research by Rice University shows that Earth's nitrogen came from both inner and outer regions of the disk that formed our solar system, contrary to earlier theory.

Image: 
NASA/JPL-Caltech

HOUSTON - (Jan. 21, 2021) - Where did Earth's nitrogen come from? Rice University scientists show one primordial source of the indispensable building block for life was close to home.

The isotopic signatures of nitrogen in iron meteorites reveal that Earth likely gathered its nitrogen not only from the region beyond Jupiter's orbit but also from the dust in the inner protoplanetary disk.

Nitrogen is a volatile element that, like carbon, hydrogen and oxygen, makes life on Earth possible. Knowing its source offers clues to not only how rocky planets formed in the inner part of our solar system but also the dynamics of far-flung protoplanetary disks.

The study by Rice graduate student and lead author Damanveer Grewal, Rice faculty member Rajdeep Dasgupta and geochemist Bernard Marty at the University of Lorraine, France, appears in Nature Astronomy.

Their work helps settle a prolonged debate over the origin of life-essential volatile elements in Earth and other rocky bodies in the solar system.

"Researchers have always thought that the inner part of the solar system, within Jupiter's orbit, was too hot for nitrogen and other volatile elements to condense as solids, meaning that volatile elements in the inner disk were in the gas phase," Grewal said.

Because the seeds of present-day rocky planets, also known as protoplanets, grew in the inner disk by accreting locally sourced dust, he said it appeared they did not contain nitrogen or other volatiles, necessitating their delivery from the outer solar system. An earlier study by the team suggested much of this volatile-rich material came to Earth via the collision that formed the moon.

But new evidence clearly shows only some of the planet's nitrogen came from beyond Jupiter.

In recent years, scientists have analyzed nonvolatile elements in meteorites, including iron meteorites that occasionally fall to Earth, to show dust in the inner and outer solar system had completely different isotopic compositions.

"This idea of separate reservoirs had only been developed for nonvolatile elements," Grewal said. "We wanted to see if this is true for volatile elements as well. If so, it can be used to determine which reservoir the volatiles in present-day rocky planets came from."

Iron meteorites are remnants of the cores of protoplanets that formed at the same time as the seeds of present-day rocky planets, becoming the wild card the authors used to test their hypothesis.

The researchers found a distinct nitrogen isotopic signature in the dust that bathed the inner protoplanets within about 300,000 years of the formation of the solar system. All iron meteorites from the inner disk contained a lower concentration of the nitrogen-15 isotope, while those from the outer disk were rich in nitrogen-15.

This suggests that within the first few million years, the protoplanetary disk divided into two reservoirs, the outer rich in the nitrogen-15 isotope and the inner rich in nitrogen-14.

"Our work completely changes the current narrative," Grewal said. "We show that the volatile elements were present in the inner disk dust, probably in the form of refractory organics, from the very beginning. This means that contrary to current understanding, the seeds of the present-day rocky planets -- including Earth -- were not volatile-free."

Dasgupta said the finding is significant to those who study the potential habitability of exoplanets, a topic of great interest to him as principal investigator of CLEVER Planets, a NASA-funded collaborative project exploring how life-essential elements might come together on distant exoplanets.

"At least for our own planet, we now know the entire nitrogen budget does not come only from outer solar system materials," said Dasgupta, Rice's Maurice Ewing Professor of Earth, Environmental and Planetary Sciences.

"Even if other protoplanetary disks don't have the kind of giant planet migration resulting in the infiltration of volatile-rich materials from the outer zones, their inner rocky planets closer to the star could still acquire volatiles from their neighboring zones," he said.

Credit: 
Rice University

Producing green hydrogen through the exposure of nanomaterials to sunlight

image: View through a window of the interior of an ultra-high vacuum reactor where TiO2 nanotubes are decorated with CoO nanoparticles. The visible flame is the plasma plume produced by laser ablation, which sputters the CoO to form its nanoparticles.

Image: 
Christian Fleury (INRS)

A research team from the Institut national de la recherche scientifique (INRS) has joined forces with French researchers from the Institute of Chemistry and Processes for Energy, Environment and Health (ICPEES), a CNRS-University of Strasbourg joint research lab, to pave the way towards the production of green hydrogen. This international team has developed new sunlight-photosensitive nanostructured electrodes. The results of their research were published in the November 2020 issue of the journal Solar Energy Materials and Solar Cells.

An Energy Transition Vector

Hydrogen is being considered by several countries of the Organisation for Economic Co-operation and Development (OECD) as a key player in the transition towards decarbonized industries and sectors. According to INRS Professor My Ali El Khakani, Quebec could strategically position itself in this energy sector of the future. "Thanks to high-performance nanomaterials, we can improve the efficiency of water dissociation to produce hydrogen. This 'clean' fuel is becoming increasingly important for the decarbonization of heavy-duty trucking and public transportation. For example, buses using hydrogen as a fuel are already in operation in several European countries and in China. These buses emit water instead of greenhouse gases," added the physicist and nanomaterials specialist.

Splitting water molecules into oxygen and hydrogen has long been done by electrolysis. However, industrial electrolyzers are very energy-intensive and require large investments. The INRS and ICPEES researchers were instead inspired by a natural mechanism: photosynthesis. They developed specially engineered and structured electrodes that split water molecules under sunlight, a process known as photocatalysis.

Challenges in the Design and Fabrication of the Nanostructured Electrodes

For maximum use of solar energy, the research teams selected a very abundant and chemically stable material: titanium dioxide (TiO2). TiO2 is a semiconductor known for being photosensitive to UV light, which accounts for only 5% of the solar irradiance. The researchers used their expertise in the field to first change the atomic composition of TiO2 and extend its photosensitivity to visible light. They were able to produce electrodes that can absorb up to 50% of the light emitted by the sun. A significant gain right from the start!

The researchers then proceeded with the nanostructuring of the electrode to form a network of TiO2 nanotubes resembling a beehive-like structure. This method multiplied the effective surface area of the electrode by a factor of 100,000 or more. "Nanostructuring maximizes the ratio between surface and volume of a material. For example, TiO2 nanostructures can offer a surface area of up to 50 m2 per gram. That's the surface area of a mid-size flat!", Professor El Khakani enthusiastically pointed out.

The final step of the electrode fabrication is their "nanodecoration". This process consists of depositing catalyst nanoparticles onto the vast network of TiO2 nanotubes to increase their hydrogen production efficiency. To achieve this nanodecoration step, the researchers used the laser ablation deposition technique, a field where Professor El Khakani has developed unique expertise over the last 25 years. The challenge was not only to control the size, dispersion and anchorage of the catalyst nanoparticles on the TiO2 nanotube matrix, but also to find alternatives to the costly classical catalysts, iridium and platinum.

This research identified cobalt oxide (CoO), a material that is quite abundant in Quebec's subsoil, as an effective co-catalyst for splitting water molecules. Compared with bare nanotubes, the CoO nanoparticles enabled a tenfold increase in the photocatalytic efficiency of these new nanodecorated electrodes under visible light.

Credit: 
Institut national de la recherche scientifique - INRS

Taking sieving lessons from nature

video: Nanostructure-templated electrochemical polymerization enhances speed and selectivity in organic membrane-based processes.

Image: 
© 2021 KAUST; Anastasia Serin

Generating membranes using electrochemical polymerization, or electropolymerization, could provide a simple and cost-effective route to help various industries meet increasingly strict environmental regulations and reduce energy consumption.

Researchers from KAUST have produced membranes with well-defined microscopic pores by electrochemically depositing organic conjugated polymers onto highly porous electrodes. These microporous membranes have numerous applications, ranging from organic solvent nanofiltration to selective molecular transport technologies.

High-performance separation depends on membranes that are robust, with well-ordered and dense microporous structures, such as zeolites and metal-organic frameworks. Unlike these state-of-the-art materials, conventional polymers can be made into membranes through cheap and scalable processes, but their amorphous architecture and low porosity make them less effective.

Conjugated microporous polymers have shown potential for polymer-based membranes with enhanced performance. When created by electropolymerization, a relatively simple method that relies on electroactive monomers, these solvent-stable polymers form cross-linked networks with uniform pore sizes and high surface areas. The drawback, however, is that the membranes produced are too brittle to withstand pressure-driven separations. The KAUST team, led by Zhiping Lai, sought a new approach to manufacture a robust membrane.

Taking inspiration from spider silk, which gets its exceptional strength and ductility from its skin-core structure, the team developed an electropolymerization approach to grow the conjugated polymer polycarbazole inside the porous network of an electrode. They dispersed electroactive carbazole monomers in the electrolyte solution of an electrochemical cell and oxidized the monomers under an applied voltage to coat the electrode with the polymer film. The electrode was made of carbon-based tubular nanostructures that served as a sturdy and porous scaffold for the membrane.

The membrane showed faster solvent transport than most existing systems because of its high surface area and high affinity for organic solvents. It also separated dye molecules differing only slightly in molecular weight. "This narrow molecular sieving is attributed to the uniform pore size," says Ph.D. student Zongyao Zhou.

A similar electropolymerization-based approach -- this time inspired by the protective role of human skin -- was used by another Lai-led team to prevent cathode decomposition in lithium-sulfur batteries. Environmentally friendly and inexpensive, these rechargeable batteries have the potential to store more energy than their ubiquitous lithium-ion counterparts, which could make them useful for electric cars, drones and other portable electronics. However, their sulfur cathode forms compounds called polysulfides that readily dissolve into the electrolyte during discharge. These soluble compounds can shuttle between the cathode and anode, causing permanent capacity loss and degrading the lithium metal anode.
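
For context, the textbook lithium-sulfur discharge chemistry (standard background, not taken from the paper) shows where the troublesome species come from:

    overall cathode reaction: \( \mathrm{S_8} + 16\,\mathrm{Li^+} + 16\,e^- \rightarrow 8\,\mathrm{Li_2S} \)

The conversion proceeds through soluble intermediate polysulfides \( \mathrm{Li_2S}_x \) (roughly \( 4 \le x \le 8 \)), and it is these dissolved species that shuttle between the electrodes and erode capacity.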

Previous attempts to prevent polysulfide dissolution, such as capturing and anchoring the compounds to the cathode, have had limited success. "We thought that growing an artificial skin for the sulfur cathode would help stop polysulfide leakage from the cathode," says Ph.D. student Dong Guo.

The researchers synthesized another polycarbazole membrane that conforms to the cathode surface under applied voltage. This nanoskin features tiny uniform pores that block polysulfide diffusion but facilitate rapid lithium ion transport, which enhances the sulfur utilization and energy density of the battery.

The team plans to evaluate the electropolymerization process in other electrode systems. The nanoskin also holds promise for organic batteries, in which the dissolution of redox-active organic molecules remains a major challenge, Lai says.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Alpha particles lurk at the surface of neutron-rich nuclei

Scientists from an international collaboration have found evidence of alpha particles at the surface of neutron-rich heavy nuclei, providing new insights into the structure of neutron stars, as well as the process of alpha decay.

Neutron stars are amongst the most mysterious objects in our universe. They contain extremely dense matter that is radically different from the ordinary matter surrounding us, being composed almost entirely of neutrons rather than atoms. However, matter at similar densities does exist in the nuclei at the centers of normal atoms.

"Understanding the nature of matter at such extremes is important for our understanding of neutron stars, as well as the beginning, workings, and final fate of the universe," says Junki Tanaka, one of the leaders of the study.

"Intriguingly, despite the vast difference in size and mass, the tiny atomic nuclei found on Earth and the enigmatic neutron star are actually governed by the same type of interactions," says Zaihong Yang, also a co-corresponding author of the paper. This connection has been well established by scientists through the nuclear equation of state (EOS) which describes the relation between the density and the pressure of nuclear matter.

In the current finding, published in Science, the research team led by scientists from the RIKEN Nishina Center for Accelerator-Based Science, TU Darmstadt, and the Research Center for Nuclear Physics (RCNP) examined tin nuclei, and found evidence of "alpha clusters"--groups of two protons and two neutrons--in heavy nuclei.

For the experiments, the group examined a series of neutron-rich isotopes of tin, ranging from tin-112, which has only 62 neutrons, to tin-124, which has 74 neutrons and thus a much thicker neutron skin. They deliberately knocked alpha particles out of the nuclei by bombarding them with protons at the Research Center for Nuclear Physics (RCNP) at Osaka University, and then examined how frequently alpha particles were observed in progressively heavier isotopes.

They identified the clusters in the very surface region of neutron-rich tin nuclei, implying that the so-called "neutron skin" is not purely neutron matter, as its name would suggest, but also includes alpha clusters. Importantly, they also discovered that the "effective number of alpha clusters"--the probability of finding alpha clusters in nuclei--decreased gradually as the number of neutrons grew, and theorized that this is due to the interplay between alpha-cluster formation and the thickness of the neutron skin that surrounds the nucleus.

This finding has important implications for our understanding of nuclear EOS and neutron stars. In the near future, more and more accurate data on the bulk properties--mass and radius--of neutron stars will be available from electromagnetic and gravitational-wave observations.

The present work is also a key step towards a full understanding of alpha decay--a type of radioactive decay in which an atomic nucleus spontaneously emits an alpha particle. About 90 years ago, physicist George Gamow famously proposed that alpha decay takes place through the quantum tunneling of preformed alpha particles, or clusters. However, though the theory was generally accepted, it was never shown conclusively that such clusters actually exist in heavy nuclei.
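
In Gamow's picture (quoted here in its standard textbook form for context), a preformed alpha particle of velocity \( v \) escapes the Coulomb barrier of the daughter nucleus of charge \( Z_d \) with a tunneling probability set by the Gamow factor

    \( P \sim e^{-2\pi\eta}, \qquad \eta = \frac{Z_\alpha Z_d\, e^2}{4\pi\varepsilon_0\, \hbar v}, \quad Z_\alpha = 2, \)

and the decay constant is roughly \( \lambda \approx P_\alpha f P \), where \( f \) is the frequency with which the alpha particle strikes the barrier and \( P_\alpha \) is the preformation probability. The effective number of alpha clusters measured in the present work constrains exactly this preformation factor.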

The finding of the current work, that alpha clusters exist at the surface of heavy nuclei, could provide an answer to the question of the origin of the alpha particles emitted in alpha decay. In the future, the group plans to carry out studies with alpha-radioactive isotopes at high-energy accelerators such as the RI Beam Factory at RIKEN and GSI/FAIR, a new accelerator facility being built in Germany.

Credit: 
RIKEN

Researchers make domestic high-performance bipolar membranes possible

The bipolar membrane, a type of ion exchange membrane, is considered a pivotal material for zero-emission technologies. It is composed of an anode membrane layer, a cathode membrane layer, and an intermediate water-dissociation layer. Under reverse bias, the water molecules in the intermediate layer are polarized and dissociate into OH- and H+.
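
In equation form (standard bipolar-membrane operation, stated here for orientation rather than taken from the paper), the junction drives the dissociation

    \( \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{H^+} + \mathrm{OH^-} \)

with the electric field at the interlayer pulling H+ out through the cation-exchange layer towards the cathode and OH- out through the anion-exchange layer towards the anode, so that acid accumulates on one side of the membrane and alkali on the other.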

Large-scale production of the membrane is hindered by the different expansion coefficients of the anode and cathode layers, which make the two layers prone to delamination. Moreover, the most commonly used intermediate catalysts are small molecules or transition-metal species, which are unstable and inefficient.

In a study published in Nature Communications, a team led by Prof. XU Tongwen and Prof. WU Liang from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) adopted an in-situ growth strategy to construct a stable and efficient membrane.

In their previous studies on bipolar membranes, the researchers had developed anode and cathode membrane layers on a polyphenylene ether substrate to solve the problem of different expansion coefficients, and had prepared a series of intermediate catalytic layer structures to address the second problem. However, further work was needed before industrial application, as the water-dissociation voltage drop remained too high for large-scale production.

Therefore, the researchers constructed a stable water-dissociation intermediate layer by regulating the in-situ growth position at the interface between the anode and cathode membrane layers: there, aniline molecules aggregated, polymerized, and encapsulated FeO(OH) particles.

The polyaniline network provides strong adhesion between the membrane layers and keeps the FeO(OH) particles fixed and uniformly dispersed. The uniformly sized FeO(OH) particles provide active sites for water dissociation and promote water polarization, releasing H+ and OH- quickly under an electric field.

This newly synthesized membrane outperforms the corresponding commercial Japanese membrane, Neosepta® BP1, in its onset voltage for water dissociation, its stability at high current density, and its ability to generate acid and alkali.

Furthermore, the researchers developed proprietary molding techniques, and a large-scale production line is under construction.

Credit: 
University of Science and Technology of China

Electrons caught in the act

image: Fig. 2 Electron dynamics around a misoriented molecular defect. (a) STM image and snapshots obtained over an area including the defect indicated by the white arrow. The snapshots clearly show that electrons were still trapped at the single bright defect even 63 ps after IR pulse excitation, as shown in (b). The defect appears brighter than the other C60 molecules because of the trapping of electrons at the single molecular site.

Image: 
University of Tsukuba

Tsukuba, Japan - A team of researchers from the Faculty of Pure and Applied Sciences at the University of Tsukuba filmed the ultrafast motion of electrons with sub-nanoscale spatial resolution. This work provides a powerful tool for studying the operation of semiconductor devices, which can lead to more efficient electronic devices.

The ability to construct ever smaller and faster smartphones and computer chips depends on the ability of semiconductor manufacturers to understand how the electrons that carry information are affected by defects. However, these motions occur on the scale of trillionths of a second, and they can only be seen with a microscope that can image individual atoms. It may seem like an impossible task, but this is exactly what a team of scientists at the University of Tsukuba was able to accomplish.

The experimental system consisted of Buckminsterfullerene carbon molecules--which bear an uncanny resemblance to stitched soccer balls--arranged in a multilayer structure on a gold substrate. First, a scanning tunneling microscope was set up to capture the movies. To observe the motion of electrons, an infrared electromagnetic pump pulse was applied to inject electrons into the sample. Then, after a set time delay, a single ultrafast terahertz pulse was used to probe the location of the electrons. Increasing the time delay allowed the next "frame" of the movie to be captured. This novel combination of scanning tunneling microscopy and ultrafast pulses allowed the team to achieve sub-nanoscale spatial resolution and near-picosecond time resolution for the first time. "Using our method, we were able to clearly see the effects of imperfections, such as a molecular vacancy or orientational disorder," explains first author Professor Shoji Yoshida. Capturing each frame took only about two minutes, which makes the measurements readily reproducible and the approach more practical as a tool for the semiconductor industry.
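
The stroboscopic logic behind these movies is simple enough to sketch in code. The following toy model in Python is purely illustrative--the function name and the 100 ps trap lifetime are our assumptions, and only the pump-wait-probe timing scheme follows the article:

    import numpy as np

    # Toy model of a stroboscopic pump-probe scan: pump, wait a set delay,
    # probe, record one "frame", then step the delay to get the next frame.
    # The 100 ps trap lifetime is an illustrative assumption, not a
    # measured value from the study.

    TAU_TRAP = 100e-12  # assumed lifetime of trapped electrons (100 ps)

    def probe_signal(delay_s):
        """Stand-in for the THz-probed tunneling signal: model the pumped
        electron population as a simple exponential decay."""
        return np.exp(-delay_s / TAU_TRAP)

    # One frame per pump-probe delay; stepping the delay advances the movie.
    delays = np.linspace(0, 63e-12, 8)   # 0 to 63 ps, the range shown in Fig. 2
    for t in delays:
        print(f"delay {t*1e12:5.1f} ps -> relative signal {probe_signal(t):.2f}")

Because each frame is built up from many repeated pump-probe cycles at a fixed delay, the movie is stroboscopic rather than a single-shot recording.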

"We expect that this technology will help lead the way towards the next generation of organic electronics" senior author Professor Hidemi Shigekawa says. By understanding the effects of imperfections, some vacancies, impurities, or structural defects can be purposely introduced into devices to control their function.

Credit: 
University of Tsukuba