Tech

Cleaning Up the Mississippi River

image: Mississippi River in fog

Image: 
LSU

Louisiana State University College of the Coast & Environment Boyd Professor R. Eugene Turner reconstructed a 100-year record of water quality trends in the lower Mississippi River by compiling data collected from 1901 to 2019 by federal and state agencies as well as the New Orleans Sewerage and Water Board. The Mississippi River is the largest river in North America, with about 30 million people living within its watershed. In the study, published in Ambio, a journal of the Royal Swedish Academy of Sciences, Turner focused on data tracking the water's acidity (pH) and its concentrations of bacteria, oxygen, lead and sulfate.

Rivers have historically been used as disposal sites worldwide. From the polluted Cuyahoga River in Cleveland, Ohio, which caught fire, to the Mississippi River, where dumped sewage raised lead levels and depleted oxygen, rivers were environmentally hazardous until the passage of the U.S. Clean Water Act in 1972. The Clean Water Act, along with the Clean Air Act, the Toxic Substances Control Act and other laws, established a federal structure to reduce pollutant discharges into the environment and gave the Environmental Protection Agency the authority to restrict the amounts and uses of certain toxic chemicals such as lead. Turner's study assesses changes in water quality before and after the Clean Water Act and Clean Air Act went into effect. The water quality data he compiled were collected at sampling sites on the southern end of the Mississippi River: St. Francisville, Plaquemine, two locations in New Orleans and Belle Chasse, Louisiana.

His research found that after these environmental policies were put into place, bacterial concentrations decreased by about three orders of magnitude, oxygen content increased, lead concentrations decreased and sulfate concentrations declined less dramatically. He also found that when sulfur dioxide emissions peaked in 1965, the river's pH dropped to a low of 5.8; in the U.S., the pH of natural water typically falls between 6.5 and 8.5, with 7.0 being neutral. As sulfur dioxide emissions declined, the river recovered, and by 2019 its pH averaged 8.2.
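Because pH is logarithmic, those two readings are further apart than they look. As a quick check (standard chemistry, not a calculation from the study):

    [H+] at pH 5.8 / [H+] at pH 8.2 = 10^(8.2 - 5.8) = 10^2.4 ≈ 250

so at its 1965 low the river carried roughly 250 times the hydrogen-ion concentration it does today.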

"The promulgation and acceptance of the Clean Water Act and Clean Air Act demonstrates how public policy can change for the better and help everyone who is demonstrably 'downstream' in a world of cycling pollutants," Turner said.

Consistent vigilance and monitoring are necessary to ensure water quality in the Mississippi River and the northern Gulf of Mexico. Plastics fill oceans, pharmaceuticals are distributed in sewage, and the COVID-19 virus and other viruses spread in partially treated sewage from aging septic tanks, unconstrained wetland treatment systems with insufficient hydrologic controls, and overloaded treatment systems.

New pollutants are added to the river each year and will require monitoring and testing. Although lead monitoring has unfortunately stopped, decades of sustained and effective national efforts produced these water quality improvements and offer a model for addressing new and existing water quality challenges, Turner said.

Credit: 
Louisiana State University

WVU biologists uncover forests' unexpected role in climate change

image: WVU alumnus Justin Mathias holds a tree increment borer to extract tree cores at Gaudineer Knob in West Virginia. Mathias and Richard Thomas, professor of forest ecology and climate change, found that trees are taking in more carbon dioxide than previously thought in a new study.

Image: 
West Virginia University

New research from West Virginia University biologists shows that trees around the world are consuming more carbon dioxide than previously reported, making forests even more important in regulating the Earth's atmosphere and shifting how we think about climate change.

In a study published in the Proceedings of the National Academy of Sciences, Professor Richard Thomas and alumnus Justin Mathias (BS Biology, '13 and Ph.D. Biology, '20) synthesized published tree ring studies. They found that increases in carbon dioxide in the atmosphere over the past century have caused an uptick in trees' water-use efficiency, the ratio of carbon dioxide taken up by photosynthesis to the water lost by transpiration - the act of trees "breathing out" water vapor.
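In the notation standard in this field (the release defines the concepts but not the symbols), water-use efficiency and its tree-ring-derived cousin, intrinsic water-use efficiency, are

    WUE = A / E        iWUE = A / g_s

where A is the rate of photosynthetic CO2 uptake, E the transpiration rate and g_s the stomatal conductance. The study's question is whether the century-long rise in iWUE came mainly from a rising A or a falling g_s.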

"This study really highlights the role of forests and their ecosystems in climate change," said Thomas, interim associate provost for graduate academic affairs. "We think of forests as providing ecosystem services. Those services can be a lot of different things - recreation, timber, industry. We demonstrate how forests perform another important service: acting as sinks for carbon dioxide. Our research shows that forests consume large amounts of carbon dioxide globally. Without that, more carbon dioxide would go into the air and build up in the atmosphere even more than it already is, which could exacerbate climate change. Our work shows yet another important reason to preserve and maintain our forests and keep them healthy."

Previously, scientists thought that trees were using water more efficiently over the past century through reduced stomatal conductance - that is, trees were retaining more moisture as the pores on their leaves closed slightly under rising levels of carbon dioxide.

However, after analyzing carbon and oxygen isotopes in tree rings from 1901 to 2015, covering 36 tree species at 84 sites around the world, the researchers found that in 83% of cases the main driver of trees' increased water-use efficiency was increased photosynthesis - the trees processed more carbon dioxide. Stomatal conductance drove the increased efficiency only 17% of the time. This marks a major shift from how previous research explained trees' rising water-use efficiency.

"We've shown that over the past century, photosynthesis is actually the overwhelming driver to increases in tree water use efficiency, which is a surprising result because it contradicts many earlier studies," Mathias said. "On a global scale, this will have large implications potentially for the carbon cycle if more carbon is being transferred from the atmosphere into trees."

Since 1901, the intrinsic water-use efficiency of trees worldwide has risen by approximately 40%, in conjunction with an increase of approximately 34% in atmospheric carbon dioxide. Both quantities have increased roughly four times faster since the 1960s than in the preceding decades.

While these results show the rise in carbon dioxide is the main factor in making trees use water more efficiently, the results also vary depending on temperature, precipitation and dryness of the atmosphere. These data can help refine models used to predict the effects of climate change on global carbon and water cycles.

"Having an accurate representation of these processes is critical in making sound predictions about what may happen in the future," Mathias said. "This helps us get a little closer to making those predictions less uncertain."

The study is a product of the researchers' seven-year collaboration during Mathias' time as a doctoral student. After graduating from WVU, Mathias joined the University of California, Santa Barbara as a postdoctoral researcher.

"Since moving to California, my work has taken a turn from being in the field, collecting measurements, analyzing data and writing manuscripts," Mathias said. "My new position is more focused on ecological theory and ecosystem modeling. Instead of measuring plants, I form hypotheses and seek out answers to questions using computer models and math."

In the future, Mathias aspires to become a professor at a research university to continue these research pursuits.

"I would love to run my own lab at a university, mentor graduate students and pursue research questions to continue building on the work we've already accomplished. There's been a lot of progress in our field. There are also an infinite number of questions that are relevant moving forward," Mathias said. "I owe everything to my time and training from the people at WVU. My long-term goal is to be in a position where I can continue moving the field forward while giving back through teaching and mentoring students."

Credit: 
West Virginia University

Mount Sinai study finds wearable devices can detect COVID-19 symptoms and predict diagnosis

Wearable devices can identify COVID-19 cases earlier than traditional diagnostic methods and can help track and improve management of the disease, Mount Sinai researchers report in one of the first studies on the topic. The findings were published in the Journal of Medical Internet Research on January 29.

The Warrior Watch Study found that subtle changes in a participant's heart rate variability (HRV) measured by an Apple Watch were able to signal the onset of COVID-19 up to seven days before the individual was diagnosed with the infection via nasal swab, and also to identify those who have symptoms.

"This study highlights the future of digital health," says the study's corresponding author Robert P. Hirten, MD, Assistant Professor of Medicine (Gastroenterology) at the Icahn School of Medicine at Mount Sinai, and member of the Hasso Plattner Institute for Digital Health at Mount Sinai and the Mount Sinai Clinical Intelligence Center (MSCIC). "It shows that we can use these technologies to better address evolving health needs, which will hopefully help us improve the management of disease. Our goal is to operationalize these platforms to improve the health of our patients and this study is a significant step in that direction. Developing a way to identify people who might be sick even before they know they are infected would be a breakthrough in the management of COVID-19."

The researchers enrolled several hundred health care workers throughout the Mount Sinai Health System in an ongoing digital study between April and September 2020. The participants wore Apple Watches and answered daily questions through a customized app. Changes in their HRV - a measure of nervous system function detected by the wearable device - were used to identify and predict whether the workers were infected with COVID-19 or had symptoms. Other daily symptoms that were collected included fever or chills, tiredness or weakness, body aches, dry cough, sneezing, runny nose, diarrhea, sore throat, headache, shortness of breath, loss of smell or taste, and itchy eyes.
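For readers unfamiliar with the metric, the sketch below computes two common HRV statistics from a series of RR intervals (the times between successive heartbeats). The numbers are hypothetical, and the release does not specify which HRV metric the study used; this illustrates the quantity itself, not the study's analysis pipeline.

    import numpy as np

    # Hypothetical RR intervals in milliseconds, of the kind a wearable
    # derives from its heart rate sensor.
    rr = np.array([812.0, 845.0, 790.0, 860.0, 805.0, 828.0, 795.0, 851.0])

    sdnn = rr.std(ddof=1)                     # SDNN: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr)**2))  # RMSSD: beat-to-beat variability

    print(f"SDNN  = {sdnn:.1f} ms")
    print(f"RMSSD = {rmssd:.1f} ms")

Subtle shifts in statistics like these, tracked against a person's own baseline, are the kind of signal the study linked to impending infection.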

Additionally, the researchers found that 7 to 14 days after diagnosis with COVID-19, the HRV pattern began to normalize and was no longer statistically different from the patterns of those who were not infected.

"This technology allows us not only to track and predict health outcomes, but also to intervene in a timely and remote manner, which is essential during a pandemic that requires people to stay apart," says the study's co-author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute, Co-Founder of the MSCIC, and the Lucy G. Moses Professor of Medical Imaging and Bioengineering at the Icahn School of Medicine at Mount Sinai.

The Warrior Watch Study draws on the collaborative effort of the Hasso Plattner Institute for Digital Health and the MSCIC, which represents a diverse group of data scientists, engineers, clinical physicians, and researchers across the Mount Sinai Health System who joined together in the spring of 2020 to combat COVID-19. The study will next take a closer look at biometrics including HRV, sleep disruption, and physical activity to better understand which health care workers are at risk of the psychological effects of the pandemic.

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Shuffling bubbles reveal how liquid foams evolve

image: A rearrangement event in a monodisperse foam. Note how bubbles move in the same direction along the same row, or in exactly the opposite direction in adjacent rows over long distances (the correlation length).

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University studied the dynamics of foams. When a drop of water was added to a foam raft, the bubbles rearranged themselves to reach a new stable state. The team found that bubble movement was qualitatively different depending on the range of bubble sizes present. Together with analogies to soft jammed materials, these findings may inspire the design of new foam materials for industry.

Foams are everywhere. Whether it's soaps and detergents, meringues, beer foam, cosmetics or insulation for clothing and buildings, we're surrounded by everyday materials that feature foams. Applications of foams tend to take advantage of their unique structure, which is why understanding how that structure can change over time is so important.

A team led by Prof. Rei Kurita of Tokyo Metropolitan University has been studying liquid foams, like those made with detergent and water at home. The researchers were interested in understanding how the bubbles of a foam rearrange themselves. While previous studies usually applied a force to the foam with a prod to the side, the team adopted the much gentler method of adding a tiny amount of water, preserving the bubbles but changing the conditions enough for them to rearrange and find a new stable state. This made it much easier to see how subtle environmental nudges or perturbations lead to small, isolated bubble relaxation events.

By filming the bubbles rearranging themselves, the team showed for the first time that rearrangements were fundamentally different depending on the range of bubble sizes present in the foam. When the bubbles were roughly the same size, or monodisperse, they arranged themselves in a hexagonal, honeycomb formation. Upon adding water, the bubbles that moved tended to shuffle in the same direction over long distances, along the lines of the honeycomb. Conversely, when the foam contained a wide mix of small and large bubbles, the initial arrangement was much less ordered. Rearrangements in this polydisperse foam were random, with adjacent bubbles moving in all sorts of directions. The videos allowed the team to extract a dynamical correlation length, the length scale over which bubbles move in similar directions. Tracking how this length changes under different conditions is crucial to placing foam materials within the broad framework of condensed matter physics. Interestingly, the unique correlated motion observed in the hexagonal foam didn't depend on adjacent bubbles being in contact: they simply needed to be close enough to form well-ordered patterns.
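As an illustration of the kind of analysis involved, the sketch below computes a directional correlation function from tracked bubble positions and displacements; the decay scale of this function is the dynamical correlation length. The data here are synthetic placeholders, not the team's measurements or code.

    import numpy as np

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 100, size=(200, 2))   # bubble centres (arbitrary units)
    disp = rng.normal(0, 1, size=(200, 2))     # displacements between frames

    # Compare directions only: normalize displacements to unit vectors.
    unit = disp / np.linalg.norm(disp, axis=1, keepdims=True)

    # C(r): mean dot product of displacement directions for bubble pairs
    # separated by distance r, averaged within distance shells.
    i, j = np.triu_indices(len(pos), k=1)
    r = np.linalg.norm(pos[i] - pos[j], axis=1)
    dots = np.sum(unit[i] * unit[j], axis=1)

    edges = np.arange(0, 55, 5)
    shell = np.digitize(r, edges)
    C = [dots[shell == s].mean() for s in range(1, len(edges))]
    print(np.round(C, 3))

For the random directions used here, C(r) hovers near zero at all separations; in the monodisperse foam it stays positive over many bubble diameters, which is precisely what a long correlation length means.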

The team went on to compare this behavior to simulations of packings of soft particles with different ranges of sizes. They found very similar behavior, showing clearly that this was not a quirk of liquid foams, but a general feature of soft particles that have been jammed together. These insights into how foams react to the subtlest of environmental cues may one day inform how foams are kept stable or fluid, and how soft jammed materials are handled in industrial processes.

Credit: 
Tokyo Metropolitan University

Silicon anode structure generates new potential for lithium-ion batteries

image: In chamber 1, the nanoparticles, made from tantalum metal, are grown. Within this chamber, individual tantalum atoms clump together, similar to the formation of rain droplets. In chamber 2, the nanoparticles are mass filtered, removing ones that are too large or too small. In chamber 3, a layer of nanoparticles is deposited. This layer is then "sprayed" with isolated silicon atoms, forming a silicon layer. This process can then be repeated to create a multi-layered structure.

Image: 
Schematic created by Pavel Puchenkov, OIST Scientific Computing & Data Analysis Section.

New research has identified a nanostructure that improves the anode in lithium-ion batteries

Instead of using graphite for the anode, the researchers turned to silicon: a material that stores more charge but is susceptible to fracturing

The team made the silicon anode by depositing silicon atoms on top of metallic nanoparticles

The resulting nanostructure formed arches, increasing the strength and structural integrity of the anode

Electrochemical tests showed the lithium-ion batteries with the improved silicon anodes had a higher charge capacity and longer lifespan

New research conducted by the Okinawa Institute of Science and Technology Graduate University (OIST) has identified a specific building block that improves the anode in lithium-ion batteries. The unique properties of the structure, which was built using nanoparticle technology, are revealed and explained today in Communications Materials.

Powerful, portable and rechargeable, lithium-ion batteries are crucial components of modern technology, found in smartphones, laptops and electric vehicles. In 2019, their potential to revolutionize how we store and consume power in the future, as we move away from fossil fuels, was notably recognized, with the Nobel Prize co-awarded to new OIST Board of Governors member, Dr. Akira Yoshino, for his work developing the lithium-ion battery.

Traditionally, graphite is used for the anode of a lithium-ion battery, but this carbon material has major limitations.

"When a battery is being charged, lithium ions are forced to move from one side of the battery - the cathode - through an electrolyte solution to the other side of the battery - the anode. Then, when a battery is being used, the lithium ions move back into the cathode and an electric current is released from the battery," explained Dr. Marta Haro, a former researcher at OIST and first author of the study. "But in graphite anodes, six atoms of carbon are needed to store one lithium ion, so the energy density of these batteries is low."

With science and industry currently exploring the use of lithium-ion batteries to power electric vehicles and aerospace craft, improving energy density is critical. Researchers are now searching for new materials that can increase the number of lithium ions stored in the anode.

One of the most promising candidates is silicon, which can bind four lithium ions for every one silicon atom.

"Silicon anodes can store ten times as much charge in a given volume than graphite anodes - a whole order of magnitude higher in terms of energy density," said Dr. Haro. "The problem is, as the lithium ions move into the anode, the volume change is huge, up to around 400%, which causes the electrode to fracture and break."

The large volume change also prevents stable formation of a protective layer that lies between the electrolyte and the anode. Every time the battery is charged, this layer therefore must continually reform, using up the limited supply of lithium ions and reducing the lifespan and rechargeability of the battery.

"Our goal was to try and create a more robust anode capable of resisting these stresses, that can absorb as much lithium as possible and ensure as many charge cycles as possible before deteriorating," said Dr. Grammatikopoulos, senior author of the paper. "And the approach we took was to build a structure using nanoparticles."

In a previous paper, published in 2017 in Advanced Science, the now-disbanded OIST Nanoparticles by Design Unit developed a cake-like layered structure, where each layer of silicon was sandwiched between tantalum metal nanoparticles. This improved the structural integrity of the silicon anode, preventing over-swelling.

While experimenting with different thicknesses of the silicon layer to see how it affected the material's elastic properties, the researchers noticed something strange.

"There was a point at a specific thickness of the silicon layer where the elastic properties of the structure completely changed," said Theo Bouloumis, a current PhD student at OIST who was conducting this experiment. "The material became gradually stiffer, but then quickly decreased in stiffness when the thickness of the silicon layer was further increased. We had some ideas, but at the time, we didn't know the fundamental reason behind why this change occurred."

Now, this new paper finally provides an explanation for the sudden spike in stiffness at one critical thickness.

Through microscopy techniques and computer simulations at the atomic level, the researchers showed that as the silicon atoms are deposited onto the layer of nanoparticles, they don't form an even and uniform film. Instead, they form columns in the shape of inverted cones, growing wider and wider as more silicon atoms are deposited. Eventually, the individual silicon columns touch each other, forming a vaulted structure.

"The vaulted structure is strong, just like an arch is strong in civil engineering," said Dr. Grammatikopoulos. "The same concept applies, just on a nanoscale."

Importantly, the increased strength of the structure also coincided with enhanced battery performance. When the scientists carried out electrochemical tests, they found that the lithium-ion battery had an increased charge capacity. The protective layer was also more stable, meaning the battery could withstand more charge cycles.

These improvements are only seen at the precise moment that the columns touch. Before this moment occurs, the individual pillars are wobbly and so cannot provide structural integrity to the anode. And if silicon deposition continues after the columns touch, it creates a porous film with many voids, resulting in a weak, sponge-like behavior.

This discovery of the vaulted structure and how it gains its unique properties not only marks an important step toward the commercialization of silicon anodes in lithium-ion batteries; it also has many other potential applications within materials science.

"The vaulted structure could be used when materials are needed that are strong and able to withstand various stresses, such as for bio-implants or for storing hydrogen," said Dr. Grammatikopoulos. "The exact type of material you need - stronger or softer, more flexible or less flexible - can be precisely made, simply by changing the thickness of the layer. That's the beauty of nanostructures."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Pushed to the limit: A CMOS-based transceiver for beyond 5G applications at 300 GHz

image: -

Image: 
ISSCC 2021

Scientists at Tokyo Institute of Technology (Tokyo Tech) and NTT Corporation (NTT) develop a novel CMOS-based transceiver for wireless communications at the 300 GHz band, enabling future beyond-5G applications. Their design addresses the challenges of operating CMOS technology at its practical limit and represents the first wideband CMOS phased-array system to operate at such elevated frequencies.

Communication at higher frequencies is a perpetually sought-after goal in electronics because of the greater data rates it would enable and the underutilized portions of the electromagnetic spectrum it would open up. Many applications beyond 5G, as well as the IEEE 802.15.3d standard for wireless communications, call for transmitters and receivers capable of operating close to or above 300 GHz.

Unfortunately, our trusty CMOS technology is not entirely suitable for such elevated frequencies. Near 300 GHz, amplification becomes considerably more difficult. Although a few CMOS-based transceivers for 300 GHz have been proposed, they either lack sufficient output power, operate only in direct line-of-sight conditions, or require a large circuit area to implement.

To address these issues, a team of scientists from Tokyo Tech, in collaboration with NTT, proposed an innovative design for a 300 GHz CMOS-based transceiver (Figure 1). Their work will appear in the Digests of Technical Papers of the 2021 IEEE ISSCC (International Solid-State Circuits Conference), where the latest advances in solid-state and integrated circuits are presented.

One of the key features of the proposed design is that it is bidirectional; a great portion of the circuit, including the mixer, antennas, and local oscillator, is shared between the receiver and the transmitter (Figure 2). This means the overall circuit complexity and the total circuit area required are much lower than in unidirectional implementations.

Another important aspect is the use of four antennas in a phased array configuration. Existing solutions for 300 GHz CMOS transmitters use a single radiating element, which limits the antenna gain and the system's output power. An additional advantage is the beamforming capability of phased arrays, which allows the device to adjust the relative phases of the antenna signals to create a combined radiation pattern with custom directionality. The antennas used are stacked "Vivaldi antennas," which can be etched directly onto PCBs, making them easy to fabricate.
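To make the beamforming idea concrete, here is a minimal textbook sketch of array-factor steering for a four-element, half-wavelength-spaced array at 300 GHz (generic antenna theory with assumed parameters, not the geometry of the actual chip):

    import numpy as np

    c = 3e8                      # speed of light, m/s
    lam = c / 300e9              # wavelength at 300 GHz, ~1 mm
    d = lam / 2                  # half-wavelength element spacing
    k = 2 * np.pi / lam          # wavenumber
    n = np.arange(4)             # four radiating elements

    steer = np.deg2rad(20)               # desired beam direction
    phase = -k * d * np.sin(steer) * n   # progressive phase shift per element

    # Array factor: coherent sum of the element contributions vs. angle.
    theta = np.deg2rad(np.linspace(-90, 90, 361))
    af = np.abs(np.exp(1j * (k * d * np.sin(theta)[:, None] * n + phase)).sum(axis=1))

    print(np.rad2deg(theta[af.argmax()]))  # ~20: the beam points where we steered it

Re-programming the per-element phases re-points the beam without any moving parts, which is what gives a phased array its adjustable directionality.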

The proposed transceiver uses a subharmonic mixer, which is compatible with a bidirectional operation and requires a local oscillator with a comparatively lower frequency. However, this type of mixing results in low output power, which led the team to resort to an old yet functional technique to boost it. Professor Kenichi Okada from Tokyo Tech, who led the study, explains: "Outphasing is a method generally used to improve the efficiency of power amplifiers by enabling their operation at output powers close to the point where they no longer behave linearly - that is, without distortion. In our work, we used this approach to increase the transmitted output power by operating the mixers at their saturated output power." Another notable feature of the new transceiver is its excellent cancellation of local oscillator feedthrough (a "leakage" from the local oscillator through the mixer and onto the output) and image frequency (a common type of interference for the method of reception used).

The entire transceiver was implemented in an area as small as 4.17 mm². It achieved maximum rates of 26 Gbaud for transmission and 18 Gbaud for reception, outclassing most state-of-the-art solutions. Excited about the results, Okada remarks: "Our work demonstrates the first implementation of a wideband CMOS phased-array system that operates at frequencies higher than 200 GHz." Let us hope this study helps us squeeze more juice out of CMOS technology for upcoming applications in wireless communications!

Credit: 
Tokyo Institute of Technology

Signs of burnout can be detected in sweat

We've all felt stressed at some point, whether in our personal or professional lives or in response to exceptional circumstances like the COVID-19 pandemic. But until now there has been no way to quantify stress levels in an objective manner.

That could soon change thanks to a small wearable sensor developed by engineers at EPFL's Nanoelectronic Devices Laboratory (Nanolab) and Xsensio. The device can be placed directly on a patient's skin and can continually measure the concentration of cortisol, the main stress biomarker, in the patient's sweat.

Cortisol: A double-edged sword

Cortisol is a steroid hormone made by our adrenal glands out of cholesterol. Its secretion is controlled by the adrenocorticotropic hormone (ACTH), which is produced by the pituitary gland. Cortisol carries out essential functions in our bodies, such as regulating metabolism, blood sugar levels and blood pressure; it also affects the immune system and cardiovascular functions.

When we're in a stressful situation, whether life-threatening or mundane, cortisol is the hormone that takes over. It instructs our bodies to direct the required energy to our brain, muscles and heart. "Cortisol can be secreted on impulse - you feel fine and suddenly something happens that puts you under stress, and your body starts producing more of the hormone," says Adrian Ionescu, head of Nanolab.

While cortisol helps our bodies respond to stressful situations, it's actually a double-edged sword. It's usually secreted throughout the day according to a circadian rhythm, peaking between 6am and 8am and then gradually decreasing into the afternoon and evening. "But in people who suffer from stress-related diseases, this circadian rhythm is completely thrown off," says Ionescu. "And if the body makes too much or not enough cortisol, that can seriously damage an individual's health, potentially leading to obesity, cardiovascular disease, depression or burnout."

Capturing the hormone to measure it

Blood tests can be used to take snapshot measurements of patients' cortisol levels. However, detectable amounts of cortisol can also be found in saliva, urine and sweat. Ionescu's team at Nanolab decided to focus on sweat as the detection fluid and developed a wearable smart patch with a miniaturized sensor.

The patch contains a transistor and an electrode made from graphene which, due to its unique properties, offers high sensitivity and very low detection limits. The graphene is functionalized with aptamers, which are short fragments of single-stranded DNA or RNA that can bind to specific compounds. The aptamer in the EPFL patch carries a negative charge; when it comes into contact with cortisol, it immediately captures the hormone, causing the strands to fold onto themselves and bringing the charge closer to the electrode surface. The device then detects the charge, and is consequently able to measure the cortisol concentration in the wearer's sweat.
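As a generic illustration of the concentration-to-signal mapping behind such a sensor (assumed numbers, not Xsensio's calibration data), aptamer-target binding is often modeled with a Langmuir-type isotherm:

    import numpy as np

    # Fraction of aptamers occupied vs. cortisol concentration, assuming
    # a hypothetical dissociation constant Kd (Langmuir binding model).
    Kd = 10.0                                  # nM, assumed for illustration
    conc = np.array([0.1, 1.0, 10.0, 100.0])   # nM
    occupied = conc / (conc + Kd)

    for c, f in zip(conc, occupied):
        print(f"{c:6.1f} nM -> {100 * f:4.1f}% of aptamers bound")

The more folded, cortisol-bound aptamers bring their charge near the graphene, the larger the shift in the transistor's response, which is the quantity the patch reads out.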

So far, no other system has been developed for monitoring cortisol concentrations continuously throughout the circadian cycle. "That's the key advantage and innovative feature of our device. Because it can be worn, scientists can collect quantitative, objective data on certain stress-related diseases. And they can do so in a non-invasive, precise and instantaneous manner over the full range of cortisol concentrations in human sweat," says Ionescu.

Engineering improved healthcare

The engineers tested their device on Xsensio's proprietary Lab-on-Skin™ platform; the next step will be to place it in the hands of healthcare workers. Esmeralda Megally, CEO of Xsensio, says: "The joint R&D team at EPFL and Xsensio reached an important R&D milestone in the detection of the cortisol hormone. We look forward to testing this new sensor in a hospital setting and unlocking new insight into how our body works." The team has set up a bridge project with Prof. Nelly Pitteloud, chief of endocrinology, diabetes and metabolism at the Lausanne University Hospital (CHUV), for her staff to try out the continuous cortisol-monitoring system on human patients. These trials will involve healthy individuals as well as people suffering from Cushing's syndrome (when the body produces too much cortisol), Addison's disease (when the body doesn't produce enough) and stress-related obesity. The engineers believe their sensor can make a major contribution to the study of the physiological and pathological rhythms of cortisol secretion.

So what about psychological diseases caused by too much stress? "For now, they are assessed based only on patients' perceptions and states of mind, which are often subjective," says Ionescu. "So having a reliable, wearable system can help doctors objectively quantify whether a patient is suffering from depression or burnout, for example, and whether their treatment is effective. What's more, doctors would have that information in real time. That would mark a major step forward in the understanding of these diseases." And who knows, maybe one day this technology will be incorporated into smart bracelets. "The next phase will focus on product development to turn this exciting invention into a key part of our Lab-on-Skin™ sensing platform, and bring stress monitoring to next-generation wearables," says Megally.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Establishment of testing standards for particulate photocatalysts in solar fuel production proposed

image: Efficiency accreditation and testing protocols for particulate photocatalysts toward solar fuel production

Image: 
DICP

Utilization of renewable solar energy is crucial for addressing global energy and environmental concerns and for achieving sustainable development. In this regard, photocatalytic water splitting has attracted significant interest as a cost-effective means of converting sustainable solar energy into valuable chemicals.

However, because efficiency is sensitive to reaction conditions and experimental setup, and because testing standards have been lacking, it is difficult to compare results obtained by different research groups or to provide a reliable guide for large-scale implementation.

Recently, a research team led by Prof. LI Can and Prof. LI Rengui from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences (CAS), in collaboration with Prof. Kazunari Domen from The University of Tokyo, Prof. Lianzhou Wang from The University of Queensland, Prof. Kazuhiro Sayama from the National Institute of Advanced Industrial Science and Technology, and Prof. Gang Liu from the Institute of Metal Research, CAS, initiated the establishment of international efficiency accreditation and testing protocols for particulate photocatalysts toward solar fuel production.

Their perspective, published in Joule, is expected to serve as a useful guide for developing a well-recognized testing standard and for further promoting research advances in the field of photocatalytic solar energy conversion.

The researchers discussed the protocols for the reliable determination of the efficiency of the overall photocatalytic water splitting reaction based on particulate photocatalysts.
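The headline quantity such protocols must standardize is the solar-to-hydrogen (STH) conversion efficiency; a commonly used definition (the release names the quantity but not the formula) is

    STH = (r_H2 × ΔG_H2) / (P_sun × S)

where r_H2 is the hydrogen evolution rate, ΔG_H2 ≈ 237 kJ/mol is the Gibbs free energy of water splitting, P_sun is the solar irradiance (usually the standard AM 1.5G spectrum, about 1000 W/m²) and S is the irradiated area. Agreeing on how each term is measured is exactly what a testing standard has to pin down.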

They also proposed to establish accreditation research laboratories for efficiency certification toward the launch of a figure of merit - a "Best research photocatalyst efficiencies" chart.

This initiative would provide an important platform for establishing standard testing protocols for photocatalytic water splitting and for improving the solar-to-hydrogen conversion efficiency in practical applications.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Packing more juice in lithium-ion batteries through silicon anodes and polymeric coatings

image: PBS serves as an artificial solid-electrolyte interface with good lithium ion conduction and self-healing abilities to automatically repair any cracks that form during operation.

Image: 
Noriyoshi Matsumi from JAIST

Although silicon anodes could greatly boost the capacity of Li-ion batteries, their performance rapidly degrades with use. Polymeric coatings can help solve this problem, but very few studies have explored the underlying mechanisms. In a recent study, scientists from Japan Advanced Institute of Science and Technology investigate how a poly(borosiloxane) coating greatly stabilizes the capacity of silicon anodes, paving the way for better and more durable Li-ion batteries for electric cars and renewable energy harvesting.

Since their conception, lithium-ion batteries (LIBs) have been constantly improved and adapted so that they can become suitable for vastly different applications, from mobile devices and electric cars to storage units for renewable energy harvesters. In most larger-scale applications (such as the latter two), the focus of LIB research is placed on increasing their capacity and voltage limits without increasing their overall size. Of course, for that to be possible, the components and materials of the battery must be switched up.

Many researchers have placed their bets on the use of silicon anodes instead of the traditional graphite anodes. The anode is the part of the battery where lithium ions are stored when the battery is charged; when the battery's charge is used, the ions flow through a medium called the electrolyte to the cathode on the other end. Although silicon is certainly a promising anode material that offers an almost tenfold capacity increase for LIBs, it brings with it a series of challenges that have to be overcome before silicon anodes can be commercialized.

In a recent study published in ACS Applied Energy Materials, a team of scientists from Japan Advanced Institute of Science and Technology (JAIST) tackled the problems of silicon anodes using a promising polymeric coating: poly(borosiloxane) (PBS). The study was led by Professor Noriyoshi Matsumi and also involved Dr. Sai Gourang Patnaik and Dr. Tejkiran Pindi Jayakumar, who were completing a doctoral course at JAIST at the time.

Polymeric coatings can solve one of the most serious drawbacks plaguing silicon anodes: the formation of an excessively large solid electrolyte interphase (SEI). The spontaneous formation of the SEI between the electrolyte and the anode is actually essential for the long-term performance of the battery. However, materials like silicon tend to expand greatly with use, which causes continuous SEI formation and the depletion of the available electrolyte. Needless to say, this hinders the performance of the battery and causes a massive drop in capacity over time.

This is where polymeric coatings come into play; they can prevent the excessive SEI formation on silicon and form an artificial and stable SEI (see Figure 1). Though researchers had already noted the potential of PBS as a coating for silicon anodes, previous studies did not offer clear explanations for the mechanisms at play, as Prof. Matsumi explains, "There are very few reports on well-defined PBS-based polymers that offer a mechanistic origin for their application and their effects. Thus, we wanted to evaluate and shed light on their contribution to silicon anodes as a self-healing artificial interface that also prevents detrimental volume expansion."

The team compared the short- and long-term performance of silicon anodes with and without polymeric coatings in terms of stability, capacity, and interfacial properties. They did this through a series of electrochemical measurements and theoretical calculations, which led them to understand how PBS helps stabilize the capacity of the silicon anode.

Compared to bare silicon anodes and anodes coated with poly(vinylidene fluoride) (a commercially used coating in LIBs), the self-healing properties of PBS and its reversible accommodation of lithium ions resulted in remarkably enhanced stability. This is partially due to the ability of PBS to fill in any cracks that form in the SEI during operation. As shown in Figure 2, the capacity of the PBS-coated silicon anode remained almost the same for over 300 cycles, unlike that of the other two anodes.

By addressing the main issues associated with silicon anodes, this study paves the way to a new generation of LIBs with much higher capacity and durability. Satisfied with the results, Prof. Matsumi remarks, "The widespread adoption of high-capacity LIBs will allow electric cars to travel longer distances, drones to become larger, and renewable energy to be stored more efficiently." He also adds that, within a decade, we might even see LIBs used as secondary energy sources in larger vehicles such as trains, ships, and aircraft. Let us hope further research gets us there!

Credit: 
Japan Advanced Institute of Science and Technology

Novel immunotherapy approach to treat cat allergy

image: Black and orange cats

Image: 
Copyright: LIH

Researchers from the Department of Infection and Immunity of the Luxembourg Institute of Health (LIH) demonstrated the potential of high doses of a specific adjuvant molecule, namely CpG oligonucleotide, to successfully modulate the immune system's allergic response to the main cat allergen Fel d 1, thereby inducing a tolerance-promoting reaction and reverting the main hallmarks of cat allergy. The researchers analysed the molecular mechanisms underlying this tolerance and proposed a pre-clinical allergen-specific immunotherapy approach to improve the treatment and control of this common type of allergy. The full study results were published recently in the renowned international journal Allergy, the official journal of the European Academy of Allergy and Clinical Immunology (EAACI) and one of the top two journals worldwide in the allergy field.

Cat allergy is a rapidly increasing phenomenon characterised by a hypersensitivity and excessive immune response to certain allergens associated with felines, particularly Fel d 1, a protein typically found in their saliva, glands, skin and fur. Cat allergy manifestations can range from mild symptoms to severe conditions such as rhinitis and asthma, with potentially fatal outcomes. While pharmacotherapy is an option for the milder forms, only allergen-specific immunotherapy (AIT) can ensure effective and longer-lasting treatment in the more advanced cases. AIT typically consists of the subcutaneous injection of gradually increasing quantities of the allergen in question, until a critical dose is reached that induces long-term immune tolerance. Nevertheless, there is still a need to improve cat AIT in terms of efficacy and safety. The researchers hypothesised that the most effective cat AIT could be achieved by optimising the response of immune system T- and B-cells through immune adjuvants, inducing the production of antibodies against Fel d 1 while minimising inflammatory reactions and thereby boosting immune tolerance to this allergen.

"We sought to explore new means of increasing the anti-inflammatory activity of AIT with the known immunomodulatory adjuvant CpG, but at a higher safe dose than previously used for this type of therapy", explains Dr Cathy Léonard, scientist within the Allergy and Clinical Immunology research group at the LIH Department of Infection and Immunity and co-corresponding first author of the publication.

To study the cellular and clinical effects of an AIT based on the injection of the Fel d 1 allergen in combination with a high dose of CpG adjuvant, the team challenged Fel d 1-allergic mice with the allergen, both in the presence and absence of AIT. The scientists observed that AIT-treated allergic mice showed significantly improved lung resistance, similar to that of non-allergic control mice, when compared with untreated allergic mice, with signs of airway inflammation and hyper-responsiveness considerably reduced. Indeed, when looking at the Fel d 1-specific antibodies, the team noticed that AIT-treated allergic mice displayed lower levels of IgE, which are commonly associated with allergic responses, and higher levels of IgA and IgG, which can have anti-inflammatory properties. In addition, AIT-treated allergic mice showed a reduction in the levels of certain pro-allergic cytokine molecules produced by type 2 helper T cells (Th2), compared to untreated allergic animals. The researchers also noticed that, very soon after AIT injection, the tissues of AIT-treated mice showed an increased abundance of immune cell types involved in allergy regulation and tolerance, namely plasmacytoid dendritic cells (pDCs), Natural Killer cells (NKs), regulatory T cells (T-regs) and regulatory B cells (B-regs). These cells were found to express higher levels of the Tumour Necrosis Factor alpha (TNF-α) receptor 2 (TNFR-2), with NK cells also producing the TNF-α cytokine; both are known to play a role in suppressing the allergen-specific immune response, allowing these regulatory cells to act as a 'brake' on the immune system. "At a later stage, we observed a clear increase of TNF-α in the lungs. Interestingly, AIT also triggered the appearance of a novel and unique type of Tregs, known as biTregs, which is even better equipped to counterbalance the allergic and inflammatory reaction in response to the antigen", adds Dr Léonard.

Collectively, these findings point towards the strong anti-inflammatory and anti-allergic effect induced by AIT with a high and safe dose of CpG adjuvant. Strikingly, however, the researchers found that the mechanism underlying this allergy-protective action differs according to whether the treatment is administered as a vaccine to mice that had never been exposed to the Fel d 1 antigen, and which therefore had no existing allergic state, or under already established allergic conditions, as is the case in AIT. The elucidation of these alternative pathways opens up new insights for the future design of preventive and curative allergy vaccines using CpG adjuvant.

Going further in the translation of these findings into applications for the pre-clinical setting, the scientists developed a delivery system based on the subcutaneous injection of the Fel d 1/CpG treatment, as opposed to the more invasive intraperitoneal administration route. The results equally demonstrated the reversal of all allergy hallmarks and confirmed the anti-allergic effects of the AIT.

"In essence, we propose a pre-clinical model of AIT for cat allergy, which mimics the conditions required for human AIT clinical trials and which is already optimised for future use in translational studies. Indeed, our study presents several novelties including the use of endotoxin-free Fel d 1 allergen, which is mandatory in the clinical setting, to prevent the onset of collateral inflammatory responses which could compromise the desired induction of the tolerance-promoting mechanisms. Moreover, we show for the first time that the use of the maximum dose of CpG tolerated in humans has the ability to modulate the allergic response when combined with Fel d 1 allergen, with very favourable safety profiles and through a well-established and medically-approved delivery mode. Based on our data, we believe that CpG deserves reconsideration as an effective AIT adjuvant in humans and that our work sets the bases for the development of novel successful immunotherapeutic treatments for allergies", concludes Prof Markus Ollert, Director of the LIH Department of Infection and Immunity and senior lead author of the study.

Credit: 
Luxembourg Institute of Health

Sensor and detoxifier in one

Ozone is a problematic air pollutant that causes serious health problems. A newly developed material not only quickly and selectively indicates the presence of ozone, but also simultaneously renders the gas harmless. As reported by Chinese researchers in Angewandte Chemie, the porous "2-in-one systems" also function reliably in very humid air.

Ozone (O3) can cause health problems, such as difficulty breathing, lung damage, and asthma attacks. Relevant occupational safety regulations therefore limit the concentrations of ozone allowable in the workplace. Previous methods for the detection of ozone, such as those based on semiconductors, have a variety of disadvantages, including high power consumption, low selectivity, and malfunction due to humid air. Techniques aimed at reducing the concentration of ozone have thus far been based mainly on activated charcoal, chemical absorption, or catalytic degradation.

A team led by Zhenjie Zhang at Nankai University (Tianjin, China) set themselves the goal of developing a material that can both rapidly detect and efficiently remove ozone. Their approach uses materials known as covalent organic frameworks (COFs). COFs are two- or three-dimensional organic solids with extended porous crystalline structures; their components are bound together by strong covalent bonds. COFs can be tailored to many applications through the selection of different components.

The researchers selected easily producible, highly crystalline COFs made of aromatic ring systems. The individual building blocks are bound through connective groups called imines (a nitrogen atom bound to a carbon atom by a double bond). These are at the center of the action.

The imine COFs indicate the presence of ozone through a rapid color change from yellow to orange-red, which can be seen with the naked eye and registered by a spectrometer. Unlike many other detectors, the imine COF also works very reliably, sensitively, and efficiently at high humidity and over a wide temperature range. In the presence of water, the water molecules will preferentially bind to the imine groups. Consequently, the researchers assert, a hydroxide ion (OH⁻) is released, which reacts with an ozone molecule. The positively charged hydrogen atom remains bound to the imine group, causing the color change. If more ozone than water is present (or the ozone-laden air is fully dry), the excess ozone binds to the imine groups and splits them. Each imine group degrades two molecules of ozone. This also causes a color change and the crystalline structure slowly begins to collapse. The imine COF thus doesn't just detect the ozone, but also reliably and efficiently breaks the harmful gas down. This makes it more effective than many of the traditional materials employed for this purpose.

Credit: 
Wiley

Energy harvesting: Printed thermoelectric generators for power generation

image: With the help of newly developed inks and special production techniques, such as origami, inexpensive thermoelectric generators can be produced for various applications.

Image: 
Andres Rösch, KIT

Thermoelectric generators, TEGs for short, convert ambient heat into electrical power. They enable maintenance-free, environmentally friendly, and autonomous power supply of the continuously growing number of sensors and devices for the Internet of Things (IoT) and recovery of waste heat. Scientists of Karlsruhe Institute of Technology (KIT) have now developed three-dimensional component architectures based on novel, printable thermoelectric materials. This might be a milestone on the way towards use of inexpensive TEGs. The results are reported in npj Flexible Electronics (DOI: 10.1038/s41528-020-00098-1) and ACS Energy Letters (DOI: 10.1021/acsenergylett.0c02159).

"Thermoelectric generators directly convert thermal into electrical energy. This technology enables operation of autonomous sensors for the Internet of Things or in wearables, such as smart watches, fitness trackers, or digital glasses without batteries," says Professor Uli Lemmer, Head of the Light Technology Institute (LTI) of KIT. In addition, they might be used for the recovery of waste heat in industry and heating systems or in the geothermal energy sector.

New Printing Processes Thanks to Customized Inks

"Conventional TEGs have to be assembled from individual components using relatively complex manufacturing methods," Lemmer says. "To avoid this, we studied novel printable materials and developed two innovative processes and inks based on organic as well as on inorganic nanoparticles." These processes and inks can be used to produce inexpensive, three-dimensional printed TEGs.

The first process uses screen printing to apply a 2D pattern of thermoelectric printing inks onto an ultrathin flexible substrate foil. Then a generator about the size of a sugar cube is folded from it by means of an origami technique. This method was developed jointly by KIT researchers, the Heidelberg Innovation Lab, and a spinoff of KIT. The second process consists of printing a 3D scaffold, to the surfaces of which the thermoelectric ink is applied.

Cost Reduction by Printing Technologies

Lemmer is convinced that scalable production processes, such as roll-to-roll screen printing or modern additive manufacturing (3D printing), are key technologies. "The new production processes not only enable inexpensive scalable production of these TEGs. Printing technologies also allow the component to be adapted to the applications. We are now working on commercializing the printed thermoelectric system."

Credit: 
Karlsruher Institut für Technologie (KIT)

Ural Federal University scientists developed a new way to synthesize high-purity zircon

image: The zircon synthesized can be used as a reference sample for spectroscopic studies in mineralogy.

Image: 
Ural Federal University

The scientific novelty of the work by scientists from Ural Federal University, the Institute of Solid State Chemistry, and the Institute of Geology and Geochemistry of the Ural Branch of the Russian Academy of Sciences lies in the fact that, for the first time, they solved the problem of creating zircon with specified spectral properties. To this end, they worked out the so-called sol-gel method.

The method is distinguished by its technological simplicity and controllability, and it allows a larger volume of high-purity product to be synthesized than other methods do.

First, from zirconium carbonate and an organosilicon compound they obtained a sol - a dispersed medium containing small solid particles - and from it a colloidal system; after drying and grinding, this yielded a precursor powder of high homogeneity, which was subjected to further grinding and calcination.

Second, it was found that with mechanical stirring and sequential annealing - heating to 1550 °C and then cooling the precursor to room temperature - the number of defects in the synthesized sample decreases and high purity is achieved.

The range of applications of the zircon obtained by the scientists from Yekaterinburg is very wide. Due to its high melting point (above 2000 °C), chemical resistance, mechanical strength, low expansion coefficient at high temperatures and low thermal conductivity, zircon is useful as a refractory material (for example, for the manufacture of industrial furnaces) and as a pigment for heat-resistant paints. The presence of impurities and defects in its structure also makes it a reference material for studying the mechanisms of defect formation in natural zircon crystals.

"Even a small concentration of impurities, such as iron, manganese, titanium, rare earth elements, significantly affects the luminescent properties of zircon, in some cases, the impurities enhance the glow in a certain range of electromagnetic waves. In other words, with the help of impurities, you can give zircon the necessary luminescent properties and use it as a phosphor or to detect the level of radiation damage, since the structure of zircon well "remembers" the radiation dose that it received, " says Dmitry Zamyatin, senior researcher, Research Laboratory "EXTRA TERRA CONSORTIUM", UrFU Institute of Physics and Technology.

Furthermore, the synthetic zircon matrix is able to contain large amounts of uranium and thorium. This allows the synthesized zircon to be used as a container for long-term storage and disposal of radioactive elements.

Credit: 
Ural Federal University

'Runway Roadkill' rapidly increasing at airports across the world, UCC study finds

image: An infographic outlining data from the study "Runway roadkill: a global review of mammal strikes with aircraft".

Image: 
Samantha Ball UCC

- World's wildlife, from giraffes to voles, kangaroos to coyotes being hit by aircraft.

- Study identifies incidences at airports in 47 countries across the globe.

- 'Runway Roadkill' is increasing by up to 68% annually and has caused damage costing in excess of $103 million in the United States alone over a 30-year period.

- It is hoped study could pave way for international efforts to protect wildlife and reduce costly aircraft damage.

From giraffes to the world's smallest mammals, the world's wildlife is being increasingly struck by aircraft, a global study finds.

Airports from Sydney to London and the USA to Germany were examined by researchers who found that incidences of mammal strikes with aircraft - so-called 'runway roadkill' - are increasing significantly year-on-year, are costing aviation authorities millions per annum, but are under-reported internationally.

The international study, led by University College Cork (UCC) researcher Samantha Ball, found that 'runway roadkill' has been increasing by up to 68% annually and has caused damage costing in excess of $103 million in the United States alone over a 30-year period.

The global review of mammal strikes with aircraft was funded by the Irish Research Council and the Dublin Airport Authority and is published in Mammal Review.

It is hoped the findings of the study may aid aviation authorities worldwide to increase mitigation measures to protect wildlife and prevent costly damage.

Ms Ball of UCC's School of Biological, Earth & Environmental Sciences said mammals are incredibly diverse and those involved in strike events are no exception.

"As we identified 47 countries which have reported strikes with mammals, the species involved ranged from some of the world's smallest mammals, such as voles, all the way up to the mighty giraffe and included mammals of all sizes in between. As strike events can affect everything from passenger safety, airline economics and local conservation, understanding the species composition and ecology of the local fauna at an airfield is paramount for effective strike mitigation," she said.

However, most aircraft strikes involve birds, meaning there has been comparatively little research to date on collisions with mammals.

The airport environment can provide productive habitat for wildlife: expanses of semi-natural grassland create favourable ecological conditions, often in otherwise heavily urbanised areas.

Airport operators have a legal obligation to reduce wildlife hazards at airfields. It is therefore important for airports to understand the relative risk associated with each species in order to prioritise and implement effective Wildlife Hazard Management Plans (WHMPs).

By analysing published information and mammal strike data from national aviation authorities in Australia, Canada, France, Germany, the United Kingdom, and the United States, researchers found that bats accounted for the greatest proportion of strikes in Australia; rabbits and dog-like carnivores in Canada, Germany, and the United Kingdom; and bats and deer in the United States. Average mammal strike rates ranged from 1.2 to 38.7 strikes per million aircraft movements per year across the countries analysed.
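The per-million-movements figure is a straightforward normalisation: raw strike counts divided by traffic volume, which makes countries with very different amounts of air traffic comparable. A minimal sketch of the arithmetic in Python, using entirely hypothetical counts rather than the study's underlying data:

```python
# Hypothetical figures, for illustration only -- not the study's data.
strikes_per_year = 85            # reported mammal strikes in one year
movements_per_year = 2_200_000   # total aircraft movements that year

# Normalise by traffic volume to get a comparable rate.
rate = strikes_per_year / movements_per_year * 1_000_000
print(f"{rate:.1f} mammal strikes per million movements")  # -> 38.6
```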

Researchers identified:

- Reports of around 10 strikes a year with kangaroos.

- Around 40 strikes a year with coyotes.

- Around 60 strikes a year with skunks.

- Around 100 strikes a year with fruit bats in Australia.

- Over 100 strikes annually in recent years with leporids (rabbits and hares) in just three European countries (France, Germany, and the UK).

They also found that:

- More mammals were struck during the landing phase of flight than during any other phase.

- Dusk had the highest strike rate per hour in Australia and the USA, while night carried the greatest risk in Canada and Germany.

- In the USA, it is estimated that mammal strikes are five times more likely to cause damage to aircraft than bird strikes.

- Under-reporting of strikes is recognised on both an international and national level: estimates suggest that only 5-47% of wildlife strikes are reported to aviation authorities, and the reporting of strike events remains voluntary in many countries.

The researchers argue that the ecological and behavioural traits of mammal populations in proximity to and inhabiting airports need to be understood and integrated into WHMPs if effective management policies are to be developed and implemented.

"Therefore, mitigation measures developed in the USA for the specific fauna of North America may not be effective for high-risk species in other parts of the world. As air travel is a global industry, increased research efforts targeted at high risk mammal families outside the USA would benefit not only the national aviation authorities responsible for the research, but also international authorities and airline operators. A more thorough understanding of the ecology of mammal groups inhabiting and using airfields is required to maximise the efficacy of any mitigation measures," their paper argues.

Credit: 
University College Cork

The Ramanujan Machine

Using AI and computer automation, Technion researchers have developed a "conjecture generator" that creates mathematical conjectures, which are considered to be the starting point for developing mathematical theorems. They have already used it to generate a number of previously unknown formulas. The study, which was published in the journal Nature, was carried out by undergraduates from different faculties under the tutelage of Assistant Professor Ido Kaminer of the Andrew and Erna Viterbi Faculty of Electrical Engineering at the Technion.

The project deals with one of the most fundamental elements of mathematics: mathematical constants. A mathematical constant is a number with a fixed value that emerges naturally from calculations and mathematical structures across different fields. Many mathematical constants are of great importance not only within mathematics but also in other disciplines, including biology, physics, and ecology. The golden ratio and Euler's number are examples of such fundamental constants. Perhaps the most famous constant is pi, which was studied in ancient times in the context of the circumference of a circle. Today, pi appears in numerous formulas in all branches of science, with many math aficionados competing over who can recall more digits after the decimal point: 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196...

The Technion researchers proposed and examined a new idea: the use of computer algorithms to automatically generate mathematical conjectures in the form of formulas for mathematical constants.

A conjecture is a mathematical conclusion or proposition that has not been proved; once the conjecture is proved, it becomes a theorem. Discovery of a mathematical conjecture on fundamental constants is relatively rare, and its source often lies in mathematical genius and exceptional human intuition. Newton, Riemann, Goldbach, Gauss, Euler, and Ramanujan are examples of such genius, and the new approach presented in the paper is named after Srinivasa Ramanujan.

Ramanujan, an Indian mathematician born in 1887, grew up in a poor family, yet managed to arrive in Cambridge at the age of 26 at the initiative of British mathematicians Godfrey Hardy and John Littlewood. Within a few years he fell ill and returned to India, where he died at the age of 32. During his brief life he accomplished great achievements in the world of mathematics. One of Ramanujan's rare capabilities was the intuitive formulation of unproven mathematical formulas. The Technion research team therefore decided to name their algorithm "the Ramanujan Machine," as it generates conjectures without proving them, by "imitating" intuition using AI and considerable computer automation.

According to Prof. Kaminer, "Our results are impressive because the computer doesn't care if proving the formula is easy or difficult, and doesn't base the new results on any prior mathematical knowledge, but only on the numbers in mathematical constants. To a large degree, our algorithms work in the same way as Ramanujan himself, who presented results without proof. It's important to point out that the algorithm itself is incapable of proving the conjectures it found - at this point, the task is left to be resolved by human mathematicians."

The conjectures generated by the Technion's Ramanujan Machine have delivered new formulas for well-known mathematical constants such as pi, Euler's number (e), Apéry's constant (which is related to the Riemann zeta function), and the Catalan constant. Surprisingly, the algorithms developed by the Technion researchers succeeded not only in creating known formulas for these famous constants, but in discovering several conjectures that were heretofore unknown. The researchers estimate this algorithm will be able to significantly expedite the generation of mathematical conjectures on fundamental constants and help to identify new relationships between these constants.
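The Machine's published conjectures take the form of polynomial continued fractions, and the core mechanism is numerical matching: evaluate a truncated continued fraction to high precision and check whether its value agrees with an expression in a known constant. The sketch below illustrates only that matching idea; the continued fraction it evaluates is a classical, long-proved formula for pi rather than one of the Machine's own conjectures, and the two-entry table of constants is a stand-in for the much larger search the real system performs.

```python
import math

def polynomial_cf(a, b, depth):
    """Evaluate a(0) + b(1)/(a(1) + b(2)/(a(2) + ...)),
    truncated after `depth` levels, by backward recursion."""
    value = float(a(depth))
    for n in range(depth, 0, -1):
        value = a(n - 1) + b(n) / value
    return value

# Classical formula, used purely for illustration:
#   pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...)))
approx = polynomial_cf(lambda n: 3 if n == 0 else 6,
                       lambda n: (2 * n - 1) ** 2,
                       depth=200_000)

# Toy "matching" step: compare the value against a small table of
# known constants -- a stand-in for the Machine's systematic search.
constants = {"pi": math.pi, "e": math.e}
best = min(constants, key=lambda name: abs(constants[name] - approx))
print(f"value ~ {approx:.12f}, closest known constant: {best}")
```

In the actual search, the integer coefficients defining the polynomials a(n) and b(n) are the unknowns: any truncation whose value matches a constant to many digits of precision is flagged as a candidate conjecture and left for human mathematicians to prove.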

As mentioned, until now such conjectures sprang from rare genius, which is why only a few dozen formulas were found in hundreds of years of research. It took the Technion's Ramanujan Machine just a few hours to discover all the formulas for pi found by Gauss, the "Prince of Mathematics," during a lifetime of work, along with dozens of new formulas that were unknown to Gauss.

According to the researchers, "Similar ideas can in the future lead to the development of mathematical conjectures in all areas of mathematics, and in this way provide a meaningful tool for mathematical research."

The research team has launched a website, RamanujanMachine.com, which is intended to inspire the public to be more involved in the advancement of mathematical research by providing algorithmic tools that will be available to mathematicians and the public at large. Even before the article was published, hundreds of students, experts, and amateur mathematicians had signed up to the website.

The research study started out as an undergraduate project in the Rothschild Scholars Technion Program for Excellence with the participation of Gal Raayoni and George Pisha, and continued as part of the research projects conducted in the Andrew and Erna Viterbi Faculty of Electrical Engineering with the participation of Shahar Gottlieb, Yoav Harris, and Doron Haviv. This is also where the most significant breakthrough was made - an algorithm developed by Shahar Gottlieb - which led to the article's publication in Nature.

Prof. Kaminer adds that the most interesting mathematical discovery made by the Ramanujan Machine's algorithms to date relates to a new algebraic structure concealed within the Catalan constant. The structure was discovered by high school student Yahel Manor, who participated in the project as part of the Alpha Program for science-oriented youth. "Industry colleagues Uri Mendlovic and Yaron Hadad also participated in the study, and contributed greatly to the mathematical and algorithmic concepts that form the foundation of the Ramanujan Machine," Kaminer said. "It is important to emphasize that the entire project was executed on a voluntary basis, received no funding, and participants joined the team out of pure scientific curiosity."

Credit: 
Technion-Israel Institute of Technology