
Advancement creates nanosized, foldable robots

Image: Army-funded researchers create nanosized robots that fold themselves into 3D configurations and could enable locomotion, novel metamaterial design and high-fidelity sensors. (Courtesy Cornell University)

RESEARCH TRIANGLE PARK, N.C. -- Army-funded researchers created nanosized robots that could enable locomotion, novel metamaterial design and high-fidelity sensors.

Cornell University researchers created micron-sized shape memory actuators that allow atomically thin 2D materials to fold themselves into 3D configurations with just a quick jolt of voltage. Once the material is bent, it holds its shape, even after the voltage is removed.

To demonstrate the technology, the team created what is potentially the world's smallest self-folding origami bird.

"The research team is pushing the boundary of how quickly and precisely we can control motion at the micro- and even nano-scales," said Dr. Dean Culver, program manager for Complex Dynamics and Systems at Army Research Office, an element of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. "In addition to paving the way for nano-robots, the scientific advancements from this effort can enable smart material design and interaction with the molecular biological world that can assist the Army like never before."

The research may result in future applications 10 to 20 years from now, he said.

In a peer-reviewed article published in Science Robotics, researchers said this work could make it possible for a million fabricated microscopic robots to be released from a wafer, fold themselves into shape, crawl free, and go about their tasks, even assembling into more complicated structures.

"We humans, our defining characteristic is we've learned how to build complex systems and machines at human scales, and at enormous scales as well," said Prof. Paul McEuen, the John A. Newman Professor of Physical Science at Cornell University. "What we haven't learned how to do is build machines at tiny scales."


This is a step in that basic, fundamental evolution in what humans can do, of learning how to construct machines that are as small as cells, he said.

The researchers' ongoing collaboration has generated a throng of nanoscale machines and components, each seemingly faster, smarter and more elegant than the last.

"We want to have robots that are microscopic but have brains on board," said Prof. Itai Cohen, professor of physics at Cornell University. "That means you need to have appendages that are driven by complementary metal-oxide-semiconductor transistors, basically a computer chip on a robot that's 100 microns on a side. The hard part is making the materials that respond to the CMOS circuits."

The shape memory actuator developed by the research team allows them to drive the materials with voltage and make them hold a bent shape. The machines fold themselves quickly, within 100 milliseconds. They can also flatten and refold themselves thousands of times, and they need only a single volt to be powered to life.

"These are major advances over current state-of-the-art devices," Cohen said. "We're really in a class of our own."


These actuators can bend with a radius of curvature smaller than a micron, the highest curvature of any voltage-driven actuator by an order of magnitude. This flexibility is important because one of the bedrock principles of microscopic robot manufacturing is that robot size is determined by how small the various appendages can be made to fold. The tighter the bends, the smaller the folds, and the tinier the footprint for each machine. It is also important that the robot can hold these bends, which minimizes power consumption, a feature especially advantageous for microscopic robots and machines.

The devices consist of a nanometer-thin layer of platinum capped with a titanium or titanium dioxide film. Several rigid panels of silicon dioxide glass sit atop those layers. When a positive voltage is applied to the actuators, oxygen atoms are driven into the platinum and swap places with platinum atoms.

This process, called oxidation, causes the platinum to expand on one side in the seams between the inert glass panels, which bends the structure into its predesignated shape. The machines can hold that shape even after the voltage is removed because the embedded oxygen atoms bunch up to form a barrier, which prevents them from diffusing out.

By applying a negative voltage to the device, the researchers can remove the oxygen atoms and quickly restore the platinum to its pristine state. And by varying the pattern of the glass panels, and whether the platinum is exposed on the top or bottom, they can create a range of origami structures actuated by mountain and valley folds.

"One thing that's quite remarkable is that these little tiny layers are only about 30 atoms thick, compared to a sheet of paper, which might be 100,000 atoms thick. It's an enormous engineering challenge to figure out how to make something like that have the kind of functionalities we want," McEuen said.

The team is currently working to integrate their shape memory actuators with circuits to make walking robots with foldable legs as well as sheet-like robots that move by undulating forward. These innovations may someday lead to nanorobots that can clean bacterial infection from human tissue, microfactories that can transform manufacturing and robotic surgical instruments that are 10 times smaller than current devices, according to Cohen.

The team is also researching the principles that need to change in order to design, manufacture and operate machines at this scale.

Credit: 
U.S. Army Research Laboratory

Cancer immunotherapy may also treat certain autoimmune diseases

A team of researchers has found that disrupting the interaction between cancer cells and certain immune cells is more effective at killing cancer cells than current immunotherapy treatments.

The findings, which include studies in cell lines and animal models, appeared in JCI Insight and focus on a protein called CD6 as a target for a new approach to immunotherapy.

Over the past two decades, new approaches to cancer treatment have been developed that block immune checkpoints, which are receptors on the surface of certain immune cells, like natural killer T cells. Cancer exploits these immune cells and renders them dormant.

This treatment, called checkpoint inhibitor immunotherapy, gives these immune cells a chance to fight back. Unfortunately, though, patients who become cancer-free are often left with autoimmune conditions that, in some patients, can eventually be fatal.

Only approximately one third of patients with cancer ultimately benefit from currently available immune checkpoint inhibitors.

"I'm interested in how cancer cells interact with certain immune cells to control the immune response to cancer, and how the immune system interacts with organs and tissues to cause autoimmune diseases," says study senior author David Fox, M.D., a rheumatologist and cancer researcher at the University of Michigan Rogel Cancer Center. "How can researchers intervene to alter these interactions and simultaneously destroy cancers while preventing autoimmunity?"

Fox's lab and collaborators at the Cleveland Clinic Research Foundation have been studying the roles of CD6 and the receptors it interacts with, as they relate to autoimmunity, for many years. Previously, the research team created man-made antibodies to CD6 and to CD318, a receptor that CD6 interacts with, that act in the human immune system to fight off cancer cells.

In the new study, the approach proved successful against human breast cancer, lung cancer and prostate cancer cell lines, indicating that the anti-CD6 antibody, known as UMCD6, could be useful in treating a wide range of cancer types.

They also grafted human breast cancer cells into immunocompromised mice and then transferred human immune cells into the animals. In mice given an injection of UMCD6, the tumors almost completely disappeared in just one week, compared with mice not treated with UMCD6.

The findings have implications beyond this first description of a potential new approach against cancer. The ability of UMCD6 to prevent and treat autoimmune diseases makes the potential implications for cancer immunotherapy especially intriguing, the researchers say.

CD6 has long been known to play a role in autoimmunity, since mice that lack CD6 on their immune cells show major suppression of autoimmune diseases.

Prior research has shown that an antibody that binds to CD6 and pulls it from the cell surface to the inside of the cell can effectively treat mouse models of three different human autoimmune diseases: rheumatoid arthritis, an inflammatory disease in which the immune system inflames the membrane that lines the joints; multiple sclerosis, a disease that affects the central nervous system, brain and spinal cord; and uveitis, an eye disease that can cause blindness.

In the new work, mice treated with UMCD6 showed striking reductions in disease activity, autoimmunity and organ damage.

"When UMCD6 binds to CD6 on these specific immune cells, it creates a CD6 cluster that dives into the interior of the cell, allowing no CD6 to remain on the cell surface" says Fox. "This causes the killer T cells to seek out and destroy the cancer cells much more aggressively. At the same time, removing CD6 from the surface of CD4 cells, with the same UMCD6 antibody, controls and limits the activity of the CD4 cells, which are the cells that instigate autoimmune diseases."

"Until now, we haven't been able to get immune cells to kill cancer cells without triggering an immune response that can be harmful to patients," he adds. "What we've created here completely challenges prevailing concepts."

So how close are researchers to studying UMCD6 in humans?

There are ongoing studies of anti-CD6 antibodies in India, where an anti-CD6 antibody has been approved for the treatment of psoriasis. However, in the United States, substantial research remains to be done to translate the discovery from laboratory models to human clinical trials.

"If UMCD6 is proven to successfully treat cancer and prevent recurrences, this could overcome the major current limitations to checkpoint inhibition success in human cancer immunotherapy," says Fox. "I look forward to seeing what lies ahead in this field of research."

Credit: 
Michigan Medicine - University of Michigan

Large-scale study finds AI-powered COVID-19 solution by RADLogics reduces turnaround time

MOSCOW, RUSSIA -- Moscow Center for Diagnostics & Telemedicine and RADLogics shared the results of a large-scale study (Moscow Experiment on the Computer Vision for the Analysis of Medical Images - mosmed.ai, NCT04489992) conducted by the Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department. The clinical research found that the introduction of RADLogics' AI-Powered solution into radiology workflow to analyze Chest-CT scans during the COVID-19 pandemic reduced report turnaround time by an average of 30 percent, which is equivalent to 7 minutes per case.

Presented by Dr. Tatiana Logunova, MD, of the Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Health Care Department during the recent ECR 2021 conference, the extensive research included a total of 128,350 Chest-CT scans, of which 36,358 were processed by RADLogics AI-Powered COVID-19 solution, reported by 570 participating radiologists at over 130 hospitals and outpatient clinics in Moscow.

"Early on in the pandemic, it was clear to us that COVID-19 required new healthcare management approaches, and effective clinical management depends more on disease severity than on the virus identification," said Dr. Sergey Morozov, MD, PhD, MPH, who serves as CEO of Moscow Diagnostics and Telemedicine Center. "As a result, the aim of our research was to determine the impact of the introduction of AI-services analyzing Chest-CTs for COVID-19 related findings on the radiologists' workflow and performance. In addition to finding that the integration of AI did not have a negative effect on the interpretation or report accuracy, our researchers found a significant improvement in productivity and report turnaround time by the expert radiologists that leveraged AI."

The study was conducted in two separate phases, the first from April 30 to June 18, 2020, and the second from June 18 to August 31, 2020. The study found that report turnaround time was significantly shorter for all time periods in the group of radiologists with AI results seamlessly integrated into their current workflow, compared to a group without AI results. In addition, in the shift between the two study periods, additional clinical parameters were added to the standard of care, including a disease severity score. The added information increased the workload on radiologists, raising the average read time by more than 25 percent. In response, the RADLogics AI-Powered COVID-19 solution was enhanced to support the new clinical requirements. Results indicate that the augmented AI solution, including all clinical measurements and severity scoring, was able to maintain the overall productivity gain of 30 percent.

"We applaud this significant real-world research by Dr. Morozov and his team, who were on the frontline of Moscow's successful fight against the COVID-19 pandemic while demonstrating the value of embracing new AI technologies to aid in these efforts," said Moshe Becker, CEO and Co-Founder of RADLogics. "This study - first of its kind in its scale - demonstrates the full potential of AI as a tool to augment radiologists to increase throughput, improve efficiency and reduce time-to-treatment. This research provides large-scale clinical validation to an earlier academic study by UCLA that was published in Academic Radiology, which conducted a time-motion study using our AI-powered solution to measure the impact of our solution on radiologists' productivity that found out using our solution saved up to 44 percent in radiologists' reading time."

Since the start of the pandemic, RADLogics has responded with the deployment of the company's AI-Powered medical image analysis solution worldwide. Designed for easy installation and integration both on-site and via the cloud, RADLogics' algorithms are supported by the company's patented software platform that enables rapid deployment at multiple hospitals, and seamless integration with existing workflows. In accordance with FDA guidance for imaging systems and software to address the COVID-19 public health emergency, RADLogics has made its FDA cleared CT and X-ray solutions available to hospitals and healthcare systems throughout the U.S. for patient triage and management. All the company's AI-Powered solutions are available worldwide through major OEM distribution partners including Nuance via the AI Marketplace in the U.S. market.

"In addition to the sheer scale of this research, it is important to note the demonstrated ability of our AI-powered solution to quickly adapt to the change in clinical requirements and maintain the overall performance as demonstrated in the second phase of the study," added Becker. "In the near-term, responsive and scalable AI algorithms could play a critical role as healthcare systems across the world contend with potential coronavirus surges as new variants spread - not to mention the tremendous burnout and economic pressures across the healthcare sector. In the long-term, this groundbreaking research also illustrates the tremendous benefit of adopting robust AI platforms that can be deployed rapidly at scale and seamlessly integrated into existing workflows to augment radiology teams."

Dr. Morozov's research team from the Moscow Center for Diagnostics & Telemedicine Center included Drs. T. Logunova, A. E. Andreychenko, V. Klyashtorny, K. M. Arzamasov, and A. Vladzymyrskyy. The presentation entitled "Artificial intelligence services impact on radiologist's performance in the context of the COVID-19 pandemic" is available for ECR 2021 registrants at https://connect.myesr.org/course/artificial-intelligence-ai-and-covid-19/.

Credit: 
Center of Diagnostics and Telemedicine

Living a stress-free life may have benefits, but also a downside

UNIVERSITY PARK, Pa. -- Stress is a universal human experience that almost everyone deals with from time to time. But a new study found that some people report feeling no stress at all, and that there may be downsides to not experiencing stress.

The researchers found that people who reported experiencing no stressors were more likely to experience better daily well-being and fewer chronic health conditions. However, they were also more likely to have lower cognitive function.

David M. Almeida, professor of human development and family studies at Penn State, said the study suggests that small, daily stressors could potentially benefit the brain, despite being an inconvenience.

"It's possible that experiencing stressors creates opportunities for you to solve a problem, for example, maybe fixing your computer that has suddenly broken down before an important Zoom meeting," Almeida said. "So experiencing these stressors may not be pleasant but they may force you to solve a problem, and this might actually be good for cognitive functioning, especially as we grow older."

According to the researchers, a large number of previous studies have linked stress with a greater risk for many negative outcomes, like chronic illness or worse emotional well-being. But while it may make sense to assume that the less stress someone experiences, the healthier they will be, Almeida said little research has explored that assumption.

"The assumption has always been that stress is bad," Almeida said. "I took a step back and thought, what about the people who report never having stress? My previous work has focused on people who have higher versus lower levels of stress, but I'd never questioned what it looks like if people experience no stress. Are they the healthiest of all?"

The researchers used data from 2,711 participants for the study. Prior to the start of the study, the participants completed a short cognition test. Then, the participants were interviewed each night for eight consecutive nights, and answered questions about their mood, chronic conditions they may have, their physical symptoms -- such as headaches, coughs or sore throats -- and what they did during that day.

The participants also reported the number of stressors -- like disagreements with friends and family or a problem at work -- and the number of positive experiences, such as sharing a laugh with someone at home or work, they had experienced in the previous 24 hours.

After analyzing the data, the researchers found that there did appear to be benefits for those who reported no stressors throughout the study, about 10 percent of the participants. These participants were less likely to have chronic health conditions and experienced better moods throughout the day.

However, those who reported no stressors also scored lower on the cognition test, with the difference equaling more than eight years of aging. They were also less likely to report giving or receiving emotional support, and less likely to experience positive things happening throughout the day.

"I think there's an assumption that negative events and positive events are these polar opposites, but in reality they're correlated," Almeida said. "But really, I think experiencing small daily stressors like having an argument with somebody or having your computer break down or maybe being stuck in traffic, I think they might be a marker for someone who has a busy and maybe full life. Having some stress is just an indicator that you are engaged in life."

Almeida said the findings -- recently published in the journal Emotion -- suggest that it may not be as important to avoid stress as it is to change how you respond to stress.

"Stressors are events that create challenges in our lives," Almeida said. "And I think experiencing stressors is part of life. There could be potential benefits to that. I think what's important is how people respond to stressors. Respond to a stressor by being upset and worried is more unhealthy than the number of stressors you encounter."

Credit: 
Penn State

How RNA editing affects the immune system

Three University of Colorado Cancer Center researchers are part of a team that recently published a paper offering new insight into how the immune system relates to cancer. Quentin Vicens, PhD, Jeffrey Kieft, PhD, and Beat Vögeli, PhD, are authors on the paper, which looks at how an enzyme called ADAR1 operates in pathways associated with cancer.

"In a cell, ADAR1 edits native RNA -- or self-RNA -- so that the cell recognizes it as its own. It's a key protection against autoimmune disorders," Kieft says. "But if a virus infects, viral RNA isn't edited by ADAR1, so the cell can recognize that and react. The cell knows it has foreign RNA, and it activates immune responses to fight off that infection."

For their paper published last month in the journal Nature Communications, Kieft, Vögeli, Vicens, and the rest of the team -- including Parker Nichols, a graduate student in the Structural Biology and Biochemistry program in the CU School of Medicine who works jointly in the Kieft and Vögeli labs -- looked at where specifically the ADAR1 binds to RNA to perform the editing process. They already knew a domain of ADAR1 known as Z-alpha binds to a form of RNA called Z-RNA, but they found that Z-alpha ADAR1 can bind to other RNA forms as well.

"The team asked, 'How are all these locations in RNA being recognized by Z-alpha if they supposedly don't form Z-RNA?'" Kieft says. "One of the take-home messages is that other forms of RNA can bind to Z-alpha ADAR1 and can even partially form Z-RNA. That was a surprise because it shows that RNA can form this specific Z structure in places we didn't recognize before."

The team is now proposing a model for how Z-alpha ADAR1 is able to bind to different types of RNA. It's an important finding in cancer research because of the role of ADAR1 in cancer regulation. A normally functioning immune system oftentimes can detect cancerous cells as being dangerous and then eliminate them, but if there's too much ADAR1 editing happening, a cell could be tamping down the immune response in an effort to protect itself.

"In a lot of cancers, there is upregulation of ADAR1; it is doing more than it should," Kieft says. "The excess ADAR1 presumably is leading to more RNA editing than is normal. This is going to misregulate things,affecting specific regions of RNA or types of RNA. The excess editing is going to throw off the normal immune response, but it probably has a lot of other affects in the cell as well. Cancer is a disease where gene regulation has gone awry, so if an important regulatory pathway like editing by ADAR has gone haywire, that can contribute to the cancer."

Knowing all the targets of ADAR1 in a cell is also a step toward more effective therapies, Kieft says. If researchers understand the pathways, they may be able to find a way to disrupt the overactive editing process and boost the immune response. It's a finding applicable to many other diseases as well -- Vögeli says since the paper was published, the researchers have heard from other scientists around the country interested in ADAR1.

"We have gotten a lot of feedback on the paper," he says. "There is a lot of interest in this field right now, and other people are interested in how they could use our structural information."

Vögeli and Vicens are now organizing a meeting focused on ADAR1 function and putting together special issues of the journals Molecules and International Journal of Molecular Sciences.

Vicens says the research project also illustrates the importance of collaborative work and being open to new directions. "I basically brought a new project and direction to the Kieft lab when I joined," Vicens says. "Both labs were open to supporting it intellectually and financially, and the resultant team effort enabled research that would not otherwise have been done."

Credit: 
University of Colorado Anschutz Medical Campus

Looking at optical Fano resonances under a new light

Image: Conventional Fano-resonant metasurfaces can only reflect light with a specific frequency, a planar wavefront, and linear polarization; the newly proposed metasurfaces can be tailored to be reflective to light with an arbitrary wavefront shape and circular polarization. (Overvig and Alù, doi: 10.1117/1.AP.3.2.026002)

In 1961, physicist Ugo Fano provided the first theoretical explanation for an anomalous asymmetry observed in the spectral profiles of noble gases. He put forth an influential interpretation of this phenomenon, now called "Fano resonance": if a discrete excited state of a system falls within the energy range of a continuum of other possible states, the two can interfere with each other and give rise to abnormal peaks and dips in the system's frequency response.
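For reference, the release does not spell out the mathematics, but the textbook Fano line shape that captures this peak-and-dip profile is

    F(ε) = (q + ε)² / (1 + ε²),    with ε = 2(E - E_res) / Γ,

where E_res is the resonance energy, Γ its width, and q the asymmetry parameter set by how strongly the discrete state couples to the continuum; as q grows large, the profile approaches an ordinary symmetric Lorentzian peak.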

Though Fano resonance can occur in various physical systems, recent progress in metasurfaces and nanotechnology has drawn attention to this phenomenon as a potentially powerful tool in optics. The conventional understanding of optical Fano resonances is that they are selective in the momentum-frequency domain; in other words, they can only be excited by planar light waves with specific frequencies and incidence angles, thus limiting their applicability. But could this picture actually be incomplete?

In a recent study published in Advanced Photonics, scientists Adam Overvig and Andrea Alù from the Advanced Science Research Center, City University of New York, USA, investigated Fano-resonant metasurfaces and discovered new properties that could unlock their true potential. Overvig and Alù went beyond the periodic metasurfaces conventionally used for eliciting Fano resonances, proving that strict periodicity is not actually required for the phenomenon; as a result, existing metasurfaces account for only a specific subset of the Fano resonances that can emerge in optical systems.

A general example is useful to get the overall gist of the study. A conventional, periodic Fano-resonant metasurface offers strong polarization, spectral, and angular selectivity. This means that the system barely reflects light of any given frequency, incidence angle, and polarization unless these specifically match those of its Fano resonance (in which case, perfect reflection occurs). As stated before, another important aspect of such periodic metasurfaces is that they can only undergo Fano resonances if the incident light waves have a planar wavefront. In stark contrast with these limitations, the researchers proved that it is possible to craft a nonperiodic metasurface that achieves perfect reflection, curiously accompanied by phase conjugation of the incoming fields, for light waves with an arbitrarily tailored wavefront shape.

Overvig and Alù mathematically demonstrated that these metasurfaces can be built by strategically introducing nonperiodic perturbations in otherwise highly periodic photonic crystal slabs. Their work sheds light on yet-unexplored aspects of optical Fano resonance, extending the concept beyond conventional understanding.

The proposed strategy has multiple relevant applications, as summarized by Alù: "Our finding generalizes the concept of a Fano resonance, showing that it is not necessarily associated with a planar wavefront. In practice, this enables a new class of optical devices that are transparent and weakly interacting with the incoming light for most excitations but are somehow triggered by a specific wavefront form, frequency, and polarization, which can be selected by design. Only under this specific excitation condition, the device becomes highly reflective and sends back a time-reversed version of the specific input."

He elaborates on the functionality of such devices: "An example can be a transparent surface that can be illuminated from any angle and any frequency and polarization, and it is always transparent. However, if you illuminate it with a localized point source placed at a specific location only, with the precise frequency and polarization, all the input energy is reflected and focused back at the location of the source."

The introduced concept of generalized Fano resonances could pave the way for sophisticated metamaterials that manipulate light in novel ways, with exciting applications in a wide range of scenarios, not limited to optics but also extendable to acoustics and other wave phenomena.

Credit: 
SPIE--International Society for Optics and Photonics

Technique based on artificial intelligence permits automation of crop seed analysis

In Brazil, researchers affiliated with the Center for Nuclear Energy in Agriculture (CENA) and the Luiz de Queiroz College of Agriculture (ESALQ), both part of the University of São Paulo (USP), have developed a methodology based on artificial intelligence to automate and streamline seed quality analysis, a process required by law and currently done manually by analysts accredited with the Ministry of Agriculture.

The group used light-based technology like that deployed in plant and cosmetics analysis to acquire images of the seeds. They then turned to machine learning to automate the image interpretation process, minimizing some of the difficulties of conventional methods. For example, for many species, optical imaging technology can be applied to an entire batch of seeds rather than just samples, as is the case currently. Furthermore, the technique is non-invasive and does not destroy the products analyzed or generate residues.

The light-based techniques consisted of chlorophyll fluorescence and multispectral imaging. Among plants that are relevant as both crops and experimental models, the researchers chose tomatoes and carrots produced in different countries and seasons and submitted to different storage conditions. They used seeds of the Gaucho and Tyna commercial tomato varieties produced in Brazil and the US, and seeds of the Brasilia and Francine carrot varieties produced in Brazil, Italy, and Chile.

The choice was based on the economic importance of these food crops, for which world demand is high and rising, and on the difficulties faced by growers in collecting their seeds. In both tomatoes and carrots, the ripening process is not uniform because the plants flower continuously and seed production is non-synchronous, so that seed lots may contain a mixture of immature and mature seeds. The presence of immature seeds is not easily detected by visual methods, and techniques based on machine vision can minimize this problem.

The researchers compared the results of their non-destructive analysis with those of traditional germination and vigor tests, which are destructive, time-consuming, and labor-intensive. In the germination test, seed analysts separate samples, sow them to germinate in favorable temperature, water, and oxygen conditions, and verify the final quantity of normal seedlings produced in accordance with the rules established by the Ministry of Agriculture. Vigor tests are complementary and more sophisticated. The most common are based on the seed's response to stress and seedling growth parameters.

Besides the difficulties mentioned, traditional methods are time-consuming. In the case of tomatoes and carrots, for example, it can take up to two weeks to obtain results, which are also largely subjective, depending on the analyst's interpretation. "Our proposal is to automate the process as much as possible using chlorophyll fluorescence and multispectral imaging to analyze seed quality. This will avoid all the usual bottlenecks," said Clíssia Barboza da Silva, a researcher at CENA-USP and one of the authors of an article on the study published in Frontiers in Plant Science.

Silva is the principal investigator for the project supported by São Paulo Research Foundation - FAPESP. The lead author of the article is Patrícia Galletti, who conducted the study as part of her master's research and won the Best Poster Award in 2019 at the 7th Seed Congress of the Americas, where she presented partial results of the project.

Chlorophyll as a marker of quality

Chlorophyll is present in seeds, where it supplies energy for the storage of nutrients needed for development (lipids, proteins, and carbohydrates). Once it has fulfilled this function, the chlorophyll breaks down. "However, if the seed doesn't complete the maturation process, this chlorophyll remains inside it. The less residual chlorophyll, the more advanced the maturation process and the more plentiful and higher-quality the nutrients in the seed. If there's a lot of chlorophyll, the seed is immature and its quality is poor," Silva said.

If light at a specific wavelength is shone on the chlorophyll in a seed, it does not transfer this energy to another molecule but instead re-emits the light at another wavelength, meaning that it fluoresces. This fluorescence can be measured, she explained. Red light can be used to excite chlorophyll and capture the fluorescence using a device that converts it into an electrical signal, producing an image comprising gray, black, and white pixels. The lighter areas correspond to higher levels of chlorophyll, indicating that the seed is immature and unlikely to germinate.
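For readers curious how such an image becomes a per-seed verdict, here is a minimal, hypothetical Python sketch (not the CENA/ESALQ pipeline): it labels each seed in a gray-scale fluorescence frame and flags seeds whose mean brightness, a proxy for residual chlorophyll, exceeds a cutoff. The file name, the zero-valued background, and the threshold are illustrative assumptions.

    # Hypothetical sketch: grade seeds by residual chlorophyll fluorescence.
    # Brighter pixels = more chlorophyll = less mature seed (see text above).
    from skimage import io, measure

    image = io.imread("seeds_fluorescence.png", as_gray=True)  # values in [0, 1]
    mask = image > 0              # assume the device support images as zero-valued pixels
    labels = measure.label(mask)  # one connected component per seed

    CHLOROPHYLL_CUTOFF = 0.4      # illustrative; would be calibrated against germination tests
    for region in measure.regionprops(labels, intensity_image=image):
        verdict = "immature" if region.mean_intensity > CHLOROPHYLL_CUTOFF else "mature"
        print(f"seed {region.label}: mean fluorescence {region.mean_intensity:.2f} -> {verdict}")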

Artificial intelligence

In multispectral imaging, LEDs (light-emitting diodes) emit light in the visible portion of the spectrum as well as non-visible light (UV and near-infrared). To analyze seed quality based on reflectance, the researchers used 19 wavelengths and compared the results with quality assessment data obtained by traditional methods. The best results were obtained using near-infrared in the case of carrot seeds and UV in the case of tomato seeds.

Seeds contain proteins, lipids and sugars that absorb part of the light emitted by the LEDs and reflect the rest. The reflected light is captured by a multispectral camera, and the image captured is processed to separate the seeds from the support in the device, which corresponds to black pixels with zero value, while the seeds are gray-scale. The values of the pixels in the image of a seed correspond to its chemical composition.

"We don't work with an average result for a sample. We perform individualized extraction for each seed," Silva said. "The larger the amount of a given nutrient the seed contains, the more light of a specific wavelength it absorbs so that less is reflected. A seed with a smaller nutrient content contains fewer light-absorbing molecules. This means its reflectance is higher, although this varies according to its components, which behave differently depending on the light wavelength used."

An algorithm identifies the wavelength that obtains the best result. The process provides information about the seed's chemical composition, from which its quality can be inferred.

For the researchers, it was not enough to reach the imaging stage, as this is still an operation that requires human observation. "We then deployed chemometrics, a set of statistical and mathematical methods used to classify materials chemically," Silva said. "The idea was that the equipment should classify quality on the basis of the image it captured." The methods used by the scientists in this study are widely used in medicine and the food industry.

Next, they leveraged machine learning to test the models created using chemometrics. "We taught the model to identify high-quality and low-quality seeds. We used 70% of our data to train the model, and used the remaining 30% for validation," Silva said. Quality classification accuracy ranged from 86% to 95% in the case of tomato seeds, and from 88% to 97% in the case of carrot seeds.
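To make the workflow concrete, the sketch below reproduces this kind of 70/30 split on synthetic data; the stand-in feature matrix plays the role of per-seed reflectance at the 19 wavelengths, and the classifier choice (linear discriminant analysis, a common chemometric method) is our assumption, since the paper's exact models are not detailed here.

    # Minimal sketch of a 70% train / 30% validation quality classifier.
    # X stands in for per-seed reflectance spectra; y for quality labels.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 19))                   # 500 seeds x 19 wavelengths (synthetic)
    y = (X[:, 7] + 0.5 * X[:, 12] > 0).astype(int)   # synthetic high/low quality labels

    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.30, random_state=42, stratify=y)

    model = LinearDiscriminantAnalysis()             # a common chemometric classifier
    model.fit(X_train, y_train)
    print(f"validation accuracy: {accuracy_score(y_val, model.predict(X_val)):.1%}")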

The two main techniques were both accurate and time-saving, given the speed of image capture. The chlorophyll fluorescence instrument captured one image per second, while the multispectral imaging analyzer processed 19 images in five seconds.

Unexpected results

An unexpected result produced in the course of the project proved highly important. Chlorophyll fluorescence and multispectral imaging are also efficient techniques for plant variety screening, an essential part of seed lot evaluation to avoid economic losses. "Growers buy seeds with the expectation of a certain crop yield, but production will be affected if seeds with different genetic characteristics aren't properly separated," Silva said.

Screening is currently done by analysts trained in the skills needed to grade seeds by color, shape, and size, as well as molecular markers where possible. In the study, both techniques proved efficient to separate carrot varieties but multispectral imaging was unsatisfactory in the case of tomato varieties.

"The study produced novel results with regard to the use of fluorescence to screen varieties," Silva said. "We found no prior research in which fluorescence was used for this purpose. Some studies show multispectral imaging to be efficient for this purpose, but not with the instrument we used."

Instrument sharing

A good way to transfer the knowledge produced by the research to the productive sector, Silva said, would be to have firms develop the equipment for sale to seed producers. "It would be possible to use the results of our research to develop an instrument that used only UV light to characterize tomato seed quality and bring it to market, for example," she surmised.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Solving 'barren plateaus' is the key to quantum machine learning

Image: A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is run. Researchers at Los Alamos National Laboratory have developed theorems to prove that any given algorithm will avoid a barren plateau as it scales up to run on a quantum computer. (Los Alamos National Laboratory)

LOS ALAMOS, N.M., March 19, 2021--Many machine learning algorithms on quantum computers suffer from the dreaded "barren plateau" of unsolvability, where they run into dead ends on optimization problems. This challenge had been relatively unstudied--until now. Rigorous theoretical work has established theorems that guarantee whether a given machine learning algorithm will work as it scales up on larger computers.

"The work solves a key problem of useability for quantum machine learning. We rigorously proved the conditions under which certain architectures of variational quantum algorithms will or will not have barren plateaus as they are scaled up," said Marco Cerezo, lead author on the paper published in Nature Communications today by a Los Alamos National Laboratory team. Cerezo is a post doc researching quantum information theory at Los Alamos. "With our theorems, you can guarantee that the architecture will be scalable to quantum computers with a large number of qubits."

"Usually the approach has been to run an optimization and see if it works, and that was leading to fatigue among researchers in the field," said Patrick Coles, a coauthor of the study. Establishing mathematical theorems and deriving first principles takes the guesswork out of developing algorithms.

The Los Alamos team used the common hybrid approach for variational quantum algorithms, training and optimizing the parameters on a classical computer and evaluating the algorithm's cost function, or the measure of the algorithm's success, on a quantum computer.

Machine learning algorithms translate an optimization task--say, finding the shortest route for a traveling salesperson through several cities--into a cost function, said coauthor Lukasz Cincio. That's a mathematical description of a function that will be minimized. The function reaches its minimum value only if you solve the problem.

Most quantum variational algorithms initiate their search randomly and evaluate the cost function globally across every qubit, which often leads to a barren plateau.

"We were able to prove that, if you choose a cost function that looks locally at each individual qubit, then we guarantee that the scaling won't result in an impossibly steep curve of time versus system size, and therefore can be trained," Coles said.

A quantum variational algorithm sets up a problem-solving landscape where the peaks represent the high energy points of the system, or problem, and the valleys are the low energy values. The answer lies in the deepest valley. That's the ground state, represented by the minimized cost function. To find the solution, the algorithm trains itself about the landscape, thereby navigating to the low spot.

"People have been proposing quantum neural networks and benchmarking them by doing small-scale simulations of 10s (or fewer) few qubits," Cerezo said. "The trouble is, you won't see the barren plateau with a small number of qubits, but when you try to scale up to more qubits, it appears. Then the algorithm has to be reworked for a larger quantum computer."

A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is run. In that situation, the algorithm can't find the downward slope in what appears to be a featureless landscape and there's no clear path to the energy minimum. Lacking landscape features, the machine learning can't train itself to find the solution.

"If you have a barren plateau, all hope of quantum speedup or quantum advantage is lost," Cerezo said.

The Los Alamos team's breakthrough takes an important step toward quantum advantage, when a quantum computer performs a task that would take infinitely long on a classical computer. Achieving quantum advantage hinges in the short term on scaling up variational quantum algorithms. These algorithms have the potential to solve practical problems when quantum computers of 100 qubits or more become available--hopefully soon. Quantum computers currently max out at 65 qubits. A qubit is the basic unit of information in a quantum computer, as bits are in a classical digital computer.

"The hottest topic in noisy intermediate-scale quantum computers is variational quantum algorithms, or quantum machine learning and quantum neural networks," Coles said. "They have been proposed for applications from solving the structure of a molecule in chemistry to simulating the dynamics of atoms and molecules and factoring numbers."

Credit: 
DOE/Los Alamos National Laboratory

Carbon uptake in regrowing Amazon forest threatened by climate and human disturbance

Image: Secondary forest in the Tapajós region of the Brazilian Amazon. (Ricardo Dalagnol)

Large areas of forest regrowing in the Amazon to help reduce carbon dioxide in the atmosphere are being limited by climate and human activity.

The forests, which naturally regrow on land previously deforested for agriculture and now abandoned, are developing at different speeds. Researchers at the University of Bristol have found a link between slower tree growth and land previously scorched by fire.

The findings were published today [date] in Nature Communications, and suggest a need for better protection of these forests if they are to help mitigate the effects of climate change.

Global forests are expected to contribute a quarter of pledged mitigation under the 2015 Paris Agreement. Many countries pledged in their Nationally Determined Contribution (NDC) to restore and reforest millions of hectares of land to help achieve the goals of the Paris Agreement. Until recently, this included Brazil, which in 2015 vowed to restore and reforest 12 million hectares, an area approximately equal to that of England.

Part of this reforestation can be achieved through the natural regrowth of secondary forests, which already occupy about 20% of deforested land in the Amazon. Understanding how the regrowth is affected by the environment and humans will improve estimates of the climate mitigation potential in the decade ahead that the United Nations has called the "Decade of Ecosystem Restoration".

Viola Heinrich, lead author and PhD student from the School of Geographical Sciences at the University of Bristol, said, "Our results show the strong effects of key climate and human factors on regrowth, stressing the need to safeguard and expand secondary forest areas if they are to have any significant role in the fight against climate change."

Annually, tropical secondary forests, which are growing on used land, can absorb carbon up to 11 times faster than old-growth forests. However, there are many driving factors that can influence the spatial patterns of regrowth rate, such as when forest land is burned either to clear for agriculture or when fire elsewhere has spread.

The research was led by researchers at the University of Bristol and Brazil's National Institute for Space Research (INPE) and included scientists from the Universities of Cardiff and Exeter, UK.

Scientists used a combination of satellite-derived images that detect changes in forest cover over time to identify secondary forest areas and their ages as well as satellite data that can monitor the aboveground carbon, environmental factors and human activity.

They found that the impact of disturbances such as fire and repeated deforestations prior to regrowth reduced the regrowth rate by 20% to 55% across different areas of the Amazon.
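As a purely illustrative sketch of what such regrowth models express, the Python snippet below applies the reported 20% to 55% slowdown as a simple penalty on the growth rate of a Chapman-Richards-style curve; the functional form and every parameter value are our assumptions, not the fitted model from the paper.

    # Toy regrowth curve for aboveground carbon (AGC), with a disturbance penalty.
    import numpy as np

    def agc(age_years, A=120.0, k=0.03, theta=1.5, disturbance_penalty=0.0):
        """AGC (Mg C/ha) vs. stand age; penalty = fractional growth-rate reduction."""
        k_eff = k * (1.0 - disturbance_penalty)
        return A * (1.0 - np.exp(-k_eff * age_years)) ** theta

    for age in (5, 10, 20, 40):
        print(f"age {age:2d} yr: undisturbed {agc(age):5.1f}, "
              f"burned/re-cleared {agc(age, disturbance_penalty=0.4):5.1f} Mg C/ha")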

"The regrowth models we developed in this study will be useful for scientists, forest managers and policy makers, highlighting the regions that have the greatest regrowth potential." Said Heinrich.

The research team also calculated the contribution of Amazonian secondary forests to Brazil's net emissions reduction target and found that, by preserving the current area, secondary forests can contribute 6% of the target. However, this value falls rapidly to less than 1% if only secondary forests older than 20 years are preserved.

In December 2020, Brazil amended its pledge (NDC) under the Paris Agreement such that there is now no mention of the 12 million hectares of forest restoration or eliminating illegal deforestation as was pledged in Brazil's original NDC target in 2015.

Co-author Dr Jo House, of the University of Bristol, said: "The findings in our study highlight the carbon benefits of forest regrowth and the negative impact of human action if these forests are not protected. In the run up to the 26th Conference of the Parties, this is a time when countries should be raising their climate ambitions for protecting and restoring forest ecosystems, not lowering them as Brazil seems to have done."

Co-author Dr Luiz Aragão, of the National Institute for Space Research in Brazil, added: "Across the tropics, several areas could be used to regrow forests to remove CO2 from the atmosphere. Brazil is likely the tropical country with the largest potential for this kind of nature-based solution, which can generate income for landowners, re-establish ecosystem services and place the country once again as a global leader in the fight against climate change."

The team will now focus their next steps on applying their methods to estimate the regrowth of secondary forest across the tropics.

Credit: 
University of Bristol

New findings about immune system reaction to malaria and sickle cell disease

Scientists have discovered in more detail than ever before how the human body's immune system reacts to malaria and sickle cell disease.

The researchers from the universities of Aberdeen, Edinburgh, Exeter and Imperial College, London have published their findings in Nature Communications.

Every year there are around 200 million cases of malaria, causing some 400,000 deaths.

Because it confers resistance to malaria, the sickle cell mutation has spread widely, especially in people from Africa.

But if a child inherits a double dose of the gene - from both mother and father - they will develop sickle cell disease. Around 20,000 children are born with sickle cell disease every year and it is now the commonest single-gene disorder among the UK population. Despite this, much about it remains poorly understood.

The researchers discovered that sugars called mannoses are expressed on the surfaces both of red blood cells infected with malaria parasites and of those affected by sickle cell disease. The mannoses cause both infected cells and sickle cells to be eaten in the spleen.

The study was funded by the Wellcome Trust and the University of Aberdeen Development Trust.

The team hope the findings will eventually help inform new treatments for malaria.

Lead investigator, Professor Mark Vickers, Chair in Applied Medicine at the University of Aberdeen, said: "Malaria and sickle cell disease are responsible for hundreds of thousands of deaths a year but many aspects of how the body's immune system reacts to these diseases are not fully understood.

"This collaborative project has revealed more than ever before about the chains of events that occur in these diseases and can hopefully contribute to research into new treatments."

Co-author Professor Gordon Brown, at the University of Exeter, said: "This is a truly seminal discovery that sheds light on how abnormal red blood cells are recognised and cleared by the immune system, with exciting implications for future therapeutic approaches to treat a range of diseases, including the malaria and sickle cell disease described here."

Credit: 
University of Exeter

A look at the last millennium shows: Droughts in Germany could become more extreme

In the future, droughts could be even more severe than those that struck parts of Germany in 2018. An analysis of climate data from the last millennium shows that several factors have to coincide to produce a megadrought: not only rising temperatures, but also the amount of solar radiation, as well as certain meteorological and ocean-circulation conditions in the North Atlantic, like those expected to arise in the future. A group of researchers led by the Alfred Wegener Institute have just released their findings in the journal Communications Earth & Environment.

Despite the precipitation this winter, which in some cases was considerable, many parts of Germany still haven't recovered from the past three, extremely dry years; the forests and other vegetation are suffering as a result. Some have speculated that 2018 was the driest year in modern history. Yet a look at the climate data from the last millennium shows that this "record-breaking" year, just like the extremely dry years 2003 and 2015, was within the limits of natural variability. There were periods of extreme drought between the years 1400 and 1480, and between 1770 and 1840. However, they affected very different landscapes, with a much higher percentage of natural mixed forests, riparian zones and wetlands.

"We have to be prepared for the fact that, because of climate change, in the future Germany might experience extreme droughts that do enormous damage to our modern agriculture and forestry," says Dr Monica Ionita-Scholz from the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI). She and her team analysed historical data from the last millennium in order to reconstruct droughts. "In our study, for the first time we sought to identify the driving factors for droughts in Central Europe in the last millennium," the climate expert explains. To do so, they used e.g. historical records on temperature, precipitation and the water levels in rivers, and analysed currents in the North Atlantic and atmospheric pressure patterns - two key factors that determine the weather. The study's conclusion: there have repeatedly been megadroughts in Central Europe, whenever several factors coincided. The periods of extreme drought in the last millennium were characterised by a weak or negative phase of the Atlantic Multidecadal Oscillation, reduced solar radiation, and frequently occurring, stable atmospheric pressure systems over the central North Atlantic and North Sea.

"Right now, most forecasts for future drought scenarios are concentrating on the rising temperatures produced by anthropogenic climate change, together with aridity due to pronounced evaporation," says Ionita-Scholz. "But if we want to prepare for the future, it's imperative that we also take into account further natural and anthropogenic factors in our calculations." The general consensus of the scientific community is that ocean circulation in the North Atlantic will likely weaken. If this comes to pass, and there is also a phase of reduced solar activity due to natural variability, the result could be decades-long megadroughts like those experienced in the last millennium - posing tremendous social and political challenges.

Credit: 
Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research

Reactive boride infusion stabilizes Ni-rich cathodes for lithium-ion batteries

Image: Distinguished Professor Jaephil Cho and his research team in the School of Energy and Chemical Engineering at UNIST. (UNIST)

A new coating for lithium-ion batteries (LIBs), developed by scientists at UNIST, promises extended driving range for future electric vehicles (EVs). The coating, described in a paper published in the journal Nature Energy, gives LIBs improved cycling stability even after being charged and discharged more than 500 times. As a result, the development of EV batteries that can drive longer distances on a single charge has gained considerable momentum.

Distinguished Professor Jaephil Cho and his research team in the School of Energy and Chemical Engineering at UNIST unveiled a new coating technology that can greatly suppress intergranular cracking, chemical side reactions, and impedance growth. According to the research team, this is extraordinary given that the process happens at room temperature and does not alter the crystalline bulk of the secondary particle, yet produces drastic changes in the grain boundaries (GBs) as they are infiltrated by reactive wetting.

Nickel-rich (Ni-rich) materials are considered promising cathode materials, as they can deliver higher capacity at lower cost. However, conventional Ni-rich cathodes have been limited by short lifespans, caused by microcracking and side reactions with the electrolyte during repeated charging and discharging. For this reason, to prevent electrolyte degradation, a protective coating is currently applied to the surfaces of these materials using heat treatment at 700°C or higher, a process troubled by poor performance and high production costs.

In the study, the research team presented a room-temperature synthesis route that achieves full surface coverage of secondary particles and facile infusion into grain boundaries, offering a complete 'coating-plus-infusion' strategy. Through this method, they constructed a high-quality cobalt boride (CoxB) metallic glass infusion of NCM secondary particles by reactive wetting. Under the strong driving force of an interfacial chemical reaction, the nanoscale CoxB metallic glass not only completely wraps around the secondary particle surfaces but also infuses into the grain boundaries between primary particles. Consequently, it offers superior electrochemical performance and better safety by mitigating the entwined cathode-side intergranular stress-corrosion cracking, microstructural degradation, and side reactions, as well as the transition-metal crossover effect to the anode.

Their findings reveal that a battery constructed with the new coating method exhibited an impressive 95% capacity retention over 500 cycles, a retention rate about 20% higher than that of existing Ni-rich materials. The coating also dramatically improved the rate capability and cycling stability of NCM, including under high discharge rates and high-temperature (45°C) conditions, because it greatly suppressed intergranular cracking, side reactions, and impedance growth.
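As a rough back-of-the-envelope check, not a calculation from the paper, the quoted retention figures can be converted into an average per-cycle fade rate. The sketch below assumes exponential capacity fade and reads the "about 20% higher" baseline as roughly 75% retention; both assumptions are ours:

```python
# Back-of-the-envelope fade rates from the retention figures quoted above.
# Assumes exponential capacity fade; the 75% baseline is our reading of the
# "about 20% higher" comparison, not a number stated in the paper.

def per_cycle_fade(retention: float, cycles: int) -> float:
    """Average fractional capacity loss per cycle under exponential fade."""
    return 1.0 - retention ** (1.0 / cycles)

coated = per_cycle_fade(0.95, 500)    # CoxB-infused NCM cathode
baseline = per_cycle_fade(0.75, 500)  # assumed conventional Ni-rich cathode

print(f"coated:   {coated:.4%} capacity loss per cycle")
print(f"baseline: {baseline:.4%} capacity loss per cycle")
```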

Credit: 
Ulsan National Institute of Science and Technology (UNIST)

How do humpback whales rest?

image: The research was carried out off the Tromsø coast in Norway.

Image: 
Takashi Iwata

An international research collaboration has used an omnidirectional camera attached to a humpback whale to reveal how these creatures rest underwater. The findings demonstrate that wide-angle-lens cameras can be useful tools for illuminating, in detail, the ecology of animals that are difficult to observe.

The research group consisted of Assistant Professor Takashi Iwata of Kobe University's Graduate School of Maritime Sciences, Researcher Martin Biuw of the Norwegian Institute of Marine Research, Assistant Professor Kagari Aoki and Professor Katsufumi Sato of the Atmosphere and Ocean Research Institute, the University of Tokyo, and Professor Patrick Miller of the University of St. Andrews.

These research results were published online in Behavioural Processes on February 25, 2021.

Main Points

The researchers attached an omnidirectional (360°) camera to a humpback whale and discovered that these animals rest while drifting underwater. Whales can rest either on the surface or underwater, and it is believed that they choose which of these different environments to rest in depending on the situation.

The omnidirectional camera recorded a wide range of information on the environment surrounding the tagged whale, revealing that humpback whales rest in groups rather than on their own.

These results have demonstrated that animal-borne omnidirectional cameras are useful for learning more about animals that are difficult to observe.

Research Background

It is difficult to observe the ecology of marine animals directly because they spend the majority of their lives underwater. Recently, however, studies on the ecology of these difficult-to-observe animals have been conducted using a method called bio-logging, in which a recording device attached to the animal captures information about its behavior and surroundings. Various kinds of data can be recorded and measured, including depth, swimming speed, acceleration (which can be used to infer the animal's posture and detailed movements), vocalizations, heart rate, and GPS (Global Positioning System) location. This information can be used to understand aspects such as animal behavior and diving physiology.
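Purely for illustration, the channels listed above might be collected in records like the sketch below; the field names, types and units are hypothetical and do not describe any particular tag:

```python
# Hypothetical layout for one bio-logging sample; names and units are
# illustrative, not taken from any specific tag or vendor.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BioLogSample:
    time_s: float                           # seconds since tag attachment
    depth_m: float                          # pressure-derived depth
    speed_ms: float                         # swimming speed, m/s
    accel_xyz: Tuple[float, float, float]   # 3-axis acceleration (posture, fluke strokes)
    heart_rate_bpm: Optional[float] = None  # only some tags record this
    lat: Optional[float] = None             # GPS fix, typically surface-only
    lon: Optional[float] = None
```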

Cameras in particular are a powerful tool because they let researchers view an individual animal's surroundings, which in turn helps them interpret its behavior. Until now, however, the limited field of view of animal-borne cameras has been an issue. For example, research using a camera attached to a humpback whale (Megaptera novaeangliae) revealed that the whale would quickly move away from foraging sites if a competitor was present; yet the competitor was never visible within the camera's limited scope, so its presence could only be assumed. A camera with a wide-angle lens is therefore necessary to film an animal's entire surroundings.

This research focused on the humpback whale, a species of baleen whale found in oceans around the globe. Using bio-logging, researchers have learned a great deal about humpback whales' foraging habits; however, little is known about their resting behavior. Foraging events can be identified because the recorded depth, swimming speed and acceleration (movement) show signs characteristic of chasing prey. No such characteristic signs have been identified for resting, and it is not understood how resting differs from slow swimming. Yet information about resting behavior is essential for understanding an animal's ecology: considered in terms of a time budget, the percentage of time available for other activities such as foraging decreases as resting periods increase. Despite this, hardly anything is known about baleen whales' resting habits.

This research group used an omnidirectional camera (with a 360° field-of-view on land and a 270° field-of-view underwater) and a behavioral data logger in order to illuminate the resting behavior of humpback whales.

Research Methodology and Findings

RICOH supplied the basic THETA camera module for this research, which Little Leonardo Corp. made pressure-resistant and waterproof using epoxy glue, resulting in a new type of animal-borne omnidirectional camera. A suction-cup tag that could be attached to the whale was made out of buoyant materials; it contained the omnidirectional camera, a behavioral data logger and a radio transmitter.

The field study was conducted in January 2016 off the Tromsø coast in Norway (Figure 1). To tag the whale, the researchers approached it in a small vessel (5-6 m) and used a 6 m pole to attach the tag to the animal (Figure 2). The tag was designed to fall off naturally after several hours and float to the surface, where it was recovered by locating the signal from its transmitter.

The research team was able to tag one individual, obtaining around one hour of video data and approximately eleven hours of behavioral data. The behavioral data showed that the whale was inactive during the first half of the recorded period and behaved actively in the latter half (Figure 3).

Based on past research, the active movement in the latter half was assumed to be foraging. The video data were captured during the first half of the recording period, when the whale moved very little. During this period, the tagged whale's deepest dives averaged 11 m and its average swimming speed was 0.75 m/s. Humpback whales' regular cruising speed has been reported as 1.45 m/s, so the tagged whale was moving much more slowly than usual. Whales normally beat their flukes (tails) when swimming, yet the behavioral data recorded during the videoed period showed no sign of fluke movement.

In the footage, two other whales can be seen drifting underwater without moving their flukes. From the tagged individual's slow swimming speed, its lack of fluke movement, and the continued presence of other drifting individuals in the video, the researchers concluded that it, too, was drifting underwater. Seal species, sperm whales and loggerhead turtles are known to drift underwater while resting, so the tagged humpback whale was most likely resting as well.

Previous research reported that baleen whales rest at the surface; this study reveals that they also rest while drifting underwater. Whales are thought to choose between the two resting environments, at the surface or underwater, based on factors such as sea conditions and their own physical condition. In addition, the omnidirectional footage shows that whales rest underwater in groups rather than alone.
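To make the inference concrete: resting candidates can in principle be flagged wherever sustained slow swimming coincides with an absence of fluke strokes. The sketch below is not the authors' analysis code; the thresholds are loosely based on the speeds quoted above, and the acceleration-variance proxy for fluke strokes is our assumption:

```python
# Illustrative drift-rest detector for whale bio-logging data (not the
# authors' code). Thresholds are loosely based on the quoted speeds
# (0.75 m/s while drifting vs. 1.45 m/s cruising); the acceleration-
# variance proxy for fluke strokes is an assumption.
import numpy as np

def drift_rest_mask(speed_ms, accel_var, window=60,
                    speed_max=1.0, accel_var_max=0.01):
    """Boolean mask of samples inside candidate drift-resting windows.

    speed_ms  : 1-D array of swimming speed (m/s)
    accel_var : 1-D rolling variance of dynamic acceleration,
                a crude proxy for fluke-stroke activity
    """
    candidate = (speed_ms < speed_max) & (accel_var < accel_var_max)
    # Require the condition to hold over a sustained window,
    # not just isolated samples.
    kernel = np.ones(window) / window
    return np.convolve(candidate.astype(float), kernel, mode="same") > 0.9
```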

Further Research

Researchers have been using animal-borne cameras as a tool to investigate the ecology of marine animals. For example, a backwards-facing camera attached to a mother seal recorded images of a pup swimming behind her. However, to ascertain the significance of such images (for example, whether or not the mother was teaching the pup how to hunt), a camera with a wide field of view is needed to capture the surrounding environment. Still-camera images of touching behaviors between whales have also been recorded; here too, a wide-lens camera would help researchers determine how frequently this behavior occurs. These examples show how necessary wide-lens cameras, such as omnidirectional cameras, are for investigating the ecology of marine animals: they enable researchers to record the environment surrounding the tagged animal, determine whether other individuals (competitors, collaborators, or predators) are present, and understand the frequency and distribution of food sources.

This research group inferred that the tagged whale was resting based on the captured footage of nearby individuals at rest, demonstrating the usefulness of omnidirectional cameras. It is hoped that these cameras can be utilized to illuminate the ecology of marine animals that are difficult to observe.

Credit: 
Kobe University

Is it worth investing in solar PV with batteries at home?

image: A typical PV installation compared to a PV-storage system.

Image: 
Adam Islaam - International Institute for Applied Systems Analysis (IIASA)

Solar energy is a clean, renewable source of electricity that could potentially play a significant part in fulfilling the world's energy requirements, but there are still some challenges to fully capitalizing on this potential. Researchers looked into some of the issues that hamper the uptake of solar energy and proposed different policies to encourage the use of this technology.

Installing solar panels to offset energy costs and reduce their homes' environmental impact has been gaining popularity among homeowners in recent years. On a global scale, an increasing number of countries are likewise encouraging the installation of solar photovoltaics (PV) on residential buildings to increase the share of renewable energy in their energy mix and enhance energy security. Despite the promising advantages this mode of electricity generation offers, a number of challenges still need to be overcome.

Batteries to store excess electricity

Solar PV electricity generation peaks during the day when electricity demand is low, resulting in overproduction, especially on weekdays when people are usually not at home. Currently, this excess supply is typically exported to the central electricity grid. Ideally, however, homes with solar panels would store the overproduced solar electricity, for example in batteries, and consume it in the evening when demand is high and there is no solar generation. The problem is that the investment cost of batteries is currently quite high, making it economically unprofitable for consumers to pair their solar PV with a battery. In their new study published in the journal Applied Energy, researchers from IIASA, University College London, UK, and Aalto University, Finland, looked into this challenge and proposed different policies to encourage residential electricity consumers to pair solar PV with battery energy storage.

"We wanted to determine whether investing in residential solar PV combined with battery energy storage could be profitable under current market conditions for residential consumers and what kind of support policies can be used to enhance the profitability of stand-alone batteries or PV-battery systems. On top if this, we also wanted to compare the system (or regulatory) cost of each PV-battery policy to the benefit of that particular policy for residential consumers who invest in these technologies," explains lead author Behnam Zakeri, a researcher with the IIASA Energy, Climate, and Environment Program.

Benefits of using battery storage

The study shows that without a battery, homeowners use only 30-40% of the electricity from their solar PV panels; the rest is exported to the grid with little to no benefit for the owner. With a home battery, the self-consumption of solar PV in the building almost doubles, allowing residents to reduce electricity imports from the grid by up to 84%, which in turn makes the owner less dependent on the grid and on electricity prices. In addition, the researchers found that while PV-battery systems are presently not profitable for residential consumers, they can become so under slightly different policies and regulations, even in high-latitude countries where solar irradiation is relatively low.
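To pin down the metric: self-consumption is the share of PV generation used on site rather than exported. The following minimal sketch computes it from hourly profiles; the profiles and the assumption that the battery discharges only energy previously charged from PV surplus are illustrative, not taken from the study:

```python
# Minimal sketch of the self-consumption metric discussed above.
# Assumes battery_discharge contains only energy previously charged
# from PV surplus; all profiles are hourly kWh arrays.
import numpy as np

def self_consumption(pv, load, battery_discharge=None):
    """Fraction of PV generation consumed on site."""
    direct = np.minimum(pv, load)  # PV used immediately by the household
    stored = 0.0 if battery_discharge is None else battery_discharge.sum()
    return (direct.sum() + stored) / pv.sum()
```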

Energy policies for a decentralized energy system

The authors propose novel energy storage policies that offer a positive return on investment of between 40% and 70% for residential PV-battery storage, depending on the policy. These include, among others, having national renewable energy policies adopt more innovative incentives that tie the economic rewards of decentralized green energy solutions to the contribution these systems make to the grid. The results indicate that this can be achieved quite easily by, for example, rewarding consumers for using their solar PV generation onsite instead of encouraging them to export their excess solar energy to the grid.

The researchers further posit that the way utility companies and electricity distribution firms generate income today may itself be a hindrance to promoting the self-consumption of renewable energy in buildings, as these companies generally charge consumers for each unit of electricity imported from the grid. If consumers therefore become independent from the grid, grid operators and utility companies would lose a significant part of their income. Such a scenario calls for new business models and operating modes to guarantee that central utilities do not see decentralized solutions as a threat to their revenues.

In today's renewable electricity generation environment, capital subsidies are one option to partly pay for investment in batteries. The study points out, however, that these policies are costly for the system and may not automatically yield system-level benefits, as they do not reward the optimal use of batteries. In this regard, Zakeri and his colleagues propose a "storage policy" that rewards residential battery owners for storing and discharging electricity whenever the system needs it. The profitability of PV-battery systems also depends, of course, on the type of retail pricing mechanism in the system. The findings indicate that dynamic electricity pricing on the consumer side, such as hourly electricity prices with an enhanced gap between off-peak and peak prices, will encourage consumers to use home batteries to their benefit, charging at low-price hours and discharging when the electricity price is high. Operating a home battery this way could help reduce pressure on the electricity grid at peak times, which has significant benefits for the system.
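As a toy illustration of that charge-low, discharge-high logic (the battery capacity, power rating, round-trip efficiency and price profile below are all assumptions, not values from the study):

```python
# Toy price-arbitrage dispatch for a home battery under hourly pricing.
# Capacity, power, efficiency and the price profile are assumptions.
import numpy as np

def price_arbitrage(prices, capacity_kwh=10.0, power_kw=2.5, efficiency=0.9):
    """Hourly battery flow for one day: + charge at cheap hours, - discharge at peak."""
    order = np.argsort(prices)                 # hours sorted cheapest-first
    n = int(capacity_kwh / power_kw)           # hours needed to fill/empty the battery
    flow = np.zeros(len(prices))
    flow[order[:n]] = power_kw                 # charge during the n cheapest hours
    flow[order[-n:]] = -power_kw * efficiency  # discharge during the n priciest hours
    return flow

# 24 hourly prices in EUR/kWh with a pronounced peak/off-peak gap
prices = np.array([0.08]*6 + [0.20]*4 + [0.12]*7 + [0.30]*4 + [0.10]*3)
flow = price_arbitrage(prices)
print(f"daily arbitrage value: EUR {-(prices * flow).sum():.2f}")
```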

"Traditional, central energy structures are transitioning to new systems based on decentralized, renewable energy solutions. This requires more flexible, modern, and effective policies that can guarantee the social and economic benefits of the energy transition. We hope our analysis contributes to a better understanding of the role of some energy policies that can promote decentralized energy solutions," Zakeri concludes.

Credit: 
International Institute for Applied Systems Analysis

TPU scientists offer new plasmon energy-based method to remove CO2 from atmosphere

Researchers from Tomsk Polytechnic University, together with colleagues from the Czech Republic, have found a method to synthesize cyclic carbonates from atmospheric CO2. Cyclic carbonates are organic compounds used as electrolytes for lithium-ion batteries, as green solvents, and in pharmaceutical manufacturing. The scientists managed to synthesize the carbonates under sunlight and at room temperature, whereas conventional methods require synthesis at high pressures and temperatures. The research findings are published in the Journal of Materials Chemistry A (IF: 11.301; Q1).

"The increase in CO2 levels in the atmosphere is a global environmental problem. The solutions of the problem are usually focused on measures to reduce CO2 emissions. An alternative method is to use the CO2 already existing in the atmosphere for useful chemical transformations. Thus, we offered a new method allowing to obtain widely sought-after cyclic carbonates under sunlight. Most often, such reactions are carried out at high temperatures ranging from 60°? to 150°? and high CO2 pressure up to 25 atm. It means the technological chain requires additional equipment for CO2 compression and heating. In other words, it is impossible to simply extract it from the air," Olga Guselnikova, Research Fellow of the TPU Research School of Chemistry and Applied Biomedical Sciences, one of the authors, says.

In their experiments, the scientists synthesized cyclic carbonates through the reaction of CO2 with epoxides, which served as the starting materials.
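The underlying transformation is the cycloaddition of CO2 into an epoxide ring. Written out for propylene oxide, a representative epoxide that the release itself does not specify, the balanced scheme is:

$$\mathrm{CO_2} \;+\; \underbrace{\mathrm{C_3H_6O}}_{\text{propylene oxide}} \;\xrightarrow{\ \text{Au nanoparticles},\ h\nu\ }\; \underbrace{\mathrm{C_4H_6O_3}}_{\text{propylene carbonate}}$$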

"To begin with, we had to capture CO2. In order to do that, we used gold nanoparticles grafted with organic nucleobases. They served as traps for CO2 molecules and, at the same time, remained non-reactive with other substances. The experiments showed that these traps efficiently captured CO2 from the air. We mixed the suspension from the nanoparticles and captured CO2 with epoxides," Pavel Postnikov, Associate Professor of the TPU Research School of Chemistry and Applied Biomedical Sciences, says.

Then, the researchers irradiated this mixture with infrared light.

"The gold nanoparticles possess a plasmonic effect. It means the incident light excites plasmonic quasiparticles next to gold nanoparticles and the plasmonic quasiparticles trigger the reaction. They convert light energy into the energy required for the chemical reaction. These properties allowed conducting the reaction under ambient conditions. By the way, the matter of plasmonic chemistry mechanisms, how plasmons actually trigger chemical processes and how it works is a trending scientific topic. A number of our previous articles relate to this field of research. Control experiments allowed us to suggest that plasmon excitation on particles leads to the transfer of energy to the captured CO2 molecule without heating," Olga Guselnikova says.

As the authors of the article note, the synthesis is comparable in performance to similar methods; however, it does not require technologically sophisticated special equipment.

"The entire process takes about 24 hours, while regular indicators for other methods vary from 12 to 24 hours. We started from small volumes and received a few milliliters of cyclic carbonates. However, we explicated in the article that the method can be scaled up at least fivefold and nanoparticles themselves can be reused with the same efficiency. At the same time, the catalytic indicators of our plasmonic system are among the highest recorded ones for the reaction. The most important is to demonstrate an opportunity to conduct the reaction directly with the air without prior purification or CO2 concentration under ambient conditions and sunlight. Ultimately, it always makes the synthesis more simple and eco-friendly," Pavel Postnikov adds.

Credit: 
Tomsk Polytechnic University