Tech

Two new attacks break PDF certification

image: IT experts at RUB have found several security issues with digital signatures for PDF documents in recent years.

Image: 
RUB, Kramer

A security issue in the certification signatures of PDF documents has been discovered by researchers at Ruhr-Universität Bochum. This special form of signed PDF files can be used, for instance, to conclude contracts. Unlike a normal PDF signature, the certification signature permits certain changes to be made in the document after it has actually been signed. This is necessary to allow the second contractual party to also sign the document. The team from the Horst Görtz Institute for IT Security in Bochum showed that the second contractual party can also change the contract text unnoticed when they add their digital signature, without this invalidating the certification. The researchers additionally discovered a weakness in Adobe products that enables attackers to implant malicious code into the documents.

Simon Rohlmann, Dr. Vladislav Mladenov, Dr. Christian Mainka and Professor Jörg Schwenk from the Chair for Network and Data Security are presenting the results at the 42nd IEEE Symposium on Security and Privacy, which is taking place as an online conference from 24 to 27 May 2021. The team has also published the results on the website https://pdf-insecurity.org.

24 out of 26 applications affected

When using certification signatures, the party who issues the document and signs it first can determine which changes the other party can then make. For instance, it is possible to add comments, insert text into special fields, or add a second digital signature at the bottom of the document. The Bochum group circumvented the integrity protection of certified PDF documents with two new attacks, called the Sneaky Signature Attack (SSA) and the Evil Annotation Attack (EAA). The researchers were thus able to display fake content in the document instead of the certified content, without rendering the certification invalid or triggering a warning from the PDF applications.

The IT security experts tested 26 PDF applications, in 24 of which they were able to break the certification with at least one of the attacks. In eleven of the applications, the specifications for PDF certifications were also implemented incorrectly. The detailed results have been published online.

Malicious code can be implanted into Adobe documents

In addition to the security loopholes described above, the team from the Horst Görtz Institute also discovered a weakness specific to Adobe products. Certified Adobe documents can execute JavaScript code, for instance to access URLs in order to verify the identity of a user. The researchers showed that attackers could use this mechanism to implant malicious code into a certified document. This makes it possible, for example, to compromise a user's privacy by sending their IP address and information about the PDF applications they use to an attacker when the document is opened.

Credit: 
Ruhr-University Bochum

Light-emitting MXene quantum dots

image: Synthesis methods of light-emitting MQDs.

Image: 
Opto-Electronic Advances

In a new publication in Opto-Electronic Advances (DOI: https://doi.org/10.29026/oea.2021.200077), researchers led by Professor Jeongyong Kim at the Department of Energy Science, Sungkyunkwan University, Suwon, Republic of Korea, review light-emitting MXene quantum dots.

MXenes have found wide-ranging applications in energy storage devices, sensors, catalysis, etc. owing to their high electronic conductivity and wide range of optical absorption. However, the absence of semiconducting MXenes has limited their applications related to light emission.

Extensively reviewing current research, the authors summarise recent advances in the synthesis, optical properties and applications of MXene quantum dots (MQDs) as light-emitting quantum materials. Research has shown that quantum dots derived from MXenes not only retain the properties of the parent MXene but also show significant improvements in light emission and quantum yield.

The authors provide an overview of light-emitting MQDs, covering their synthesis methods, optical properties, and applications in various optical, sensing, and imaging devices, and discuss future prospects to help guide further research.

Article reference: Sharbirin AS, Akhtar S, Kim JY. Light-emitting MXene quantum dots. Opto-Electron Adv 4, 200077 (2021). doi: 10.29026/oea.2021.200077

Keywords: MXene, quantum dots, light emission, MAX phase, 2D materials

Professor Kim's research group carries out cutting-edge research on 2D materials, focusing on their light emission properties. Since 2014, the group has presented pioneering results on spatially and spectrally identifying the exciton complexes of 2D transition metal dichalcogenides such as MoS2 and WS2. Using a variety of chemical or physical treatments and specific device configurations, the group has demonstrated how to engineer or improve the emission characteristics of these 2D direct-bandgap semiconductors. By recently extending the scope of its work on 2D materials to MXenes, the group is now exploring non-toxic, bio-compatible and high-efficiency light emission from MQDs.

Credit: 
Compuscript Ltd

Superflimsy graphene turned ultrastiff by optical forging

image: Top - Atomic force microscopy images of the suspended graphene drum skin before and after optical forging. Bottom - analogue presentation of how a material can become stiffer when it is corrugated.

Image: 
University of Jyväskylä/Pekka Koskinen, Vesa-Matti Hiltunen

Graphene is an ultrathin material characterized by its ultrasmall bending modulus, which makes it superflimsy. Now researchers at the Nanoscience Center of the University of Jyväskylä have demonstrated how an experimental technique called optical forging can make graphene ultrastiff, increasing its stiffness by several orders of magnitude. The research was published in npj 2D Materials and Applications in May 2021.

Graphene is an atomically thin carbon material loaded with excellent properties, such as large charge carrier mobility, superb thermal conductivity, and high optical transparency. Its impermeability and a tensile strength 200 times that of steel make it suitable for nanomechanical applications. Unfortunately, its exceptional flimsiness makes any three-dimensional structure notoriously unstable and difficult to fabricate.

These difficulties may now be over, as a research group at the Nanoscience Center of the University of Jyväskylä has demonstrated how to make graphene ultrastiff using a specifically developed laser treatment. This stiffening opens up whole new application areas for this wonder material.

The same group has previously prepared three-dimensional graphene structures using a pulsed femtosecond laser patterning method called optical forging. The laser irradiation creates defects in the graphene lattice, which in turn expand the lattice, producing stable three-dimensional structures. Here the group used optical forging to modify a monolayer graphene membrane suspended like a drum skin and measured its mechanical properties using nanoindentation.

The measurements revealed that the bending stiffness of graphene increased by up to five orders of magnitude compared to pristine graphene, which is a new world record.

"At first, we did not even comprehend our results. It took time to digest what optical forging had actually done for graphene. However, gradually the full gravity of the implications started to dawn on us," says Dr. Andreas Johansson, who led the work on characterizing the properties of the optically forged graphene.

Stiffened graphene opens up avenues for novel applications

Analysis revealed that the increase in bending stiffness was induced during optical forging by strain-engineering corrugations in the graphene layer. As part of the study, thin-sheet elasticity modeling of the corrugated graphene membranes was performed, showing that the stiffening happens on both the micro- and nanoscales, at the level of the induced defects in the graphene lattice.
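
A rough plate-mechanics estimate shows why corrugations are so effective (this back-of-envelope calculation is ours, using textbook values for graphene, and is not taken from the study). Bending a corrugated sheet forces material sitting a distance of roughly the corrugation amplitude $A$ away from the neutral plane to stretch, so the effective bending stiffness picks up a contribution from the in-plane (2D Young's) modulus $Y_{2\mathrm{D}}$:

$$\kappa_{\mathrm{eff}} \sim \kappa_0 + c\, Y_{2\mathrm{D}} A^{2},$$

with $c$ a geometry-dependent factor of order unity. Taking the commonly quoted values $\kappa_0 \approx 1.2\ \mathrm{eV} \approx 2\times10^{-19}\ \mathrm{J}$ and $Y_{2\mathrm{D}} \approx 340\ \mathrm{N/m}$, a corrugation amplitude of only $A \approx 10\ \mathrm{nm}$ gives $Y_{2\mathrm{D}}A^{2} \approx 3\times10^{-14}\ \mathrm{J}$, about five orders of magnitude above $\kappa_0$, the same scale of enhancement reported in the measurements.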

"The overall mechanism is clear but unraveling the full atomistic details of defect-making still needs further research," says Professor Pekka Koskinen, who performed the modeling.

Stiffened graphene opens up avenues for novel applications, such as the fabrication of microelectromechanical scaffold structures or pushing the mechanical resonance frequency of graphene membrane resonators up to the GHz regime. With graphene being light, strong and impermeable, one potential application is to use optical forging on graphene flakes to make micrometer-scale cage structures for intravenous drug transport.

"The optical forging method is particularly powerful because it allows direct writing of stiffened graphene features precisely at the locations where you want them," says Professor Mika Pettersson, who oversees the development of the new technique, and continues, "Our next step will be to stretch our imagination, play around with optical forging, and see what graphene devices we can make."

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

Immune cells imperfect at distinguishing between friend and foe, study suggests

When it comes to distinguishing a healthy cell from an infected one that needs to be destroyed, the immune system's killer T cells sometimes make mistakes.

This discovery, described today in eLife, upends a long-held belief among scientists that T cells were nearly perfect at discriminating friend from foe. The results may point to new ways to treat autoimmune diseases that cause the immune system to attack the body, or lead to improvements in cutting-edge cancer treatments.

It is widely believed that T cells can discriminate perfectly between infected cells and healthy ones based on how tightly they are able to bind to molecules called antigens on the surface of each. They bind tightly to antigens derived from viruses or bacteria, but less tightly to our own antigens on normal cells. But recent studies by scientists looking at autoimmune diseases suggest that T cells can attack otherwise normal cells if they express unusually large numbers of our own antigens, even though these bind only weakly.

"We set out to resolve this discrepancy between the idea that T cells are near perfect at discriminating between healthy and infected cells based on the antigen binding strength, and clinical results that suggests otherwise," says co-first author Johannes Pettmann, a D.Phil student at the Sir William Dunn School of Pathology and Radcliffe Department of Medicine, University of Oxford, UK. "We did this by very precisely measuring the binding strength of different antigens."

The team measured exactly how tightly receptors on T cells bind to a large number of different antigens, and then measured how T cells from healthy humans responded to cells loaded with different amounts of these antigens. "Our methods, combined with computer modelling, showed that the T cell's receptors were better at discrimination compared to other types of receptors," says co-first author Anna Huhn, also a D.Phil student at the Sir William Dunn School of Pathology, University of Oxford. "But they weren't perfect - their receptors compelled T cells to respond even to antigens that showed only weak binding."
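
A standard way to formalise this trade-off is the kinetic-proofreading picture, sketched below as an illustration (the exact model used in the study may differ): the response $R$ depends on both the antigen dose $L$ and the binding lifetime,

$$R \;\propto\; L \left( \frac{k_p}{k_p + k_{\mathrm{off}}} \right)^{N},$$

where $k_{\mathrm{off}}$ is the receptor-antigen unbinding rate (weak binders have large $k_{\mathrm{off}}$), $k_p$ is the rate of each proofreading step and $N$ is the number of steps. A large $N$ gives strong, but never perfect, discrimination: for any finite $N$, a sufficiently high dose of a weakly binding antigen can still push $R$ over threshold, which is precisely the behaviour the experiments revealed.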

"This finding completely changes how we view T cells," adds Enas Abu-Shah, Postdoctoral Fellow at the Kennedy Institute and the Sir William Dunn School of Pathology, University of Oxford, and also a co-first author of the study. "Instead of thinking of them as near-perfect discriminators of the antigen binding strength, we now know that they can respond to normal cells that simply have more of our own weakly binding antigens."

The authors say that technical issues with measuring the strength of T cell receptor binding in previous studies likely led to the mistaken conclusion that T cells are perfect discriminators, highlighting the importance of using more precise measurements.

"Our work suggests that T cells might begin to attack healthy cells if those cells produce abnormally high numbers of antigens," says senior author Omer Dushek, Associate Professor at the Sir William Dunn School of Pathology, University of Oxford, and a Senior Research Fellow in Basic Biomedical Sciences at the Wellcome Trust, UK. "This contributes to a major paradigm shift in how we think about autoimmunity, because instead of focusing on defects in how T cells discriminate between antigens, it suggests that abnormally high levels of our own antigens may be responsible for the mistaken autoimmune T-cell response. On the other hand, this ability could be helpful to kill cancer cells that mutate to express abnormally high levels of our antigens."

Dushek adds that the work also opens up new avenues of research to improve the discrimination abilities of T cells, which could be helpful to reduce the autoimmune side-effects of many T-cell-based therapies without reducing the ability of these cells to kill cancer cells.

Credit: 
eLife

"Bite" defects in bottom-up graphene nanoribbons

image: Left panel: STM image of bottom-up zigzag graphene nanoribbons. Right panel: Spin-density in the vicinity of a "bite" defect in a zigzag graphene nanoribbon.

Image: 
Empa / EPFL (adapted with permission from J. Phys. Chem. Lett. 2021,12, 4692-4696, Copyright 2021 American Chemical Society)

Graphene nanoribbons (GNRs), narrow strips of single-layer graphene, have interesting physical, electrical, thermal, and optical properties because of the interplay between their crystal and electronic structures. These novel characteristics have pushed them to the forefront in the search for ways to advance next-generation nanotechnologies.

While bottom-up fabrication techniques now allow the synthesis of a broad range of graphene nanoribbons that feature well-defined edge geometries, widths, and heteroatom incorporations, the question of whether or not structural disorder is present in these atomically precise GNRs, and to what extent, is still subject to debate. The answer to this riddle is of critical importance to any potential applications or resulting devices.

Collaboration between Oleg Yazyev's Chair of Computational Condensed Matter Physics theory group at EPFL and Roman Fasel's experimental nanotech@surfaces Laboratory at Empa has produced two papers that look at this issue in armchair-edged and zigzag-edged graphene nanoribbons.

"In these two works, we focused on characterizing "bite-defects" in graphene nanoribbons and their implications on GNR properties", explains Gabriela Borin Barin from Empa's nanotech@surfaces lab. "We observed that even though the presence of these defects can disrupt GNRs' electronic transport, they could also yield spin-polarized currents. These are important findings in the context of the potential applications of GNRs in nanoelectronics and quantum technology."

Armchair graphene nanoribbons

The paper "Quantum electronic transport across "bite" defects in graphene nanoribbons," recently published in 2D Materials, specifically looks at 9-atom wide armchair graphene nanoribbons (9-AGNRs). The mechanical robustness, long-term stability under ambient conditions, easy transferability onto target substrates, scalability of fabrication, and suitable band-gap width of these GNRs has made them one of the most promising candidates for integration as active channels in field-effect transistors (FETs). Indeed, among the graphene-based electronic devices realized so far, 9-AGNR-FETs display the highest performance.

While the detrimental effect of defects on electronic devices is well known, Schottky barriers - potential energy barriers for electrons that form at metal-semiconductor junctions - both limit the performance of current GNR-FETs and prevent experimental characterization of the impact of defects on device performance. In the 2D Materials paper, the researchers therefore combine experimental and theoretical approaches to investigate defects in bottom-up AGNRs.

Scanning-tunnelling and atomic-force microscopies first allowed the researchers to identify missing benzene rings at the edges as a very common imperfection in 9-AGNRs, which they have dubbed "bite" defects, and to estimate both their density and their spatial distribution, finding that the defects have a strong tendency to aggregate. The researchers then used first-principles calculations to explore the effect of such defects on quantum charge transport, finding that these imperfections significantly disrupt it at the band edges by reducing conductance.
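
To get an intuitive feel for why a localized defect suppresses conductance most strongly near a band edge, a minimal one-dimensional tight-binding toy model with Landauer transmission already shows the effect. This is our simplified sketch, not the paper's first-principles treatment of the full nanoribbon, and the parameters are arbitrary illustrative values.

```python
import numpy as np

def transmission(E, n_sites=40, t=-1.0, defect_site=None, defect_eps=2.0):
    """Landauer transmission through a 1D tight-binding chain coupled to two
    semi-infinite leads; a 'defect' is modelled as an on-site potential."""
    # Device Hamiltonian: nearest-neighbour chain
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = t
    if defect_site is not None:
        H[defect_site, defect_site] = defect_eps

    # Analytic retarded surface Green's function of a semi-infinite chain,
    # valid inside the band |E| < 2|t|
    z = E + 1e-9j
    g_surf = (z - 1j * np.sqrt(4 * t**2 - z**2)) / (2 * t**2)
    sigma = t**2 * g_surf                      # lead self-energy

    Sigma_L = np.zeros_like(H); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros_like(H); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(n_sites) - H - Sigma_L - Sigma_R)

    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return float(np.real(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T)))

for E in (0.0, 1.9):   # band centre vs. near the band edge at |E| = 2
    print(f"E={E:+.1f}  clean T={transmission(E):.2f}  "
          f"with defect T={transmission(E, defect_site=20):.2f}")
```

In this toy chain the clean transmission is 1 throughout the band, while the on-site defect roughly halves it at the band centre but almost completely blocks transport near the band edges, mirroring the band-edge conductance suppression reported for the "bite" defects.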

These theoretical findings are then generalized to wider nanoribbons in a systematic manner, allowing the researchers to establish practical guidelines for minimizing the detrimental role of these defects on charge transport, an instrumental step towards the realization of novel carbon-based electronic devices.

Zigzag graphene nanoribbons

In the paper "Edge disorder in bottom-up zigzag graphene nanoribbons: implications for magnetism and quantum electronic transport," recently published in the Journal of Physical Chemistry Letters, the same team of researchers combines scanning probe microscopy experiments and first-principles calculations to examine structural disorder and its effect on magnetism and electronic transport in so-called bottom-up zigzag GNRs (ZGNRs).

ZGNRs are unique because of their unconventional metal-free magnetic order that, according to predictions, is preserved up to room temperature. They possess magnetic moments that are coupled ferromagnetically along the edge and antiferromagnetically across it, and it has been shown that their electronic and magnetic structures can be modulated to a large extent by, for example, charge doping, electric fields, lattice deformations, or defect engineering. The combination of tunable magnetic correlations, a sizable band gap and weak spin-orbit interactions has made ZGNRs promising candidates for spin logic operations. The study specifically looks at graphene nanoribbons that are six carbon zigzag lines wide (6-ZGNRs), the only width of ZGNRs that has been achieved with a bottom-up approach so far.

Again using scanning-tunnelling and atomic-force microscopies, the researchers first identify the presence of ubiquitous carbon vacancy defects located at the edges of the nanoribbons and then resolve their atomic structure. Their results indicate that each vacancy comprises a missing m-xylene unit, that is, another "bite" defect, which, as with those seen in AGNRs, comes from the scission of a C-C bond during the cyclodehydrogenation step of the reaction. The researchers estimate the density of "bite" defects in the 6-ZGNRs to be larger than that of the equivalent defects in bottom-up AGNRs.

The effect of these bite defects on the electronic structure and quantum transport properties of 6-ZGNRs is again examined theoretically. The researchers find that introducing such a defect, as in AGNRs, causes a significant disruption of the conductance. Furthermore, in this nanostructure these unintentional defects induce sublattice and spin imbalance, giving rise to a local magnetic moment. This, in turn, leads to spin-polarized charge transport that makes defective zigzag nanoribbons optimally suited for applications in all-carbon logic spintronics at the ultimate limit of scalability.

A comparison between ZGNRs and AGNRs of equal width shows that transport across the former is less sensitive to the introduction of both single and multiple defects than across the latter. Overall, the research provides a global picture of the impact of these ubiquitous "bite" defects on the low-energy electronic structure of bottom-up graphene nanoribbons. Future research might focus on investigating other types of point defects experimentally observed at the edges of such nanoribbons, the researchers said.

Credit: 
Swiss Federal Laboratories for Materials Science and Technology (EMPA)

Technique to evaluate wind turbines may boost wind power production

With a global impetus toward utilizing more renewable energy sources, wind presents a promising, increasingly tapped resource. Despite the many technological advancements made in upgrading wind-powered systems, a systematic and reliable way to assess competing technologies has been a challenge.

In a new case study, researchers at Texas A&M University, in collaboration with international energy industry partners, have used advanced data science methods and ideas from the social sciences to compare the performance of different wind turbine designs.

"Currently, there is no method to validate if a newly created technology will increase wind energy production and efficiency by a certain amount," said Dr. Yu Ding, Mike and Sugar Barnes Professor in the Wm Michael Barnes '64 Department of Industrial and Systems Engineering. "In this study, we provided a practical solution to a problem that has existed in the wind industry for quite some time."

The results of their study are published in the journal Renewable Energy.

Wind turbines convert the kinetic energy of air hitting their blades into electrical energy. As of 2020, about 8.4% of the total electricity produced in the United States comes from wind energy. Further, over the next decade, the Department of Energy plans to increase the footprint of wind energy in the electricity sector to 20% to meet the nation's ambitious climate goals.

In keeping with this target, there has been a surge of novel technologies, particularly to the blades that rotate in the wind. These upgrades promise an improvement in the performance of wind turbines and, consequently, power production. However, testing whether or how much these quantities will go up is arduous.

One of the many reasons performance evaluation is difficult is simply the sheer size of wind turbines, which are often several hundred feet tall. Testing the efficiency of these gigantic machines in a controlled environment, like a laboratory, is not practical. On the other hand, using scaled-down versions of wind turbines that fit into laboratory-housed wind tunnels yields inaccurate values that do not capture the performance of the actual-size wind turbines. Also, the researchers noted that replicating the multitude of air and weather conditions that occur in the open field is hard in the laboratory.

Hence, Ding and his team chose to collect data from inland wind farms for their study by collaborating with an industry partner that owns wind farms. For their analysis, they included 66 wind turbines on a single farm. These machines were fitted with sensors to continuously track quantities such as the power produced by the turbines, wind speed, wind direction and temperature. In total, the researchers collected data over four-and-a-half years, during which time the turbines received three technological upgrades.

To measure the change in power production and performance before and after the upgrades, Ding and his team could not use standard pre-post intervention analyses, such as those used in clinical trials. Briefly, in clinical trials, the efficacy of a certain medicine is tested using randomized experiments with test groups that get the medication and control groups that do not. The test and control groups are carefully chosen to be otherwise comparable so that the effect of the medicine is the only distinguishing factor between the groups. However, in this study, the wind turbines could not be neatly divided into test and control-like groups as required for randomized experiments.

"The challenge we have here is that even if we choose 'test' and 'control' turbines similar to what is done in clinical trials, we still cannot guarantee that the input conditions, like the winds that hit the blades during the recording period, were the same for all the turbines," said Ding. "In other words, we have a set of factors other than the intended upgrades that are also different pre- and post-upgrade."

Hence, Ding and his team turned to an analytical procedure used by social scientists for natural experiments, called causal inference. Here, despite the confounding factors, the analysis still allows one to infer how much of the observed outcome is caused by the intended action, which in the case of the turbines, was the upgrade.

For their causal inference-inspired analysis, the researchers included turbines only after their input conditions were matched. That is, these machines were subject to similar wind velocities, air densities, or turbulence conditions during the recording period. Next, using an advanced data comparison methodology that Ding jointly developed with Dr. Rui Tuo, assistant professor in the industrial and systems engineering department, the research team reduced the uncertainty in quantifying if there was an improvement in wind turbine performance.
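
As a highly simplified illustration of that matching step (this sketch is ours, not the team's actual methodology, and the column names and numbers are invented), one can bin turbine records by input conditions and compare pre- and post-upgrade output only within bins observed under both periods:

```python
import numpy as np
import pandas as pd

# Hypothetical 10-minute turbine records labelled "pre" or "post" upgrade.
rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "period": rng.choice(["pre", "post"], size=n),
    "wind_speed": rng.weibull(2.0, size=n) * 8.0,      # m/s
    "air_density": rng.normal(1.225, 0.02, size=n),    # kg/m^3
})
# Toy power response: the upgrade adds ~2% output at any given wind condition.
base = 1.2 * np.clip(df["wind_speed"], 0, 12) ** 3
df["power"] = base * np.where(df["period"] == "post", 1.02, 1.00) \
              + rng.normal(0, 30, size=n)

# Match on input conditions: discretize the covariates and keep only the bins
# that contain both pre- and post-upgrade records.
df["ws_bin"] = pd.cut(df["wind_speed"], bins=np.arange(0.0, 26.0, 0.5))
df["rho_bin"] = pd.cut(df["air_density"], bins=np.arange(1.15, 1.31, 0.01))
binned = (df.groupby(["ws_bin", "rho_bin", "period"], observed=True)["power"]
            .mean().unstack("period").dropna())

effect = (binned["post"] - binned["pre"]).mean()
print(f"Estimated upgrade effect within matched bins: {effect:.1f} kW")
```

Comparing only within matched bins removes the bias that would arise if, say, windier weather happened to coincide with the post-upgrade period; the team's actual method additionally accounts for turbulence and quantifies the remaining uncertainty.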

Although the method used in the study requires many months of data collection, Ding said that it provides a robust and accurate way of determining the merit of competing technologies. He said this information will be beneficial to wind operators who need to decide if a particular turbine technology is worthy of investment.

"Wind energy is still subsidized by the federal government, but this will not last forever and we need to improve turbine efficiency and boost their cost-effectiveness," said Ding. "So, our tool is important because it will help wind operators identify best practices for choosing technologies that do work and weed out those that don't."

Ding received a Texas A&M Engineering Experiment Station Impact Award in 2018 for innovations in data and quality science impacting the wind energy industry.

Other contributors to the research include Nitesh Kumar, Abhinav Prakash and Adaiyibo Kio from the industrial and systems engineering department and technical staff of the collaborating wind company.

Credit: 
Texas A&M University

Probing deeper into origins of cosmic rays

image: Schematic representation of cosmic rays propagating through magnetic clouds.

Image: 
Salvatore Buonocore

WASHINGTON, May 25, 2021 -- Cosmic rays are high-energy atomic particles continually bombarding Earth's surface at nearly the speed of light. Our planet's magnetic field shields the surface from most of the radiation generated by these particles. Still, cosmic rays can cause electronic malfunctions and are the leading concern in planning for space missions.

Researchers know cosmic rays originate from the multitude of stars in the Milky Way, including our sun, and other galaxies. The difficulty is tracing the particles to specific sources, because the turbulence of interstellar gas, plasma, and dust causes them to scatter and rescatter in different directions.

In AIP Advances, by AIP Publishing, University of Notre Dame researchers developed a simulation model to better understand these and other cosmic ray transport characteristics, with the goal of developing algorithms to enhance existing detection techniques.

Brownian motion theory is generally employed to study cosmic ray trajectories. Much like the random motion of pollen grains in a pond, interactions of cosmic rays with fluctuating magnetic fields propel the particles in different directions.

But this classic diffusion approach does not adequately address the different propagation rates affected by diverse interstellar environments and long spells of cosmic voids. Particles can become trapped for a time in magnetic fields, which slow them down, while others are thrust into higher speeds through star explosions.

To address the complex nature of cosmic ray travel, the researchers use a stochastic scattering model, a collection of random variables that evolve over time. The model is based on geometric Brownian motion, a classic diffusion theory combined with a slight trajectory drift in one direction.

In their first experiment, they simulated cosmic rays moving through interstellar space and interacting with localized magnetized clouds, represented as tubes. The rays travel undisturbed for long stretches until chaotic interactions with the magnetized clouds cause some rays to be re-emitted in random directions and others to remain trapped.

Monte Carlo numerical analysis, based on repeated random sampling, revealed ranges of density and reemission strengths of the interstellar magnetic clouds, leading to skewed, or heavy-tailed, distributions of the propagating cosmic rays.

The analysis reveals markedly superdiffusive behavior. The model's predictions agree well with known transport properties of complex interstellar media.
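
The geometric-Brownian-motion ingredient is straightforward to sketch in code. The snippet below is a toy illustration only (the drift and volatility values are arbitrary, and the published model adds the cloud trapping and re-emission on top of this): a multiplicative random walk with a small drift already produces a strongly skewed, heavy-tailed spread of displacements.

```python
import numpy as np

rng = np.random.default_rng(42)

def gbm_paths(n_particles=100_000, n_steps=500, dt=0.01, mu=0.05, sigma=0.4, x0=1.0):
    """Geometric Brownian motion: X_t = X_0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    increments = ((mu - 0.5 * sigma**2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal((n_particles, n_steps)))
    return x0 * np.exp(np.cumsum(increments, axis=1))

final = gbm_paths()[:, -1]
# The final positions follow a lognormal law: skewed, with a heavy right tail,
# so the mean sits well above the median and a few particles travel very far.
print(f"mean={final.mean():.2f}  median={np.median(final):.2f}  "
      f"99th percentile={np.percentile(final, 99):.2f}")
```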

"Our model provides valuable insights on the nature of complex environments crossed by cosmic rays and could help advance current detection techniques," author Salvatore Buonocore said.

Credit: 
American Institute of Physics

Parents abused as children may pass on emotional issues

Childhood abuse and trauma are linked to many health issues in adulthood. New research from the University of Georgia suggests that a history of childhood mistreatment could have negative ramifications for the children of people who experienced abuse or neglect in childhood.

Teaching your children how to manage their emotions is an integral part of parenting. For people who experienced childhood abuse, that can become a difficult task. People who were frequently mistreated as children may find it hard to identify their emotions and implement strategies to regulate them. This difficulty, in turn, can harm their kids' emotional development.

The study, published in the Journal of Psychopathology and Behavioral Assessment, found that parents with a history of childhood abuse or neglect often had difficulty accepting negative emotions, controlling impulsive responses and using emotional regulation strategies, among other emotion regulation issues. Further, many of those parents with emotional regulation difficulties passed that trait down to their children.

"Parents implicitly and explicitly teach their children how to regulate their emotions. I've worked with young toddlers, and when you're teaching them about their emotions, you can see how malleable that skill is," said Kimberly Osborne, lead author of the study and a doctoral candidate in the Department of Human Development and Family Science. "It's a lot harder to train someone to manage their emotions later in life. If we can understand the transmission pathways and the risks of regulation difficulties later in life, then we can use this research for prevention and to equip people with better skills so that the pattern doesn't continue."

Measuring emotional regulation

The study focused on 101 youth and their primary caregivers. The parents took a questionnaire measuring childhood neglect, trauma and abuse, along with a survey that gauged their own ability to control their emotions. Using an electrocardiogram, researchers measured children's heart rate variability, an established measure of emotional regulation, at rest and during a stressful activity performed while their parent watched.

Girls in the study showed emotional regulation difficulties under stress regardless of their parents' history of childhood trauma or emotion regulation skills. Boys, in contrast, were more vulnerable to emotional regulation difficulties when their parents also struggled with emotion regulation.

"I think that that speaks to the gendered way our society socializes emotion in boys versus girls," Osborne said. "We don't have data to test this, so I'm pulling more from theory and past research, but I think that girls receive more coaching on how to regulate their emotions from teachers, older siblings and peers than boys do. So if boys are not receiving that from their parents, then they may be at greater risk for self-regulation difficulties."

In particular, parents who reported being unable to set aside negative emotions to pursue their goals--such as getting work done despite being in a bad mood--were more likely to have children who likewise found it difficult to regulate their emotions during stressful experiences.

Modeling healthy responses to stress

Although having a history of childhood trauma doesn't predestine an individual to pass down their experiences to their children, Osborne said it is something to be aware of. Modeling habits like taking a pause before reacting to stressful situations to assess how you're feeling can go a long way in teaching children how they should respond to challenges.

"From a very young age, the best thing to do is to just reflect back to the child what they are experiencing," Osborne said. "If you see a child crying, instead of saying, 'Oh, I'm so sorry. What happened?' you can say, 'You're crying. I can see that you're sad. What made you sad?' That A, defines the emotion for them so it's helping them identify that emotion, and B, it encourages them to reflect on what happened and to tell you in their own words what caused the emotion.

"It's similar to how if you had a parent with alcoholism, you may have learned to stay away from alcohol and may want to teach your kids to do the same. It's important to tell them, 'We have a tendency not to regulate our emotions well, so we are going to keep tabs on it together to make sure that this doesn't develop into something more harmful for you later.'"

Credit: 
University of Georgia

Candid cosmos: eROSITA cameras set benchmark for astronomical imaging

image: A team of scientists from the Max-Planck-Institut für extraterrestrische Physik, Germany, built an x-ray telescope called eROSITA consisting of an array of co-aligned focal plane cameras with one in the center and six surrounding it.

Image: 
P. Friedrich, doi 10.1117/1.JATIS.7.2.025004.

Recently, the eROSITA (extended Roentgen Survey with an Imaging Telescope Array) x-ray telescope, an instrument developed by a team of scientists at Max-Planck-Institut für Extraterrestrische Physik (MPE), has gained attention among astronomers. The instrument performs an all-sky survey in the x-ray energy band of 0.2-8 kilo electron volts aboard the Spectrum-Roentgen-Gamma (SRG) satellite that was launched in 2019 from the Baikonur cosmodrome in Kazakhstan.

"The eROSITA has been designed to study the large-scale structure of the universe and test cosmological models, including dark energy, by detecting galaxy clusters with redshifts greater than 1, corresponding to a cosmological expansion faster than the speed of light," said Dr. Norbert Meidinger from MPE, a part of the team that developed the instrument. "We expect eROSITA to revolutionize our understanding of the evolution of supermassive black holes." The details of the developmental work have been published in SPIE's Journal of Astronomical Telescopes, Instruments, and Systems (JATIS).

eROSITA is not one telescope, but an array of seven identical, co-aligned telescopes, with each one composed of a mirror system and a focal-plane camera. The camera assembly, in turn, consists of the camera head, camera electronics, and filter wheel. The camera head is made up of the detector and its housing, a proton shield, and a heat pipe for detector cooling. The camera electronics include supply, control, and data acquisition electronics for detector operation. The filter wheel is mounted above the camera head and has four positions including an optical and UV blocking filter to reduce signal noise, a radioactive x-ray source for calibration, and a closed position that allows instrumental background measurements.

"It's exciting to read about these x-ray cameras that are in orbit and enabling a broad set of scientific investigations on a major astrophysics mission," says Megan Eckart of Lawrence Livermore National Laboratory, USA, who is the deputy editor of JATIS. "Dr. Meidinger and his team provide a clear description of the hardware development and ground testing, and wrap up the paper with a treat: first-light images from eROSITA and an assessment of onboard performance. Astrophysicists around the world will analyze data from these cameras for years to come."

The eROSITA telescope is well on its way to becoming a game changer for x-ray astronomy.

Credit: 
SPIE--International Society for Optics and Photonics

Dual impacts of extreme heat, ozone disproportionately hurt poorer areas

Scientists at UC San Diego, San Diego State University and colleagues find that extreme heat and elevated ozone levels, often jointly present during California summers, affect certain ZIP codes more than others.

Those areas across the state most adversely affected tend to be poorer areas with greater numbers of unemployed people and more car traffic. The science team based this finding on data about the elevated numbers of people sent to the hospital for pulmonary distress and respiratory infections in lower-income ZIP codes.

The study identified hotspots throughout the Central Valley, areas of San Diego County east of downtown San Diego, and places like San Bernardino, where Los Angeles basin smog is often trapped by surrounding mountain ranges, among others.

Results appear the week of May 24 in the journal Proceedings of the National Academy of Sciences. The Office of Environmental Health Hazard Assessment, a division of the California Environmental Protection Agency, funded the research.

"This information can be used to activate measures to protect populations in areas which we know will be at increased risk of experiencing a health burden from these co-occurring environmental events and maximize public health benefits," said study lead author Lara Schwarz, a graduate student who is in a joint doctoral program at San Diego State and the Herbert Wertheim School of Public Health and Human Longevity Science at UC San Diego.

In places like California, these public health hazards are expected to appear in unison more frequently as the climate continues to warm and heat waves become more prevalent and long-lasting. The study could enable more targeted public health efforts because of its unprecedented consideration of two common hazards in tandem and its relatively high-resolution breakdown of where they are most likely to cause problems. Previous studies had tended only to evaluate city- or regional-level health trends.

"Understanding the health impacts of compounding environmental events such as extreme heat and various air pollutants like tropospheric ozone becomes a priority in a changing climate," said study co-author Tarik Benmarhnia, a climate change epidemiologist with appointments at UC San Diego's Scripps Institution of Oceanography and Herbert Wertheim School of Public Health and Human Longevity Science. "Such events are more frequent, intense and tend to co-occur, potentially creating synergistic effects on population health impacting the most vulnerable communities."

The work could inform early warning systems and prioritize resources more efficiently than at present, the researchers said.

Ozone, a gas and a variant molecular form of oxygen, is formed in the lower atmosphere when sunlight drives reactions involving various hydrocarbons, especially on hot days. Car exhaust produces such hydrocarbons. Ozone can exacerbate asthma and other respiratory conditions among vulnerable people and is more prevalent in urban areas with more traffic.

Extreme heat can similarly affect respiratory health by itself or in combination with high ozone levels.

Schwarz's team notes that vulnerability to the combination of excessive heat and ozone seems to be diminished in wealthier ZIP codes, correlating with factors that include better access to healthcare, lower stress levels, and more exercise.

"When considering the ZIP code level, certain areas
observed strong joint-effects," said the study authors. "A lower median income, higher percentage of unemployed
residents and exposure to other air pollutants within a ZIP code drove stronger joint-effects; a higher percentage of commuters who walk/bicycle, a marker for neighborhood wealth, showed decreased effects."

Credit: 
University of California - San Diego

Evacuating under dire wildfire scenarios

image: In 2018, the Camp Fire ripped through the town of Paradise, California at an unprecedented rate. Much of the town was destroyed in the tragedy.

Image: 
The White House via Wikicommons

In 2018, the Camp Fire ripped through the town of Paradise, California at an unprecedented rate. Officials had prepared an evacuation plan that required 3 hours to get residents to safety. The fire, bigger and faster than ever before, spread to the community in only 90 minutes.

As climate change intensifies, wildfires in the West are behaving in ways that were unimaginable in the past--and the common disaster response approaches are woefully unprepared for this new reality. In a recent study, a team of researchers led by the University of Utah proposed a framework for simulating dire scenarios, which the authors define as scenarios where there is less time to evacuate an area than is required. The paper, published on April 21, 2021 in the journal Natural Hazards Review, found that minimizing losses during dire scenarios involves elements that are not represented in current simulation models, among them improvisation and altruism.

"The world is dealing with situations that exceed our worst case scenarios," said lead author Thomas Cova, professor of geography at the U. "Basically we're calling for planning for the unprecedented, which is a tough thing to do."

Most emergency officials in fire-prone regions develop evacuation plans based on the assumptions that wildfires and residents will behave predictably based on past events. However, recent devastating wildfires in Oregon, California and other western states have shown that those assumptions may no longer hold.

"Wildfires are really becoming more unpredictable due to climate change. And from a psychological perspective, we have people in the same area being evacuated multiple times in the past 10 years. So, when evacuation orders come, people think, 'Well, nothing happened the last few times. I'm staying,'" said Frank Drews, professor of psychology at the U and co-author of the study. "Given the reality of climate change, it's important to critically assess where we are and say, 'Maybe we can't count on certain assumptions like we did in the past.'"

How to predict the unprecedented

The framework allows planners to create a dire wildfire scenario--one in which the lead time, defined as the time available to respond before the fire reaches a community, is less than the time required to evacuate. The authors developed a scoring system that categorizes each scenario as routine, dire, very dire or extremely dire based on many different factors.

One big factor affecting the direness is the ignition location, as one closer to a community offers less time than one farther away. A second major factor is the wildfire detection time. During the day, plumes of smoke can cue a quick response, but if it starts at night when everyone is asleep, it could take longer to get people moving. Officials may delay their decisions to avoid disrupting the community unnecessarily, but a last-minute evacuation order can cause traffic jams or put a strain on low-mobility households.

Alert system technologies can create dire circumstances if residents do not receive the warning in time due to poor cellphone coverage or low subscription rates to reverse 911 warning systems. If the community has many near misses with wildfire, the public's response could be to enact a wait-and-see approach before they leave their homes.

Using a dire scenario dashboard, the user assigns various factors an impediment level--low, minor or major--that can change at any point to lessen or increase a situation's direness.
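
A minimal sketch of such a scoring rule might look like the following; the category names come from the paper, but the thresholds, factor names and impediment weights here are invented purely for illustration.

```python
def direness(lead_time_min, evac_time_min, impediments):
    """Classify a wildfire evacuation scenario (illustrative thresholds only).

    lead_time_min -- estimated time before the fire reaches the community
    evac_time_min -- time required to evacuate all residents
    impediments   -- dict mapping a factor (e.g. night ignition, poor alert
                     coverage, wait-and-see behaviour) to "low", "minor" or "major"
    """
    weights = {"low": 0, "minor": 1, "major": 2}
    score = sum(weights[level] for level in impediments.values())
    deficit = evac_time_min - lead_time_min   # minutes short of what is needed

    if deficit <= 0:
        return "routine"
    if deficit < 30 and score <= 2:
        return "dire"
    if deficit < 90 and score <= 4:
        return "very dire"
    return "extremely dire"

# Example loosely patterned on the Camp Fire timings quoted above:
print(direness(lead_time_min=90, evac_time_min=180,
               impediments={"detection": "minor", "alerts": "major", "traffic": "major"}))
```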

"Usually when we run computer simulations, nothing ever goes wrong. But in the real world, things can get much worse half-way through an evacuation," said Cova. "So, what happens when you don't have enough time? The objective switches from getting everyone out to instead minimizing casualties. It's dark."

"More people began working remotely from home during the pandemic, which then led to them moving out of large cities into rural areas," explained assistant professor Dapeng Li of the South Dakota State University Department of Geography and Geospatial Sciences, a co-author and U alumnus who helped develop the computer simulations. "These rural communities typically have fewer resources and face challenges in rapidly evacuating a larger number of residents in this type of emergency situation."

Reducing dire scenarios

Simulating dire wildfire scenarios can improve planning and the outcomes in cases where everything goes wrong. For example, creating fire shelters and safety zones inside of a community can protect residents who can't get out, while reducing traffic congestion for others who can evacuate. During the 2018 Camp Fire, people improvised temporary refuges in parking lots and community buildings. Modeling could help city planners construct permanent safety areas ahead of time.

Improvisation and creative thinking are common human responses during wildfires; they are difficult to model but can be literally lifesaving. For example, during the 2020 Creek Fire in California, a nearby military base sent a helicopter to rescue trapped campers. Another crucial component is individuals helping others, such as people giving others rides or warning neighbors who missed the official alert. During the Camp Fire, Joe Kennedy used his bulldozer to singlehandedly clear abandoned cars that were blocking traffic.

"It is very common for families and neighbors to assume a first responder role and help each other during disasters," said Laura Siebeneck, associate professor of emergency management and disaster science at the University of North Texas and co-author of the study. "Many times, individuals and groups come together, cooperate, and improvise solutions as needed. Though it is difficult to capture improvisation and altruism in the modeling environment, better understanding human behavior during dire events can potentially lead to better protective actions and preparedness to dire wildfire events."

Studying and modeling dire scenarios is necessary to improve the outcomes of unprecedented changes in fire occurrence and behavior. This study is the first attempt to develop a simulation framework for these scenarios, and more research is needed to incorporate the unpredictable elements that create increasingly catastrophic wildfires.

Credit: 
University of Utah

Sterilizing skeeters

Mosquitoes are one of humanity's greatest nemeses, estimated to spread infections to nearly 700 million people per year and cause more than one million deaths.

UC Santa Barbara Distinguished Professor Craig Montell has made a breakthrough in one technique for controlling populations of Aedes aegypti, a mosquito that transmits dengue, yellow fever, Zika and other viruses. The study, published in the Proceedings of the National Academy of Sciences, documents the first use of CRISPR/Cas9 gene editing to target a specific gene tied to fertility in male mosquitoes. The researchers were then able to discern how this mutation can suppress the fertility of female mosquitoes.

Montell and his coauthors were working to improve a vector-control practice called the sterile insect technique (SIT). To manage populations, scientists raise large numbers of sterile male insects and release them in numbers that overwhelm their wild counterparts. The idea is that females that mate with sterile males before finding a fertile one are themselves rendered infertile, thereby shrinking the next generation. Repeating this process several times has the potential to crash the population. What's more, because each generation is smaller than the last, releasing a similar number of sterile males has a stronger effect over time.
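
The arithmetic behind that compounding effect is easy to sketch. In the textbook Knipling-style model below (a back-of-envelope illustration with made-up numbers, not data from this study), each female is fertile with probability N/(N+S), the share of males that are wild, so a constant release S bites harder as the wild population N shrinks.

```python
def simulate_sit(n_wild=10_000, sterile_release=50_000, growth_rate=5.0, generations=6):
    """Knipling-style sterile insect technique model (illustrative only).

    Each generation, a female is fertile with probability N / (N + S); fertile
    matings multiply the wild population by `growth_rate`.
    """
    history = [n_wild]
    for _ in range(generations):
        fertile_fraction = n_wild / (n_wild + sterile_release)
        n_wild = growth_rate * n_wild * fertile_fraction
        history.append(round(n_wild))
    return history

print(simulate_sit())
# Without releases the population would grow five-fold per generation; with a
# constant release the sterile-to-wild ratio climbs every generation and the
# population collapses instead (roughly 10,000 -> 8,300 -> 6,000 -> ... -> ~1).
```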

SIT has proven effective in managing a number of agricultural pests, including the medfly (Mediterranean fruit fly), a major pest in California. It has also been attempted with A. aegypti mosquitoes, which originated in Africa, but have since become invasive across many parts of the world, due in no small part to climate change and global travel.

In the past, scientists used chemicals or radiation to sterilize male A. aegypti. "There are enough genes that affect fertility that just a random approach of blasting a large number of genes will cause the males to be infertile," said Montell, the Duggan Professor of Molecular, Cellular, and Developmental Biology. However, the chemicals or radiation impacted the animals' health to such an extent that they were less successful in mating with females, which undercuts the effectiveness of the sterile insect technique.

Montell figured there had to be a more targeted approach with less collateral damage. He and his colleagues, including co-first authors Jieyan Chen and Junjie Luo, set out to mutate a gene in mosquitoes that specifically caused male sterility without otherwise impacting the insects' health. The best candidate they found was β2-tubulin (B2t); mutation of the related B2t gene in fruit flies is known to cause male sterility.

Using CRISPR/Cas9, the researchers knocked out B2t in male A. aegypti. They found that the mutant males produced no sperm, but unlike in previous efforts, the sterile studs were otherwise completely healthy. There was some debate over whether sperm -- albeit defective sperm from the sterile males -- was needed to render female mosquitoes infertile, or whether transfer of seminal fluid was all it took.

In one experiment, the researchers introduced 15 mutant males into a group of 15 females for 24 hours. Then they swapped the B2t males for 15 wild-type males, and left them there. "Essentially, all of the females remained sterile," Montell said. This confirmed that B2t males could suppress female fertility without producing sperm.

Next the team set out to determine how timing played into the effect. They exposed the females to mutant males for different lengths of time. The scientists noticed little difference after 30 minutes, but female fertility quickly dropped after that. Montell noted that females copulated twice on average even during the first 10 minutes. This indicated to him that females have to mate with many sterile males before being rendered infertile themselves.

Combining the females with the B2t males for four hours cut female fertility to 20% of normal levels. After eight hours the numbers began leveling out around 10%.

With the insights from the time trials, the team sought to approximate SIT under more natural conditions. They added different ratios of B2t and wild-type males at the same time to a population of 15 females for one week, and recorded female fertility. A ratio of about 5 or 6 sterile males to one wild-type male reduced female fertility by half. A ratio of 15 to 1 suppressed fertility to about 20%, where it leveled off.

Now, Aedes aegypti populations could easily bounce back from an 80% drop in fertility, Montell remarked. The success of SIT comes from subsequent, successive releases of sterile males, where each release will be more effective than the last as sterile males account for an ever-growing proportion of the population.

Montell plans to continue investigating mosquito mating behaviors and fertility. His team is devising a way to maintain stocks of B2t males so that they are sterile only in the wild and not in the lab. In addition, they are characterizing male mating behavior to uncover new ways to suppress mosquito populations.

"We've become very interested in studying many aspects of behavior in Aedes aegypti because these mosquitoes impact the health of so many people," said Montell, who has conducted a lot of research using fruit flies in the past. "There is a pandemic every year from mosquito-borne diseases."

"When CRISPER/Cas9 came out several years ago it just offered new opportunities to do things that you couldn't do before," he continued. "So, the time seemed right to for us to start working on Aedes aegypti."

Credit: 
University of California - Santa Barbara

Corn ethanol reduces carbon footprint, greenhouse gases

A study conducted by researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory reveals that the use of corn ethanol is reducing the carbon footprint and diminishing greenhouse gas emissions.

The study, recently published in Biofuels, Bioproducts and Biorefining, analyzes corn ethanol production in the United States from 2005 to 2019, when production more than quadrupled. Scientists assessed corn ethanol’s greenhouse gas (GHG) emission intensity (sometimes known as carbon intensity, or CI) during that period and found a 23% reduction in CI.

According to Argonne scientists, corn ethanol production increased over the period, from 1.6 to 15 billion gallons (6.1 to 57 billion liters). Supportive biofuel policies — such as the Environmental Protection Agency’s Renewable Fuel Standard and California’s Low-Carbon Fuel Standard — helped generate the increase. Both of those federal and state programs evaluate the life-cycle GHG emissions of fuel production pathways to calculate the benefits of using renewable fuels.

To assess emissions, scientists use a process called life-cycle analysis, or LCA — the standard method for comparing relative GHG emission impacts among different fuel production pathways.

“Since the late 1990s, LCA studies have demonstrated the GHG emission reduction benefits of corn ethanol as a gasoline alternative,” noted Argonne senior scientist Michael Wang, who leads the Systems Assessment Center in the laboratory’s Energy Systems division and is one of the study’s principal investigators. “This new study shows the continuous downtrend of corn ethanol GHG emissions.”

“The corn ethanol production pathway — both in terms of corn farming and biorefineries — has evolved greatly since 2005,” observed Argonne analyst Uisung Lee, first author of the study. Lee pointed out that the study relied on comprehensive statistics of corn farming from the U.S. Department of Agriculture and of corn ethanol production from industry benchmark data.

Hoyoung Kwon, a coauthor, noted that U.S. corn grain yields improved by 15%, reaching 168 bushels per acre, while fertilizer inputs remained roughly constant. The result was a lower fertilizer intensity per bushel of corn harvested: reductions of 7% in nitrogen use and 18% in potash use.

May Wu, another co-author, added that ethanol yields increased 6.5%, with a 24% reduction in ethanol plant energy use.

“With the increased total volume and the reduced CI values of corn ethanol between 2005 and 2019, corn ethanol has resulted in a total GHG reduction of more than 500 million tons between 2005 and 2019,” Wang emphasized. “For the United States, biofuels like corn ethanol can play a critical role in reducing our carbon footprint.”
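
Schematically, such a cumulative figure is assembled by weighting each year's ethanol volume by that year's intensity gap relative to the gasoline it displaces (the symbols below are our shorthand for the general LCA bookkeeping, not quantities quoted from the study):

$$\Delta \mathrm{GHG} \;=\; \sum_{y=2005}^{2019} V_y \, e \,\bigl(\mathrm{CI}_{\mathrm{gasoline}} - \mathrm{CI}_{\mathrm{ethanol},\,y}\bigr),$$

where $V_y$ is the ethanol volume blended in year $y$, $e$ its energy content per gallon, and the CI terms are life-cycle emission intensities per unit of fuel energy. Because both the annual volume and the intensity gap grew over the period, the cumulative saving compounds to the hundreds of millions of tonnes reported.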

The Argonne team used the laboratory’s GREET® (Greenhouse gases, Regulated Emissions, and Energy use in Technologies) model for this study, a one-of-a-kind LCA analytical tool that simulates the energy use and emissions output of various vehicle and fuel combinations. Government, industry, and other researchers worldwide use GREET® for LCA modeling of corn ethanol and other biofuels.

Credit: 
DOE/Argonne National Laboratory

Rubisco proton production can enhance CO2 acquisition

image: Figure 1. Carboxysome evolution pathways

Image: 
Ben Long || The Australian National University

Rubisco is arguably the most abundant--and most important--protein on Earth. This enzyme drives photosynthesis, the process that plants use to convert sunlight into energy to fuel crop growth and yield. Rubisco's role is to capture and fix carbon dioxide (CO2) into sugar that fuels the plant's activities. However, as much as Rubisco benefits plant growth, it also operates at a notoriously slow pace that hinders photosynthetic efficiency.

About 20 percent of the time Rubisco fixes oxygen (O2) molecules instead of CO2, costing the plant energy that could have been used to create yield. This triggers a time- and energy-consuming process called photorespiration, in which the plant routes the resulting compounds through three different compartments within the plant cell.

"However, many photosynthetic organisms have evolved mechanisms to overcome some of Rubisco's limitations," said Ben Long who led this recent study published in PNAS for a research project called Realizing Increased Photosynthetic Efficiency (RIPE). RIPE, which is led by Illinois in partnership with the Australian National University (ANU), is engineering crops to be more productive by improving photosynthesis. RIPE is supported by the Bill & Melinda Gates Foundation, Foundation for Food & Agriculture Research, and U.K. Foreign, Commonwealth & Development Office

"Among these organisms are microalgae and cyanobacteria from aquatic environments, which have efficiently functioning Rubisco enzymes sitting inside liquid protein droplets and protein compartments called pyrenoids and carboxysomes," said lead researcher Long from the ANU Research School of Biology.

How these protein compartments assist Rubisco function is not entirely known. The team from ANU aimed to find the answer using a mathematical model focused on the chemical reaction Rubisco carries out: as it collects CO2 from the atmosphere, Rubisco also releases positively charged protons.

"Inside Rubisco compartments, these protons can speed up Rubisco by increasing the amount of CO2 available. The protons do this by helping the conversion of bicarbonate into CO2," said Long. "Bicarbonate is the major source of CO2 in aquatic environments and photosynthetic organisms that use bicarbonate can tell us a lot about how to improve crop plants." 

The mathematical model gives the ANU team a better idea of why these special Rubisco compartments might improve the enzyme's function, and it also offers more insight into how they may have evolved. One hypothesis from the study suggests that periods of low CO2 in the Earth's ancient atmosphere may have triggered cyanobacteria and microalgae to evolve these specialized compartments, which might also be beneficial for organisms that grow in dim-light environments.

ANU members of the Realizing Increased Photosynthetic Efficiency (RIPE) project are trying to build these specialized Rubisco compartments in crop plants to assist in increasing yield. 

"The outcomes of this study," explained Long, "provide an insight into the correct function of specialized Rubisco compartments and give us a better understanding of how we expect them to perform in plants."

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Columbia Engineering team builds first hacker-resistant cloud software system

image: Microverification of cloud hypervisors

Image: 
Jason Nieh and Ronghui Gu/Columbia Engineering

New York, NY--May 24, 2021--Whenever you buy something on Amazon, your customer data is automatically updated and stored on thousands of virtual machines in the cloud. For businesses like Amazon, ensuring the safety and security of the data of its millions of customers is essential. This is true for large and small organizations alike. But up to now, there has been no way to guarantee that a software system is secure from bugs, hackers, and vulnerabilities.

Columbia Engineering researchers may have solved this security issue. They have developed SeKVM, the first system that guarantees--through a mathematical proof--the security of virtual machines in the cloud. In a new paper to be presented on May 26, 2021, at the 42nd IEEE Symposium on Security & Privacy, the researchers hope to lay the foundation for future innovations in system software verification, leading to a new generation of cyber-resilient system software.

SeKVM is the first formally verified system for cloud computing. Formal verification is a critical step: it is the process of proving that software is mathematically correct, that the program's code works as it should, and that there are no hidden security bugs to worry about.

"This is the first time that a real-world multiprocessor software system has been shown to be mathematically correct and secure," said Jason Nieh, professor of computer science and co-director of the Software Systems Laboratory. "This means that users' data are correctly managed by software running in the cloud and are safe from security bugs and hackers."

The construction of correct and secure system software has been one of the grand challenges of computing. Nieh has worked on different aspects of software systems since joining Columbia Engineering in 1999. When Ronghui Gu, the Tang Family Assistant Professor of Computer Science and an expert in formal verification, joined the computer science department in 2018, he and Nieh decided to collaborate on exploring formal verification of software systems.

Their research has garnered major interest: the two researchers have won an Amazon Research Award, received multiple grants from the National Science Foundation, and secured a multi-million-dollar Defense Advanced Research Projects Agency (DARPA) contract to further develop the SeKVM project. In addition, Nieh was awarded a Guggenheim Fellowship for this work.

Over the past dozen years, there has been a good deal of attention paid to formal verification, including work on verifying multiprocessor operating systems. "But all of that research has been conducted on small toy-like systems that nobody uses in real life," said Gu. "Verifying a multiprocessor commodity system, a system in wide use like Linux, has been thought to be more or less impossible."

The exponential growth of cloud computing has enabled companies and users to move their data and computation off-site into virtual machines running on hosts in the cloud. Cloud computing providers, like Amazon, deploy hypervisors to support these virtual machines.

A hypervisor is the key piece of software that makes cloud computing possible. The security of a virtual machine's data hinges on the correctness and trustworthiness of the hypervisor. For all their importance, hypervisors are complicated -- they can include an entire Linux operating system. Just a single weak link in the code -- one that is virtually impossible to detect via traditional testing -- can make a system vulnerable to hackers. Even if a hypervisor is written 99% correctly, a hacker can still sneak in through the remaining 1% and take control of the system.

Nieh and Gu's work is the first to verify a commodity system: specifically, the widely used KVM hypervisor, which cloud providers such as Amazon use to run virtual machines. They proved that SeKVM -- KVM with some small changes -- is secure and guarantees that virtual machines are isolated from one another.

"We've shown that our system can protect and secure private data and computing uploaded to the cloud with mathematical guarantees," said Xupeng Li, Gu's PhD student and co-lead author of the paper. "This has never been done before."

SeKVM was verified using MicroV, a new framework for verifying the security properties of large systems. It is based on the hypothesis that small changes to a system can make it significantly easier to verify -- a new technique the researchers call microverification. This layering technique retrofits an existing system and extracts the components that enforce security into a small core that, once verified, guarantees the security of the entire system.

The changes needed to retrofit a large system are quite modest--the researchers demonstrated that if the small core of the larger system is intact, then the system is secure and no private data will be leaked. This is how they were able to verify a large system such as KVM, which was previously thought to be impossible.
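As a loose analogy for that layering idea (an invented Python toy, not SeKVM's actual design; the real system is a C and assembly hypervisor whose proofs are machine-checked), imagine a small core that owns all guest memory and is the only code allowed to touch it. If every access goes through the core's checks, verifying the core alone is enough to argue that one guest can never read another guest's pages:

```python
# Toy illustration of the "small verified core" idea behind microverification.
# A SecureCore object owns all guest memory and enforces isolation; the rest of
# the (large, unverified) hypervisor may only touch memory through this core.
# This is an invented sketch, not SeKVM's actual design or code.

class SecureCore:
    def __init__(self):
        self._owner = {}    # page id -> owning VM id
        self._pages = {}    # page id -> page contents

    def allocate(self, vm_id, page_id):
        assert page_id not in self._owner, "page already owned"
        self._owner[page_id] = vm_id
        self._pages[page_id] = b"\x00" * 4096

    def read(self, vm_id, page_id):
        # The isolation invariant is enforced here, and only here.
        if self._owner.get(page_id) != vm_id:
            raise PermissionError("guest may not access another guest's page")
        return self._pages[page_id]

# The large untrusted layer (scheduler, device emulation, ...) calls into the
# core and never holds raw references to guest pages, so verifying SecureCore's
# checks suffices to argue the isolation property for the whole system.
core = SecureCore()
core.allocate("vm-A", page_id=1)
core.allocate("vm-B", page_id=2)
core.read("vm-A", 1)                 # allowed: vm-A owns page 1
try:
    core.read("vm-A", 2)             # blocked: page 2 belongs to vm-B
except PermissionError as err:
    print(err)
```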

"Think of a house--a crack in the drywall doesn't mean that the integrity of the house is at risk," Nieh explained. "It's still structurally sound and the key structural system is good."

Shih-Wei Li, Nieh's PhD student and co-lead author of the study, added, "SeKVM will serve as a safeguard in various domains, from banking systems and Internet of Things devices to autonomous vehicles and cryptocurrencies."

As the first verified commodity hypervisor, SeKVM could change how cloud services are designed, developed, deployed, and trusted. In a world where cybersecurity is a growing concern, this resiliency is highly in demand. Major cloud companies are already exploring how they can leverage SeKVM to meet that demand.

Credit: 
Columbia University School of Engineering and Applied Science