Running multiple operating systems on a single hardware platform poses difficult software design challenges, particularly in applications with multiple real-time processing subsystems. A virtual machine approach makes it feasible: the software environment creates multiple virtual machines, and each OS runs unmodified on its own virtual machine, relying on hardware features of the CPU to keep the virtual machines from interfering with one another.

Dual-OS virtual machine systems exist today. The INtime RTOS has been deployed in real-time applications operating at cycle times of 500 to 1000 microseconds on single-core desktop and industrial motherboard platforms, sharing the CPU with Windows and its applications. For real-time applications that require faster cycle times, the traditional remedy has been a faster processor, but the overhead of switching tasks imposes a limit of its own. When two virtual machines share a CPU, as in single-core processor designs, a full machine context must be saved and restored on every switch between the two operating systems. Saving and restoring these contexts compromises event response latency and cycle times, and can contribute 10 to 30 microseconds to the worst-case timer interrupt jitter.

At a cycle time of one millisecond, 10 to 30 microseconds of worst-case interrupt latency represents a jitter variation of only a few percent. But as cycle times decrease, for example from 200 to 50 microseconds, 10 to 30 microseconds of timer jitter becomes a significant fraction of the cycle. Jitter that large adversely affects the stability and quality of the control algorithm, degrading the stability margin of closed-loop control systems, especially naturally unstable systems such as position-feedback motion control loops.