VM Lock Virtual-Real Synergy Optimization
Pain Points
Some kernel locks make waiters spin while trying to acquire a contended lock. If the vCPU that holds the lock is descheduled by the hypervisor in favor of another task, the spinning vCPUs can waste a large amount of CPU time busy-waiting. In VM overcommitment scenarios, this problem significantly degrades overall VM performance.
Solution
To resolve this problem, this feature uses shared memory to relay the vCPU preemption status from the hypervisor to VMs. A spinning vCPU can then abort its wait as soon as it detects that the lock holder has been preempted, avoiding unnecessary busy-waiting.
The preempted state value tracks whether a vCPU has been preempted on the host: 0 indicates that the hypervisor is actively scheduling the vCPU, while 1 indicates that the hypervisor has descheduled it.
Application Scenarios
- The VM lock optimization feature targets vCPU overcommitment scenarios with core-bound vCPUs, where multiple vCPU threads contend for resources on shared physical cores.
- Using a coordinated frontend-backend approach, the feature tracks vCPU thread preemption to refine scheduling and lock efficiency. It improves performance in kernel locking and scheduling paths such as mutex_spin_on_owner, mutex_can_spin_on_owner, rtmutex_spin_on_owner, osq_lock, and available_idle_cpu.