From: Marcelo Tosatti
Date: Wed, 13 Mar 2013 01:36:43 +0000 (-0300)
Subject: KVM: MMU: make kvm_mmu_available_pages robust against n_used_mmu_pages > n_max_mmu_pages
X-Git-Tag: v3.10-rc1~87^2~100
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=5d218814328da91a27e982748443e7e375e11396;p=platform%2Fkernel%2Flinux-exynos.git

KVM: MMU: make kvm_mmu_available_pages robust against n_used_mmu_pages > n_max_mmu_pages

As noticed by Ulrich Obergfell, the mmu counters are for beancounting
purposes only - so n_used_mmu_pages and n_max_mmu_pages could be relaxed
(example: before f0f5933a1626c8df7b), resulting in
n_used_mmu_pages > n_max_mmu_pages.

Make code robust against n_used_mmu_pages > n_max_mmu_pages.

Reviewed-by: Xiao Guangrong
Signed-off-by: Marcelo Tosatti
Signed-off-by: Gleb Natapov
---

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 6987108..3b1ad00 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -57,8 +57,11 @@ int kvm_init_shadow_mmu(struct kvm_vcpu *vcpu, struct kvm_mmu *context);
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 {
-	return kvm->arch.n_max_mmu_pages -
-		kvm->arch.n_used_mmu_pages;
+	if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
+		return kvm->arch.n_max_mmu_pages -
+			kvm->arch.n_used_mmu_pages;
+
+	return 0;
 }
 
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
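
For illustration only, a minimal stand-alone C sketch of why the guard matters
(the helper names below are hypothetical and merely mirror the kernel logic,
they are not KVM code): kvm_mmu_available_pages() returns an unsigned int, so
without the check the subtraction wraps around whenever
n_used_mmu_pages > n_max_mmu_pages and a caller would see an enormous number
of "available" pages instead of zero.

#include <stdio.h>

/* Hypothetical stand-in for the unpatched helper: unsigned subtraction
 * wraps around when n_used > n_max. */
static unsigned int available_pages_unguarded(unsigned int n_max,
					      unsigned int n_used)
{
	return n_max - n_used;
}

/* Same clamp the patch applies in kvm_mmu_available_pages(). */
static unsigned int available_pages_guarded(unsigned int n_max,
					    unsigned int n_used)
{
	if (n_max > n_used)
		return n_max - n_used;

	return 0;
}

int main(void)
{
	/* Relaxed accounting can leave the counters in this state. */
	unsigned int n_max = 100, n_used = 103;

	printf("unguarded: %u\n", available_pages_unguarded(n_max, n_used)); /* huge wrapped value */
	printf("guarded:   %u\n", available_pages_guarded(n_max, n_used));   /* 0 */
	return 0;
}

Clamping to 0, as the patch does, simply reports "no pages available" in the
relaxed-accounting case instead of a wrapped value.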