From: Praveen Paneri
Date: Mon, 2 May 2016 08:40:29 +0000 (+0530)
Subject: drm/i915: Add rpm get/put in oom and vmap notifier
X-Git-Tag: v4.8-rc1~62^2~45^2~123
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=ea9d9768a497e23713366a0e2ca290332bc1ed81;p=platform%2Fkernel%2Flinux-exynos.git

drm/i915: Add rpm get/put in oom and vmap notifier

i915_gem_shrink() will scan the bound list only if the device is not
suspended, but in an OOM failure scenario it becomes absolutely necessary
to release as much memory as possible. Also, on an allocation failure from
the vmap address space, it is incumbent on the driver to reap all its
vmaps. So, add rpm get/put in i915_gem_shrinker_oom() and
i915_gem_shrinker_vmap() to ensure shrinking of bound objects as well.

Signed-off-by: Praveen Paneri
Reviewed-by: Chris Wilson
Signed-off-by: Chris Wilson
Link: http://patchwork.freedesktop.org/patch/msgid/1462178429-13449-2-git-send-email-praveen.paneri@intel.com
---

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 9a24415..79004f3 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -357,7 +357,9 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	if (!i915_gem_shrinker_lock_uninterruptible(dev_priv, &slu, 5000))
 		return NOTIFY_DONE;
 
+	intel_runtime_pm_get(dev_priv);
 	freed_pages = i915_gem_shrink_all(dev_priv);
+	intel_runtime_pm_put(dev_priv);
 
 	/* Because we may be allocating inside our own driver, we cannot
 	 * assert that there are no objects with pinned pages that are not
@@ -410,11 +412,13 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
 	if (ret)
 		goto out;
 
+	intel_runtime_pm_get(dev_priv);
 	freed_pages += i915_gem_shrink(dev_priv, -1UL,
 				       I915_SHRINK_BOUND |
 				       I915_SHRINK_UNBOUND |
 				       I915_SHRINK_ACTIVE |
 				       I915_SHRINK_VMAPS);
+	intel_runtime_pm_put(dev_priv);
 
 	/* We also want to clear any cached iomaps as they wrap vmap */
 	list_for_each_entry_safe(vma, next,
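
For readers outside the i915 tree, the following stand-alone C sketch models the pattern this patch introduces: the notifier takes a runtime-PM reference before shrinking, so that bound objects can be reaped even when the device would otherwise be runtime suspended, and drops the reference afterwards. All demo_* names are hypothetical stand-ins for illustration only; they are not the real i915 or kernel API, and the real intel_runtime_pm_get()/intel_runtime_pm_put() and i915_gem_shrink_all() do considerably more than these stubs.

/*
 * Minimal sketch of the get/shrink/put pattern, using hypothetical
 * demo_* stand-ins rather than the real i915/kernel API.
 */
#include <stdio.h>

struct demo_device {
	int rpm_refcount;		/* models the runtime-PM reference count */
	unsigned long bound_pages;	/* pages that need the device awake to free */
};

/* Hypothetical stand-in for intel_runtime_pm_get(): keep the device awake. */
static void demo_runtime_pm_get(struct demo_device *dev)
{
	dev->rpm_refcount++;
}

/* Hypothetical stand-in for intel_runtime_pm_put(): allow suspend again. */
static void demo_runtime_pm_put(struct demo_device *dev)
{
	dev->rpm_refcount--;
}

/*
 * Stand-in for i915_gem_shrink_all(): bound objects are only reaped while
 * the device is awake (rpm_refcount > 0), mirroring the commit message's
 * point that the bound list is skipped when the device is suspended.
 */
static unsigned long demo_shrink_all(struct demo_device *dev)
{
	unsigned long freed = 0;

	if (dev->rpm_refcount > 0) {
		freed = dev->bound_pages;
		dev->bound_pages = 0;
	}
	return freed;
}

/*
 * Models i915_gem_shrinker_oom() after the patch: bracket the shrink with a
 * runtime-PM get/put so bound objects are released even under OOM pressure.
 */
static unsigned long demo_shrinker_oom(struct demo_device *dev)
{
	unsigned long freed_pages;

	demo_runtime_pm_get(dev);
	freed_pages = demo_shrink_all(dev);
	demo_runtime_pm_put(dev);

	return freed_pages;
}

int main(void)
{
	struct demo_device dev = { .rpm_refcount = 0, .bound_pages = 128 };

	/* Without the get/put bracket, the stub would free nothing here. */
	printf("freed %lu pages\n", demo_shrinker_oom(&dev));
	return 0;
}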