drm/scheduler: use job count instead of peek
author    Christian König <christian.koenig@amd.com>
          Fri, 9 Aug 2019 15:27:21 +0000 (17:27 +0200)
committer Alex Deucher <alexander.deucher@amd.com>
          Wed, 14 Aug 2019 20:45:53 +0000 (15:45 -0500)
The spsc_queue_peek function is accessing queue->head, which belongs to
the consumer thread and shouldn't be accessed by the producer.

This fixes a rare race condition when destroying entities.
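
For reference, the two helpers differ in which queue fields they touch.
A condensed sketch of include/drm/spsc_queue.h around this kernel version
(comments added here; details may vary by tree):

    struct spsc_queue {
            struct spsc_node *head;  /* owned by the consumer thread */
            atomic_long_t tail;      /* producers xchg in the new tail */
            atomic_t job_count;      /* atomic, readable from any thread */
    };

    /* Reads head, which only the consumer may update. */
    static inline struct spsc_node *spsc_queue_peek(struct spsc_queue *queue)
    {
            return queue->head;
    }

    /* A plain atomic read, safe from the producer side as well. */
    static inline int spsc_queue_count(struct spsc_queue *queue)
    {
            return atomic_read(&queue->job_count);
    }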

Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Reviewed-by: Monk.liu@amd.com
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drivers/gpu/drm/scheduler/sched_entity.c

index 35ddbec..671c90f 100644
@@ -95,7 +95,7 @@ static bool drm_sched_entity_is_idle(struct drm_sched_entity *entity)
        rmb(); /* for list_empty to work without lock */
 
        if (list_empty(&entity->list) ||
-           spsc_queue_peek(&entity->job_queue) == NULL)
+           spsc_queue_count(&entity->job_queue) == 0)
                return true;
 
        return false;
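
Why head belongs to the consumer is visible in the push/pop pair. A
condensed sketch (again from include/drm/spsc_queue.h; preempt handling
and the empty-queue tail fixup are omitted for brevity):

    /* Producer side: exchanges tail and bumps the counter, never reads head. */
    static inline bool spsc_queue_push(struct spsc_queue *queue,
                                       struct spsc_node *node)
    {
            struct spsc_node **tail;

            node->next = NULL;
            tail = (struct spsc_node **)atomic_long_xchg(&queue->tail,
                                                         (long)&node->next);
            WRITE_ONCE(*tail, node);
            atomic_inc(&queue->job_count);
            smp_wmb(); /* make the node visible before waking the consumer */

            return tail == &queue->head;
    }

    /* Consumer side: the only code that reads and rewrites head. */
    static inline struct spsc_node *spsc_queue_pop(struct spsc_queue *queue)
    {
            struct spsc_node *node;

            smp_rmb(); /* pairs with the producer's smp_wmb() */
            node = READ_ONCE(queue->head);
            if (!node)
                    return NULL;

            WRITE_ONCE(queue->head, READ_ONCE(node->next));
            atomic_dec(&queue->job_count);
            return node;
    }

Since pop rewrites head without any lock shared with the producer, a
spsc_queue_peek() from the entity-teardown path races with that update,
while atomic_read(&queue->job_count) does not — which is why the call
sites above and below switch to the count.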
@@ -281,7 +281,7 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
        /* Consumption of existing IBs wasn't completed. Forcefully
         * remove them here.
         */
-       if (spsc_queue_peek(&entity->job_queue)) {
+       if (spsc_queue_count(&entity->job_queue)) {
                if (sched) {
                        /* Park the kernel for a moment to make sure it isn't processing
                         * our entity.