Since kmalloc() will round up the allocation to the next slab size or
page, it will normally return a pointer to a memory block bigger than we
asked for. We can query for the actual size of the allocated block using
ksize() and expand our variable-size reservation_list to take advantage
of that extra space.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Christian König <christian.koenig@amd.com>
Cc: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190712080314.21018-1-chris@chris-wilson.co.uk
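
As background, here is a minimal, hypothetical sketch of the pattern the
patch relies on (the struct and function names below are invented for
illustration and are not the reservation.c code): ksize() reports the
usable size of a kmalloc() allocation, and that size can be converted
into the capacity of a trailing flexible array with offsetof() arithmetic.

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/stddef.h>

/* Hypothetical container: a fixed header followed by a flexible array. */
struct item_list {
	unsigned int count;	/* entries currently in use */
	unsigned int max;	/* usable capacity of items[] */
	void *items[];
};

static struct item_list *item_list_alloc(unsigned int nr)
{
	struct item_list *list;

	/* kmalloc() rounds the request up to the next slab bucket or page. */
	list = kmalloc(struct_size(list, items, nr), GFP_KERNEL);
	if (!list)
		return NULL;

	list->count = 0;
	/*
	 * Derive the capacity from the block actually returned rather than
	 * from nr: subtract the fixed header, then divide the remainder by
	 * the size of one entry.
	 */
	list->max = (ksize(list) - offsetof(typeof(*list), items)) /
		    sizeof(*list->items);

	return list;
}

In the sketch, list->max may end up larger than nr at no extra cost; the
first hunk below applies the same ksize()/offsetof() arithmetic to
shared_max. Note that the cleanup loop in the second hunk switches from
shared_max back to the original max, since only slots up to max were ever
populated and the extra capacity is still uninitialised.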
 			RCU_INIT_POINTER(new->shared[j++], fence);
 	}
 	new->shared_count = j;
-	new->shared_max = max;
+	new->shared_max =
+		(ksize(new) - offsetof(typeof(*new), shared)) /
+		sizeof(*new->shared);

 	preempt_disable();
 	write_seqcount_begin(&obj->seq);

[...]

 		return 0;

 	/* Drop the references to the signaled fences */
-	for (i = k; i < new->shared_max; ++i) {
+	for (i = k; i < max; ++i) {
 		struct dma_fence *fence;

 		fence = rcu_dereference_protected(new->shared[i],