habanalabs: prevent soft lockup during unmap
author Oded Gabbay <ogabbay@kernel.org>
Mon, 11 Jan 2021 15:49:30 +0000 (17:49 +0200)
committer Oded Gabbay <ogabbay@kernel.org>
Tue, 12 Jan 2021 13:00:10 +0000 (15:00 +0200)
commit 9488307a5559255f2fc9a3ab61e1c31e243ca7c6
tree ad3d6a9d5a2175e2346c0766a0388cb6efa432b1
parent aa6df6533b8f9ead98889baa92e2b19793b1c77e
habanalabs: prevent soft lockup during unmap

When using a deep learning framework such as tensorflow or pytorch,
there can be tens of thousands of host memory mappings. When the user
frees all of those mappings at the same time, unmapping and unpinning
them can take a long time, which may trigger a soft lockup bug.

To prevent this, we need to release the core to do other work during
the unmapping process. For now, we chose to do so once every 32K
unmappings (each unmap is of a single 4K page).

Signed-off-by: Oded Gabbay <ogabbay@kernel.org>
drivers/misc/habanalabs/common/habanalabs.h
drivers/misc/habanalabs/common/memory.c
drivers/misc/habanalabs/common/mmu.c