mm/hmm: add a test for cross device private faults
author	Ralph Campbell <rcampbell@nvidia.com>
Mon, 25 Jul 2022 18:36:15 +0000 (11:36 -0700)
committer	akpm <akpm@linux-foundation.org>
Sat, 30 Jul 2022 01:07:18 +0000 (18:07 -0700)
Add a simple test case for when hmm_range_fault() is called with the
HMM_PFN_REQ_FAULT flag and a device private PTE is found for a device
other than the hmm_range::dev_private_owner.  This should cause the page
to be faulted back to system memory from the other device and its PFN
to be returned in the output array.
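
For reference, a minimal driver-side sketch of the call pattern this
exercises (the function name, notifier plumbing and my_owner cookie are
illustrative assumptions, not part of this patch): setting
HMM_PFN_REQ_FAULT in default_flags and pointing dev_private_owner at the
caller's own cookie means a device-private entry owned by a different
device is migrated back to system memory rather than handed back as-is.

	/*
	 * Illustrative sketch only (not from this patch): fault one page at
	 * addr and require a usable PFN.  A device-private PTE owned by a
	 * device other than range.dev_private_owner is migrated back to
	 * system memory by hmm_range_fault() before the PFN is reported.
	 * The driver lock that normally guards the mmu_interval_read_retry()
	 * check is omitted for brevity.
	 */
	#include <linux/hmm.h>
	#include <linux/mm.h>
	#include <linux/mmu_notifier.h>

	static int my_fault_one_page(struct mmu_interval_notifier *notifier,
				     struct mm_struct *mm, unsigned long addr,
				     void *my_owner)
	{
		unsigned long hmm_pfn;
		struct hmm_range range = {
			.notifier = notifier,
			.start = addr,
			.end = addr + PAGE_SIZE,
			.hmm_pfns = &hmm_pfn,
			.default_flags = HMM_PFN_REQ_FAULT,
			.dev_private_owner = my_owner,
		};
		int ret;

	again:
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret) {
			if (ret == -EBUSY)
				goto again;
			return ret;
		}
		if (mmu_interval_read_retry(notifier, range.notifier_seq))
			goto again;

		/* hmm_pfn now refers to a present page, e.g. via hmm_pfn_to_page(). */
		return 0;
	}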

Also, remove a piece of code that unnecessarily unmaps part of the buffer.

Link: https://lkml.kernel.org/r/20220727000837.4128709-3-rcampbell@nvidia.com
Link: https://lkml.kernel.org/r/20220725183615.4118795-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Philip Yang <Philip.Yang@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
tools/testing/selftests/vm/hmm-tests.c

index 716b62c..939a33d 100644
@@ -1603,9 +1603,19 @@ TEST_F(hmm2, double_map)
        for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
                ASSERT_EQ(ptr[i], i);
 
-       /* Punch a hole after the first page address. */
-       ret = munmap(buffer->ptr + self->page_size, self->page_size);
+       /* Migrate pages to device 1 and try to read from device 0. */
+       ret = hmm_dmirror_cmd(self->fd1, HMM_DMIRROR_MIGRATE, buffer, npages);
+       ASSERT_EQ(ret, 0);
+       ASSERT_EQ(buffer->cpages, npages);
+
+       ret = hmm_dmirror_cmd(self->fd0, HMM_DMIRROR_READ, buffer, npages);
        ASSERT_EQ(ret, 0);
+       ASSERT_EQ(buffer->cpages, npages);
+       ASSERT_EQ(buffer->faults, 1);
+
+       /* Check what device 0 read. */
+       for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+               ASSERT_EQ(ptr[i], i);
 
        hmm_buffer_free(buffer);
 }
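
For context on the buffer->cpages and buffer->faults checks above, the
selftest drives the test_hmm dmirror devices through an ioctl wrapper
along these lines (a paraphrased sketch assuming the HMM_DMIRROR UAPI in
lib/test_hmm_uapi.h; the real helper and struct hmm_buffer definition in
hmm-tests.c may differ in detail):

	/* Sketch of the selftest plumbing, not the authoritative source. */
	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include "test_hmm_uapi.h"	/* struct hmm_dmirror_cmd, HMM_DMIRROR_* */

	struct hmm_buffer {		/* selftest bookkeeping for one mapping */
		void		*ptr;	/* CPU mapping under test */
		void		*mirror;/* buffer the device "reads" into */
		unsigned long	size;
		int		fd;
		uint64_t	cpages;	/* pages the request touched */
		uint64_t	faults;	/* faults the request triggered */
	};

	static int hmm_dmirror_cmd(int fd, unsigned long request,
				   struct hmm_buffer *buffer,
				   unsigned long npages)
	{
		struct hmm_dmirror_cmd cmd;
		int ret;

		/* Ask the dmirror device to READ/WRITE/MIGRATE npages at ptr. */
		cmd.addr = (__u64)(unsigned long)buffer->ptr;
		cmd.ptr = (__u64)(unsigned long)buffer->mirror;
		cmd.npages = npages;

		for (;;) {
			ret = ioctl(fd, request, &cmd);
			if (ret == 0)
				break;
			if (errno == EINTR)
				continue;
			return -errno;
		}
		/* Copy back what the kernel reported so the test can assert on it. */
		buffer->cpages = cmd.cpages;
		buffer->faults = cmd.faults;

		return 0;
	}

In the new test, the MIGRATE request on fd1 moves the pages into device
1's private memory, and the subsequent READ on fd0 must fault them back
to system memory, which is why the test expects buffer->faults == 1 and
the mirror contents to still match.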