parallels: Handle failure for potentially large allocations
author    Kevin Wolf <kwolf@redhat.com>
Tue, 20 May 2014 11:32:14 +0000 (13:32 +0200)
committer Kevin Wolf <kwolf@redhat.com>
Fri, 15 Aug 2014 13:07:15 +0000 (15:07 +0200)
Some code in the block layer makes potentially huge allocations. Failure
is not completely unexpected there, so avoid aborting QEMU and handle
out-of-memory situations gracefully.

This patch addresses the allocations in the parallels block driver.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
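
[Editor's note] To illustrate the idiom this series applies, here is a
minimal, self-contained sketch of converting an abort-on-failure
allocation into a recoverable error path. The function and variable
names (load_table, nb_entries, table) are illustrative, not taken from
the driver:

#include <glib.h>
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

static int load_table(uint32_t **table, size_t nb_entries)
{
    /* g_malloc() aborts the whole process when memory is exhausted;
     * g_try_malloc() returns NULL instead, letting the caller turn
     * the failure into an ordinary error code. */
    uint32_t *buf = g_try_malloc(nb_entries * sizeof(uint32_t));

    /* g_try_malloc(0) legitimately returns NULL, so NULL only
     * signals an error when a non-zero size was actually requested. */
    if (nb_entries && buf == NULL) {
        return -ENOMEM;
    }

    *table = buf;
    return 0;
}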
diff --git a/block/parallels.c b/block/parallels.c
index 1a5bd35..7325678 100644
@@ -105,7 +105,11 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
         ret = -EFBIG;
         goto fail;
     }
-    s->catalog_bitmap = g_malloc(s->catalog_size * 4);
+    s->catalog_bitmap = g_try_malloc(s->catalog_size * 4);
+    if (s->catalog_size && s->catalog_bitmap == NULL) {
+        ret = -ENOMEM;
+        goto fail;
+    }
 
     ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);
     if (ret < 0) {
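
[Editor's note] Two details of the hunk are worth noting. First, the
NULL check is guarded by s->catalog_size because g_try_malloc() returns
NULL for a zero-byte request, which is not an allocation failure.
Second, the multiplication s->catalog_size * 4 appears safe here: the
-EFBIG path visible at the top of the hunk rejects oversized catalogs
before the allocation is attempted, presumably bounding catalog_size so
the product cannot overflow.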