Address space for heap segments is reserved in a mmap call with
MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE. This
reservation does not count against the RSS limit of the process or
system. Backing memory is allocated using mprotect in alloc_new_heap
and grow_heap, and at this point, the allocator expects the kernel
to provide memory (subject to memory overcommit).
The SIGSEGV that might be generated due to MAP_NORESERVE (according
to the mmap manual page) does not seem to occur in practice; it is
always SIGKILL from the OOM killer. Even if there were a way for
SIGSEGV to be generated, it would be confusing to applications that
this happens only for secondary heaps, not for large mmap-based
allocations, and not for the main arena.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
 #if HAVE_TUNABLES
   if (__glibc_unlikely (mp_.hp_pagesize != 0))
     {
-      /* MAP_NORESERVE is not used for huge pages because some kernel may
-         not reserve the mmap region and a subsequent access may trigger
-         a SIGBUS if there is no free pages in the pool.  */
       heap_info *h = alloc_new_heap (size, top_pad, mp_.hp_pagesize,
                                      mp_.hp_flags);
       if (h != NULL)
         return h;
     }
 #endif
-  return alloc_new_heap (size, top_pad, GLRO (dl_pagesize), MAP_NORESERVE);
+  return alloc_new_heap (size, top_pad, GLRO (dl_pagesize), 0);
 }
/* Grow a heap. size is automatically rounded up to a
 # define MAP_ANONYMOUS MAP_ANON
 #endif
-#ifndef MAP_NORESERVE
-# define MAP_NORESERVE 0
-#endif
-
 #define MMAP(addr, size, prot, flags) \
   __mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0)