7 #if 0 /* Moved to malloc.h */
8 /* ---------- To make a malloc.h, start cutting here ------------ */
11 A version of malloc/free/realloc written by Doug Lea and released to the
12 public domain. Send questions/comments/complaints/performance data
15 * VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
17 Note: There may be an updated version of this malloc obtainable at
18 ftp://g.oswego.edu/pub/misc/malloc.c
19 Check before installing!
21 * Why use this malloc?
23 This is not the fastest, most space-conserving, most portable, or
24 most tunable malloc ever written. However it is among the fastest
25 while also being among the most space-conserving, portable and tunable.
26 Consistent balance across these factors results in a good general-purpose
27 allocator. For a high-level description, see
28 http://g.oswego.edu/dl/html/malloc.html
30 * Synopsis of public routines
32 (Much fuller descriptions are contained in the program documentation below.)
35 Return a pointer to a newly allocated chunk of at least n bytes, or null
36 if no space is available.
38       Release the chunk of memory pointed to by p, with no effect if p is null.
39 realloc(Void_t* p, size_t n);
40 Return a pointer to a chunk of size n that contains the same data
41 as does chunk p up to the minimum of (n, p's size) bytes, or null
42 if no space is available. The returned pointer may or may not be
43 the same as p. If p is null, equivalent to malloc. Unless the
44 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
45 size argument of zero (re)allocates a minimum-sized chunk.
46 memalign(size_t alignment, size_t n);
47 Return a pointer to a newly allocated chunk of n bytes, aligned
48 in accord with the alignment argument, which must be a power of
51 Equivalent to memalign(pagesize, n), where pagesize is the page
52 size of the system (or as near to this as can be figured out from
53 all the includes/defines below.)
55 Equivalent to valloc(minimum-page-that-holds(n)), that is,
56 round up n to nearest pagesize.
57 calloc(size_t unit, size_t quantity);
58 Returns a pointer to quantity * unit bytes, with all locations
61 Equivalent to free(p).
62 malloc_trim(size_t pad);
63 Release all but pad bytes of freed top-most memory back
64 to the system. Return 1 if successful, else 0.
65 malloc_usable_size(Void_t* p);
66       Report the number of usable allocated bytes associated with allocated
67 chunk p. This may or may not report more bytes than were requested,
68 due to alignment and minimum size constraints.
70 Prints brief summary statistics.
72 Returns (by copy) a struct containing various summary statistics.
73 mallopt(int parameter_number, int parameter_value)
74 Changes one of the tunable parameters described below. Returns
75 1 if successful in changing the parameter, else 0.
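
    For illustration only (a hedged sketch, not part of this file), a
    typical sequence of calls to the routines above:

      void* p = malloc(100);
      p = realloc(p, 200);
      void* a = memalign(64, 50);
      free(a);
      free(p);
      malloc_stats();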
80 8 byte alignment is currently hardwired into the design. This
81 seems to suffice for all current machines and C compilers.
83 Assumed pointer representation: 4 or 8 bytes
84       Code for 8-byte pointers is untested by me but is reported
85       to work reliably by Wolfram Gloger, who contributed most of the
86 changes supporting this.
88 Assumed size_t representation: 4 or 8 bytes
89 Note that size_t is allowed to be 4 bytes even if pointers are 8.
91 Minimum overhead per allocated chunk: 4 or 8 bytes
92 Each malloced chunk has a hidden overhead of 4 bytes holding size
93 and status information.
95 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
96                           8-byte ptrs:  24/32 bytes (including 4/8 overhead)
98 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
99 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
100 needed; 4 (8) for a trailing size field
101 and 8 (16) bytes for free list pointers. Thus, the minimum
102 allocatable size is 16/24/32 bytes.
104 Even a request for zero bytes (i.e., malloc(0)) returns a
105 pointer to something of the minimum allocatable size.
107 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
108 8-byte size_t: 2^63 - 16 bytes
110 It is assumed that (possibly signed) size_t bit values suffice to
111 represent chunk sizes. `Possibly signed' is due to the fact
112 that `size_t' may be defined on a system as either a signed or
113 an unsigned type. To be conservative, values that would appear
114 as negative numbers are avoided.
115 Requests for sizes with a negative sign bit when the request
116       size is treated as a long will return null.
118 Maximum overhead wastage per allocated chunk: normally 15 bytes
120       Alignment demands, plus the minimum allocatable size restriction
121 make the normal worst-case wastage 15 bytes (i.e., up to 15
122 more bytes will be allocated than were requested in malloc), with
124 1. Because requests for zero bytes allocate non-zero space,
125 the worst case wastage for a request of zero bytes is 24 bytes.
126 2. For requests >= mmap_threshold that are serviced via
127 mmap(), the worst case wastage is 8 bytes plus the remainder
128 from a system page (the minimal mmap unit); typically 4096 bytes.
132 Here are some features that are NOT currently supported
134 * No user-definable hooks for callbacks and the like.
135 * No automated mechanism for fully checking that all accesses
136 to malloced memory stay within their bounds.
137 * No support for compaction.
139 * Synopsis of compile-time options:
141 People have reported using previous versions of this malloc on all
142 versions of Unix, sometimes by tweaking some of the defines
143 below. It has been tested most extensively on Solaris and
144 Linux. It is also reported to work on WIN32 platforms.
145 People have also reported adapting this malloc for use in
146 stand-alone embedded systems.
148 The implementation is in straight, hand-tuned ANSI C. Among other
149 consequences, it uses a lot of macros. Because of this, to be at
150 all usable, this code should be compiled using an optimizing compiler
151 (for example gcc -O2) that can simplify expressions and control
154 __STD_C (default: derived from C compiler defines)
155      Nonzero if using an ANSI-standard C compiler, a C++ compiler, or
156 a C compiler sufficiently close to ANSI to get away with it.
157 DEBUG (default: NOT defined)
158 Define to enable debugging. Adds fairly extensive assertion-based
159 checking to help track down memory errors, but noticeably slows down
161 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
162 Define this if you think that realloc(p, 0) should be equivalent
163 to free(p). Otherwise, since malloc returns a unique pointer for
164 malloc(0), so does realloc(p, 0).
165 HAVE_MEMCPY (default: defined)
166 Define if you are not otherwise using ANSI STD C, but still
167 have memcpy and memset in your C library and want to use them.
168 Otherwise, simple internal versions are supplied.
169 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
170 Define as 1 if you want the C library versions of memset and
171 memcpy called in realloc and calloc (otherwise macro versions are used).
172 At least on some platforms, the simple macro versions usually
173 outperform libc versions.
174 HAVE_MMAP (default: defined as 1)
175 Define to non-zero to optionally make malloc() use mmap() to
176 allocate very large blocks.
177 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
178 Define to non-zero to optionally make realloc() use mremap() to
179 reallocate very large blocks.
180 malloc_getpagesize (default: derived from system #includes)
181 Either a constant or routine call returning the system page size.
182 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
183 Optionally define if you are on a system with a /usr/include/malloc.h
184 that declares struct mallinfo. It is not at all necessary to
185      define this even if you do, but doing so will ensure consistency.
186 INTERNAL_SIZE_T (default: size_t)
187 Define to a 32-bit type (probably `unsigned int') if you are on a
188 64-bit machine, yet do not want or need to allow malloc requests of
189 greater than 2^31 to be handled. This saves space, especially for
191 INTERNAL_LINUX_C_LIB (default: NOT defined)
192 Defined only when compiled as part of Linux libc.
193 Also note that there is some odd internal name-mangling via defines
194 (for example, internally, `malloc' is named `mALLOc') needed
195 when compiling in this case. These look funny but don't otherwise
197 WIN32 (default: undefined)
198 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
199 LACKS_UNISTD_H (default: undefined if not WIN32)
200 Define this if your system does not have a <unistd.h>.
201 LACKS_SYS_PARAM_H (default: undefined if not WIN32)
202 Define this if your system does not have a <sys/param.h>.
203 MORECORE (default: sbrk)
204 The name of the routine to call to obtain more memory from the system.
205 MORECORE_FAILURE (default: -1)
206 The value returned upon failure of MORECORE.
207 MORECORE_CLEARS (default 1)
208 true (1) if the routine mapped to MORECORE zeroes out memory (which
210 DEFAULT_TRIM_THRESHOLD
212 DEFAULT_MMAP_THRESHOLD
214 Default values of tunable parameters (described in detail below)
215 controlling interaction with host system routines (sbrk, mmap, etc).
216 These values may also be changed dynamically via mallopt(). The
217 preset defaults are those that give best performance for typical
219 USE_DL_PREFIX (default: undefined)
220 Prefix all public routines with the string 'dl'. Useful to
221 quickly avoid procedure declaration conflicts and linker symbol
222 conflicts with existing memory allocation routines.
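
    As an illustrative example only, a hypothetical stand-alone embedded
    build might combine several of these options:

      #define USE_DL_PREFIX
      #define HAVE_MMAP 0
      #define LACKS_UNISTD_H
      #define LACKS_SYS_PARAM_H
      #define malloc_getpagesize 4096
      #define MORECORE board_sbrk

    where board_sbrk names a hypothetical board-specific core allocator.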
239 #endif /*__cplusplus*/
244 #if (__STD_C || defined(WIN32))
252 #include <stddef.h> /* for size_t */
254 #include <sys/types.h>
261 #include <stdio.h> /* needed for malloc_stats */
272 Because freed chunks may be overwritten with link fields, this
273 malloc will often die when freed memory is overwritten by user
274 programs. This can be very effective (albeit in an annoying way)
275 in helping track down dangling pointers.
277 If you compile with -DDEBUG, a number of assertion checks are
278 enabled that will catch more memory errors. You probably won't be
279 able to make much sense of the actual assertion errors, but they
280 should help you locate incorrectly overwritten memory. The
281 checking is fairly extensive, and will slow down execution
282 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
283 attempt to check every non-mmapped allocated and free chunk in the
284   course of computing the summaries. (By nature, mmapped regions
285 cannot be checked very much automatically.)
287 Setting DEBUG may also be helpful if you are trying to modify
288 this code. The assertions in the check routines spell out in more
289 detail the assumptions and invariants underlying the algorithms.
294 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
295 of chunk sizes. On a 64-bit machine, you can reduce malloc
296 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
297 at the expense of not being able to handle requests greater than
298 2^31. This limitation is hardly ever a concern; you are encouraged
299 to set this. However, the default version is the same as size_t.
302 #ifndef INTERNAL_SIZE_T
303 #define INTERNAL_SIZE_T size_t
307 REALLOC_ZERO_BYTES_FREES should be set if a call to
308 realloc with zero bytes should be the same as a call to free.
309 Some people think it should. Otherwise, since this malloc
310 returns a unique pointer for malloc(0), so does realloc(p, 0).
314 /* #define REALLOC_ZERO_BYTES_FREES */
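/*
  Illustration (a sketch, not compiled here): with the define enabled,

    void* p = malloc(10);
    void* q = realloc(p, 0);

  frees p and leaves q null; without it, q points to a minimum-sized
  chunk holding a copy of the old contents.
*/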
318   WIN32 causes an emulation of sbrk to be compiled in.
319 mmap-based options are not currently supported in WIN32.
324 #define MORECORE wsbrk
327 #define LACKS_UNISTD_H
328 #define LACKS_SYS_PARAM_H
331 Include 'windows.h' to get the necessary declarations for the
332 Microsoft Visual C++ data structures and routines used in the 'sbrk'
335 Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
336 Visual C++ header files are included.
338 #define WIN32_LEAN_AND_MEAN
344 HAVE_MEMCPY should be defined if you are not otherwise using
345 ANSI STD C, but still have memcpy and memset in your C library
346 and want to use them in calloc and realloc. Otherwise simple
347 macro versions are defined here.
349 USE_MEMCPY should be defined as 1 if you actually want to
350 have memset and memcpy called. People report that the macro
351 versions are often enough faster than libc versions on many
352 systems that it is better to use them.
366 #if (__STD_C || defined(HAVE_MEMCPY))
369 void* memset(void*, int, size_t);
370 void* memcpy(void*, const void*, size_t);
373 /* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
384 /* The following macros are only invoked with (2n+1)-multiples of
385 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
386 for fast inline execution when n is small. */
388 #define MALLOC_ZERO(charp, nbytes) \
390 INTERNAL_SIZE_T mzsz = (nbytes); \
391 if(mzsz <= 9*sizeof(mzsz)) { \
392 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
393 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
395 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
397 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
402 } else memset((charp), 0, mzsz); \
405 #define MALLOC_COPY(dest,src,nbytes) \
407 INTERNAL_SIZE_T mcsz = (nbytes); \
408 if(mcsz <= 9*sizeof(mcsz)) { \
409 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
410 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
411 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
412 *mcdst++ = *mcsrc++; \
413 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
414 *mcdst++ = *mcsrc++; \
415 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
416 *mcdst++ = *mcsrc++; }}} \
417 *mcdst++ = *mcsrc++; \
418 *mcdst++ = *mcsrc++; \
420 } else memcpy(dest, src, mcsz); \
423 #else /* !USE_MEMCPY */
425 /* Use Duff's device for good zeroing/copying performance. */
427 #define MALLOC_ZERO(charp, nbytes) \
429 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
430 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
431 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
433 case 0: for(;;) { *mzp++ = 0; \
434 case 7: *mzp++ = 0; \
435 case 6: *mzp++ = 0; \
436 case 5: *mzp++ = 0; \
437 case 4: *mzp++ = 0; \
438 case 3: *mzp++ = 0; \
439 case 2: *mzp++ = 0; \
440 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
444 #define MALLOC_COPY(dest,src,nbytes) \
446 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
447 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
448 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
449 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
451 case 0: for(;;) { *mcdst++ = *mcsrc++; \
452 case 7: *mcdst++ = *mcsrc++; \
453 case 6: *mcdst++ = *mcsrc++; \
454 case 5: *mcdst++ = *mcsrc++; \
455 case 4: *mcdst++ = *mcsrc++; \
456 case 3: *mcdst++ = *mcsrc++; \
457 case 2: *mcdst++ = *mcsrc++; \
458 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
466 Define HAVE_MMAP to optionally make malloc() use mmap() to
467 allocate very large blocks. These will be returned to the
468 operating system immediately after a free().
476 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
477 large blocks. This is currently only possible on Linux with
478 kernel versions newer than 1.3.77.
482 #ifdef INTERNAL_LINUX_C_LIB
483 #define HAVE_MREMAP 1
485 #define HAVE_MREMAP 0
493 #include <sys/mman.h>
495 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
496 #define MAP_ANONYMOUS MAP_ANON
499 #endif /* HAVE_MMAP */
502 Access to system page size. To the extent possible, this malloc
503 manages memory from the system in page-size units.
505 The following mechanics for getpagesize were adapted from
506 bsd/gnu getpagesize.h
509 #ifndef LACKS_UNISTD_H
513 #ifndef malloc_getpagesize
514 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
515 # ifndef _SC_PAGE_SIZE
516 # define _SC_PAGE_SIZE _SC_PAGESIZE
519 # ifdef _SC_PAGE_SIZE
520 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
522 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
523 extern size_t getpagesize();
524 # define malloc_getpagesize getpagesize()
527 # define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
529 # ifndef LACKS_SYS_PARAM_H
530 # include <sys/param.h>
532 # ifdef EXEC_PAGESIZE
533 # define malloc_getpagesize EXEC_PAGESIZE
537 # define malloc_getpagesize NBPG
539 # define malloc_getpagesize (NBPG * CLSIZE)
543 # define malloc_getpagesize NBPC
546 # define malloc_getpagesize PAGESIZE
548 # define malloc_getpagesize (4096) /* just guess */
561 This version of malloc supports the standard SVID/XPG mallinfo
562 routine that returns a struct containing the same kind of
563 information you can get from malloc_stats. It should work on
564 any SVID/XPG compliant system that has a /usr/include/malloc.h
565 defining struct mallinfo. (If you'd like to install such a thing
566 yourself, cut out the preliminary declarations as described above
567 and below and save them in a malloc.h file. But there's no
568 compelling reason to bother to do this.)
570 The main declaration needed is the mallinfo struct that is returned
571   (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
572 bunch of fields, most of which are not even meaningful in this
573   version of malloc. Some of these fields are instead filled by
574 mallinfo() with other numbers that might possibly be of interest.
576 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
577 /usr/include/malloc.h file that includes a declaration of struct
578 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
579 version is declared below. These must be precisely the same for
584 /* #define HAVE_USR_INCLUDE_MALLOC_H */
586 #if HAVE_USR_INCLUDE_MALLOC_H
587 #include "/usr/include/malloc.h"
590 /* SVID2/XPG mallinfo structure */
593 int arena; /* total space allocated from system */
594 int ordblks; /* number of non-inuse chunks */
595 int smblks; /* unused -- always zero */
596 int hblks; /* number of mmapped regions */
597 int hblkhd; /* total space in mmapped regions */
598 int usmblks; /* unused -- always zero */
599 int fsmblks; /* unused -- always zero */
600 int uordblks; /* total allocated space */
601 int fordblks; /* total non-inuse space */
602 int keepcost; /* top-most, releasable (via malloc_trim) space */
605 /* SVID2/XPG mallopt options */
607 #define M_MXFAST 1 /* UNUSED in this malloc */
608 #define M_NLBLKS 2 /* UNUSED in this malloc */
609 #define M_GRAIN 3 /* UNUSED in this malloc */
610 #define M_KEEP 4 /* UNUSED in this malloc */
614 /* mallopt options that actually do something */
616 #define M_TRIM_THRESHOLD -1
618 #define M_MMAP_THRESHOLD -3
619 #define M_MMAP_MAX -4
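/*
  An illustrative sketch (not compiled here) of driving these options
  together with mallinfo(); exact numbers are program-dependent:

    struct mallinfo mi;
    mallopt(M_TRIM_THRESHOLD, 64*1024);
    mi = mallinfo();
    printf("arena=%d inuse=%d free=%d\n",
           mi.arena, mi.uordblks, mi.fordblks);
*/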
622 #ifndef DEFAULT_TRIM_THRESHOLD
623 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
627 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
628 to keep before releasing via malloc_trim in free().
630 Automatic trimming is mainly useful in long-lived programs.
631 Because trimming via sbrk can be slow on some systems, and can
632 sometimes be wasteful (in cases where programs immediately
633       afterward allocate more large chunks), the value should be high
634 enough so that your overall system performance would improve by
637 The trim threshold and the mmap control parameters (see below)
638 can be traded off with one another. Trimming and mmapping are
639 two different ways of releasing unused memory back to the
640 system. Between these two, it is often possible to keep
641 system-level demands of a long-lived program down to a bare
642 minimum. For example, in one test suite of sessions measuring
643 the XF86 X server on Linux, using a trim threshold of 128K and a
644 mmap threshold of 192K led to near-minimal long term resource
647 If you are using this malloc in a long-lived program, it should
648 pay to experiment with these values. As a rough guide, you
649       might set it to a value close to the average size of a process
650 (program) running on your system. Releasing this much memory
651 would allow such a process to run in memory. Generally, it's
652       worth it to tune for trimming rather than memory mapping when a
653 program undergoes phases where several large chunks are
654 allocated and released in ways that can reuse each other's
655 storage, perhaps mixed with phases where there are no such
656 chunks at all. And in well-behaved long-lived programs,
657 controlling release of large blocks via trimming versus mapping
660 However, in most programs, these parameters serve mainly as
661 protection against the system-level effects of carrying around
662 massive amounts of unneeded memory. Since frequent calls to
663 sbrk, mmap, and munmap otherwise degrade performance, the default
664 parameters are set to relatively high values that serve only as
667 The default trim value is high enough to cause trimming only in
668 fairly extreme (by current memory consumption standards) cases.
669 It must be greater than page size to have any useful effect. To
670       disable trimming completely, you can set it to (unsigned long)(-1);
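
      For example (illustrative only), a long-lived program could raise
      the threshold to trade memory for fewer sbrk calls:

        mallopt(M_TRIM_THRESHOLD, 256*1024);

      or disable trimming entirely:

        mallopt(M_TRIM_THRESHOLD, -1);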
676 #ifndef DEFAULT_TOP_PAD
677 #define DEFAULT_TOP_PAD (0)
681 M_TOP_PAD is the amount of extra `padding' space to allocate or
682 retain whenever sbrk is called. It is used in two ways internally:
684 * When sbrk is called to extend the top of the arena to satisfy
685 a new malloc request, this much padding is added to the sbrk
688 * When malloc_trim is called automatically from free(),
689 it is used as the `pad' argument.
691 In both cases, the actual amount of padding is rounded
692 so that the end of the arena is always a system page boundary.
694 The main reason for using padding is to avoid calling sbrk so
695 often. Having even a small pad greatly reduces the likelihood
696 that nearly every malloc request during program start-up (or
697 after trimming) will invoke sbrk, which needlessly wastes
700 Automatic rounding-up to page-size units is normally sufficient
701 to avoid measurable overhead, so the default is 0. However, in
702 systems where sbrk is relatively slow, it can pay to increase
703 this value, at the expense of carrying around more memory than
709 #ifndef DEFAULT_MMAP_THRESHOLD
710 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
715 M_MMAP_THRESHOLD is the request size threshold for using mmap()
716 to service a request. Requests of at least this size that cannot
717 be allocated using already-existing space will be serviced via mmap.
718 (If enough normal freed space already exists it is used instead.)
720 Using mmap segregates relatively large chunks of memory so that
721 they can be individually obtained and released from the host
722 system. A request serviced through mmap is never reused by any
723 other request (at least not directly; the system may just so
724 happen to remap successive requests to the same locations).
726 Segregating space in this way has the benefit that mmapped space
727 can ALWAYS be individually released back to the system, which
728 helps keep the system level memory demands of a long-lived
729 program low. Mapped memory can never become `locked' between
730 other chunks, as can happen with normally allocated chunks, which
731       means that even trimming via malloc_trim would not release them.
733 However, it has the disadvantages that:
735 1. The space cannot be reclaimed, consolidated, and then
736 used to service later requests, as happens with normal chunks.
737 2. It can lead to more wastage because of mmap page alignment
739 3. It causes malloc performance to be more dependent on host
740 system memory management support routines which may vary in
741 implementation quality and may impose arbitrary
742 limitations. Generally, servicing a request via normal
743 malloc steps is faster than going through a system's mmap.
745       Altogether, these considerations should lead you to use mmap
746 only for relatively large requests.
752 #ifndef DEFAULT_MMAP_MAX
754 #define DEFAULT_MMAP_MAX (64)
756 #define DEFAULT_MMAP_MAX (0)
761 M_MMAP_MAX is the maximum number of requests to simultaneously
762 service using mmap. This parameter exists because:
764 1. Some systems have a limited number of internal tables for
766 2. In most systems, overreliance on mmap can degrade overall
768 3. If a program allocates many large regions, it is probably
769 better off using normal sbrk-based allocation routines that
770 can reclaim and reallocate normal heap memory. Using a
771 small value allows transition into this mode after the
772 first few allocations.
774       Setting it to 0 disables all use of mmap.  If HAVE_MMAP is not set,
775 the default value is 0, and attempts to set it to non-zero values
776 in mallopt will fail.
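
      As an illustrative sketch (meaningful only when HAVE_MMAP is
      nonzero), a program making many multi-megabyte allocations might
      route requests of 1MB and up through mmap and allow more than the
      default 64 simultaneous mappings:

        mallopt(M_MMAP_THRESHOLD, 1024*1024);
        mallopt(M_MMAP_MAX, 1024);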
781 USE_DL_PREFIX will prefix all public routines with the string 'dl'.
782 Useful to quickly avoid procedure declaration conflicts and linker
783 symbol conflicts with existing memory allocation routines.
787 /* #define USE_DL_PREFIX */
792 Special defines for linux libc
794 Except when compiled using these special defines for Linux libc
795 using weak aliases, this malloc is NOT designed to work in
796 multithreaded applications. No semaphores or other concurrency
797 control are provided to ensure that multiple malloc or free calls
798   don't run at the same time, which could be disastrous. A single
799 semaphore could be used across malloc, realloc, and free (which is
800 essentially the effect of the linux weak alias approach). It would
801 be hard to obtain finer granularity.
806 #ifdef INTERNAL_LINUX_C_LIB
810 Void_t * __default_morecore_init (ptrdiff_t);
811 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
815 Void_t * __default_morecore_init ();
816 Void_t *(*__morecore)() = __default_morecore_init;
820 #define MORECORE (*__morecore)
821 #define MORECORE_FAILURE 0
822 #define MORECORE_CLEARS 1
824 #else /* INTERNAL_LINUX_C_LIB */
827 extern Void_t* sbrk(ptrdiff_t);
829 extern Void_t* sbrk();
833 #define MORECORE sbrk
836 #ifndef MORECORE_FAILURE
837 #define MORECORE_FAILURE -1
840 #ifndef MORECORE_CLEARS
841 #define MORECORE_CLEARS 1
844 #endif /* INTERNAL_LINUX_C_LIB */
846 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
848 #define cALLOc __libc_calloc
849 #define fREe __libc_free
850 #define mALLOc __libc_malloc
851 #define mEMALIGn __libc_memalign
852 #define rEALLOc __libc_realloc
853 #define vALLOc __libc_valloc
854 #define pvALLOc __libc_pvalloc
855 #define mALLINFo __libc_mallinfo
856 #define mALLOPt __libc_mallopt
858 #pragma weak calloc = __libc_calloc
859 #pragma weak free = __libc_free
860 #pragma weak cfree = __libc_free
861 #pragma weak malloc = __libc_malloc
862 #pragma weak memalign = __libc_memalign
863 #pragma weak realloc = __libc_realloc
864 #pragma weak valloc = __libc_valloc
865 #pragma weak pvalloc = __libc_pvalloc
866 #pragma weak mallinfo = __libc_mallinfo
867 #pragma weak mallopt = __libc_mallopt
872 #define cALLOc dlcalloc
874 #define mALLOc dlmalloc
875 #define mEMALIGn dlmemalign
876 #define rEALLOc dlrealloc
877 #define vALLOc dlvalloc
878 #define pvALLOc dlpvalloc
879 #define mALLINFo dlmallinfo
880 #define mALLOPt dlmallopt
881 #else /* USE_DL_PREFIX */
882 #define cALLOc calloc
884 #define mALLOc malloc
885 #define mEMALIGn memalign
886 #define rEALLOc realloc
887 #define vALLOc valloc
888 #define pvALLOc pvalloc
889 #define mALLINFo mallinfo
890 #define mALLOPt mallopt
891 #endif /* USE_DL_PREFIX */
895 /* Public routines */
899 Void_t* mALLOc(size_t);
901 Void_t* rEALLOc(Void_t*, size_t);
902 Void_t* mEMALIGn(size_t, size_t);
903 Void_t* vALLOc(size_t);
904 Void_t* pvALLOc(size_t);
905 Void_t* cALLOc(size_t, size_t);
907 int malloc_trim(size_t);
908 size_t malloc_usable_size(Void_t*);
910 int mALLOPt(int, int);
911 struct mallinfo mALLINFo(void);
922 size_t malloc_usable_size();
925 struct mallinfo mALLINFo();
930 }; /* end of extern "C" */
933 /* ---------- To make a malloc.h, end cutting here ------------ */
934 #endif /* 0 */ /* Moved to malloc.h */
941 static void malloc_update_mallinfo (void);
942 void malloc_stats (void);
944 static void malloc_update_mallinfo ();
949 DECLARE_GLOBAL_DATA_PTR;
952 Emulation of sbrk for WIN32
953 All code within the ifdef WIN32 is untested by me.
955 Thanks to Martin Fong and others for supplying this.
961 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
962 ~(malloc_getpagesize-1))
963 #define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
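/* Worked examples, assuming a 4096-byte page: AlignPage(5000) == 8192,
   and AlignPage64K(5000) == 0x10000. */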
965 /* reserve 64MB to ensure large contiguous space */
966 #define RESERVED_SIZE (1024*1024*64)
967 #define NEXT_SIZE (2048*1024)
968 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
970 struct GmListElement;
971 typedef struct GmListElement GmListElement;
979 static GmListElement* head = 0;
980 static unsigned int gNextAddress = 0;
981 static unsigned int gAddressBase = 0;
982 static unsigned int gAllocatedSize = 0;
985 GmListElement* makeGmListElement (void* bas)
988 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
1002 assert ( (head == NULL) || (head->base == (void*)gAddressBase));
1003 if (gAddressBase && (gNextAddress - gAddressBase))
1005 rval = VirtualFree ((void*)gAddressBase,
1006 gNextAddress - gAddressBase,
1012 GmListElement* next = head->next;
1013 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1021 void* findRegion (void* start_address, unsigned long size)
1023 MEMORY_BASIC_INFORMATION info;
1024 if (size >= TOP_MEMORY) return NULL;
1026 while ((unsigned long)start_address + size < TOP_MEMORY)
1028 VirtualQuery (start_address, &info, sizeof (info));
1029 if ((info.State == MEM_FREE) && (info.RegionSize >= size))
1030 return start_address;
1033 /* Requested region is not available so see if the */
1034 /* next region is available. Set 'start_address' */
1035 /* to the next region and call 'VirtualQuery()' */
1038 start_address = (char*)info.BaseAddress + info.RegionSize;
1040 /* Make sure we start looking for the next region */
1041 /* on the *next* 64K boundary. Otherwise, even if */
1042 /* the new region is free according to */
1043 /* 'VirtualQuery()', the subsequent call to */
1044 /* 'VirtualAlloc()' (which follows the call to */
1045 /* this routine in 'wsbrk()') will round *down* */
1046 /* the requested address to a 64K boundary which */
1047 /* we already know is an address in the */
1048 /* unavailable region. Thus, the subsequent call */
1049 /* to 'VirtualAlloc()' will fail and bring us back */
1050 /* here, causing us to go into an infinite loop. */
1053 (void *) AlignPage64K((unsigned long) start_address);
1061 void* wsbrk (long size)
1066 if (gAddressBase == 0)
1068 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
1069 gNextAddress = gAddressBase =
1070 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
1071 MEM_RESERVE, PAGE_NOACCESS);
1072 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
1075 long new_size = max (NEXT_SIZE, AlignPage (size));
1076 void* new_address = (void*)(gAddressBase+gAllocatedSize);
1079 new_address = findRegion (new_address, new_size);
1081 if (new_address == 0)
1084 gAddressBase = gNextAddress =
1085 (unsigned int)VirtualAlloc (new_address, new_size,
1086 MEM_RESERVE, PAGE_NOACCESS);
1087 /* repeat in case of race condition */
1088 /* The region that we found has been snagged */
1089 /* by another thread */
1091 while (gAddressBase == 0);
1093 assert (new_address == (void*)gAddressBase);
1095 gAllocatedSize = new_size;
1097 if (!makeGmListElement ((void*)gAddressBase))
1100 if ((size + gNextAddress) > AlignPage (gNextAddress))
1103 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1104 (size + gNextAddress -
1105 AlignPage (gNextAddress)),
1106 MEM_COMMIT, PAGE_READWRITE);
1110 tmp = (void*)gNextAddress;
1111 gNextAddress = (unsigned int)tmp + size;
1116 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1117 /* Trim by releasing the virtual memory */
1118 if (alignedGoal >= gAddressBase)
1120 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1122 gNextAddress = gNextAddress + size;
1123 return (void*)gNextAddress;
1127 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1129 gNextAddress = gAddressBase;
1135 return (void*)gNextAddress;
1150 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1151 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1152 struct malloc_chunk* fd; /* double links -- used only if free. */
1153 struct malloc_chunk* bk;
1154 } __attribute__((__may_alias__)) ;
1156 typedef struct malloc_chunk* mchunkptr;
1160 malloc_chunk details:
1162 (The following includes lightly edited explanations by Colin Plumb.)
1164 Chunks of memory are maintained using a `boundary tag' method as
1165 described in e.g., Knuth or Standish. (See the paper by Paul
1166 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1167 survey of such techniques.) Sizes of free chunks are stored both
1168 in the front of each chunk and at the end. This makes
1169 consolidating fragmented chunks into bigger chunks very fast. The
1170 size fields also hold bits representing whether chunks are free or
1173 An allocated chunk looks like this:
1176 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1177 | Size of previous chunk, if allocated | |
1178 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1179 | Size of chunk, in bytes |P|
1180 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1181 | User data starts here... .
1183             .             (malloc_usable_size() bytes)                     .
1185 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1187 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1190 Where "chunk" is the front of the chunk for the purpose of most of
1191 the malloc code, but "mem" is the pointer that is returned to the
1192 user. "Nextchunk" is the beginning of the next contiguous chunk.
1194     Chunks always begin on even word boundaries, so the mem portion
1195 (which is returned to the user) is also on an even word boundary, and
1196 thus double-word aligned.
1198 Free chunks are stored in circular doubly-linked lists, and look like this:
1200 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1201 | Size of previous chunk |
1202 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1203 `head:' | Size of chunk, in bytes |P|
1204 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1205 | Forward pointer to next chunk in list |
1206 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1207 | Back pointer to previous chunk in list |
1208 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1209 | Unused space (may be 0 bytes long) .
1212 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1213 `foot:' | Size of chunk, in bytes |
1214 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1216 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1217 chunk size (which is always a multiple of two words), is an in-use
1218 bit for the *previous* chunk. If that bit is *clear*, then the
1219 word before the current chunk size contains the previous chunk
1220 size, and can be used to find the front of the previous chunk.
1221 (The very first chunk allocated always has this bit set,
1222 preventing access to non-existent (or non-owned) memory.)
1224 Note that the `foot' of the current chunk is actually represented
1225 as the prev_size of the NEXT chunk. (This makes it easier to
1226 deal with alignments etc).
1228 The two exceptions to all this are
1230 1. The special chunk `top', which doesn't bother using the
1231 trailing size field since there is no
1232 next contiguous chunk that would have to index off it. (After
1233 initialization, `top' is forced to always exist. If it would
1234 become less than MINSIZE bytes long, it is replenished via
1237 2. Chunks allocated via mmap, which have the second-lowest-order
1238 bit (IS_MMAPPED) set in their size fields. Because they are
1239 never merged or traversed from any other chunk, they have no
1240 foot size or inuse information.
1242 Available chunks are kept in any of several places (all declared below):
1244 * `av': An array of chunks serving as bin headers for consolidated
1245 chunks. Each bin is doubly linked. The bins are approximately
1246 proportionally (log) spaced. There are a lot of these bins
1247 (128). This may look excessive, but works very well in
1248 practice. All procedures maintain the invariant that no
1249 consolidated chunk physically borders another one. Chunks in
1250 bins are kept in size order, with ties going to the
1251 approximately least recently used chunk.
1253 The chunks in each bin are maintained in decreasing sorted order by
1254 size. This is irrelevant for the small bins, which all contain
1255 the same-sized chunks, but facilitates best-fit allocation for
1256 larger chunks. (These lists are just sequential. Keeping them in
1257 order almost never requires enough traversal to warrant using
1258 fancier ordered data structures.) Chunks of the same size are
1259 linked with the most recently freed at the front, and allocations
1260 are taken from the back. This results in LRU or FIFO allocation
1261 order, which tends to give each chunk an equal opportunity to be
1262 consolidated with adjacent freed chunks, resulting in larger free
1263 chunks and less fragmentation.
1265 * `top': The top-most available chunk (i.e., the one bordering the
1266 end of available memory) is treated specially. It is never
1267 included in any bin, is used only if no other chunk is
1268 available, and is released back to the system if it is very
1269 large (see M_TRIM_THRESHOLD).
1271 * `last_remainder': A bin holding only the remainder of the
1272 most recently split (non-top) chunk. This bin is checked
1273 before other non-fitting chunks, so as to provide better
1274 locality for runs of sequentially allocated chunks.
1276 * Implicitly, through the host system's memory mapping tables.
1277 If supported, requests greater than a threshold are usually
1278 serviced via calls to mmap, and then later released via munmap.
1282 /* sizes, alignments */
1284 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1285 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1286 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1287 #define MINSIZE (sizeof(struct malloc_chunk))
1289 /* conversion from malloc headers to user pointers, and back */
1291 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1292 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1294 /* pad request bytes into a usable size */
1296 #define request2size(req) \
1297 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1298 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1299 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
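/*
  Worked examples with 4-byte SIZE_SZ (so MALLOC_ALIGNMENT == 8 and
  MALLOC_ALIGN_MASK == 7): request2size(20) == (20+4+7) & ~7 == 24,
  while request2size(1) falls below MINSIZE + MALLOC_ALIGN_MASK and
  therefore yields MINSIZE (16 with 4-byte pointers).
*/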
1301 /* Check if m has acceptable alignment */
1303 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1309 Physical chunk operations
1313 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1315 #define PREV_INUSE 0x1
1317 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1319 #define IS_MMAPPED 0x2
1321 /* Bits to mask off when extracting size */
1323 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1326 /* Ptr to next physical malloc_chunk. */
1328 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1330 /* Ptr to previous physical malloc_chunk */
1332 #define prev_chunk(p)\
1333 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1336 /* Treat space at ptr + offset as a chunk */
1338 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1344 Dealing with use bits
1347 /* extract p's inuse bit */
1350 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1352 /* extract inuse bit of previous chunk */
1354 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1356 /* check for mmap()'ed chunk */
1358 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1360 /* set/clear chunk as in use without otherwise disturbing */
1362 #define set_inuse(p)\
1363 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1365 #define clear_inuse(p)\
1366 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1368 /* check/set/clear inuse bits in known places */
1370 #define inuse_bit_at_offset(p, s)\
1371 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1373 #define set_inuse_bit_at_offset(p, s)\
1374 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1376 #define clear_inuse_bit_at_offset(p, s)\
1377 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1383 Dealing with size fields
1386 /* Get size, ignoring use bits */
1388 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1390 /* Set size at head, without disturbing its use bit */
1392 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1394 /* Set size/use ignoring previous bits in header */
1396 #define set_head(p, s) ((p)->size = (s))
1398 /* Set size at footer (only when chunk is not in use) */
1400 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
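/*
  A minimal illustrative sketch (not compiled) of how the macros above
  combine to walk physical neighbors, assuming p points to a valid,
  non-mmapped chunk:
*/
#if 0 /* example only */
  INTERNAL_SIZE_T sz = chunksize(p);      /* size with status bits masked */
  mchunkptr nxt = chunk_at_offset(p, sz); /* same chunk next_chunk(p) finds */
  if (!prev_inuse(p))                     /* preceding chunk is free, so... */
    p = prev_chunk(p);                    /* ...its prev_size field is valid */
#endif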
1409 The bins, `av_' are an array of pairs of pointers serving as the
1410 heads of (initially empty) doubly-linked lists of chunks, laid out
1411 in a way so that each pair can be treated as if it were in a
1412 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1413 and chunks are the same).
1415 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1416 8 bytes apart. Larger bins are approximately logarithmically
1417 spaced. (See the table below.) The `av_' array is never mentioned
1418 directly in the code, but instead via bin access macros.
1426 4 bins of size 32768
1427 2 bins of size 262144
1428 1 bin of size what's left
1430 There is actually a little bit of slop in the numbers in bin_index
1431 for the sake of speed. This makes no difference elsewhere.
1433 The special chunks `top' and `last_remainder' get their own bins,
1434 (this is implemented via yet more trickery with the av_ array),
1435 although `top' is never properly linked to its bin since it is
1436 always handled specially.
1440 #define NAV 128 /* number of bins */
1442 typedef struct malloc_chunk* mbinptr;
1446 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1447 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1448 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1451 The first 2 bins are never indexed. The corresponding av_ cells are instead
1452 used for bookkeeping. This is not to save space, but to simplify
1453 indexing, maintain locality, and avoid some initialization tests.
1456 #define top (av_[2]) /* The topmost chunk */
1457 #define last_remainder (bin_at(1)) /* remainder from last split */
1461 Because top initially points to its own bin with initial
1462 zero size, thus forcing extension on the first malloc request,
1463 we avoid having any special code in malloc to check whether
1464    it even exists yet. But we still need to check in malloc_extend_top.
1467 #define initial_top ((mchunkptr)(bin_at(0)))
1469 /* Helper macro to initialize bins */
1471 #define IAV(i) bin_at(i), bin_at(i)
1473 static mbinptr av_[NAV * 2 + 2] = {
1475 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1476 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1477 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1478 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1479 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1480 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1481 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1482 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1483 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1484 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1485 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1486 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1487 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1488 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1489 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1490 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1493 #ifdef CONFIG_NEEDS_MANUAL_RELOC
1494 static void malloc_bin_reloc(void)
1496 mbinptr *p = &av_[2];
1499 for (i = 2; i < ARRAY_SIZE(av_); ++i, ++p)
1500 *p = (mbinptr)((ulong)*p + gd->reloc_off);
1503 static inline void malloc_bin_reloc(void) {}
1506 ulong mem_malloc_start = 0;
1507 ulong mem_malloc_end = 0;
1508 ulong mem_malloc_brk = 0;
1510 void *sbrk(ptrdiff_t increment)
1512 ulong old = mem_malloc_brk;
1513 ulong new = old + increment;
1516	 * if we are giving memory back, make sure we clear it out since
1517 * we set MORECORE_CLEARS to 1
1520 memset((void *)new, 0, -increment);
1522 if ((new < mem_malloc_start) || (new > mem_malloc_end))
1523 return (void *)MORECORE_FAILURE;
1525 mem_malloc_brk = new;
1530 void mem_malloc_init(ulong start, ulong size)
1532 mem_malloc_start = start;
1533 mem_malloc_end = start + size;
1534 mem_malloc_brk = start;
1536 debug("using memory %#lx-%#lx for malloc()\n", mem_malloc_start,
1539 memset((void *)mem_malloc_start, 0, size);
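/*
  A hedged usage sketch (the config symbols here are hypothetical):
  board code reserves a region once at startup, after which malloc()
  and free() operate entirely within it via the sbrk() above:
*/
#if 0 /* example only */
	mem_malloc_init(CONFIG_SYS_MALLOC_BASE, CONFIG_SYS_MALLOC_LEN);
	void *buf = malloc(4096);
	free(buf);
#endif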
1544 /* field-extraction macros */
1546 #define first(b) ((b)->fd)
1547 #define last(b) ((b)->bk)
1553 #define bin_index(sz) \
1554 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1555 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1556 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1557 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1558 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1559 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1562 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1563 identically sized chunks. This is exploited in malloc.
1566 #define MAX_SMALLBIN 63
1567 #define MAX_SMALLBIN_SIZE 512
1568 #define SMALLBIN_WIDTH 8
1570 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1573 Requests are `small' if both the corresponding and the next bin are small
1576 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
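/*
  Worked examples: bin_index(40) == 40>>3 == 5 (a small bin holding only
  40-byte chunks), while bin_index(1600) has 1600>>9 == 3 <= 4 and so
  maps to 56 + (1600>>6) == 81.
*/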
1581 To help compensate for the large number of bins, a one-level index
1582 structure is used for bin-by-bin searching. `binblocks' is a
1583 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1584 have any (possibly) non-empty bins, so they can be skipped over
1585    all at once during traversals. The bits are NOT always
1586 cleared as soon as all bins in a block are empty, but instead only
1587 when all are noticed to be empty during traversal in malloc.
1590 #define BINBLOCKWIDTH 4 /* bins per block */
1592 #define binblocks_r ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
1593 #define binblocks_w (av_[1])
1595 /* bin<->block macros */
1597 #define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
1598 #define mark_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
1599 #define clear_binblock(ii) (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))
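/*
  For example, bin 81 lies in block 81/BINBLOCKWIDTH == 20, so
  idx2binblock(81) == (1 << 20); mark_binblock(81) sets that bit in
  av_[1], and clear_binblock(81) clears it.
*/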
1605 /* Other static bookkeeping data */
1607 /* variables holding tunable values */
1609 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1610 static unsigned long top_pad = DEFAULT_TOP_PAD;
1611 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1612 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1614 /* The first value returned from sbrk */
1615 static char* sbrk_base = (char*)(-1);
1617 /* The maximum memory obtained from system via sbrk */
1618 static unsigned long max_sbrked_mem = 0;
1620 /* The maximum via either sbrk or mmap */
1621 static unsigned long max_total_mem = 0;
1623 /* internal working copy of mallinfo */
1624 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1626 /* The total memory obtained from system via sbrk */
1627 #define sbrked_mem (current_mallinfo.arena)
1629 /* Tracking mmaps */
1632 static unsigned int n_mmaps = 0;
1634 static unsigned long mmapped_mem = 0;
1636 static unsigned int max_n_mmaps = 0;
1637 static unsigned long max_mmapped_mem = 0;
1650 These routines make a number of assertions about the states
1651 of data structures that should be true at all times. If any
1652 are not true, it's very likely that a user program has somehow
1653 trashed memory. (It's also possible that there is a coding error
1654  in malloc, in which case please report it!)
1658 static void do_check_chunk(mchunkptr p)
1660 static void do_check_chunk(p) mchunkptr p;
1663 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1665 /* No checkable chunk is mmapped */
1666 assert(!chunk_is_mmapped(p));
1668 /* Check for legal address ... */
1669 assert((char*)p >= sbrk_base);
1671 assert((char*)p + sz <= (char*)top);
1673 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1679 static void do_check_free_chunk(mchunkptr p)
1681 static void do_check_free_chunk(p) mchunkptr p;
1684 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1685 mchunkptr next = chunk_at_offset(p, sz);
1689 /* Check whether it claims to be free ... */
1692 /* Unless a special marker, must have OK fields */
1693 if ((long)sz >= (long)MINSIZE)
1695 assert((sz & MALLOC_ALIGN_MASK) == 0);
1696 assert(aligned_OK(chunk2mem(p)));
1697 /* ... matching footer field */
1698 assert(next->prev_size == sz);
1699 /* ... and is fully consolidated */
1700 assert(prev_inuse(p));
1701 assert (next == top || inuse(next));
1703 /* ... and has minimally sane links */
1704 assert(p->fd->bk == p);
1705 assert(p->bk->fd == p);
1707 else /* markers are always of size SIZE_SZ */
1708 assert(sz == SIZE_SZ);
1712 static void do_check_inuse_chunk(mchunkptr p)
1714 static void do_check_inuse_chunk(p) mchunkptr p;
1717 mchunkptr next = next_chunk(p);
1720 /* Check whether it claims to be in use ... */
1723 /* ... and is surrounded by OK chunks.
1724 Since more things can be checked with free chunks than inuse ones,
1725 if an inuse chunk borders them and debug is on, it's worth doing them.
1729 mchunkptr prv = prev_chunk(p);
1730 assert(next_chunk(prv) == p);
1731 do_check_free_chunk(prv);
1735 assert(prev_inuse(next));
1736 assert(chunksize(next) >= MINSIZE);
1738 else if (!inuse(next))
1739 do_check_free_chunk(next);
1744 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1746 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1749 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1752 do_check_inuse_chunk(p);
1754 /* Legal size ... */
1755 assert((long)sz >= (long)MINSIZE);
1756 assert((sz & MALLOC_ALIGN_MASK) == 0);
1758 assert(room < (long)MINSIZE);
1760 /* ... and alignment */
1761 assert(aligned_OK(chunk2mem(p)));
1764 /* ... and was allocated at front of an available chunk */
1765 assert(prev_inuse(p));
1770 #define check_free_chunk(P) do_check_free_chunk(P)
1771 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1772 #define check_chunk(P) do_check_chunk(P)
1773 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1775 #define check_free_chunk(P)
1776 #define check_inuse_chunk(P)
1777 #define check_chunk(P)
1778 #define check_malloced_chunk(P,N)
1784 Macro-based internal utilities
1789 Linking chunks in bin lists.
1790 Call these only with variables, not arbitrary expressions, as arguments.
1794 Place chunk p of size s in its bin, in size order,
1795 putting it ahead of others of same size.
1799 #define frontlink(P, S, IDX, BK, FD) \
1801 if (S < MAX_SMALLBIN_SIZE) \
1803 IDX = smallbin_index(S); \
1804 mark_binblock(IDX); \
1809 FD->bk = BK->fd = P; \
1813 IDX = bin_index(S); \
1816 if (FD == BK) mark_binblock(IDX); \
1819 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1824 FD->bk = BK->fd = P; \
1829 /* take a chunk off a list */
1831 #define unlink(P, BK, FD) \
1839 /* Place p as the last remainder */
1841 #define link_last_remainder(P) \
1843 last_remainder->fd = last_remainder->bk = P; \
1844 P->fd = P->bk = last_remainder; \
1847 /* Clear the last_remainder bin */
1849 #define clear_last_remainder \
1850 (last_remainder->fd = last_remainder->bk = last_remainder)
1856 /* Routines dealing with mmap(). */
1861 static mchunkptr mmap_chunk(size_t size)
1863 static mchunkptr mmap_chunk(size) size_t size;
1866 size_t page_mask = malloc_getpagesize - 1;
1869 #ifndef MAP_ANONYMOUS
1873 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1875 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1876 * there is no following chunk whose prev_size field could be used.
1878 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1880 #ifdef MAP_ANONYMOUS
1881 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1882 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1883 #else /* !MAP_ANONYMOUS */
1886 fd = open("/dev/zero", O_RDWR);
1887 if(fd < 0) return 0;
1889 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1892 if(p == (mchunkptr)-1) return 0;
1895 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1897 /* We demand that eight bytes into a page must be 8-byte aligned. */
1898 assert(aligned_OK(chunk2mem(p)));
1900 /* The offset to the start of the mmapped region is stored
1901 * in the prev_size field of the chunk; normally it is zero,
1902 * but that can be changed in memalign().
1905 set_head(p, size|IS_MMAPPED);
1907 mmapped_mem += size;
1908 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1909 max_mmapped_mem = mmapped_mem;
1910 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1911 max_total_mem = mmapped_mem + sbrked_mem;
1916 static void munmap_chunk(mchunkptr p)
1918 static void munmap_chunk(p) mchunkptr p;
1921 INTERNAL_SIZE_T size = chunksize(p);
1924 assert (chunk_is_mmapped(p));
1925 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1926 assert((n_mmaps > 0));
1927 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1930 mmapped_mem -= (size + p->prev_size);
1932 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1934 /* munmap returns non-zero on failure */
1941 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1943 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1946 size_t page_mask = malloc_getpagesize - 1;
1947 INTERNAL_SIZE_T offset = p->prev_size;
1948 INTERNAL_SIZE_T size = chunksize(p);
1951 assert (chunk_is_mmapped(p));
1952 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1953 assert((n_mmaps > 0));
1954 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1956 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1957 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1959 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1961 if (cp == (char *)-1) return 0;
1963 p = (mchunkptr)(cp + offset);
1965 assert(aligned_OK(chunk2mem(p)));
1967 assert((p->prev_size == offset));
1968 set_head(p, (new_size - offset)|IS_MMAPPED);
1970 mmapped_mem -= size + offset;
1971 mmapped_mem += new_size;
1972 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1973 max_mmapped_mem = mmapped_mem;
1974 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1975 max_total_mem = mmapped_mem + sbrked_mem;
1979 #endif /* HAVE_MREMAP */
1981 #endif /* HAVE_MMAP */
1987 Extend the top-most chunk by obtaining memory from system.
1988 Main interface to sbrk (but see also malloc_trim).
1992 static void malloc_extend_top(INTERNAL_SIZE_T nb)
1994 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1997 char* brk; /* return value from sbrk */
1998 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1999 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
2000 char* new_brk; /* return of 2nd sbrk call */
2001 INTERNAL_SIZE_T top_size; /* new size of top chunk */
2003 mchunkptr old_top = top; /* Record state of old top */
2004 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
2005 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
2007 /* Pad request with top_pad plus minimal overhead */
2009 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
2010 unsigned long pagesz = malloc_getpagesize;
2012 /* If not the first time through, round to preserve page boundary */
2013 /* Otherwise, we need to correct to a page size below anyway. */
2014  /* (We also correct below if there was an intervening foreign sbrk call.) */
2016 if (sbrk_base != (char*)(-1))
2017 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
2019 brk = (char*)(MORECORE (sbrk_size));
2021 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2022 if (brk == (char*)(MORECORE_FAILURE) ||
2023 (brk < old_end && old_top != initial_top))
2026 sbrked_mem += sbrk_size;
2028 if (brk == old_end) /* can just add bytes to current top */
2030 top_size = sbrk_size + old_top_size;
2031 set_head(top, top_size | PREV_INUSE);
2035 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
2037 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
2038 sbrked_mem += brk - (char*)old_end;
2040 /* Guarantee alignment of first new chunk made from this space */
2041 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2042 if (front_misalign > 0)
2044 correction = (MALLOC_ALIGNMENT) - front_misalign;
2050 /* Guarantee the next brk will be at a page boundary */
2052 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2053 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2055 /* Allocate correction */
2056 new_brk = (char*)(MORECORE (correction));
2057 if (new_brk == (char*)(MORECORE_FAILURE)) return;
2059 sbrked_mem += correction;
2061 top = (mchunkptr)brk;
2062 top_size = new_brk - brk + correction;
2063 set_head(top, top_size | PREV_INUSE);
2065 if (old_top != initial_top)
2068 /* There must have been an intervening foreign sbrk call. */
2069 /* A double fencepost is necessary to prevent consolidation */
2071 /* If not enough space to do this, then user did something very wrong */
2072 if (old_top_size < MINSIZE)
2074 set_head(top, PREV_INUSE); /* will force null return from malloc */
2078 /* Also keep size a multiple of MALLOC_ALIGNMENT */
2079 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2080 set_head_size(old_top, old_top_size);
2081       chunk_at_offset(old_top, old_top_size          )->size =
               SIZE_SZ|PREV_INUSE;
2083       chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
               SIZE_SZ|PREV_INUSE;
2085 /* If possible, release the rest. */
2086 if (old_top_size >= MINSIZE)
2087 fREe(chunk2mem(old_top));
2091 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2092 max_sbrked_mem = sbrked_mem;
2093 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2094 max_total_mem = mmapped_mem + sbrked_mem;
2096 /* We always land on a page boundary */
2097 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2103 /* Main public routines */
2109 The requested size is first converted into a usable form, `nb'.
2110 This currently means to add 4 bytes overhead plus possibly more to
2111 obtain 8-byte alignment and/or to obtain a size of at least
2112 MINSIZE (currently 16 bytes), the smallest allocatable size.
2113 (All fits are considered `exact' if they are within MINSIZE bytes.)
2115     From there, the first of the following steps that succeeds is taken:
2117 1. The bin corresponding to the request size is scanned, and if
2118 a chunk of exactly the right size is found, it is taken.
2120 2. The most recently remaindered chunk is used if it is big
2121 enough. This is a form of (roving) first fit, used only in
2122 the absence of exact fits. Runs of consecutive requests use
2123 the remainder of the chunk used for the previous such request
2124 whenever possible. This limited use of a first-fit style
2125 allocation strategy tends to give contiguous chunks
2126 coextensive lifetimes, which improves locality and can reduce
2127 fragmentation in the long run.
2129 3. Other bins are scanned in increasing size order, using a
2130 chunk big enough to fulfill the request, and splitting off
2131 any remainder. This search is strictly by best-fit; i.e.,
2132 the smallest (with ties going to approximately the least
2133 recently used) chunk that fits is selected.
2135 4. If large enough, the chunk bordering the end of memory
2136 (`top') is split off. (This use of `top' is in accord with
2137 the best-fit search rule. In effect, `top' is treated as
2138 larger (and thus less well fitting) than any other available
2139 chunk since it can be extended to be as large as necessary
2140        (up to system limitations).)
2142 5. If the request size meets the mmap threshold and the
2143 system supports mmap, and there are few enough currently
2144 allocated mmapped regions, and a call to mmap succeeds,
2145 the request is allocated via direct memory mapping.
2147 6. Otherwise, the top of memory is extended by
2148 obtaining more space from the system (normally using sbrk,
2149 but definable to anything else via the MORECORE macro).
2150 Memory is gathered from the system (in system page-sized
2151 units) in a way that allows chunks obtained across different
2152 sbrk calls to be consolidated, but does not require
2153 contiguous memory. Thus, it should be safe to intersperse
2154 mallocs with other sbrk calls.
2157       All allocations are made from the `lowest' part of any found
2158 chunk. (The implementation invariant is that prev_inuse is
2159 always true of any allocated chunk; i.e., that each allocated
2160 chunk borders either a previously allocated and still in-use chunk,
2161 or the base of its memory arena.)
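/*
  Illustration (a sketch under stated assumptions, not the macros used
  here): the request-to-size conversion described above, assuming 4-byte
  SIZE_SZ, 8-byte alignment and a 16-byte MINSIZE.  The real
  request2size macro is equivalent in spirit but also deals with
  overflow and type widths.
*/
#if 0
static unsigned long ex_request2size(unsigned long req)
{
  /* add per-chunk overhead, round up to the alignment, enforce MINSIZE */
  unsigned long nb = (req + 4 + 7) & ~7UL;
  return (nb < 16) ? 16 : nb;
}
/* ex_request2size(1) == 16, ex_request2size(13) == 24 */
#endif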
2166 Void_t* mALLOc(size_t bytes)
2168 Void_t* mALLOc(bytes) size_t bytes;
2171 mchunkptr victim; /* inspected/selected chunk */
2172 INTERNAL_SIZE_T victim_size; /* its size */
2173 int idx; /* index for bin traversal */
2174 mbinptr bin; /* associated bin */
2175 mchunkptr remainder; /* remainder from a split */
2176 long remainder_size; /* its size */
2177 int remainder_index; /* its bin index */
2178 unsigned long block; /* block traverser bit */
2179 int startidx; /* first bin of a traversed block */
2180 mchunkptr fwd; /* misc temp for linking */
2181 mchunkptr bck; /* misc temp for linking */
2182 mbinptr q; /* misc temp */
2186 #ifdef CONFIG_SYS_MALLOC_F_LEN
2187 if (gd && !(gd->flags & GD_FLG_RELOC)) {
2191 new_ptr = gd->malloc_ptr + bytes;
2192 if (new_ptr > gd->malloc_limit)
2193 panic("Out of pre-reloc memory");
2194 ptr = map_sysmem(gd->malloc_base + gd->malloc_ptr, bytes);
2195 gd->malloc_ptr = ALIGN(new_ptr, sizeof(new_ptr));
2200 /* check if mem_malloc_init() was run */
2201 if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) {
2202 /* not initialized yet */
2206 if ((long)bytes < 0) return NULL;
2208   nb = request2size(bytes);  /* padded request size */
2210 /* Check for exact match in a bin */
2212 if (is_small_request(nb)) /* Faster version for small requests */
2214 idx = smallbin_index(nb);
2216 /* No traversal or size check necessary for small bins. */
2221 /* Also scan the next one, since it would have a remainder < MINSIZE */
2229 victim_size = chunksize(victim);
2230 unlink(victim, bck, fwd);
2231 set_inuse_bit_at_offset(victim, victim_size);
2232 check_malloced_chunk(victim, nb);
2233 return chunk2mem(victim);
2236 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2241 idx = bin_index(nb);
2244 for (victim = last(bin); victim != bin; victim = victim->bk)
2246 victim_size = chunksize(victim);
2247 remainder_size = victim_size - nb;
2249 if (remainder_size >= (long)MINSIZE) /* too big */
2251 --idx; /* adjust to rescan below after checking last remainder */
2255 else if (remainder_size >= 0) /* exact fit */
2257 unlink(victim, bck, fwd);
2258 set_inuse_bit_at_offset(victim, victim_size);
2259 check_malloced_chunk(victim, nb);
2260 return chunk2mem(victim);
2268 /* Try to use the last split-off remainder */
2270 if ( (victim = last_remainder->fd) != last_remainder)
2272 victim_size = chunksize(victim);
2273 remainder_size = victim_size - nb;
2275 if (remainder_size >= (long)MINSIZE) /* re-split */
2277 remainder = chunk_at_offset(victim, nb);
2278 set_head(victim, nb | PREV_INUSE);
2279 link_last_remainder(remainder);
2280 set_head(remainder, remainder_size | PREV_INUSE);
2281 set_foot(remainder, remainder_size);
2282 check_malloced_chunk(victim, nb);
2283 return chunk2mem(victim);
2286 clear_last_remainder;
2288 if (remainder_size >= 0) /* exhaust */
2290 set_inuse_bit_at_offset(victim, victim_size);
2291 check_malloced_chunk(victim, nb);
2292 return chunk2mem(victim);
2295 /* Else place in bin */
2297 frontlink(victim, victim_size, remainder_index, bck, fwd);
2301 If there are any possibly nonempty big-enough blocks,
2302 search for best fitting chunk by scanning bins in blockwidth units.
2305 if ( (block = idx2binblock(idx)) <= binblocks_r)
2308 /* Get to the first marked block */
2310 if ( (block & binblocks_r) == 0)
2312 /* force to an even block boundary */
2313 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2315 while ((block & binblocks_r) == 0)
2317 idx += BINBLOCKWIDTH;
2322 /* For each possibly nonempty block ... */
2325 startidx = idx; /* (track incomplete blocks) */
2326 q = bin = bin_at(idx);
2328 /* For each bin in this block ... */
2331 /* Find and use first big enough chunk ... */
2333 for (victim = last(bin); victim != bin; victim = victim->bk)
2335 victim_size = chunksize(victim);
2336 remainder_size = victim_size - nb;
2338 if (remainder_size >= (long)MINSIZE) /* split */
2340 remainder = chunk_at_offset(victim, nb);
2341 set_head(victim, nb | PREV_INUSE);
2342 unlink(victim, bck, fwd);
2343 link_last_remainder(remainder);
2344 set_head(remainder, remainder_size | PREV_INUSE);
2345 set_foot(remainder, remainder_size);
2346 check_malloced_chunk(victim, nb);
2347 return chunk2mem(victim);
2350 else if (remainder_size >= 0) /* take */
2352 set_inuse_bit_at_offset(victim, victim_size);
2353 unlink(victim, bck, fwd);
2354 check_malloced_chunk(victim, nb);
2355 return chunk2mem(victim);
2360 bin = next_bin(bin);
2362 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2364 /* Clear out the block bit. */
2366 do /* Possibly backtrack to try to clear a partial block */
2368 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2370 av_[1] = (mbinptr)(binblocks_r & ~block);
2375 } while (first(q) == q);
2377 /* Get to the next possibly nonempty block */
2379 if ( (block <<= 1) <= binblocks_r && (block != 0) )
2381 while ((block & binblocks_r) == 0)
2383 idx += BINBLOCKWIDTH;
2393 /* Try to use top chunk */
2395 /* Require that there be a remainder, ensuring top always exists */
2396 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2400 /* If big and would otherwise need to extend, try to use mmap instead */
2401 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2402 (victim = mmap_chunk(nb)) != 0)
2403 return chunk2mem(victim);
2407 malloc_extend_top(nb);
2408 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2409 return NULL; /* propagate failure */
2413 set_head(victim, nb | PREV_INUSE);
2414 top = chunk_at_offset(victim, nb);
2415 set_head(top, remainder_size | PREV_INUSE);
2416 check_malloced_chunk(victim, nb);
2417 return chunk2mem(victim);
2430 1. free(0) has no effect.
2432        2. If the chunk was allocated via mmap, it is released via munmap().
2434 3. If a returned chunk borders the current high end of memory,
2435 it is consolidated into the top, and if the total unused
2436           topmost memory exceeds the trim threshold, malloc_trim is called.
2439 4. Other chunks are consolidated as they arrive, and
2440 placed in corresponding bins. (This includes the case of
2441 consolidating with the current `last_remainder').
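/*
  Illustration (a minimal sketch, not the allocator's own code): the
  boundary-tag arithmetic behind backward consolidation in step 4.  When
  the preceding chunk is free, its size is recorded in our prev_size
  field, so the two chunks merge by stepping back and summing sizes.
  The struct here is a simplified stand-in for malloc_chunk.
*/
#if 0
struct ex_chunk { unsigned long prev_size; unsigned long size; };

static struct ex_chunk* ex_merge_backward(struct ex_chunk* p,
                                          unsigned long* sz)
{
  struct ex_chunk* prev =
    (struct ex_chunk*)((char*)p - p->prev_size); /* step back over it */
  *sz += p->prev_size;  /* merged chunk now covers both regions */
  return prev;          /* merged chunk begins at the older chunk */
}
#endif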
2447 void fREe(Void_t* mem)
2449 void fREe(mem) Void_t* mem;
2452 mchunkptr p; /* chunk corresponding to mem */
2453 INTERNAL_SIZE_T hd; /* its head field */
2454 INTERNAL_SIZE_T sz; /* its size */
2455 int idx; /* its bin index */
2456 mchunkptr next; /* next contiguous chunk */
2457 INTERNAL_SIZE_T nextsz; /* its size */
2458 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2459 mchunkptr bck; /* misc temp for linking */
2460 mchunkptr fwd; /* misc temp for linking */
2461 int islr; /* track whether merging with last_remainder */
2463 #ifdef CONFIG_SYS_MALLOC_F_LEN
2464 /* free() is a no-op - all the memory will be freed on relocation */
2465 if (!(gd->flags & GD_FLG_RELOC))
2469 if (mem == NULL) /* free(0) has no effect */
2476 if (hd & IS_MMAPPED) /* release mmapped memory. */
2483 check_inuse_chunk(p);
2485 sz = hd & ~PREV_INUSE;
2486 next = chunk_at_offset(p, sz);
2487 nextsz = chunksize(next);
2489 if (next == top) /* merge with top */
2493 if (!(hd & PREV_INUSE)) /* consolidate backward */
2495 prevsz = p->prev_size;
2496 p = chunk_at_offset(p, -((long) prevsz));
2498 unlink(p, bck, fwd);
2501 set_head(p, sz | PREV_INUSE);
2503 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2504 malloc_trim(top_pad);
2508 set_head(next, nextsz); /* clear inuse bit */
2512 if (!(hd & PREV_INUSE)) /* consolidate backward */
2514 prevsz = p->prev_size;
2515 p = chunk_at_offset(p, -((long) prevsz));
2518 if (p->fd == last_remainder) /* keep as last_remainder */
2521 unlink(p, bck, fwd);
2524 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2528 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2531 link_last_remainder(p);
2534 unlink(next, bck, fwd);
2538 set_head(p, sz | PREV_INUSE);
2541 frontlink(p, sz, idx, bck, fwd);
2552 Chunks that were obtained via mmap cannot be extended or shrunk
2553 unless HAVE_MREMAP is defined, in which case mremap is used.
2554 Otherwise, if their reallocation is for additional space, they are
2555 copied. If for less, they are just left alone.
2557 Otherwise, if the reallocation is for additional space, and the
2558 chunk can be extended, it is, else a malloc-copy-free sequence is
2559 taken. There are several different ways that a chunk could be
2560 extended. All are tried:
2562 * Extending forward into following adjacent free chunk.
2563 * Shifting backwards, joining preceding adjacent space
2564 * Both shifting backwards and extending forward.
2565 * Extending into newly sbrked space
2567 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2568 size argument of zero (re)allocates a minimum-sized chunk.
2570 If the reallocation is for less space, and the new request is for
2571     a `small' (<512 bytes) size, then the newly unused space is lopped
           off and freed.
2574 The old unix realloc convention of allowing the last-free'd chunk
2575 to be used as an argument to realloc is no longer supported.
2576 I don't know of any programs still relying on this feature,
2577 and allowing it would also allow too many other incorrect
2578 usages of realloc to be sensible.
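/*
  Usage illustration (caller's view; buffer names and sizes are
  hypothetical): the semantics documented above, independent of whether
  the chunk is extended in place or moved.
*/
#if 0
static void ex_realloc_usage(void)
{
  void* buf = malloc(100);          /* original block */
  void* grown = realloc(buf, 400);  /* may extend in place or move+copy */
  if (grown == NULL)
  {
    /* failure: buf is still valid and its contents are untouched */
  }
  else
    buf = grown;                    /* first 100 bytes are preserved */

  buf = realloc(NULL, 50);          /* same as malloc(50) */
  free(buf);
}
#endif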
2585 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2587 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2590 INTERNAL_SIZE_T nb; /* padded request size */
2592 mchunkptr oldp; /* chunk corresponding to oldmem */
2593 INTERNAL_SIZE_T oldsize; /* its size */
2595 mchunkptr newp; /* chunk to return */
2596 INTERNAL_SIZE_T newsize; /* its size */
2597 Void_t* newmem; /* corresponding user mem */
2599 mchunkptr next; /* next contiguous chunk after oldp */
2600 INTERNAL_SIZE_T nextsize; /* its size */
2602 mchunkptr prev; /* previous contiguous chunk before oldp */
2603 INTERNAL_SIZE_T prevsize; /* its size */
2605 mchunkptr remainder; /* holds split off extra space from newp */
2606 INTERNAL_SIZE_T remainder_size; /* its size */
2608 mchunkptr bck; /* misc temp for linking */
2609 mchunkptr fwd; /* misc temp for linking */
2611 #ifdef REALLOC_ZERO_BYTES_FREES
2612 if (bytes == 0) { fREe(oldmem); return 0; }
2615 if ((long)bytes < 0) return NULL;
2617 /* realloc of null is supposed to be same as malloc */
2618 if (oldmem == NULL) return mALLOc(bytes);
2620 #ifdef CONFIG_SYS_MALLOC_F_LEN
2621 if (!(gd->flags & GD_FLG_RELOC)) {
2622 /* This is harder to support and should not be needed */
2623 panic("pre-reloc realloc() is not supported");
2627 newp = oldp = mem2chunk(oldmem);
2628 newsize = oldsize = chunksize(oldp);
2631 nb = request2size(bytes);
2634 if (chunk_is_mmapped(oldp))
2637 newp = mremap_chunk(oldp, nb);
2638 if(newp) return chunk2mem(newp);
2640 /* Note the extra SIZE_SZ overhead. */
2641 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2642 /* Must alloc, copy, free. */
2643 newmem = mALLOc(bytes);
2644 if (newmem == 0) return 0; /* propagate failure */
2645 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2651 check_inuse_chunk(oldp);
2653 if ((long)(oldsize) < (long)(nb))
2656 /* Try expanding forward */
2658 next = chunk_at_offset(oldp, oldsize);
2659 if (next == top || !inuse(next))
2661 nextsize = chunksize(next);
2663 /* Forward into top only if a remainder */
2666 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2668 newsize += nextsize;
2669 top = chunk_at_offset(oldp, nb);
2670 set_head(top, (newsize - nb) | PREV_INUSE);
2671 set_head_size(oldp, nb);
2672 return chunk2mem(oldp);
2676 /* Forward into next chunk */
2677 else if (((long)(nextsize + newsize) >= (long)(nb)))
2679 unlink(next, bck, fwd);
2680 newsize += nextsize;
2690 /* Try shifting backwards. */
2692 if (!prev_inuse(oldp))
2694 prev = prev_chunk(oldp);
2695 prevsize = chunksize(prev);
2697 /* try forward + backward first to save a later consolidation */
2704 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2706 unlink(prev, bck, fwd);
2708 newsize += prevsize + nextsize;
2709 newmem = chunk2mem(newp);
2710 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2711 top = chunk_at_offset(newp, nb);
2712 set_head(top, (newsize - nb) | PREV_INUSE);
2713 set_head_size(newp, nb);
2718 /* into next chunk */
2719 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2721 unlink(next, bck, fwd);
2722 unlink(prev, bck, fwd);
2724 newsize += nextsize + prevsize;
2725 newmem = chunk2mem(newp);
2726 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2732 if (prev != NULL && (long)(prevsize + newsize) >= (long)nb)
2734 unlink(prev, bck, fwd);
2736 newsize += prevsize;
2737 newmem = chunk2mem(newp);
2738 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2745 newmem = mALLOc (bytes);
2747 if (newmem == NULL) /* propagate failure */
2750 /* Avoid copy if newp is next chunk after oldp. */
2751 /* (This can only happen when new chunk is sbrk'ed.) */
2753 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2755 newsize += chunksize(newp);
2760 /* Otherwise copy, free, and exit */
2761 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2767 split: /* split off extra room in old or expanded chunk */
2769 if (newsize - nb >= MINSIZE) /* split off remainder */
2771 remainder = chunk_at_offset(newp, nb);
2772 remainder_size = newsize - nb;
2773 set_head_size(newp, nb);
2774 set_head(remainder, remainder_size | PREV_INUSE);
2775 set_inuse_bit_at_offset(remainder, remainder_size);
2776 fREe(chunk2mem(remainder)); /* let free() deal with it */
2780 set_head_size(newp, newsize);
2781 set_inuse_bit_at_offset(newp, newsize);
2784 check_inuse_chunk(newp);
2785 return chunk2mem(newp);
2795 memalign requests more than enough space from malloc, finds a spot
2796 within that chunk that meets the alignment request, and then
2797 possibly frees the leading and trailing space.
2799 The alignment argument must be a power of two. This property is not
2800 checked by memalign, so misuse may result in random runtime errors.
2802 8-byte alignment is guaranteed by normal malloc calls, so don't
2803 bother calling memalign with an argument of 8 or less.
2805 Overreliance on memalign is a sure way to fragment space.
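/*
  Illustration (a sketch, assuming `alignment' is a power of two; the
  helper name is hypothetical): the rounding used below to find an
  aligned address inside the over-allocated block.  For powers of two,
  the mask -alignment used in the code below equals ~(alignment - 1).
*/
#if 0
static char* ex_align_up(char* m, unsigned long alignment)
{
  /* smallest address >= m that is a multiple of alignment */
  return (char*)(((unsigned long)m + alignment - 1) & ~(alignment - 1));
}
#endif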
2811 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2813 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2816 INTERNAL_SIZE_T nb; /* padded request size */
2817 char* m; /* memory returned by malloc call */
2818 mchunkptr p; /* corresponding chunk */
2819 char* brk; /* alignment point within p */
2820 mchunkptr newp; /* chunk to return */
2821 INTERNAL_SIZE_T newsize; /* its size */
2822   INTERNAL_SIZE_T leadsize;   /* leading space before alignment point */
2823 mchunkptr remainder; /* spare room at end to split off */
2824 long remainder_size; /* its size */
2826 if ((long)bytes < 0) return NULL;
2828 /* If need less alignment than we give anyway, just relay to malloc */
2830 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2832 /* Otherwise, ensure that it is at least a minimum chunk size */
2834 if (alignment < MINSIZE) alignment = MINSIZE;
2836 /* Call malloc with worst case padding to hit alignment. */
2838 nb = request2size(bytes);
2839 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2841 if (m == NULL) return NULL; /* propagate failure */
2845 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2848 if(chunk_is_mmapped(p))
2849 return chunk2mem(p); /* nothing more to do */
2852 else /* misaligned */
2855 Find an aligned spot inside chunk.
2856 Since we need to give back leading space in a chunk of at
2857 least MINSIZE, if the first calculation places us at
2858 a spot with less than MINSIZE leader, we can move to the
2859 next aligned spot -- we've allocated enough total room so that
2860 this is always possible.
2863 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2864 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2866 newp = (mchunkptr)brk;
2867 leadsize = brk - (char*)(p);
2868 newsize = chunksize(p) - leadsize;
2871 if(chunk_is_mmapped(p))
2873 newp->prev_size = p->prev_size + leadsize;
2874 set_head(newp, newsize|IS_MMAPPED);
2875 return chunk2mem(newp);
2879 /* give back leader, use the rest */
2881 set_head(newp, newsize | PREV_INUSE);
2882 set_inuse_bit_at_offset(newp, newsize);
2883 set_head_size(p, leadsize);
2887 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2890 /* Also give back spare room at the end */
2892 remainder_size = chunksize(p) - nb;
2894 if (remainder_size >= (long)MINSIZE)
2896 remainder = chunk_at_offset(p, nb);
2897 set_head(remainder, remainder_size | PREV_INUSE);
2898 set_head_size(p, nb);
2899 fREe(chunk2mem(remainder));
2902 check_inuse_chunk(p);
2903 return chunk2mem(p);
2911 valloc just invokes memalign with alignment argument equal
2912 to the page size of the system (or as near to this as can
2913 be figured out from all the includes/defines above.)
2917 Void_t* vALLOc(size_t bytes)
2919 Void_t* vALLOc(bytes) size_t bytes;
2922 return mEMALIGn (malloc_getpagesize, bytes);
2926 pvalloc just invokes valloc for the nearest pagesize
2927     that will accommodate the request
2932 Void_t* pvALLOc(size_t bytes)
2934 Void_t* pvALLOc(bytes) size_t bytes;
2937 size_t pagesize = malloc_getpagesize;
2938 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2943 calloc calls malloc, then zeroes out the allocated chunk.
2948 Void_t* cALLOc(size_t n, size_t elem_size)
2950 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2954 INTERNAL_SIZE_T csz;
2956 INTERNAL_SIZE_T sz = n * elem_size;
2959   /* check if malloc_extend_top was called, in which case we don't need to clear */
2961 mchunkptr oldtop = top;
2962 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2964 Void_t* mem = mALLOc (sz);
2966 if ((long)n < 0) return NULL;
2972 #ifdef CONFIG_SYS_MALLOC_F_LEN
2973 if (!(gd->flags & GD_FLG_RELOC)) {
2974 MALLOC_ZERO(mem, sz);
2980   /* Two optional cases in which clearing is not necessary */
2984 if (chunk_is_mmapped(p)) return mem;
2990 if (p == oldtop && csz > oldtopsize)
2992 /* clear only the bytes from non-freshly-sbrked memory */
2997 MALLOC_ZERO(mem, csz - SIZE_SZ);
3004 cfree just calls free. It is needed/defined on some systems
3005 that pair it with calloc, presumably for odd historical reasons.
3009 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
3011 void cfree(Void_t *mem)
3013 void cfree(mem) Void_t *mem;
3024 Malloc_trim gives memory back to the system (via negative
3025 arguments to sbrk) if there is unused memory at the `high' end of
3026 the malloc pool. You can call this after freeing large blocks of
3027 memory to potentially reduce the system-level memory requirements
3028 of a program. However, it cannot guarantee to reduce memory. Under
3029 some allocation patterns, some large free blocks of memory will be
3030     locked between two used chunks, so they cannot be given back to
           the system.
3033 The `pad' argument to malloc_trim represents the amount of free
3034 trailing space to leave untrimmed. If this argument is zero,
3035 only the minimum amount of memory to maintain internal data
3036 structures will be left (one page or less). Non-zero arguments
3037 can be supplied to maintain enough trailing space to service
3038     future expected allocations without having to re-obtain memory
           from the system.
3041 Malloc_trim returns 1 if it actually released any memory, else 0.
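/*
  Usage illustration (hypothetical caller and buffer): after releasing a
  large allocation, return trailing free memory to the system while
  keeping 64KB of slack for upcoming allocations.
*/
#if 0
static void ex_trim_usage(void* big_buffer)
{
  free(big_buffer);            /* drop a large allocation...           */
  if (malloc_trim(64 * 1024))  /* ...then trim, leaving 64KB untrimmed */
    ; /* some topmost memory was handed back via a negative sbrk */
}
#endif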
3046 int malloc_trim(size_t pad)
3048 int malloc_trim(pad) size_t pad;
3051 long top_size; /* Amount of top-most memory */
3052 long extra; /* Amount to release */
3053 char* current_brk; /* address returned by pre-check sbrk call */
3054 char* new_brk; /* address returned by negative sbrk call */
3056 unsigned long pagesz = malloc_getpagesize;
3058 top_size = chunksize(top);
3059 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3061 if (extra < (long)pagesz) /* Not enough memory to release */
3066 /* Test to make sure no one else called sbrk */
3067 current_brk = (char*)(MORECORE (0));
3068 if (current_brk != (char*)(top) + top_size)
3069 return 0; /* Apparently we don't own memory; must fail */
3073 new_brk = (char*)(MORECORE (-extra));
3075 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3077 /* Try to figure out what we have */
3078 current_brk = (char*)(MORECORE (0));
3079 top_size = current_brk - (char*)top;
3080 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3082 sbrked_mem = current_brk - sbrk_base;
3083 set_head(top, top_size | PREV_INUSE);
3091 /* Success. Adjust top accordingly. */
3092 set_head(top, (top_size - extra) | PREV_INUSE);
3093 sbrked_mem -= extra;
3106 This routine tells you how many bytes you can actually use in an
3107 allocated chunk, which may be more than you requested (although
3108 often not). You can use this many bytes without worrying about
3109 overwriting other allocated objects. Not a particularly great
3110 programming practice, but still sometimes useful.
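/*
  Usage illustration: the usable size may exceed the request because of
  padding to alignment and MINSIZE.  With 4-byte SIZE_SZ, a 5-byte
  request yields a 16-byte chunk, of which 12 bytes are usable.
*/
#if 0
static void ex_usable_usage(void)
{
  char* s = malloc(5);
  if (s != NULL)
  {
    size_t usable = malloc_usable_size(s); /* >= 5; 12 under the assumptions above */
    s[usable - 1] = '\0';                  /* any byte below `usable' is safe */
  }
  free(s);
}
#endif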
3115 size_t malloc_usable_size(Void_t* mem)
3117 size_t malloc_usable_size(mem) Void_t* mem;
3126 if(!chunk_is_mmapped(p))
3128 if (!inuse(p)) return 0;
3129 check_inuse_chunk(p);
3130 return chunksize(p) - SIZE_SZ;
3132 return chunksize(p) - 2*SIZE_SZ;
3139 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3142 static void malloc_update_mallinfo()
3151 INTERNAL_SIZE_T avail = chunksize(top);
3152 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3154 for (i = 1; i < NAV; ++i)
3157 for (p = last(b); p != b; p = p->bk)
3160 check_free_chunk(p);
3161 for (q = next_chunk(p);
3162 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3164 check_inuse_chunk(q);
3166 avail += chunksize(p);
3171 current_mallinfo.ordblks = navail;
3172 current_mallinfo.uordblks = sbrked_mem - avail;
3173 current_mallinfo.fordblks = avail;
3174 current_mallinfo.hblks = n_mmaps;
3175 current_mallinfo.hblkhd = mmapped_mem;
3176 current_mallinfo.keepcost = chunksize(top);
3187     Prints the amount of space obtained from the system (both
3188 via sbrk and mmap), the maximum amount (which may be more than
3189 current if malloc_trim and/or munmap got called), the maximum
3190 number of simultaneous mmap regions used, and the current number
3191 of bytes allocated via malloc (or realloc, etc) but not yet
3192 freed. (Note that this is the number of bytes allocated, not the
3193 number requested. It will be larger than the number requested
3194 because of alignment and bookkeeping overhead.)
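/*
  Usage illustration: a typical call site is just before program exit,
  to see peak and current footprint via the printf lines below.
*/
#if 0
malloc_stats(); /* prints "system bytes = ...", "in use bytes = ...", etc. */
#endif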
3201 malloc_update_mallinfo();
3202 printf("max system bytes = %10u\n",
3203 (unsigned int)(max_total_mem));
3204 printf("system bytes = %10u\n",
3205 (unsigned int)(sbrked_mem + mmapped_mem));
3206 printf("in use bytes = %10u\n",
3207 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3209 printf("max mmap regions = %10u\n",
3210 (unsigned int)max_n_mmaps);
3216 mallinfo returns a copy of updated current mallinfo.
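/*
  Usage illustration (fields as filled in by malloc_update_mallinfo
  above):
*/
#if 0
static void ex_mallinfo_usage(void)
{
  struct mallinfo mi = mALLINFo();
  printf("free bytes in arena = %d\n", mi.fordblks);
  printf("mmapped bytes       = %d\n", mi.hblkhd);
}
#endif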
3220 struct mallinfo mALLINFo()
3222 malloc_update_mallinfo();
3223 return current_mallinfo;
3233 mallopt is the general SVID/XPG interface to tunable parameters.
3234 The format is to provide a (parameter-number, parameter-value) pair.
3235 mallopt then sets the corresponding parameter to the argument
3236 value if it can (i.e., so long as the value is meaningful),
3237 and returns 1 if successful else 0.
3239 See descriptions of tunable parameters above.
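/*
  Usage illustration (parameter names and values are examples; see the
  tunable-parameter descriptions above):
*/
#if 0
static void ex_mallopt_usage(void)
{
  mALLOPt(M_TRIM_THRESHOLD, 256 * 1024); /* trim only when >256KB free on top */
  mALLOPt(M_TOP_PAD, 64 * 1024);         /* keep 64KB extra slack when sbrking */
}
#endif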
3244 int mALLOPt(int param_number, int value)
3246 int mALLOPt(param_number, value) int param_number; int value;
3249 switch(param_number)
3251 case M_TRIM_THRESHOLD:
3252 trim_threshold = value; return 1;
3254 top_pad = value; return 1;
3255 case M_MMAP_THRESHOLD:
3256 mmap_threshold = value; return 1;
3259 n_mmaps_max = value; return 1;
3261 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3273 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
3274 * return null for negative arguments
3275 * Added Several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
3276 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3277 (e.g. WIN32 platforms)
3278 * Cleanup up header file inclusion for WIN32 platforms
3279 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3280 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3281 memory allocation routines
3282 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3283 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3284 usage of 'assert' in non-WIN32 code
3285     * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
3287 * Always call 'fREe()' rather than 'free()'
3289 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3290 * Fixed ordering problem with boundary-stamping
3292 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3293 * Added pvalloc, as recommended by H.J. Liu
3294 * Added 64bit pointer support mainly from Wolfram Gloger
3295 * Added anonymously donated WIN32 sbrk emulation
3296 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3297     * malloc_extend_top: fix mask error that caused wastage after
           foreign sbrks
3299 * Add linux mremap support code from HJ Liu
3301 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3302 * Integrated most documentation with the code.
3303 * Add support for mmap, with help from
3304 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3305 * Use last_remainder in more cases.
3306 * Pack bins using idea from colin@nyx10.cs.du.edu
3307     * Use ordered bins instead of best-fit threshold
3308 * Eliminate block-local decls to simplify tracing and debugging.
3309 * Support another case of realloc via move into top
3310     * Fix error occurring when initial sbrk_base not word-aligned.
3311 * Rely on page size for units instead of SBRK_UNIT to
3312 avoid surprises about sbrk alignment conventions.
3313 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3314 (raymond@es.ele.tue.nl) for the suggestion.
3315 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3316 * More precautions for cases where other routines call sbrk,
3317 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3318 * Added macros etc., allowing use in linux libc from
3319 H.J. Lu (hjl@gnu.ai.mit.edu)
3320 * Inverted this history list
3322 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3323 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3324 * Removed all preallocation code since under current scheme
3325 the work required to undo bad preallocations exceeds
3326 the work saved in good cases for most test programs.
3327 * No longer use return list or unconsolidated bins since
3328 no scheme using them consistently outperforms those that don't
3329 given above changes.
3330 * Use best fit for very large chunks to prevent some worst-cases.
3331 * Added some support for debugging
3333 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3334 * Removed footers when chunks are in use. Thanks to
3335 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3337 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3338 * Added malloc_trim, with help from Wolfram Gloger
3339 (wmglo@Dent.MED.Uni-Muenchen.DE).
3341 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3343 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3344 * realloc: try to expand in both directions
3345 * malloc: swap order of clean-bin strategy;
3346 * realloc: only conditionally expand backwards
3347 * Try not to scavenge used bins
3348 * Use bin counts as a guide to preallocation
3349 * Occasionally bin return list chunks in first scan
3350 * Add a few optimizations from colin@nyx10.cs.du.edu
3352 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3353 * faster bin computation & slightly different binning
3354 * merged all consolidations to one part of malloc proper
3355 (eliminating old malloc_find_space & malloc_clean_bin)
3356 * Scan 2 returns chunks (not just 1)
3357 * Propagate failure in realloc if malloc returns 0
3358 * Add stuff to allow compilation on non-ANSI compilers
3359 from kpv@research.att.com
3361 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3362 * removed potential for odd address access in prev_chunk
3363 * removed dependency on getpagesize.h
3364 * misc cosmetics and a bit more internal documentation
3365 * anticosmetics: mangled names in macros to evade debugger strangeness
3366 * tested on sparc, hp-700, dec-mips, rs6000
3367 with gcc & native cc (hp, dec only) allowing
3368 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3370 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3371 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3372 structure of old version, but most details differ.)