otto
58ab079478
Only init chunk_info once, plus some moving of code to group related functions.
7 years ago
otto
28aa9f8c65
step one in avoiding unnecessary init of chunk_info;
some cleanup; tested by sthen@ on a ports build
7 years ago
otto
d5460018ef
's' should include 'f'; from Jacqueline Jolicoeur
7 years ago
jsing
edffe314c0
Restore a return that was inadvertently removed from freezero() in r1.234;
its absence results in an internal double free when internal functions are
not in use.
ok otto@
7 years ago
otto
437fad2669
do not return f() where f is a void function; loop var type fix
7 years ago
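For context on the entry above (an editorial sketch, not code from malloc.c; names are hypothetical): ISO C forbids `return expr;` in a function whose return type is void, so the call and the return have to be separated.

	#include <stdio.h>

	static void
	log_event(int code)
	{
		fprintf(stderr, "event %d\n", code);
	}

	static void
	handle(int code)
	{
		/* Invalid ISO C in a void function: return log_event(code); */
		log_event(code);
	}

	int
	main(void)
	{
		handle(1);
		return 0;
	}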
otto
09620f7263
Use dprintf instead of snprintf/write
7 years ago
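Editorial sketch of the pattern the commit above describes (function and buffer names are illustrative, not the actual malloc.c code): dprintf(3) formats directly to a file descriptor, replacing snprintf(3) into a stack buffer followed by write(2).

	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static void
	report_old(int fd, const char *msg)
	{
		char buf[256];

		/* Old pattern: format into a buffer, then write it out. */
		snprintf(buf, sizeof(buf), "malloc: %s\n", msg);
		write(fd, buf, strlen(buf));
	}

	static void
	report_new(int fd, const char *msg)
	{
		/* New pattern: format straight to the descriptor. */
		dprintf(fd, "malloc: %s\n", msg);
	}

	int
	main(void)
	{
		report_old(2, "old style");
		report_new(2, "new style");
		return 0;
	}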
otto
3b6bc929b8
Make delayed free non-optional and make F do an extensive double free check.
ok tb@ tedu@
7 years ago
otto
4a127addf3
mapalign returns MAP_FAILED on failure; from George Koehler
7 years ago
otto
d1f95e32d7
check double free before canary for chunks; ok millert@
7 years ago
otto
83cbddd78e
two MALLOC_STATS-only tweaks; one from David CARLIER, the other found by clang
7 years ago
otto
131bcbfdc1
one more instance of the previous commit; also initialize ->offset to a
definite value in the size == 0 case
7 years ago
otto
4a550fa72d
Only access offset if canaries are enabled *and* size > 0, otherwise offset
is not initialized. Problem spotted by Carlin Bingham; ok phessler@ tedu@
7 years ago
dlg
b4e0da4e31
port the RBT code to userland by making it part of libc.
src/lib/libc/gen/tree.c is a copy of src/sys/kern/subr_tree.c, but with
annotations for symbol visibility. changes to one should be reflected
in the other.
the malloc debug code that uses RB code is ported to RBT.
because libc provides the RBT code, procmap doesn't have to reach into
the kernel and build subr_tree.c itself now.
mild enthusiasm from many
ok guenther@
7 years ago
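A rough usage sketch of the RBT interface the entry above refers to, assuming the RBT_* macros exposed through <sys/tree.h>; the struct and comparison function here are illustrative, not taken from the malloc debug code.

	#include <sys/tree.h>
	#include <stdio.h>

	struct node {
		RBT_ENTRY(node) entry;
		int key;
	};

	static int
	node_cmp(const struct node *a, const struct node *b)
	{
		return a->key < b->key ? -1 : a->key > b->key;
	}

	RBT_HEAD(node_tree, node);
	RBT_PROTOTYPE(node_tree, node, entry, node_cmp);
	RBT_GENERATE(node_tree, node, entry, node_cmp);

	int
	main(void)
	{
		struct node_tree head;
		struct node a = { .key = 2 }, b = { .key = 1 }, *n;

		RBT_INIT(node_tree, &head);
		RBT_INSERT(node_tree, &head, &a);
		RBT_INSERT(node_tree, &head, &b);

		/* Walks the tree in key order. */
		RBT_FOREACH(n, node_tree, &head)
			printf("%d\n", n->key);
		return 0;
	}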
otto
0c8e3f2e80
- fix bug wrt posix_memalign(3) of blocks between half a page and a page
- document that posix_memalign() does not play nice with recallocarray(3) and
freezero(3)
8 years ago
otto
6a32bb1c73
For small allocations (chunks), freezero only validates the given
size if canaries are enabled. In that case we have the exact requested
size of the allocation. But we can at least check the given size
against the chunk size if C is not enabled. Plus add some braces
so my brain doesn't have to scan for dangling else problems when I
see this code.
8 years ago
otto
979a770ed0
don't forget to fill in canary bytes for posix_memalign(3); reported by
and ok jeremy@
8 years ago
otto
f7bddd982e
whitespace fixes
8 years ago
otto
80c2ebad1c
allow clearing less than allocated and document freezero(3) better
8 years ago
otto
92d2cf9d5b
Introducing freezero(3), a version of free that guarantees the process
no longer has access to the content of a memory object. It does
this by either clearing (if the object memory remains cached) or
by calling munmap(2). ok millert@, deraadt@, guenther@
8 years ago
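A minimal usage sketch of freezero(3) as introduced above (the key buffer is illustrative): the second argument is the size of the allocation to wipe before the memory is reused or unmapped.

	#include <stdlib.h>

	int
	main(void)
	{
		size_t len = 32;
		unsigned char *key = malloc(len);

		if (key == NULL)
			return 1;
		arc4random_buf(key, len);

		/* ... use the key ... */

		/* Free it and guarantee the process can no longer read the
		 * contents: the allocator clears the memory if it stays
		 * cached, or munmap(2)s it. */
		freezero(key, len);
		return 0;
	}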
otto
253b92f197
first print size in meta-data then supplied arg size when an inconsistency is
detected wrt recallocarray()
8 years ago
otto
c1fcb739fc
small cleanup & optimization; ok deraadt@ millert@
8 years ago
otto
c662774838
add a helper function to print all pools #ifdef MALLOC_STATS
from David CARLIER
8 years ago
otto
5b40b56851
move recallocarray to malloc.c and
- use internal meta-data to do more consistency checking (especially with
option C)
- use cheap free if possible
ok deraadt@
8 years ago
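A minimal usage sketch of recallocarray(3), whose implementation the entry above moves into malloc.c (array name and sizes are illustrative): the old and new element counts are both passed so the allocator can zero the grown region as well as any memory it releases, with overflow checking on the multiplication.

	#include <stdlib.h>

	int
	main(void)
	{
		size_t n = 8;
		int *arr, *tmp;

		/* With a NULL pointer and old count 0 this behaves like calloc. */
		arr = recallocarray(NULL, 0, n, sizeof(*arr));
		if (arr == NULL)
			return 1;

		/* Grow to 16 elements; the new elements are zeroed and memory
		 * released from the old allocation is cleared. */
		tmp = recallocarray(arr, n, 16, sizeof(*arr));
		if (tmp == NULL) {
			freezero(arr, n * sizeof(*arr));
			return 1;
		}
		arr = tmp;
		n = 16;

		freezero(arr, n * sizeof(*arr));
		return 0;
	}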
jsg
d08f908fca
Add a NULL test to wrterror() to avoid a NULL deref when called from a
free() error path.
ok otto@
8 years ago
otto
a82fcd44e6
fix a comment and rm some dead code as a result of the previous diff
8 years ago
otto
4a9a7195d2
Let realloc handle and produce moved pointers for allocations between
half a page and a page. ok jmatthew@ tb@
8 years ago
otto
3d80117872
1. When shrinking a chunk allocation, compare the size of the current
allocation to the size of the new allocation (instead of the requested size).
2. Previously, realloc took the easy way and always reallocated if C is
active. This commit fixes that by carefully updating the recorded requested
size in all cases, and writing the canary bytes in the proper location
after reallocating.
3. Introduce defines to test if MALLOC_MOVE should be done and to
compute the new value.
8 years ago
otto
099c1cfdb8
MALLOC_STATS tweaks, by default not compiled in
8 years ago
otto
a197637f0f
small tweak to also check canaries if F is in effect
8 years ago
otto
8119a345a7
remove some old option letters and also make P non-settable. It has
been the default for ages, and I see no valid reason to be able to
disable it. ok natano@
8 years ago
otto
ae5357c652
Pages in the malloc cache are either reused quickly or unmapped
quickly. In both cases it does not make sense to set hints on them.
So remove that option, which is just a remnant of old times when
malloc used to hold on to pages. ok stefan@
8 years ago
otto
992807bce2
- fix MALLOC_STATS compile
- redundant cast is redundant
8 years ago
otto
03ffdf003d
fix some void * arithmetic by casting
8 years ago
otto
1255da53a3
and recommit with fixed GC
8 years ago
otto
5d783ecc04
backout for now; flag combination GC is not ok
8 years ago
otto
5656d7bf98
Also place canaries in > page sized objects (if C is in effect); ok tb@
8 years ago
guenther
71af4d5f52
Wrap _malloc_init() so internal calls go directly
prodded by otto@
ok kettenis@ otto@
8 years ago
otto
c00ceb22a8
0xd0 -> 0xdb; ok deraadt@ millert@ tedu@
8 years ago
otto
8b706cc40e
optimize canary code a bit by storing offset of sizes table instead of
recomputing it all the time
8 years ago
otto
3eeb2e7bb1
stray tab
8 years ago
otto
2c67f40d2b
Better implementation of chunk canaries: store size in chunk meta data
instead of chunk itself; does not change actual allocated size; ok tedu@
8 years ago
guenther
1a1f277cca
Delete casts to off_t and size_t that are implied by assignments
or prototypes. Ditto for some of the char* and void* casts too.
verified no change to instructions on ILP32 (i386) and LP64 (amd64)
ok natano@ abluhm@ deraadt@ millert@
8 years ago
otto
814803d97e
move page junking to unmap(), right before we stick the region in the cache;
ok tedu@
8 years ago
otto
7f29e95497
Less lock contention by using more pools for multi-threaded programs.
tested by many (thanks!) ok tedu, guenther@
8 years ago
tedu
6c73827bf7
black magic for sparc page size can go
8 years ago
otto
8b9a47cd4e
wrterror() is fatal, delete dead code; ok tom@ natano@ tedu@
8 years ago
otto
48a1ebbb9c
J/j is a three-valued option; document and fix code to actually support that
with a little help from jmc@ for the man page bits
ok jca@ and a reluctant tedu@
8 years ago
otto
4e61a98ad4
adapt S option: add C, remove F (not relevant with a cache size of 0, and it
disables chunk randomization), remove P: it is the default
8 years ago
tb
071457b57b
Back out previous; otto saw a potential race that could lead to a
double unmap and I experienced a much more unstable firefox.
discussed with otto on icb
8 years ago
tedu
86a8b4eb22
defer munmap to after unlocking malloc. this can (unfortunately) be an
expensive syscall, and we don't want to tie up other threads. there's no
need to hold the lock, so defer it to afterwards.
from Michael McConville
ok deraadt
8 years ago
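A rough sketch of the defer-after-unlock pattern described above; all names are illustrative, and as the backout entry from tb@ further up notes, the actual change was later reverted because of a potential double-unmap race.

	#include <pthread.h>
	#include <sys/mman.h>
	#include <stddef.h>

	static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

	void
	release_region(void *p, size_t sz)
	{
		void *to_unmap;
		size_t unmap_sz;

		pthread_mutex_lock(&pool_lock);
		/* ... update allocator bookkeeping under the lock ... */
		to_unmap = p;
		unmap_sz = sz;
		pthread_mutex_unlock(&pool_lock);

		/* The expensive syscall runs after the lock is dropped so
		 * other threads are not held up waiting for it. */
		if (to_unmap != NULL)
			munmap(to_unmap, unmap_sz);
	}

	int
	main(void)
	{
		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE, -1, 0);

		if (p != MAP_FAILED)
			release_region(p, 4096);
		return 0;
	}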