mirror of https://gitee.com/openkylin/linux.git
12b1c5f382
Originally __free_pages_bulk used the relative page number within a zone to define its buddies. This meant that to maintain the "maximally aligned" requirements (that an allocation of size N will be aligned at least to N physically) zones had to also be aligned to 1<<MAX_ORDER pages.

When __free_pages_bulk was updated to use the relative page frame numbers of the freed pages to pair buddies, this released the alignment constraint on the 'left' edge of the zone. This allows _either_ edge of the zone to contain partial MAX_ORDER sized buddies. These simply never will have matching buddies and thus will never make it to the 'top' of the pyramid.

The patch below removes a now redundant check ensuring that the mem_map was aligned to MAX_ORDER.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Cc: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Kconfig
Makefile
bootmem.c
fadvise.c
filemap.c
filemap.h
filemap_xip.c
fremap.c
highmem.c
hugetlb.c
internal.h
madvise.c
memory.c
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page-writeback.c
page_alloc.c
page_io.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
sparse.c
swap.c
swap_state.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c
vmscan.c