android: binder: fix binder mmap failures

binder_update_page_range() initializes only the addr and size
fields of its on-stack 'struct vm_struct tmp_area;' and passes
it to map_vm_area().
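
For reference, the pattern in question (it is the code removed by
the second hunk at the end of this patch) is:

        /* only addr and size are set; tmp_area.flags is stack garbage */
        tmp_area.addr = page_addr;
        tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
        ret = map_vm_area(&tmp_area, PAGE_KERNEL, page);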

Before 71394fe501 ("mm: vmalloc: add flag preventing guard hole allocation")
this was enough, because map_vm_area() didn't use any fields of
vm_struct other than addr and size.

Now get_vm_area_size() (used by map_vm_area()) reads the vm_struct's
flags field to determine whether the vm area has a guard hole or not.
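
After that commit, get_vm_area_size() looks roughly like this
(paraphrased; VM_NO_GUARD is the flag 71394fe501 introduced):

        static inline size_t get_vm_area_size(const struct vm_struct *area)
        {
                if (!(area->flags & VM_NO_GUARD))
                        /* return actual size without guard page */
                        return area->size - PAGE_SIZE;
                return area->size;
        }

So if the garbage in tmp_area.flags happens to have VM_NO_GUARD set,
map_vm_area() tries to map both pages of tmp_area while binder
supplies only a single page pointer, which would explain the
warning below.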

binder_update_page_range() doesn't initialize the flags field, which
causes the following binder mmap failures:
-----------[ cut here ]------------
WARNING: CPU: 0 PID: 1971 at mm/vmalloc.c:130
vmap_page_range_noflush+0x119/0x144()
CPU: 0 PID: 1971 Comm: healthd Not tainted 4.0.0-rc1-00399-g7da3fdc-dirty #157
Hardware name: ARM-Versatile Express
[<c001246d>] (unwind_backtrace) from [<c000f7f9>] (show_stack+0x11/0x14)
[<c000f7f9>] (show_stack) from [<c049a221>] (dump_stack+0x59/0x7c)
[<c049a221>] (dump_stack) from [<c001cf21>] (warn_slowpath_common+0x55/0x84)
[<c001cf21>] (warn_slowpath_common) from [<c001cfe3>]
(warn_slowpath_null+0x17/0x1c)
[<c001cfe3>] (warn_slowpath_null) from [<c00c66c5>]
(vmap_page_range_noflush+0x119/0x144)
[<c00c66c5>] (vmap_page_range_noflush) from [<c00c716b>] (map_vm_area+0x27/0x48)
[<c00c716b>] (map_vm_area) from [<c038ddaf>]
(binder_update_page_range+0x12f/0x27c)
[<c038ddaf>] (binder_update_page_range) from [<c038e857>]
(binder_mmap+0xbf/0x1ac)
[<c038e857>] (binder_mmap) from [<c00c2dc7>] (mmap_region+0x2eb/0x4d4)
[<c00c2dc7>] (mmap_region) from [<c00c3197>] (do_mmap_pgoff+0x1e7/0x250)
[<c00c3197>] (do_mmap_pgoff) from [<c00b35b5>] (vm_mmap_pgoff+0x45/0x60)
[<c00b35b5>] (vm_mmap_pgoff) from [<c00c1f39>] (SyS_mmap_pgoff+0x5d/0x80)
[<c00c1f39>] (SyS_mmap_pgoff) from [<c000ce81>] (ret_fast_syscall+0x1/0x5c)
---[ end trace 48c2c4b9a1349e54 ]---
binder: 1982: binder_alloc_buf failed to map page at f0e00000 in kernel
binder: binder_mmap: 1982 b6bde000-b6cdc000 alloc small buf failed -12

Use map_kernel_range_noflush() instead of map_vm_area(), as it is a
better API for binder's purposes and allows getting rid of
'struct vm_struct tmp_area' entirely.
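
For context (the return and flush semantics below are taken from the
vmalloc API at the time, not spelled out in this patch itself):
map_kernel_range_noflush() returns the number of pages it mapped and
does no cache flushing of its own, which is why the new code checks
'ret != 1' and calls flush_cache_vmap() explicitly. A condensed
sketch of the new sequence:

        ret = map_kernel_range_noflush((unsigned long)page_addr,
                                       PAGE_SIZE, PAGE_KERNEL, page);
        /* no implicit flush, so flush the newly mapped page by hand */
        flush_cache_vmap((unsigned long)page_addr,
                         (unsigned long)page_addr + PAGE_SIZE);
        /* one page was requested, so success means exactly one page mapped */
        if (ret != 1)
                goto err_map_kernel_failed;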

Fixes: 71394fe501 ("mm: vmalloc: add flag preventing guard hole allocation")
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Reported-by: Amit Pundir <amit.pundir@linaro.org>
Tested-by: Amit Pundir <amit.pundir@linaro.org>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 1 file changed, 5 insertions(+), 5 deletions(-)

@@ -551,7 +551,6 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
 {
         void *page_addr;
         unsigned long user_page_addr;
-        struct vm_struct tmp_area;
         struct page **page;
         struct mm_struct *mm;
@@ -600,10 +599,11 @@ static int binder_update_page_range(struct binder_proc *proc, int allocate,
                                proc->pid, page_addr);
                         goto err_alloc_page_failed;
                 }
-                tmp_area.addr = page_addr;
-                tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
-                ret = map_vm_area(&tmp_area, PAGE_KERNEL, page);
-                if (ret) {
+                ret = map_kernel_range_noflush((unsigned long)page_addr,
+                                               PAGE_SIZE, PAGE_KERNEL, page);
+                flush_cache_vmap((unsigned long)page_addr,
+                                 (unsigned long)page_addr + PAGE_SIZE);
+                if (ret != 1) {
                         pr_err("%d: binder_alloc_buf failed to map page at %p in kernel\n",
                                proc->pid, page_addr);
                         goto err_map_kernel_failed;