Use CONFIG_LOWMEM_SIZE if the system has a larger RAM size.
For systems with larger RAM, enable HIGHMEM support.
Also set up a limit for memblock and use memblock
allocation in the lowmem region.
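A rough sketch of the memblock part (illustrative only; the memory_start
and lowmem_size names stand in for whatever the real setup code derives
from CONFIG_LOWMEM_SIZE and the detected memory layout) is to cap
memblock so early allocations stay inside the directly mapped lowmem:

  #include <linux/init.h>
  #include <linux/memblock.h>

  static unsigned long memory_start;   /* physical start of RAM (placeholder) */
  static unsigned long lowmem_size;    /* capped by CONFIG_LOWMEM_SIZE (placeholder) */

  static void __init setup_memblock_limit(void)
  {
          /* Force all memblock allocations to come from lowmem. */
          memblock_set_current_limit(memory_start + lowmem_size);
  }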
Signed-off-by: Michal Simek <monstr@monstr.eu>
There is a problem with the bitwise OR (|) check because for
some combinations addr | size | (addr + size) is equal
to seg.
For the standard kernel configuration (kernel starts at 0xC0000000)
the user-space seg limit is 0xBFFFFFFF, and everything below
this limit is fine.
But even the address 0xBFFFFFFF itself is fine because it
is below kernel space.
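A small user-space demonstration of such a combination (the exact shape
of the old check is an assumption based on the description above; the
0xBFFFFFFF limit matches the one mentioned):

  #include <stdio.h>

  int main(void)
  {
          unsigned long seg  = 0xBFFFFFFFUL; /* last user-space address */
          unsigned long addr = 0xBFFFF000UL;
          unsigned long size = 0x00000FFFUL;

          /* OR-based check with a strict '>': rejects this combination
           * because addr | size | (addr + size) == seg. */
          int or_check = seg > (addr | size | (addr + size));

          /* Plain range check: the access ends at 0xBFFFFFFF, which is
           * still below kernel space, so it should be accepted. */
          int range_check = (addr + size) <= seg;

          printf("or_check=%d range_check=%d\n", or_check, range_check);
          return 0;
  }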
Signed-off-by: Andrew Fedonczuk <andrew.fedonczuk@ericsson.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>
A userland read of more than PAGE_SIZE bytes from /dev/zero results in
(a) not all of the bytes returned being zero, and
(b) memory corruption due to zeroing of bytes beyond the user buffer.
This is caused by improper constraints on the assembly __clear_user function.
The constraints don't indicate to the compiler that the pointer argument is
modified. Since the function is inline, this results in double-incrementing
of the pointer when __clear_user() is invoked through a multi-page read() of
/dev/zero.
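A minimal sketch of the constraint shape of the fix (placeholder asm body
and function name, not the real Microblaze loop): both the count and the
pointer are modified by the clear loop, so both must be outputs tied to
their inputs; with the pointer listed as a plain input, the compiler
assumes it is unchanged and, after inlining, reuses the stale value.

  static inline unsigned long __clear_user_sketch(void *to, unsigned long n)
  {
          __asm__ __volatile__ (
                  ""                      /* real code: store/decrement/branch loop */
                  : "=r" (n), "=r" (to)   /* outputs: both registers are updated */
                  : "0" (n), "1" (to)     /* inputs tied to the outputs above */
                  : "memory");
          return n;
  }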
Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
Acked-by: Michal Simek <monstr@monstr.eu>
CC: stable@kernel.org
__copy_to/from_user_inatomic should call __copy_to/from_user
because the access check is not necessary for these kernel functions.
might_sleep in the copy_to/from_user macros also causes problems
in debug sessions (CONFIG_DEBUG_SPINLOCK_SLEEP).
BUG: sleeping function called from invalid context at
.../arch/microblaze/include/asm/uaccess.h:388
in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: swapper
1 lock held by swapper/1:
#0: (&p->cred_guard_mutex){......}, at: [<c00d4b90>] prepare_bprm_creds+0x2c/0x88
Kernel Stack:
...
Call Trace:
[<c0006bd4>] microblaze_unwind+0x7c/0x94
[<c0006684>] show_stack+0xf4/0x190
[<c0006730>] dump_stack+0x10/0x30
[<c00103a0>] __might_sleep+0x12c/0x160
[<c0090de4>] file_read_actor+0x1d8/0x2a8
[<c0091568>] generic_file_aio_read+0x6b4/0xa64
[<c00cd778>] do_sync_read+0xac/0x110
[<c00ce254>] vfs_read+0xc8/0x160
[<c00d585c>] kernel_read+0x38/0x64
[<c00d5984>] prepare_binprm+0xfc/0x130
[<c00d6430>] do_execve+0x228/0x370
[<c000614c>] microblaze_execve+0x58/0xa4
The trace above is caused by file_read_actor (mm/filemap.c), which calls
__copy_to_user_inatomic.
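The shape of the change is roughly the following (a sketch; the exact
Microblaze definitions may differ), so the _inatomic variants bypass the
access_ok()/might_sleep() checks done by the plain copy paths:

  #define __copy_to_user_inatomic(to, from, n) \
          __copy_to_user((to), (from), (n))
  #define __copy_from_user_inatomic(to, from, n) \
          __copy_from_user((to), (from), (n))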
Signed-off-by: Michal Simek <monstr@monstr.eu>
The Microblaze implementations of get_user() and (MMU) put_user() evaluate
the address argument more than once. This causes unexpected side-effects for
invocations that include increment operators, e.g. get_user(foo, bar++).
This patch also removes the distinction between MMU and noMMU put_user().
Without the patch:
$ echo 1234567890 > /proc/sys/kernel/core_pattern
$ cat /proc/sys/kernel/core_pattern
12345
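An illustrative before/after of the macro shape (simplified; __get_user
stands in for the size-dispatched access and the old three-argument
access_ok() form is assumed): the fix is to evaluate the pointer exactly
once into a temporary so side effects happen a single time.

  /* Problematic shape: 'ptr' is evaluated more than once, so
   * get_user(foo, bar++) advances bar more than once. */
  #define get_user_bad(x, ptr)                                    \
          (access_ok(VERIFY_READ, (ptr), sizeof(*(ptr)))          \
                  ? __get_user((x), (ptr)) : -EFAULT)

  /* Fixed shape: latch the pointer once, then use only the temporary. */
  #define get_user_fixed(x, ptr)                                  \
  ({                                                              \
          typeof(ptr) __gu_ptr = (ptr);                           \
          access_ok(VERIFY_READ, __gu_ptr, sizeof(*__gu_ptr))     \
                  ? __get_user((x), __gu_ptr) : -EFAULT;          \
  })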
Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
There is a small regression on Dhrystone tests, and I think
on all benchmark tests. It is due to the better checking
mechanism in the put_user macro.
Signed-off-by: Michal Simek <monstr@monstr.eu>
This is the first patch of the uaccess unification.
I chose to split the work into several patches to be able
to bisect in the future if any fault shows up.
Signed-off-by: Michal Simek <monstr@monstr.eu>
This is the first patch which cleans up part of uaccess.h.
uaccess.h will be cleaned up further later.
Signed-off-by: John Williams <john.williams@petalogix.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>
For 64-bit arguments, gcc optimization caused the put_user macro
to work with a wrong value. Adding volatile prevents gcc
from optimizing it.
It would be possible to use (as Blackfin does) two put_user
macros with 32-bit arguments, but that costs one more
instruction because the zero return value set up in the
put_user_asm macro gets duplicated.
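A rough sketch of the idea (the store is a placeholder for the real
per-size asm; only the volatile temporary is the point here): latching
the 64-bit value into a volatile temporary keeps gcc from optimizing the
value that both 32-bit halves are stored from.

  #define __put_user_64_sketch(x, ptr)                            \
  ({                                                              \
          __typeof__(*(ptr)) volatile __gu_val = (x);             \
          int __gu_err = 0;                                       \
          /* placeholder for the real per-size asm store */       \
          *(ptr) = __gu_val;                                      \
          __gu_err;                                               \
  })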
Signed-off-by: Michal Simek <monstr@monstr.eu>
This function was actually causing harm by hiding
errors about invalid-sized get_user/put_user accesses.
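For context, the usual alternative to such a fallback is a deliberately
undefined helper that turns an unsupported access size into a link-time
error; the sketch below shows that common technique (names and structure
are illustrative, not necessarily the exact Microblaze code):

  extern int __get_user_bad(void);        /* never defined on purpose */

  #define __get_user_size_sketch(x, ptr, size, retval)            \
  do {                                                            \
          switch (size) {                                         \
          case 1:                                                 \
          case 2:                                                 \
          case 4:                                                 \
                  /* the real per-size asm accessors set (x) here */ \
                  retval = 0;                                     \
                  break;                                          \
          default:                                                \
                  /* unknown size: fail to link instead of        \
                   * silently falling back */                     \
                  retval = __get_user_bad();                      \
          }                                                       \
  } while (0)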
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michal Simek <monstr@monstr.eu>
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Acked-by: John Linn <john.linn@xilinx.com>
Acked-by: John Williams <john.williams@petalogix.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>