Commit Graph

32 Commits

Author SHA1 Message Date
Linus Torvalds e13053f506 Merge branch 'sched-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull voluntary preemption fixes from Ingo Molnar:
 "This tree contains a speedup which is achieved through better
  might_sleep()/might_fault() preemption point annotations for uaccess
  functions, by Michael S. Tsirkin:

  1. The only reason uaccess routines might sleep is if they fault.
     Make this explicit for all architectures.

  2. A voluntary preemption point in uaccess functions means the
     compiler can't inline them efficiently; this breaks the assumption,
     which e.g. the net code seems to make, that they are very fast and
     small.  Remove this preemption point so the behaviour matches what
     callers assume.

  3. Accesses (e.g. through socket ops) to kernel memory with KERNEL_DS
     like net/sunrpc does will never sleep.  Remove an unconditional
     might_sleep() in the might_fault() inline in kernel.h (used when
     PROVE_LOCKING is not set).

  4. Accesses with pagefault_disable() return EFAULT but won't cause
     the caller to sleep.  Check for that and thus avoid might_sleep()
     when PROVE_LOCKING is set.

  These changes offer a nice speedup for CONFIG_PREEMPT_VOLUNTARY=y
  kernels, here's a network bandwidth measurement between a virtual
  machine and the host:

   before:
        incoming: 7122.77   Mb/s
        outgoing: 8480.37   Mb/s

   after:
        incoming: 8619.24   Mb/s   [ +21.0% ]
        outgoing: 9455.42   Mb/s   [ +11.5% ]

  I kept these changes in a separate tree, separate from scheduler
  changes, because it's a mixed MM and scheduler topic"

* 'sched-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  mm, sched: Allow uaccess in atomic with pagefault_disable()
  mm, sched: Drop voluntary schedule from might_fault()
  x86: uaccess s/might_sleep/might_fault/
  tile: uaccess s/might_sleep/might_fault/
  powerpc: uaccess s/might_sleep/might_fault/
  mn10300: uaccess s/might_sleep/might_fault/
  microblaze: uaccess s/might_sleep/might_fault/
  m32r: uaccess s/might_sleep/might_fault/
  frv: uaccess s/might_sleep/might_fault/
  arm64: uaccess s/might_sleep/might_fault/
  asm-generic: uaccess s/might_sleep/might_fault/
2013-07-02 16:19:24 -07:00
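
The change described in points 3 and 4 of the pull message above boils down
to making might_fault() skip the sleep annotation when it cannot actually
sleep. A minimal sketch of that logic, assuming the 2013-era helpers
segment_eq(), get_fs(), KERNEL_DS and in_atomic(); this is not the exact
upstream implementation:

void might_fault(void)
{
	/*
	 * Point 3: KERNEL_DS accesses (e.g. net/sunrpc via socket ops)
	 * touch kernel memory and can never fault, so never sleep here.
	 */
	if (segment_eq(get_fs(), KERNEL_DS))
		return;

	/*
	 * Point 4: under pagefault_disable() a fault returns -EFAULT
	 * instead of sleeping, so only annotate the sleepable case.
	 */
	if (!in_atomic())
		might_sleep();
}
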
Michal Simek 8706a6b630 microblaze: Fix sparse warnings
arch/microblaze/include/asm/uaccess.h:101:3:
 warning: cast removes address space of expression
arch/microblaze/include/asm/uaccess.h:107:2:
 warning: cast removes address space of expression

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-06-03 10:49:07 +02:00
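
These sparse warnings come from casting a __user pointer to a type in the
normal kernel address space; the usual cure is a __force annotation on the
cast. A hedged sketch of the pattern, using a hypothetical helper name
rather than the actual microblaze code:

#include <linux/compiler.h>

/* Hypothetical helper: the explicit __force tells sparse that dropping
 * the __user address space is intentional, which silences
 * "cast removes address space of expression". */
static inline void *user_ptr_as_kernel(const void __user *ptr)
{
	return (__force void *)ptr;
}
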
Michael S. Tsirkin ac093f8d5e microblaze: uaccess s/might_sleep/might_fault/
The only reason uaccess routines might sleep
is if they fault. Make this explicit.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1369577426-26721-5-git-send-email-mst@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-05-28 09:41:08 +02:00
Michal Simek f663b60f52 microblaze: Fix uaccess_ok macro
Fix the access_ok macro to permit the case where the
user accesses a range whose last address is equal to
the segment address.

Example:
segment addr = 0xbfff ffff
address = 0xbfff fff0
size = 0x10

Current wrong implementation:
0xbfff ffff >= (0xbfff fff0 | 0x10 | (0xbfff fff0 + 0x10))
0xbfff ffff >= (0xbfff fff0        | 0xc000 0000)
0xbfff ffff >= 0xffff fff0
returns 0, i.e. the access fails, even though the access is valid,
because get_fs().seg returns the last valid address.

This patch fixes the problem.

A size of zero is a valid access.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-05-09 09:04:32 +02:00
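
A sketch of the corrected range check described above, assuming
get_fs().seg holds the last valid user address (illustrative only, not the
exact microblaze macro): an access is OK if it is empty, or if both its
first and its last byte lie at or below the segment limit.

/* Illustrative sketch of the fixed check. */
static inline int range_ok(unsigned long addr, unsigned long size,
			   unsigned long seg)	/* seg = last valid address */
{
	if (size == 0)
		return 1;		/* zero-size access is valid */
	if (addr > seg)
		return 0;		/* starts above the limit */
	if (size - 1 > seg - addr)
		return 0;		/* ends above the limit (overflow-safe) */
	return 1;	/* e.g. addr 0xbffffff0, size 0x10, seg 0xbfffffff */
}
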
Michal Simek 7e27815792 microblaze: Change section flags for noMMU
All files which use the unified user-access macros from uaccess.h
(get_user/put_user/clear_user/copy_tofrom_user/
strnlen_user and strncpy_user) generate these
warning messages:
Assembler messages:
Warning: ignoring changed section attributes for .discard

Setting the executable section flag on .discard for the
__EX_TABLE_SECTION macro removed all these warnings.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
2013-01-03 12:47:20 +01:00
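
A hedged sketch of the kind of macro the message refers to; the exact
section names and flag strings in the microblaze header may differ. The
point is that the flags given for .discard must agree with the attributes
the assembler already knows for that section:

/* Sketch only.  On noMMU kernels the exception-table entries are thrown
 * away, so they go into .discard; giving that section executable ("ax")
 * flags keeps the attributes consistent and stops GAS from warning
 * "ignoring changed section attributes for .discard". */
#ifdef CONFIG_MMU
# define __EX_TABLE_SECTION	".section __ex_table,\"a\"\n"
#else
# define __EX_TABLE_SECTION	".section .discard,\"ax\"\n"
#endif
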
Michal Simek bf0e12c753 microblaze: uaccess.h: Fix timerfd syscall
__pu_val must be volatile to ensure that the value is not lost.

It was causing a problem with the timerfd syscall,
where the inline asm at the end of the function call didn't
save the 64-bit value to the stack.
Comparing both cases, you can see the fragment below: the first
part saves the u64 value to the stack, which is then used
by the __put_user_asm_8 macro.
The original, broken implementation misses the first two swi instructions.

	swi	r22, r1, 28 /* missing without volatile */
	swi	r23, r1, 32
...
	addik	r4, r1, 28
	lwi	r3, r4, 0
	swi	r3, r25, 0
	lwi	r3, r4, 4
	swi	r3, r25, 4
	addk	r3, r0, r0

NOTE: Moving the __pu_val initialization after the declaration
has no impact on this bug. It is just a coding-style issue.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2012-12-13 14:38:53 +01:00
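
A simplified sketch of the 64-bit put_user path to show why the temporary
has to be volatile; sketch_copy_8_to_user() is a hypothetical stand-in for
the real __put_user_asm_8 inline asm, which reads the value through the
address of the temporary:

/* Sketch only.  Because the 8-byte case works through &__pu_val, the
 * value must really be stored to the stack; "volatile" forces gcc to
 * emit those stores (the two swi instructions shown above) instead of
 * keeping the u64 in registers. */
#define sketch_put_user_8(x, ptr)					\
({									\
	volatile unsigned long long __pu_val = (x);			\
	sketch_copy_8_to_user((ptr), (void *)&__pu_val);		\
})
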
Michal Simek 83a92529c1 microblaze: mm: Fix lowmem max memory size limits
Use CONFIG_LOWMEM_SIZE if the system has a larger RAM size.
For systems with a larger RAM size, enable HIGHMEM support.

Also set up the limit for memblock and use memblock
allocation in the lowmem region.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2012-03-23 09:28:10 +01:00
Michal Simek 41b7602ed1 microblaze: Fix access_ok macro
There is a problem with the bitwise OR (|) check because for
some combinations addr | size | (addr + size) is equal
to seg.

For the standard kernel setting (kernel starts at 0xC0000000),
seg for user space is 0xBFFFFFFF and everything below
this limit is fine.

But even the address 0xBFFFFFFF itself is fine because it
is below kernel space.

Signed-off-by: Andrew Fedonczuk <andrew.fedonczuk@ericsson.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>
2011-10-14 12:24:27 +02:00
Steven J. Magnani 6f3946b421 microblaze: Fix /dev/zero corruption from __clear_user()
A userland read of more than PAGE_SIZE bytes from /dev/zero results in
(a) not all of the bytes returned being zero, and
(b) memory corruption due to zeroing of bytes beyond the user buffer.

This is caused by improper constraints on the assembly __clear_user function.
The constraints don't indicate to the compiler that the pointer argument is
modified. Since the function is inline, this results in double-incrementing
of the pointer when __clear_user() is invoked through a multi-page read() of
/dev/zero.

Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
Acked-by: Michal Simek <monstr@monstr.eu>
CC: stable@kernel.org
2011-03-09 08:09:59 +01:00
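
The constraint fix, shown generically (this is an illustration, not the
microblaze assembly): any register the asm body advances, here the
destination pointer and the byte count, must be declared as a read/write
("+r") operand so that gcc does not reuse the pre-asm value once the
function has been inlined.

/* Generic illustration of the constraint rule, not the real routine:
 * the (omitted) asm body would store zeros while advancing 'to' and
 * decrementing 'n', so both must be "+r" (read/write) operands. */
static inline unsigned long sketch_clear_user(void *to, unsigned long n)
{
	asm volatile(""			/* body omitted in this sketch */
		: "+r" (to), "+r" (n)	/* modified by the real loop */
		:
		: "memory");
	return n;			/* bytes left uncleared */
}
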
Michal Simek 8d7ec6ee59 microblaze: Fix __copy_to/from_user_inatomic macros
__copy_to/from_user_inatomic should call __copy_to/from_user
because there is no need to check access for these kernel-side calls.

The might_sleep() in the copy_to/from_user macros also causes problems
in debug sessions (CONFIG_DEBUG_SPINLOCK_SLEEP).

BUG: sleeping function called from invalid context at
.../arch/microblaze/include/asm/uaccess.h:388
in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: swapper
1 lock held by swapper/1:
 #0:  (&p->cred_guard_mutex){......}, at: [<c00d4b90>] prepare_bprm_creds+0x2c/0x88
Kernel Stack:
...

Call Trace:
[<c0006bd4>] microblaze_unwind+0x7c/0x94
[<c0006684>] show_stack+0xf4/0x190
[<c0006730>] dump_stack+0x10/0x30
[<c00103a0>] __might_sleep+0x12c/0x160
[<c0090de4>] file_read_actor+0x1d8/0x2a8
[<c0091568>] generic_file_aio_read+0x6b4/0xa64
[<c00cd778>] do_sync_read+0xac/0x110
[<c00ce254>] vfs_read+0xc8/0x160
[<c00d585c>] kernel_read+0x38/0x64
[<c00d5984>] prepare_binprm+0xfc/0x130
[<c00d6430>] do_execve+0x228/0x370
[<c000614c>] microblaze_execve+0x58/0xa4

caused by file_read_actor (mm/filemap.c) which calls
__copy_to_user_inatomic.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-08-02 10:44:03 +02:00
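
The shape of the fix, as a sketch (not the verbatim microblaze header): the
_inatomic variants map to the double-underscore helpers, which perform no
access_ok() check and carry no might_sleep() annotation, so callers such as
file_read_actor() can use them from atomic context.

/* Sketch: _inatomic variants must not sleep or re-check access. */
#define __copy_to_user_inatomic(to, from, n)	\
	__copy_to_user((to), (from), (n))
#define __copy_from_user_inatomic(to, from, n)	\
	__copy_from_user((to), (from), (n))
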
Steven J. Magnani 538722ca3b microblaze: fix get_user/put_user side-effects
The Microblaze implementations of get_user() and (MMU) put_user() evaluate
the address argument more than once. This causes unexpected side-effects for
invocations that include increment operators, e.g. get_user(foo, bar++).

This patch also removes the distinction between MMU and noMMU put_user().

Without the patch:
  $ echo 1234567890 > /proc/sys/kernel/core_pattern
  $ cat /proc/sys/kernel/core_pattern
  12345

Signed-off-by: Steven J. Magnani <steve@digidescorp.com>
2010-05-13 09:21:14 +02:00
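
The usual cure, sketched below, is to evaluate the pointer argument exactly
once into a local variable, so that get_user(foo, bar++) advances bar a
single time; __sketch_get_user_sized() is a hypothetical stand-in for the
size-switched inline asm.

/* Sketch: 'ptr' is evaluated once, so side effects in the argument
 * (e.g. bar++) happen exactly once. */
#define sketch_get_user(x, ptr)						\
({									\
	__typeof__(ptr) __gu_ptr = (ptr);	/* single evaluation */	\
	__sketch_get_user_sized((x), __gu_ptr, sizeof(*__gu_ptr));	\
})
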
Michal Simek 89ae9753ae microblaze: uaccess: Sync strlen, strnlen, copy_to/from_user
Last sync.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:23 +02:00
Michal Simek 94804a9b3d microblaze: uaccess: Unify __copy_tofrom_user
Move to generic location.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:22 +02:00
Michal Simek cca79120c2 microblaze: uaccess: Move functions to generic location
noMMU and MMU use them.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:22 +02:00
Michal Simek ef4e277b5d microblaze: uaccess: Fix put_user for noMMU
There is a small regression on the Dhrystone tests, and probably
on all benchmarks. It is due to the better checking
mechanism in the put_user macro.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:22 +02:00
Michal Simek 3a6d77245e microblaze: uaccess: Fix get_user macro for noMMU
Use unified version.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:22 +02:00
Michal Simek 527bdb52d5 microblaze: uaccess: fix clear_user for noMMU kernel
Previous patches fixed only the MMU version; this is the first
patch for the noMMU kernel.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:21 +02:00
Michal Simek 40e11e3380 microblaze: uaccess: Fix strncpy_from_user function
Generic implementation for the noMMU and MMU versions.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:21 +02:00
Michal Simek 4270690bd4 microblaze: uaccess: fix copy_from_user macro
The copy_from_user macro also uses the copy_tofrom_user function.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:21 +02:00
Michal Simek cc5a428b7a microblaze: uaccess: copy_to_user unification
noMMU and MMU kernels will use the copy_tofrom_user
asm implementation.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek 0dcb409de7 microblaze: uaccess: sync put/get/clear_user macros
Add macro descriptions and re-sort them.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek 8b651aa4a7 microblaze: uaccess: fix put_user and get_user macros
Use the FIXUP macros and re-sort them.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek c77a9c4bb7 microblaze: uaccess: fix __get_user_asm macro
Use the __FIXUP_SECTION and __EX_TABLE_SECTION macros.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek 40b1156db0 microblaze: uaccess: fix clean user macro
This is the first patch of the uaccess unification.
I chose to do several patches to be able to bisect
in the future if any fault happens.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek 60a729f7bb microblaze: move noMMU __range_ok function to uaccess.h
The same noMMU and MMU functions should be placed in
one file.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:20 +02:00
Michal Simek 357bc3c928 microblaze: Move exception_table_entry upward
Just re-sort to be able to remove the whole block.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:19 +02:00
Michal Simek 40db083433 microblaze: Remove segment.h
I would like to use the asm-generic uaccess.h, where the segment
macros are defined. This is just the first step.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2010-04-01 08:38:19 +02:00
John Williams 95dfbbe470 microblaze: Simple __copy_tofrom_user for noMMU
This is the first patch which cleans up part of uaccess.h.
uaccess.h will be cleaned up further later.

Signed-off-by: John Williams <john.williams@petalogix.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-12-14 08:45:03 +01:00
Michal Simek 7bcb63b213 microblaze: Fix put_user macro for 64bits arguments
For 64-bit arguments, gcc optimization caused the put_user macro
to work with a wrong value. Adding volatile stops gcc
from optimizing it away.

It would be possible to use (as Blackfin does) two put_user
calls with 32-bit arguments, but that costs one more
instruction, due to duplicating the zero return
value in the put_user_asm macro.

Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-07-27 07:39:54 +02:00
Michal Simek 0d6de95326 microblaze_mmu_v2: uaccess MMU update
Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-05-26 16:45:20 +02:00
Arnd Bergmann 838d2406ee microblaze: remove bad_user_access_length
This function was actually causing harm, by hiding
errors about invalid sized get_user/put_user accesses.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-05-21 15:56:06 +02:00
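
With bad_user_access_length() gone, the common kernel pattern for catching
an unsupported access size is to make it a build-time failure rather than a
silent runtime stub; a hedged sketch of that pattern, with hypothetical
names and not necessarily what this commit used:

/* Declared but never defined: any get_user()/put_user() instantiated
 * with an unsupported size references it and fails at link time. */
extern void __sketch_bad_user_size(void);

#define sketch_get_user_checked(x, ptr)				\
({								\
	int __gu_err = 0;					\
	switch (sizeof(*(ptr))) {				\
	case 1: case 2: case 4:					\
		/* size-specific inline asm would go here */	\
		break;						\
	default:						\
		__sketch_bad_user_size();	/* link error */\
		__gu_err = -14;		/* -EFAULT */		\
	}							\
	__gu_err;						\
})
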
Michal Simek 2660663ff2 microblaze_v8: uaccess files
Reviewed-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Stephen Neuendorffer <stephen.neuendorffer@xilinx.com>
Acked-by: John Linn <john.linn@xilinx.com>
Acked-by: John Williams <john.williams@petalogix.com>
Signed-off-by: Michal Simek <monstr@monstr.eu>
2009-03-27 14:25:23 +01:00